CN102158684A - Self-adapting scene image auxiliary system with image enhancement function - Google Patents

Self-adapting scene image auxiliary system with image enhancement function

Info

Publication number
CN102158684A
CN102158684A (application CN2010101105731A / CN201010110573A)
Authority
CN
China
Prior art keywords
image
scene
unit
information
self-adaptation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010101105731A
Other languages
Chinese (zh)
Inventor
王炳立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN2010101105731A
Publication of CN102158684A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an adaptive scene image auxiliary system with an image enhancement function, which provides the most intuitive and effective information to the operator by adaptively judging the environment around a vehicle and enhancing the captured images. The system comprises a unit I (image sensor unit), a unit II (image processing unit), a unit III (decision-making unit) and a unit IV (output unit), wherein the unit II (image processing unit) comprises an image acquisition unit, a distortion correction unit, an image anti-shake and image enhancement unit, a viewing-angle shifting unit and an image fusion unit. The system markedly reduces the operator's blind areas and enhances the displayed images, so that the operator sees as much intuitive and effective information as possible.

Description

Adaptive scene image auxiliary system with image enhancement function
Technical field
The present invention relates to an adaptive scene image auxiliary system with an image enhancement function for use in automobiles, mobile machinery and similar systems.
Background technology
In many mobile-machinery operations — motor vehicles travelling, ships entering port or passing through narrow waters, loading machines such as tower cranes grasping, moving and placing goods — objects must be displaced relative to one another. Particularly when the permitted path of the moving object is narrow, the operator must watch the surrounding environment very carefully while moving, to prevent touching, collision and scraping.
A common example is the driver, especially the driver of an oversized vehicle. The car body creates a series of blind areas: the limited viewing angles of the rear-view mirrors leave zones the driver cannot see. Because of these blind areas, high demands are placed on the driver's sense of the vehicle's spatial extent; especially when driving through narrow areas, parking or reversing, the driver may fail to observe the surroundings completely and can easily scrape or collide with obstacles hidden in the blind areas, creating danger.
A common solution is the ultrasonic reversing radar combined with a reversing camera. The reversing radar measures the distance to obstacles in the blind zone and reports the result by sound and graphics, giving the driver a limited prompt. However, ultrasonic radar is not intuitive, and because the number of ranging probes is limited it cannot reliably detect obstacles from all angles. Moreover, the ranging carries a certain precision error, so this method cannot provide accurate and reliable distances.
Another solution is the reversing camera: a wide-angle camera installed at the tail of the vehicle captures and displays the rear view, optionally combined with a ranging radar. But this method has an obvious drawback: it only captures and displays the real-time image behind the vehicle and cannot present a panoramic picture, so it is useless whenever the vehicle is not reversing.
With the development of image processing technology, panoramic reversing systems have appeared. Such a system corrects the images of several cameras for distortion, stitches them together and displays the result to the driver, helping the driver assess the surrounding environment. This effectively solves the problem of the peripheral field of view, as in patent 200710106237.8.
However, such patents still leave room for improvement. Because patent 200710106237.8 uses only three image-acquisition channels, synthesising the image of a large vehicle or trailer raises the field-of-view and resolution requirements on the sensors' optical modules. The patent also does not address distortion correction, and with wide-angle cameras an uncorrected, distorted image corrupts the spatial relationships the observer perceives. Nor can it specifically highlight the direction and speed of travel or the driver's steering tendency, and it provides no image enhancement for fog, rain or poor lighting that would let the operator adaptively obtain a clear, effective image.
Summary of the invention
The object of the invention is to remedy the above defects of the prior art by providing an adaptive scene image auxiliary system with an image enhancement function that supplies omnidirectional peripheral image information, adapts the displayed scene to the monitored object's movement tendency, and enhances the images to provide more effective information.
The object of the invention is achieved through the following technical solution. The adaptive scene image auxiliary system with an image enhancement function comprises the following units:
Unit 1, the image sensor unit, comprises a plurality of image sensors installed around the periphery of the monitored object (in this patent, the monitored object is the equipment whose surroundings the system monitors, including vehicles, machinery, etc.). The fields of view of adjacent image sensors overlap each other, guaranteeing that no blind area remains. The image sensors may be colour image sensors, black-and-white image sensors, infrared image sensors, or combinations of them: black-and-white with colour, black-and-white with infrared, colour with infrared, or black-and-white, colour and infrared together.
Unit 2, the image processing unit, performs image acquisition, processing and fusion.
Unit 3, the scene decision unit, produces the scene parameters that help the observer understand the scene information. It receives the operator's operating intention and/or the translation speed and movement tendency of the object to be monitored, and outputs the best viewing-angle information and the image scale information; if the image scale is a constant value, the scale information may be ignored and only the best viewing-angle information output.
Unit 2 of the present invention comprises four subunits:
Subunit 21, the image temporary storage unit, mainly acquires and buffers the image of each image sensor.
Subunit 22, the distortion correction unit, corrects the distortion produced by lens imperfections and image-sensor imperfections. These aberrations include tangential distortion, radial distortion, thin-prism distortion, decentering distortion, etc. They are described by the lens optical parameters of the image sensor (including the distortion parameters caused by optical-system imperfections) and the image-sensor distortion parameters; in the field of machine vision, these are known as the intrinsic parameters. Image correction is the process of using the sensor's lens optical parameters and distortion parameters to eliminate the distortion.
From the image data before correction, the rectification module obtains the corrected image data by interpolation or convolution.
The lens optical parameters and image-sensor distortion parameters needed by this unit are obtained through "calibration" by calibration module 5.
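As a concrete illustration of correction by interpolation, the following sketch undistorts a grayscale image under a single-coefficient radial model: for every pixel of the corrected image it computes where that pixel maps to in the distorted source and samples it bilinearly. The one-coefficient model, the parameter names and the bilinear sampling are illustrative assumptions, not taken from the patent; a real system would also handle the tangential, thin-prism and decentering terms listed above.

```python
import numpy as np

def undistort_bilinear(img, k1, cx, cy, fx, fy):
    """Correct simple radial distortion by inverse mapping.

    k1 is a hypothetical single radial coefficient; (cx, cy) the optical
    centre and (fx, fy) the focal lengths in pixels. For each pixel of the
    corrected image, find its position in the distorted source image and
    sample it with bilinear interpolation.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for v in range(h):
        for u in range(w):
            # normalised coordinates relative to the optical centre
            x, y = (u - cx) / fx, (v - cy) / fy
            r2 = x * x + y * y
            # forward radial model: distorted = ideal * (1 + k1 * r^2)
            xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)
            us, vs = xd * fx + cx, yd * fy + cy      # source position
            u0, v0 = int(np.floor(us)), int(np.floor(vs))
            if 0 <= u0 < w - 1 and 0 <= v0 < h - 1:
                a, b = us - u0, vs - v0              # bilinear weights
                out[v, u] = ((1 - a) * (1 - b) * img[v0, u0]
                             + a * (1 - b) * img[v0, u0 + 1]
                             + (1 - a) * b * img[v0 + 1, u0]
                             + a * b * img[v0 + 1, u0 + 1])
    return out
```

With k1 = 0 the mapping is the identity, which gives a quick sanity check of the interpolation.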
Subunit 23, the image anti-shake and image enhancement unit, comprises subunit 231, the image anti-shake module, and subunit 232, the image enhancement unit.
The image anti-shake module uses electronic image stabilisation algorithms to improve the quality of the synthesised image, resisting the inter-frame blur produced by slight image-sensor jitter. That is, anti-shake mainly solves the inter-frame jitter problem: because the image sensor shakes continuously, the captured image may shift in position between frames; this unit detects and eliminates the jitter to improve the quality of image fusion and stitching. The image anti-shake module comprises three sub-units: a motion detection unit, a motion estimation unit and a motion compensation unit.
The image enhancement unit adjusts image brightness, enhances image contrast and outlines the image, to improve the readability of the composite image. Image enhancement unit 232 comprises three subunits: subunit 2321, the brightness adjustment unit; subunit 2322, the contrast enhancement unit; and subunit 2323, the edge outlining unit.
Because the photometric characteristics of the cameras differ, brightness adjustment mainly handles the obvious brightness inconsistencies that would otherwise appear when corrected images from different sensors are fused. After brightness correction, the composite image contains no visible segmented areas caused by differing photometric behaviour.
Image enhancement raises the contrast of the acquired image. Rain, fog or poor lighting can cause the image sensor to capture blurred, degraded images; the contrast enhancement module processes such degradation to improve the visual effect of the image.
Image enhancement also includes the edge outlining unit, which outlines image edges and highlights and delineates obstacles, so that the image is more recognisable and the observer's gaze is effectively guided.
Subunit 24, the view transformation unit, takes the scene parameters output by the scene decision unit — the viewing angle and the scale of each part of the image — and applies view transformation and scaling to each distortion-corrected image output by subunit 22, forming an image at the specified viewing angle.
Taking the linear camera model (pinhole model) as an example, the view transformation is:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = A \, R_2' \, {R_1'}^{-1} A^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

where $[u\ v]^T$ is a pixel before the view transformation, i.e. in the image at the original viewing angle, and $[u'\ v']^T$ is the pixel after the view transformation. $R_1'$ is the original rotation homography matrix, formed from the rotation matrix and translation vector of the image-sensor position, $R_1' = [r_1, r_2, t']$, with $r_1$ and $r_2$ the first and second column vectors of the rotation matrix. $R_2'$ is the new-viewpoint rotation homography matrix, formed in the same way from the rotation matrix and translation vector of the new viewpoint position.

$A$ is the intrinsic parameter model of the camera, defined as

$$A = \begin{bmatrix} \alpha_x & \gamma & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $\alpha_x, \alpha_y, u_0, v_0, \gamma$ are the linear-model intrinsic parameters:

$\alpha_x, \alpha_y$ are the scale factors of the $u$ and $v$ axes, also called the effective focal lengths, $\alpha_x = f/dx$, $\alpha_y = f/dy$, where $dx$ and $dy$ are the pixel pitches in the horizontal and vertical directions;

$(u_0, v_0)$ is the optical centre;

$\gamma$ is the non-perpendicularity factor of the $u$ and $v$ axes; in many cases $\gamma = 0$.
The view transformation requires the attitude and position parameters of each image sensor; after the system is installed, these are measured in the "calibration area" through the "calibration" process by calibration module 5.
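Under the pinhole model, the view transformation above reduces to multiplying homogeneous pixel coordinates by a single 3×3 homography. A minimal numpy sketch, with the intrinsic matrix and the two rotation homography matrices supplied by the caller (all values below are illustrative):

```python
import numpy as np

def view_transform_homography(A, R1p, R2p):
    """H maps pixels at the original viewpoint to the new viewpoint:
    [u', v', 1]^T ~ A R2' R1'^-1 A^-1 [u, v, 1]^T."""
    return A @ R2p @ np.linalg.inv(R1p) @ np.linalg.inv(A)

def warp_point(H, u, v):
    """Apply the homography to one pixel and de-homogenise."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

When the new viewpoint coincides with the original one (R2' = R1'), H collapses to the identity and every pixel maps to itself, which is a convenient correctness check.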
Subunit 25, the image fusion unit, fuses and stitches the images captured by the image sensors with the image of the monitored device or car body, forming a composite image of the monitored object's peripheral field of view. Fusion is performed by stitching the image pixels from two image sensors directly at the registration position.
The registration position, i.e. the stitching-line position, is determined from the fields of view and the definition ranges of the two cameras, guaranteeing that image definition from the two sensors is consistent on both sides of the stitching line. The registration line may be an arc or a straight line. During fusion the image may be formed by directly joining the original images on either side of the dividing line; the stitching line can be displayed in a special colour or grey level to help the observer see the boundary, or not displayed at all so as not to break the wholeness of the image.
Besides stitching, another fusion method computes each point of the fusion region as a weighted average of the images from the two adjacent sensors; the stitched region then has no visible seam line and the images correspond better.
To make the fusion position more accurate, the images of the fusion region must be registered. The registration method is: search for image features near the stitching region — corners, contours, edges and the like — then use these features to match the images of the two sensors and find the best matching line; the images are then fused along this optimum matching line to avoid ghosting. Fusion can use variable weights: within the stitched region, the farther a point of the fused image lies from one image sensor, the smaller the weight of that sensor's image in the fused result, and the larger the weight of the image from the other sensor at the same position.
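The variable-weight fusion can be sketched very simply for two registered overlap strips. A linear ramp is one possible weight profile — an assumption here; the text only requires the weight to fall off with distance from the sensor that captured the pixel:

```python
import numpy as np

def blend_overlap(left, right):
    """Weighted-average fusion of two registered overlap strips.

    The weight of the left image falls linearly from 1 to 0 across the
    overlap (and vice versa for the right image), so no seam line is
    visible in the stitched region.
    """
    h, w = left.shape
    ramp = np.linspace(1.0, 0.0, w)          # weight of the left image
    return left * ramp + right * (1.0 - ramp)
```

At the left edge of the overlap the result equals the left sensor's image, at the right edge the right sensor's, and in between the two blend smoothly.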
In the fused scene image, the image fusion unit adds a top plan view of the monitored system (e.g. the car body), or a top-down three-dimensional view, or a three-dimensional perspective image consistent with the viewing angle output by the decision unit. This overlay has a certain transparency, so that peripheral image information hidden by the equipment remains visible. The transparency of the monitored object can be set by hand, as can the colour of the monitored-object image and similar attributes.
Besides merging the image information of the sensors, the image fusion module can add auxiliary information such as obstacle distance: the signals of external sensors — range information, obstacle information and other external parameters — can be fused into the final panoramic scene image.
Module 5, the calibration module, computes the lens optical parameters and distortion parameters of each image sensor (known in the field of machine vision as the intrinsic parameters) and the attitude and position parameters of each image sensor (known in the field of machine vision as the extrinsic parameters).
The calibration unit performs the calibration of the image sensors, which in the field of machine vision is called camera calibration.
Common image-sensor calibration methods divide into nonlinear-model and linear-model camera calibration; classic algorithms include:
The nonlinear camera calibration method based on the radial alignment constraint (RAC); reference: Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pages 323-344.
Zhang Zhengyou's camera calibration algorithm based on a 2-D target; reference: Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330-1334, 2000.
Calibration methods for catadioptric and fisheye cameras; reference: Scaramuzza, D. (2008), "Omnidirectional Vision: from Calibration to Robot Motion Estimation", ETH Zurich, Thesis no. 17635.
In use, the system needs the lens optical parameters and distortion parameters of the image sensors. These can be computed after device fabrication or after system installation: the sensors can be "calibrated" during production, or, if the parameters were not computed in production, they must first be measured when the system is installed.
Calibration uses a "calibration template", which may be a template inside the "calibration area" or an independent template.
The calibration method is: each image sensor photographs the "calibration template" at different orientations and attitudes, obtaining photos of the template in different poses — normally at least four. From these photos the calibration module computes each sensor's lens optical parameters and distortion parameters, completing the calibration process.
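Zhang's planar-target method referenced above first estimates one homography per photograph of the template and then solves for the intrinsic parameters from several such homographies. The following sketch shows only that first step — direct linear transformation (DLT) of known template-to-image point correspondences; it is a simplified building block, not the full calibration pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst point pairs (DLT).

    src are planar template coordinates, dst the observed image
    coordinates; at least four non-degenerate correspondences are needed.
    Each pair contributes two linear constraints on the nine entries of H,
    which are recovered as the null vector of the stacked system.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                # fix the arbitrary scale
```

Feeding in noise-free correspondences generated from a known homography recovers that homography exactly, up to numerical precision.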
"Calibration template": the calibration template consists of a planar pattern with a special frame structure, a dot pattern or a line pattern; its structure and size are set in advance. It may be a square template containing several straight-line, curve or corner-point features, such as a chessboard, a grid or some discrete square patterns; or a template composed of polygons, such as triangles; or a circle template, i.e. a template bearing several circular patterns; or a template formed of straight lines.
After installation, the system must also calibrate the image sensors to compute the attitude and position parameters of each one; this calibration is performed in the "calibration area":
"Calibration area" means several "calibration templates" placed in the same area, each with a predefined attitude and position: placed horizontally, placed vertically, tilted, or a combination of these attitudes.
A "reference stop position" is set in the "calibration area". It is bounded by "reference stop lines" whose shapes, positions and coordinates are set in advance. At calibration time, the equipment to be calibrated (the monitored device) must rest against this reference stop position; the reference lines are matched to some shape features of the equipment, so that it can easily be stopped accurately.
At calibration time the equipment is stopped at the "reference stop position" and held still while each image sensor acquires images; the attitude rotation and displacement of each sensor relative to the "calibration templates" are then computed. Since the position and attitude of every template in the whole calibration area are known, the global position and attitude of every image sensor can be calculated, yielding each sensor's attitude and displacement parameters, i.e. the extrinsic parameters.
If no reference line is set in the calibration area, several "calibration templates" can instead be placed in the overlap regions of adjacent image sensors, so that both sensors photograph the pattern of the same template simultaneously. The attitudes and positions of these templates are likewise predefined; at calibration time the image sensors are localised pairwise, and the attitude and displacement parameters of each sensor are computed in turn.
The dimensions of the calibrated object (the monitored object) can be measured in advance.
Output unit 4 outputs the processed image signal: it is displayed on an image device, and/or stored on a storage device, and/or communicated to other equipment through communication devices.
The invention thus provides an adaptive scene image auxiliary system with an image enhancement function that supplies omnidirectional peripheral image information, adapts the displayed scene to the monitored object's movement tendency and enhances the images to provide more effective information, obtaining good results.
Description of drawings
The present invention is further described below with reference to the drawings and a specific embodiment.
Fig. 1 is the composition diagram of this embodiment.
Fig. 2 is a schematic diagram of the calibration template used in this embodiment.
Fig. 3 is a schematic diagram of the viewpoint positions used by the adaptive scene decision in this embodiment.
Fig. 4 is a schematic diagram of the viewpoint positions used by the non-adaptive scene decision in this embodiment.
Fig. 5 shows the stitching geometry of each image sensor in this embodiment: 5A when the vehicle moves forward and steers right, 5B when the vehicle tends to reverse to the left, 5C when the vehicle moves forward with no steering tendency, and 5D when the vehicle travels at low speed with no steering tendency.
Fig. 6A is the planar calibration area used in this embodiment; A1, A2, A3 and A4 are the calibration targets used to compute the extrinsic parameters in the scene, and point P is the intersection of two reference lines.
Fig. 6B is the non-planar calibration area used in this embodiment, a scene in which the calibration area is installed on the surrounding walls: the calibration templates are mounted on the walls, and the reference lines lie on the ground and the walls.
Fig. 7 is a structural diagram of the image anti-shake and image enhancement unit used in this embodiment.
Fig. 8 is the equal-proportion scene display mode of the scene decision used in this embodiment; the curves are equidistance lines from the geometric centre of the car.
Fig. 9 is the non-equal-proportion scene display mode of the scene decision used in this embodiment; the curves are equidistance lines from the geometric centre of the car.
Embodiment
Specific embodiments of the invention are now described in detail with reference to Figs. 1-9, to aid further understanding of the content of the invention.
Taking an adaptive scene image auxiliary system with an image enhancement function installed in an automobile as an example, the present invention specifically comprises the following modules:
Module 1, the image sensor module, obtains effective image information of the surrounding environment. The image sensors are distributed around the vehicle; the system comprises at least one image sensing module — a smaller car body may use 4 or 6 image sensors, while a larger body needs more. Adjacent image sensors have overlapping fields of view, and no blind area exists. When installing the sensors, part of the car body can be included in the field of view, i.e. the sensors photograph part of the body, to guarantee that the stitched image shows the positional relation between the body and obstacles;
Module 2, the image processing module, performs the acquisition, processing and fusion of the images;
Module 21, the image acquisition unit, mainly acquires and buffers the image of each image sensor;
Module 22, the image distortion correction module, uses each image sensor's calibrated parameters to correct the captured image for distortion;
Module 23, the image anti-shake and image enhancement module, shown in Fig. 7, uses electronic stabilisation and image enhancement algorithms to improve image quality: it mainly resists the inter-frame blur produced by slight jitter, and performs brightness adjustment and/or contrast enhancement and/or edge outlining to improve the readability of the composite image.
The image anti-shake module: because the car body inevitably vibrates, the image sensors shake continuously, and the captured image may shift in position between frames; without anti-shake processing this would blur the image and cause block errors. The module detects and eliminates the jitter, improving the quality of image fusion and stitching.
The image-jitter elimination module comprises motion detection, motion estimation and motion compensation:
Common motion detection algorithms include the projection algorithm (PA: Projection Algorithm), representative point matching (RPM: Representative Point Matching), and bit-plane matching (BPM: Bit Plane Matching); with these methods, estimates of the translation, rotation and zoom level of the image can be obtained.
Motion estimation extracts the effective motion: it estimates the motion parameters between consecutive frames, filters out the influence of random motion, obtains an effective global motion vector and the movement tendency of the frame sequence, and yields the actual image shift, rotation and zoom values.
Motion compensation applies the shift, rotation and zoom computed by the motion estimation module to the original image, producing an image with the random jitter eliminated.
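A minimal translation-only sketch of the projection algorithm (PA) mentioned above: each frame is collapsed to row and column sums, and the inter-frame shift is found by aligning these 1-D profiles, after which the frame is shifted back. Rotation and zoom are ignored here, and the search range and wrap-around shift are simplifying assumptions, not the patent's implementation:

```python
import numpy as np

def estimate_shift_projection(prev, curr, max_shift=3):
    """Estimate vertical/horizontal jitter with the projection algorithm:
    collapse each frame to row/column sums, then find the integer shift
    that best aligns the 1-D profiles (translation only)."""
    def best_shift(a, b):
        errs = [np.sum((np.roll(a, s) - b) ** 2)
                for s in range(-max_shift, max_shift + 1)]
        return int(np.argmin(errs)) - max_shift
    dy = best_shift(prev.sum(axis=1), curr.sum(axis=1))
    dx = best_shift(prev.sum(axis=0), curr.sum(axis=0))
    return dy, dx

def compensate(curr, dy, dx):
    """Motion compensation: shift the frame back by the estimated jitter."""
    return np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
```

If the current frame is the previous frame shifted by one row and two columns, the estimator recovers (1, 2) and compensation restores the previous frame.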
Brightness adjustment mainly corrects the image brightness differences caused by the inconsistent photometric characteristics of the cameras, which would otherwise leave regions of obviously different brightness in the fused image. The brightness adjustment coefficients are obtained by measuring brightness in the calibration scene when the system is first used, or computed in real time while the system runs. After brightness adjustment, no regions of differing brightness appear in the stitched image.
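One simple way to compute such per-camera coefficients — a sketch under the assumption that a single multiplicative gain per sensor suffices — is to bring the mean brightness of each sensor's overlap strip to a common target:

```python
import numpy as np

def brightness_gains(strips):
    """Per-camera brightness correction coefficients.

    strips holds one sample region (e.g. an overlap strip, or a patch of
    the calibration scene) per sensor; each sensor gets a gain that maps
    its mean brightness to the global mean, so the fused image has no
    visibly brighter or darker segments.
    """
    means = [float(np.mean(s)) for s in strips]
    target = sum(means) / len(means)
    return [target / m for m in means]
```

As described, the strips could be captured once in the calibration scene or re-sampled continuously at run time.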
Contrast enhancement raises the contrast of the acquired image. Common contrast enhancement algorithms include histogram equalization (HE), adaptive (local) histogram equalization (AHE), partially overlapped sub-block histogram equalization (POSHE), interpolated adaptive histogram equalization, and generalised histogram equalization. These methods adaptively enhance the contrast of the image's local information, improving the naked-eye recognisability of a degraded image and the visual effect of the image.
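The simplest of these, global histogram equalization (HE), can be sketched in a few lines for an 8-bit grayscale image; the mapping through the cumulative histogram is the standard textbook form, shown here only as an illustration of the technique named above:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Remaps grey levels through the normalised cumulative histogram so
    that the occupied levels are spread over the full 0-255 range,
    stretching the contrast of a low-contrast (e.g. foggy) image.
    Assumes the image is not constant-valued.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()            # first occupied grey level
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]
```

A two-level image occupying only levels 100 and 101, for example, is stretched to occupy 0 and 255.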
Image enhancement also comprises the edge outlining module, which outlines image edges and highlights and delineates obstacles. This module makes the image more recognisable and effectively guides the observer's gaze: during operation, the operator sometimes watches the system only with peripheral vision, so appropriate means are needed to make the image stand out. Many edge operators can extract the edge features of the image, and feature detection tuned to the characteristics of obstacles can detect particular objects; once an obstacle is detected, its edges are outlined according to their position, or a special shape (circle, rectangle, etc.) is used to mark the obstacle in the image. Whether the outlining function is used, and the colour and thickness of the outline boundaries, can be controlled by the operator; special areas can also be delineated according to external sensor data, such as obstacle distance.
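As one example of the edge operators mentioned, the following sketch applies the Sobel operator and thresholds the gradient magnitude into a binary outline that could then be overlaid on the composite image in a highlight colour. The choice of Sobel and of a fixed threshold are illustrative assumptions; the patent does not prescribe a particular operator:

```python
import numpy as np

def sobel_edges(img, thresh):
    """Binary edge outline of a grayscale image via the Sobel operator.

    Convolves each interior pixel with the horizontal and vertical Sobel
    kernels, and marks pixels whose gradient magnitude exceeds thresh.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for v in range(1, h - 1):
        for u in range(1, w - 1):
            win = img[v - 1:v + 2, u - 1:u + 2]
            gx, gy = np.sum(win * kx), np.sum(win * ky)
            mag[v, u] = np.hypot(gx, gy)
    return mag > thresh
```

A vertical step in brightness, for instance, produces outline pixels only along the two columns bordering the step.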
The four submodules of the image anti-shake and image enhancement module can be combined as needed: when the vibration of the image sensors is negligible, the anti-shake module can be omitted; when edge outlining is not needed, the edge outlining module can likewise be removed; and the brightness adjustment and contrast enhancement modules can also be selectively removed.
Module 3, the scene decision module, generates the scene configuration parameters from the vehicle's movement tendency or the operator's operating intention: it receives the movement tendency of the object to be monitored or the operator's intention and computes the adaptive scene parameters. In this example the scene parameters are the viewing direction and the proportion of each image region; the method is as follows:
When the system is equipped with a speedometer and a steering-angle sensor, the scene decision can select the scene viewpoint adaptively. If the detected vehicle speed is v, the virtual viewpoint of the system is placed at a distance L along the direction of motion, where L = f(vT_s); T_s is the display time, and f(x) is a function of the argument x which, depending on the needs of the actual system, may be a constant, a piecewise function, or an analytic function. L must not exceed the maximum field of view of the image sensor in that direction. The choice of L is also related to whether an obstacle exists within the scene. The resulting display is shown in Fig. 3; that is, the viewpoint is related to the vehicle speed and the steering angle.
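The rule L = f(vT_s), clamped to the sensor's field of view, can be sketched as follows. The function and parameter names are hypothetical, and the default identity choice for f is only one of the constant, piecewise, or analytic options the text allows.

```python
def viewpoint_distance(v, t_display, l_max, f=None):
    """Distance L of the virtual viewpoint along the direction of motion.

    v         -- vehicle speed
    t_display -- display time T_s
    l_max     -- maximum sensor field of view in that direction (clamp)
    f         -- mapping applied to v * T_s; identity if not given
    """
    if f is None:
        f = lambda x: x  # simplest admissible choice of f
    return min(f(v * t_display), l_max)
```

A piecewise or saturating f can be substituted without changing the clamp, which guarantees L never exceeds the usable sensor range.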
The scene decision can also choose among several fixed points. As shown in Fig. 4, the scene viewpoint can be selected from the fixed orientation points P0 to P5. The scene decision determines a suitable viewpoint according to the vehicle's direction of motion and steering trend: when the vehicle has no steering trend and is moving forward, viewpoint P0 is selected; when it has no steering trend and is reversing, P4 is selected; when it is moving forward with a rightward steering trend, P2 is selected; when it is moving forward with a leftward steering trend, P1 is selected; likewise, when it is reversing with a rightward steering trend, P5 is selected; and when it is reversing with a leftward steering trend, P3 is selected.
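The fixed-viewpoint rules above reduce to a table lookup. The string labels and the fallback to the default top view PT are assumptions made for this illustration.

```python
# Hypothetical encoding of the six preset viewpoints P0..P5 from Fig. 4.
VIEWPOINTS = {
    ("forward", "none"):  "P0",
    ("forward", "left"):  "P1",
    ("forward", "right"): "P2",
    ("reverse", "left"):  "P3",
    ("reverse", "none"):  "P4",
    ("reverse", "right"): "P5",
}

def select_viewpoint(direction, steer_trend):
    """Return the preset viewpoint for a motion direction and steering
    trend; fall back to the default top view PT when no rule applies."""
    return VIEWPOINTS.get((direction, steer_trend), "PT")
```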
If the system has no vehicle-speed sensor, a gear-position sensor can determine the current direction of travel: when the gear is in a forward position, the vehicle can be judged to be moving forward, or the operator to have a forward driving intention; when the gear is in reverse, the vehicle can be judged to be reversing, or the operator to have a reversing intention.
If the system has no steering-wheel-angle sensor, the steering trend can be judged from the state of the turn signals: when the left turn signal is on, the vehicle is judged to have a leftward steering trend; likewise, when the right turn signal is on, the vehicle is judged to have a rightward steering trend.
When the system is equipped with an acceleration sensor whose sensitive axis is perpendicular to the vehicle's longitudinal axis and lies in the horizontal plane, the steering trend of the vehicle body can be judged from the sensor output. The acceleration signal is first filtered to remove noise: when the filtered signal shows a leftward acceleration, the vehicle is judged to have a leftward steering trend; when it shows a rightward acceleration, the vehicle is judged to have a rightward steering trend; and when the filtered signal is below a certain threshold, no effective steering trend is judged to exist.
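A minimal sketch of this judgment, assuming an exponential low-pass filter for the noise removal (the text does not specify the filter type) and a sign convention of positive meaning rightward acceleration:

```python
def lateral_steer_trend(samples, threshold, alpha=0.2):
    """Judge the steering trend from lateral-acceleration samples.

    samples   -- sequence of raw lateral accelerations (+ = rightward)
    threshold -- dead-band below which no effective trend is declared
    alpha     -- smoothing factor of the exponential low-pass filter
    """
    filtered = 0.0
    for a in samples:
        filtered = alpha * a + (1 - alpha) * filtered  # noise suppression
    if filtered > threshold:
        return "right"
    if filtered < -threshold:
        return "left"
    return "none"
```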
When several steering-trend sensors are present at the same time, for example a turn-signal sensor, a steering-angle sensor, and an acceleration sensor, the system combines their information to judge the steering trend, or a priority order can be applied. One usable priority is: the turn-signal signal is ranked above the steering-wheel-angle information, and the steering-wheel-angle information is ranked above the acceleration sensor.
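The priority-order variant can be sketched as a first-match scan down the ranking; the sensor names and the `None`-means-no-reading convention are assumptions for this example.

```python
# Ranking from the text: turn signal > steering angle > acceleration.
PRIORITY = ["turn_signal", "steering_angle", "acceleration"]

def fused_steer_trend(readings):
    """Pick the steering trend from the highest-priority sensor that
    reports a definite value.

    readings maps sensor name -> 'left'/'right', or None (or absent)
    when that sensor currently sees no trend.
    """
    for sensor in PRIORITY:
        trend = readings.get(sensor)
        if trend is not None:
            return trend
    return "none"
```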
When no information is available for the decision, the scene decision selects the top view; PT, shown in Fig. 4, is the default viewpoint, located directly above the vehicle's center. Besides the adaptive scene change, the scene can also be selected manually according to the operator's needs.
In addition to adjusting the viewing angle, the scene decision module also produces scale information to control the display scale. The scale may be a constant, meaning that the proportions are consistent throughout the stitched picture, as shown in Fig. 7. For a viewpoint with a larger field of view, since the display size is limited, the scale is chosen not as a constant but as a function of distance, so that the farther a region is from the vehicle, the more strongly its image is compressed, while within the range near the vehicle the picture is compressed less than at the far end, as shown in Fig. 6.
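One way to realize a distance-dependent scale is sketched below. The reciprocal falloff is chosen purely for illustration; the text only requires stronger compression at larger distances and a less-compressed near range.

```python
def display_scale(distance, near_range, near_scale, falloff):
    """Scale factor applied at a given distance from the vehicle.

    Within near_range the scale is the constant near_scale; beyond it
    the scale decays, so far regions are compressed more strongly.
    """
    if distance <= near_range:
        return near_scale
    return near_scale / (1.0 + falloff * (distance - near_range))
```

With `falloff = 0` this degenerates to the constant-scale case of Fig. 7.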
Module 23, the view-transformation module. This module performs the viewpoint transformation; the new viewpoint position and attitude are obtained from the output of step 3. From the known image-sensor positions and attitudes and the new viewpoint position and attitude, the module determines the mapping relations between the images and completes the view transformation by pixel-interpolation mapping.
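The pixel-mapping step can be sketched as an inverse remap. Here `mapping` stands in for the relation derived from the sensor pose and the new viewpoint (e.g. a homography or a precomputed table), and nearest-neighbour lookup replaces the interpolation for brevity.

```python
def remap(image, mapping, height, width, fill=0):
    """Resample a source image into a destination view.

    mapping(y, x) returns the (sy, sx) source pixel that lands at
    destination (y, x) under the viewpoint change. Destination pixels
    whose source falls outside the image receive the fill value.
    """
    out = [[fill] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            sy, sx = mapping(y, x)
            if 0 <= sy < len(image) and 0 <= sx < len(image[0]):
                out[y][x] = image[sy][sx]
    return out
```

A real system would use bilinear or better interpolation on fractional source coordinates, but the inverse-mapping structure is the same.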
Module 24, the image-fusion module. This module combines the view-transformed images according to the current viewpoint information. Figs. 7A to 7D show the different stitching schemes formed for different viewpoints and viewing directions. In Fig. 7A, when the vehicle's movement trend is toward the right front, the viewing direction runs from the right front of the car toward the left rear; in the stitched picture the right-front image is displayed preferentially, so that the driver obtains effective information about the right front. Likewise, in Fig. 7B, when the movement trend is toward the left rear, the viewing direction runs from the left rear of the car toward the right front, and the left-rear image is displayed preferentially, so that the driver obtains effective information about the left rear. Fig. 7C similarly ensures that the driver obtains effective information about the direction of motion. Fig. 7D shows a scene with no preferential display: the viewpoint is located above the vehicle's geometric center, and the images around the vehicle are displayed equally.
Besides fusing the image information from each image sensor, the image-fusion module can also add auxiliary information such as obstacle distance. Signals from external sensors, such as ranging information and obstacle information, can be fused into the final panoramic scene image. Taking a ranging system as an example: according to the probe's position and its distance measurement, the panorama system displays a corresponding danger-zone line in the fused image. This line is an arc consistent with the detection sector and is highlighted; the region may additionally be edge-enhanced, or the distance may be marked in numeric form.
Module 5, the calibration module. When the system is first installed, or when a dedicated calibration is performed, the attitude and position parameters of the image sensors must be calibrated. This example uses a planar "calibration region" with a "reference parking position" for the vehicle; the calibration targets of Fig. 2 (that is, the targets A1, A2, A3, A4 in Fig. 6A) are used to calibrate the external parameters of each image sensor. From these attitude and position parameters and the calibration-template data, the position and attitude of each image sensor relative to the whole calibration region can be obtained, i.e. the initial position information of the image sensors. In the figure, Ls is the length of the calibration region, Ws is its width, Wc is the width of the vehicle under test, Lc is its length, Lp is the distance from the reference line to the left edge of the calibration region, and Wp is the distance from the reference line to its lower edge. A1, A2, A3 and A4 are the four "calibration templates".
Fig. 6A illustrates a planar calibration region in which each calibration template is placed in a horizontal area, so that during calibration the image sensors photograph templates lying in the horizontal plane. Fig. 6B illustrates a calibration region whose templates are placed perpendicular to the horizontal plane, i.e. on vertical walls. In practice, according to the needs of the system, the calibration templates may be placed vertically, horizontally, or in any combination of specific attitudes.
Zhang Zhengyou's camera-calibration algorithm based on a 2D target, or the calibration method for catadioptric and fisheye cameras, can be adopted to calculate the lens optical parameters and distortion parameters of each image sensor, as well as its attitude and position parameters.
After the external calibration of the cameras is completed, the coordinates of the overlap region of each pair of adjacent image sensors are calculated, and the brightness-adjustment factor for each overlap region is obtained.
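A plausible form of the overlap brightness-adjustment factor uses the ratio of mean intensities over the shared region; the exact formula is not given in the text, so this is an assumption for illustration.

```python
def brightness_gain(overlap_a, overlap_b):
    """Gain to apply to sensor B so that its overlap region matches
    sensor A in average brightness.

    overlap_a, overlap_b -- flat lists of intensities sampled from the
    same overlap region as seen by the two adjacent sensors.
    """
    mean = lambda px: sum(px) / len(px)
    return mean(overlap_a) / mean(overlap_b)
```

Applying this gain before stitching reduces the visible seam between adjacent sensor images.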
Module 4, the output module, outputs the processed image signal: it is sent to a display device for display, and/or to a storage device for recording, and/or transmitted to other equipment via communication devices.
The self-adapting scene image auxiliary method provided by the invention, through adaptive judgment of the environment around the vehicle, lets the operating personnel see as much intuitive and effective information as possible, effectively improving safety; in the various fields of mobile machinery it can likewise provide the operator with as much effective information as possible and improve productivity.
The above shows and describes the basic principles, principal features, and advantages of the present invention. Those skilled in the art should understand that the invention is not restricted to the described embodiments; the foregoing embodiments and description merely illustrate the principles of the invention. Without departing from the spirit and scope of the invention, various changes and improvements are possible, and all such changes and improvements fall within the scope of the claims.

Claims (15)

1. A self-adapting scene image auxiliary system with an image enhancement function, characterized in that the system comprises the following units:
an image sensor unit, which obtains effective image information of the surrounding environment;
an image processing unit, which performs image acquisition, processing, and fusion;
a scene decision unit, which generates the adaptive scene parameters;
an output unit, which outputs the processed image signal;
wherein the image processing unit comprises an image enhancement unit that realizes image enhancement through image brightness adjustment, image contrast enhancement, and edge outlining, and/or an image anti-shake unit that suppresses image shake by means of motion detection, motion estimation, and motion compensation.
2. The self-adapting scene image auxiliary system with an image enhancement function according to claim 1, characterized in that: the image sensor unit comprises a plurality of image sensors installed around the periphery of the monitored object; adjacent image sensors have mutually overlapping fields of view; and each image sensor is a color image sensor, a monochrome image sensor, an infrared image sensor, or any combination thereof.
3. The self-adapting scene image auxiliary system with an image enhancement function according to claim 1, characterized in that: the image processing unit further comprises an image acquisition unit, a distortion correction unit, a view transformation unit, and an image fusion unit.
4. The self-adapting scene image auxiliary system with an image enhancement function according to claim 3, characterized in that: the distortion correction unit corrects the distortion produced by the image sensor lens and the distortion arising from imperfections of the image sensor's imaging device.
5. The self-adapting scene image auxiliary system with an image enhancement function according to claim 3, characterized in that: the view transformation unit performs view transformation and image scaling based on the image sensor attitude and position information and on the viewpoint and sight-line information produced by the scene decision, to form an image from the specified viewing angle.
6. The self-adapting scene image auxiliary system with an image enhancement function according to claim 3, characterized in that: the image fusion unit merges and stitches the images captured by the image sensors with an image of the monitored device or vehicle body, to form a composite image.
7. The self-adapting scene image auxiliary system with an image enhancement function according to claim 1, characterized in that: the scene decision unit produces scene parameters that help the observer understand the scene information; it receives the operator's operating-intention information and/or the moving speed and movement-trend information of the monitored object, and outputs optimal viewpoint information and image-scale information.
8. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: the operator's operating-intention information and/or the moving-speed and movement-trend information received by the scene decision unit comprises one of, or a combination of, a speed signal, a gear signal, a steering-trend signal, an acceleration signal, and a turn-signal signal.
9. The self-adapting scene image auxiliary system with an image enhancement function according to claim 8, characterized in that: the image enhancement unit comprises three subunits: a brightness adjustment unit, a contrast enhancement unit, and an edge outlining unit.
10. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: the scene decision unit can obtain steering-trend information from the turn-signal signal or the acceleration signal, and this information can be used by the scene decision unit in making the scene decision.
11. The self-adapting scene image auxiliary system with an image enhancement function according to claim 6, characterized in that: the scene decision unit can obtain forward- or reverse-trend information from the speed sensor and the gear position, and this information can be used by the scene decision unit in making the scene decision.
12. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: the scene decision unit selects the scene viewpoint adaptively; the viewpoint direction is determined from the movement-trend direction, the viewpoint distance is determined from the movement speed, and the viewpoint direction and viewpoint distance together constitute the adaptive viewing angle.
13. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: the scene decision unit presets a set of selectable scene points at several fixed positions, and selects a suitable scene point from this set according to the vehicle's direction of motion and steering trend.
14. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: the scene decision unit outputs scale information; the scale information may be a constant value, i.e. the proportion of each part of the image is independent of that part's distance from the image center; or the output scale may be non-constant, i.e. the proportion of each part of the image depends on that part's distance from the image center.
15. The self-adapting scene image auxiliary system with an image enhancement function according to claim 7, characterized in that: in the fused scene image, the image fusion unit adds a top plan view of the monitored system, or a top-down three-dimensional view, or a three-dimensional perspective image consistent with the viewing angle output by the decision unit; this image has a certain transparency, so as to show peripheral image information occluded by the equipment.
CN2010101105731A 2010-02-12 2010-02-12 Self-adapting scene image auxiliary system with image enhancement function Pending CN102158684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101105731A CN102158684A (en) 2010-02-12 2010-02-12 Self-adapting scene image auxiliary system with image enhancement function


Publications (1)

Publication Number Publication Date
CN102158684A true CN102158684A (en) 2011-08-17

Family

ID=44439833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101105731A Pending CN102158684A (en) 2010-02-12 2010-02-12 Self-adapting scene image auxiliary system with image enhancement function

Country Status (1)

Country Link
CN (1) CN102158684A (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103253193A (en) * 2013-04-23 2013-08-21 上海纵目科技有限公司 Method and system of calibration of panoramic parking based on touch screen operation
CN103297668A (en) * 2012-02-29 2013-09-11 深圳市振华微电子有限公司 Panoramic video image recording system and method
CN103916699A (en) * 2012-12-31 2014-07-09 德州仪器公司 System and method for generating 360 degree video recording using MVC
CN103971350A (en) * 2013-02-04 2014-08-06 上海机电工程研究所 High-fidelity infrared complex scene fast synthesizing method and device
CN103986912A (en) * 2014-05-21 2014-08-13 南京大学 Double-direction real-time vehicle chassis image synthetic method based on civil IPC
CN104221364A (en) * 2012-04-20 2014-12-17 株式会社理光 Imaging device and image processing method
CN104520904A (en) * 2012-08-10 2015-04-15 赫力环球有限公司 Method and apparatus for layout for augmented reality view
CN104516923A (en) * 2013-10-08 2015-04-15 王景弘 Image note-taking method and system
CN105438071A (en) * 2015-12-28 2016-03-30 深圳市灵动飞扬科技有限公司 Vehicle turning blind area display method and system
WO2016086489A1 (en) * 2014-12-03 2016-06-09 东莞宇龙通信科技有限公司 Image noise reduction method and device thereof
CN105721793A (en) * 2016-05-05 2016-06-29 深圳市歌美迪电子技术发展有限公司 Driving distance correction method and device
CN106062823A (en) * 2014-04-24 2016-10-26 日立建机株式会社 Device for monitoring area around working machine
CN103763517B (en) * 2014-03-03 2017-02-15 惠州华阳通用电子有限公司 Vehicle-mounted around view display method and system
CN106855999A (en) * 2015-12-09 2017-06-16 宁波芯路通讯科技有限公司 The generation method and device of automobile panoramic view picture
WO2017113403A1 (en) * 2015-12-31 2017-07-06 华为技术有限公司 Image information processing method and augmented reality ar device
CN107063276A (en) * 2016-12-12 2017-08-18 成都育芽科技有限公司 One kind is without the high-precision unmanned vehicle on-vehicle navigation apparatus of delay and method
CN107820002A (en) * 2016-09-12 2018-03-20 安讯士有限公司 Improved monitoring camera direction control
CN108122259A (en) * 2017-12-20 2018-06-05 厦门美图之家科技有限公司 Binocular camera scaling method, device, electronic equipment and readable storage medium storing program for executing
CN108665415A (en) * 2017-03-27 2018-10-16 纵目科技(上海)股份有限公司 Picture quality method for improving based on deep learning and its device
CN108928348A (en) * 2017-05-26 2018-12-04 德韧营运有限责任公司 Generate the method and system of wide area perception scene figure
CN109040552A (en) * 2013-06-13 2018-12-18 核心光电有限公司 Based on Dual-Aperture zoom digital camera
CN109455142A (en) * 2018-12-29 2019-03-12 上海梅克朗汽车镜有限公司 Visual field pattern of fusion panorama electronics rearview mirror system
CN111277796A (en) * 2020-01-21 2020-06-12 深圳市德赛微电子技术有限公司 Image processing method, vehicle-mounted vision auxiliary system and storage device
CN111797810A (en) * 2020-07-20 2020-10-20 吉林大学 Method for acquiring forward-looking preview area of driver in driving process
CN112702515A (en) * 2020-12-23 2021-04-23 上海立可芯半导体科技有限公司 Image processing method, system and computer readable medium in camera system
CN113329164A (en) * 2020-02-28 2021-08-31 华为技术有限公司 Lens correction method and device, shooting terminal and storage medium
CN114897707A (en) * 2022-02-28 2022-08-12 合肥指南针电子科技有限责任公司 Intelligent prevention and control method based on surveillance video image quality enhancement algorithm
CN115225875A (en) * 2022-06-17 2022-10-21 苏州蓝博控制技术有限公司 Auxiliary display device of excavator and display method thereof
CN115798400A (en) * 2023-01-09 2023-03-14 永林电子股份有限公司 LED display control method and device based on image processing and LED display system
CN117635506A (en) * 2024-01-24 2024-03-01 成都航天凯特机电科技有限公司 Image enhancement method and device based on AI-energized Mean Shift algorithm


Similar Documents

Publication Publication Date Title
CN102158684A (en) Self-adapting scene image auxiliary system with image enhancement function
CN202035096U (en) Mobile operation monitoring system for mobile machine
CN102163331A (en) Image-assisting system using calibration method
US11024055B2 (en) Vehicle, vehicle positioning system, and vehicle positioning method
CN109360245B (en) External parameter calibration method for multi-camera system of unmanned vehicle
JP4695167B2 (en) Method and apparatus for correcting distortion and enhancing an image in a vehicle rear view system
US8842181B2 (en) Camera calibration apparatus
US9738223B2 (en) Dynamic guideline overlay with image cropping
CN111046743B (en) Barrier information labeling method and device, electronic equipment and storage medium
CN104442567B (en) Object Highlighting And Sensing In Vehicle Image Display Systems
CN111815641A (en) Camera and radar fusion
CN108269235A (en) A kind of vehicle-mounted based on OPENGL looks around various visual angles panorama generation method
US20140267415A1 (en) Road marking illuminattion system and method
US20110169957A1 (en) Vehicle Image Processing Method
US8169309B2 (en) Image processing apparatus, driving support system, and image processing method
CN103802725B (en) A kind of new vehicle carried driving assistant images generation method
CN103728727A (en) Information display system capable of automatically adjusting visual range and display method of information display system
US20090179916A1 (en) Method and apparatus for calibrating a video display overlay
CN102842127A (en) Automatic calibration for extrinsic parameters of camera of surround view system camera
WO2020011670A1 (en) Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle
KR20180112010A (en) A method of detecting an object on the road side of a car, a computing device, a driver assistance system and an automobile
CN107027329A (en) The topography of the surrounding environment of traveling instrument is spliced into an image
CN109345591B (en) Vehicle posture detection method and device
CN202111802U (en) Calibration device for monitoring apparatus with multiple image sensors
US8860810B2 (en) Method and device for extending a visibility area

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
DD01 Delivery of document by public notice

Addressee: Wang Bingli

Document name: Notification of before Expiration of Request of Examination as to Substance

DD01 Delivery of document by public notice

Addressee: Wang Bingli

Document name: Notification that Application Deemed to be Withdrawn

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110817