CN106892356B - Machine-vision-based automatic deviation correction method for a tyre crane running gear - Google Patents

Machine-vision-based automatic deviation correction method for a tyre crane running gear

Info

Publication number
CN106892356B
CN106892356B (application CN201710070800.4A)
Authority
CN
China
Prior art keywords
image
camera
path planning
tyre crane
point
Prior art date
Legal status
Active
Application number
CN201710070800.4A
Other languages
Chinese (zh)
Other versions
CN106892356A (en)
Inventor
赵德安
刘晓洋
陈玉
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University
Priority to CN201710070800.4A
Publication of CN106892356A
Application granted
Publication of CN106892356B


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66C - CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 - Other constructional features or details
    • B66C13/18 - Control systems or devices
    • B66C13/48 - Automatic control of crane drives for producing a single or repeated working cycle; Programme control
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66C - CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C13/00 - Other constructional features or details
    • B66C13/16 - Applications of indicating, registering, or weighing devices
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66C - CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C9/00 - Travelling gear incorporated in or fitted to trolleys or cranes
    • B66C9/16 - Travelling gear incorporated in or fitted to trolleys or cranes with means for maintaining alignment between wheels and track
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention discloses a machine-vision-based automatic deviation correction method for the running gear of a tyre crane. One side of the tyre crane is selected as the guiding side, and one camera is installed at each of its front and rear positions and calibrated. Images are collected and suitably preprocessed. A BP neural network is established and repeatedly trained on selected pixel sets, and the network with the lowest test error is selected to recognize the images. The recognized image is simplified into a series of discrete points; straight lines are detected with a Hough transform combined with least squares, and the detected lines are merged into one line that serves as the planned path. The images from the front and rear cameras are stitched together, a new coordinate system is established with the upper-left corner of the stitched image as the origin, and the two planned-path equations are re-established. The Euclidean distance between the two planned paths in the polar parameter space is calculated; if the distance exceeds a set value the cameras are recalibrated, otherwise the two planned paths are merged. Finally, the angle and the intersection point between the planned path and the longitudinal midline of the image are calculated and a correction strategy is formulated.

Description

Machine-vision-based automatic deviation correction method for a tyre crane running gear
Technical field
The present invention relates to the fields of machine vision and image recognition, and in particular to a machine-vision-based automatic navigation and deviation correction method.
Background art
A tyre crane can maneuver flexibly in the yard and, unlike a quay crane, is not confined to a fixed operating area, which can increase the utilization rate of port facilities and reduce the number of gantry cranes that must be deployed. With the rapid development of logistics and rising labor costs, however, higher demands are being placed on the operating efficiency of tyre cranes. During traditional tyre crane travel, the driver observes the guide line painted on the ground with the naked eye and corrects the course manually to keep the crane driving safely on a fixed route. This driving mode is inefficient, requires sustained concentration, consumes considerable energy, and easily leads to accidents such as collisions caused by erroneous or careless operation.
Realizing automatic deviation correction of the tyre crane running gear is therefore of great significance and can further improve operating efficiency. GPS carrier-phase differential positioning is one technology that can provide positioning, navigation and automatic deviation correction for a tyre crane. However, this solution requires building a GPS reference station and a rover station in the yard and installing differential receiving antennas and other equipment on the tyre crane, so the equipment investment is large and the technical requirements are high. Moreover, to reduce the shielding effect of the crane body on the GPS signal, the GPS antenna must be mounted on top of the tyre crane, 15 m-20 m above the ground, and the positioning accuracy is strongly affected by the flatness of the yard surface. The present invention therefore proposes a machine-vision-based automatic deviation correction method for tyre cranes, which recognizes the guide line through cameras, calculates the planned path accordingly, and formulates a correction strategy. Compared with GPS carrier-phase differential technology, this solution is more economical. Although severe weather such as heavy snow or dense fog can impair the cameras' field of view and prevent normal operation, the method is robust to weather: as long as no large-scale standing water or snow cover forms and visibility is sufficient, the guide line can still be recognized normally in rain and mist.
Summary of the invention
It is an object of the present invention to provide a machine-vision-based automatic deviation correction method for a tyre crane running gear, which can effectively detect the guide line of the running gear with cameras, calculate the planned path and formulate a correction strategy.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
(a) Select one side of the tyre crane running gear as the guiding side, install one camera of the same model and configuration at each of its front and rear positions, and calibrate them;
(b) Apply appropriate image preprocessing to the collected images to obtain images to be detected of a specified size and resolution;
(c) Establish a BP neural network and train it repeatedly on a large set of selected pixel patches;
(d) Choose the network with the lowest test error to recognize the images, the recognition consisting of feeding the RGB values of each pixel and of the pixels in its neighborhood into the neural network for classification;
(e) Simplify the recognized image into a series of discrete points, perform straight-line detection on the image using a combination of the Hough transform and least squares, merge the detected lines by taking the average of their corresponding parameters, and take the result as the planned path;
(f) Stitch the front and rear camera images, establish a new coordinate system with the upper-left corner of the stitched image as the origin, and re-establish the equations of the two planned paths;
(g) Transform the parameters of the two planned paths into the polar parameter space, expressed as (ρ1,θ1) and (ρ2,θ2), and calculate the Euclidean distance between the two points; if the distance exceeds a set value, the positions of the front and rear cameras have shifted and recalibration is required, otherwise merge the two planned-path equations into one;
(h) Formulate the correction strategy according to the positional relationship between the center of the guiding side and the planned path, and the angle between the planned path and the longitudinal midline of the image.
Further, for the installation and calibration of the cameras described in step (a), the front and rear cameras should be mounted at the same height at the front and rear of the guiding side with consistent angles relative to the horizontal plane, and the specific calibration steps are as follows:
Step 1: Position the center of the guiding side of the tyre crane on the guide line so that the angle between the line along the guiding side and the guide line is greater than 15 degrees;
Step 2: The front and rear cameras each collect an image and preprocess it; extend the guide line and the longitudinal midline of each image so that they intersect outside the image;
Step 3: Measure the distances from the intersection points to the images, recorded as l1 and l2 respectively; if the two are inconsistent, adjust the cameras until they agree and record the common value as l.
Further, the preprocessing described in step (b) means reducing the resolution of the collected large-format high-definition image and cropping the region of interest; if the image noise is severe, image filtering is also applied.
Further, for the establishment and training of the BP neural network described in step (c), a BP neural network with 75 input nodes and 1 output node is established; a large number of 5*5 pixel patches on the guide line and 5*5 pixel patches off the guide line are selected as positive and negative examples respectively to train the network. These positive and negative examples should be taken from images shot under various working conditions, and the number of positive and negative examples chosen under each condition must reach a sufficient quantity.
Further, for the image simplification described in step (e) and the line detection method combining the Hough transform and least squares, the image simplification proceeds as follows:
Step 1: On the binary image obtained from recognition, take one horizontal line every 5 pixels of height;
Step 2: Take the midpoint of each connected segment on each such line to simplify the binary image.
The line detection method combining the Hough transform and least squares is as follows:
Step 1: Transform the points of the simplified image from the rectangular image space to the polar parameter space;
Step 2: Apply maximum suppression to the parameter space and select the points whose values exceed 0.3 times the maximum;
Step 3: Calculate the image-space pixels and the line equation corresponding to each selected point;
Step 4: Reject lines that are horizontal or nearly horizontal and lines with too few corresponding pixels;
Step 5: Merge lines whose equation parameters are close but whose corresponding pixels are inconsistent;
Step 6: Select the points within a certain distance of each detected line and refit each line with least squares.
Further, the image stitching described in step (f) keeps the front camera image unchanged, rotates the rear camera image by 180 degrees, and places it directly below the front camera image with a gap of 2*l.
Further, the correction strategy in step (h) is formulated as follows:
First define the value range of the angle between the planned path and the longitudinal midline of the image as [-90, 90), positive to the right of the image midline and negative to the left;
(1) If the intersection of the planned path and the longitudinal midline of the stitched image lies above the center of the stitched image, the left and right tires move synchronously;
(2) If the intersection lies at or below the center of the stitched image and the angle is positive, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right;
(3) If the intersection lies at or below the center of the stitched image and the angle is negative, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
(4) If the planned path is parallel to the longitudinal midline of the image: when the planned path lies to the right of the midline, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right; when it lies to the left, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
(5) If the planned path coincides completely with the longitudinal midline of the image, the left and right tires move at the same speed.
The beneficial effects of the invention are as follows:
Compared with GPS-based correction and navigation technology, the machine-vision-based automatic correction method can effectively reduce the cost of automatic navigation and correction for a tyre crane. At the same time, to overcome the influence of bad weather on the cameras' recognition of the guide line, the present invention trains the neural network on images taken under various working conditions and therefore has strong robustness to a variety of adverse conditions and weather. Even when the guide line carries tire marks, oil stains or faded paint, it can still be recognized accurately. In addition, in rainy or misty weather that does not produce large-scale standing water or snow cover and leaves sufficient visibility, the guide line can still be recognized normally and the normal operation of the tyre crane is not affected.
Brief description of the drawings
Fig. 1: Calibration schematic of the front camera
Fig. 2: Preprocessed image
Fig. 3: Positioning image obtained by neural network recognition
Fig. 4: Effect of binary image simplification
Fig. 5: Effect of straight-line detection
Fig. 6: Schematic of the stitched image
Fig. 7: Schematic of the straight-line transformation parameters
Detailed description of the embodiments
The embodiments of the present invention are further described below with reference to the accompanying drawings.
The present invention takes the tyre crane running gear as the control object. The system uses a PLC control circuit as its core and includes mechanical structures such as the steering gear, wheels and drives. In a specific embodiment, a machine vision unit is added to judge the posture of the tyre crane and its next direction of motion, and the result is transferred to the PLC control unit, thereby realizing deviation correction of the tyre crane running gear. The machine vision unit collects images with the front and rear cameras, transfers them to an industrial computer for processing, and finally transmits the result to the PLC.
The specific implementation is broadly divided into four parts: image preprocessing, guide line recognition, image stitching, and formulation of the correction strategy. Image preprocessing mainly filters and denoises the collected images, reduces the image resolution to the required size, and crops the region of interest. Guide line recognition uses a BP neural network for training and recognition, followed by further detection with the Hough transform and least squares. Image stitching combines the images taken by the front and rear cameras into a single image. Finally, the correction strategy is formulated according to the positional relationship between the center of the guiding side and the planned path, and the angle between the planned path and the longitudinal midline of the image. The details are as follows:
1. Image preprocessing
1.1 Camera installation and calibration
Two CCD cameras of the same model and configuration are mounted at the front and rear positions above the tires on one side of the tyre crane, at the same height and with consistent angles relative to the horizontal plane. To ensure correct installation, the following calibration is necessary:
Step 1: Position the center of the guiding side of the tyre crane on the guide line so that the angle between the line along the guiding side and the guide line is greater than 15 degrees;
Step 2: The front and rear cameras each collect an image and preprocess it; extend the guide line and the longitudinal midline of each image so that they intersect outside the image (as shown in Fig. 1);
Step 3: Measure the distances from the intersection points to the images, recorded as l1 and l2 respectively; if the two are inconsistent, adjust the cameras until they agree and record the common value as l (a sketch of this intersection computation is given after these steps).
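The following Python sketch illustrates the geometric computation behind Step 3, under the assumption that the guide line in a camera image has already been detected as two pixel points; the function name and the endpoint values are illustrative only.

```python
def midline_intersection_distance(p1, p2, image_width, image_height):
    """Distance from the image border to the point where the extended guide
    line crosses the image's longitudinal (vertical) midline; this corresponds
    to l1 or l2 in the calibration procedure."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:                          # guide line parallel to the midline
        return None
    slope = (y2 - y1) / (x2 - x1)
    x_mid = image_width / 2.0
    y_cross = y1 + slope * (x_mid - x1)   # y of the intersection with x = x_mid
    if y_cross > image_height:
        return y_cross - image_height     # intersection below the image
    if y_cross < 0:
        return -y_cross                   # intersection above the image
    return 0.0                            # intersection inside the image: adjust the camera

# The front and rear cameras are adjusted until the two distances agree,
# and the common value is recorded as l.
l1 = midline_intersection_distance((40, 300), (130, 20), 175, 325)
l2 = midline_intersection_distance((45, 310), (128, 15), 175, 325)
```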
1.2 Image acquisition and preprocessing
The camera frame rate is set to 30 fps and the image resolution to 1280*960, and the image ROI is set to the 350*650 region at the center of the image. One image is collected every 3 frames and reduced to 175*325, and median filtering with a 3*3 template is then applied. The resulting image to be processed is shown in Fig. 2.
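As an illustration of this preprocessing chain, a minimal Python/OpenCV sketch is given below; the camera index and capture setup are assumptions, and only the ROI size, scaling factor and filter size follow the values stated above.

```python
import cv2

def preprocess(frame):
    """Crop the 350x650 central ROI of a 1280x960 frame, downscale it by half
    to 175x325, and apply a 3x3 median filter."""
    h, w = frame.shape[:2]                      # expected 960 x 1280
    roi_w, roi_h = 350, 650
    x0, y0 = (w - roi_w) // 2, (h - roi_h) // 2
    roi = frame[y0:y0 + roi_h, x0:x0 + roi_w]   # central region of interest
    small = cv2.resize(roi, (175, 325), interpolation=cv2.INTER_AREA)
    return cv2.medianBlur(small, 3)

cap = cv2.VideoCapture(0)                       # camera index is illustrative
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
frame_idx, images = 0, []
ok, frame = cap.read()
while ok:
    if frame_idx % 3 == 0:                      # keep one frame in every three
        images.append(preprocess(frame))
    frame_idx += 1
    ok, frame = cap.read()
```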
2. Guide line recognition
2.1 Training the BP neural network
First, 500 images covering various working conditions are selected, including different periods of sunny days, cloudy days, night (with an auxiliary light source), rain, mist and snow (without snow cover). In each image, 5*5 pixel patches on the guide line are selected as positive examples for training the BP neural network, and 5*5 pixel patches off the guide line are selected as negative examples. Between 15 and 25 positive and negative examples are collected from each image, and the 500 images yield a total of 11342 positive examples and 10975 negative examples.
A BP neural network with 75 input nodes, 1 output node and 172 hidden nodes is established. The transfer function of the hidden layer is a bipolar sigmoid and the transfer function of the output layer is linear. Each positive or negative example is stored column-wise as a 75*1 vector and fed into the constructed network for training. Training is run 20 times in total and the test error of each successfully trained network is recorded.
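A minimal training sketch is given below, assuming each 5*5 RGB patch is flattened into the 75-element input vector; tanh stands in for the bipolar sigmoid, scikit-learn's MLPRegressor provides the linear output layer, and the placeholder arrays, iteration limit and random seeds are assumptions rather than values from the text.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def patch_to_vector(patch_rgb):
    """Flatten a 5x5x3 patch into the 75-element input vector."""
    return patch_rgb.astype(np.float32).reshape(-1) / 255.0

# X: (n_samples, 75) patch vectors, y: 1 for guide-line patches, 0 otherwise.
# The arrays below only illustrate the shapes; real data comes from the
# 11342 positive and 10975 negative patches described above.
X_train = np.random.rand(22317, 75)
y_train = np.random.randint(0, 2, 22317).astype(np.float32)

networks = []
for run in range(20):                        # 20 independent training runs
    net = MLPRegressor(hidden_layer_sizes=(172,),
                       activation='tanh',    # hidden layer: bipolar sigmoid
                       max_iter=500,
                       random_state=run)     # different initialization per run
    net.fit(X_train, y_train)
    networks.append(net)
# The run with the smallest error on a held-out test set is selected for use.
```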
2.2 Image recognition
The neural network with the smallest test error is chosen to classify each pixel of the image, and the classification results are binarized to give the recognized binary image shown in Fig. 3. The binary image obtained from recognition is then simplified by taking one horizontal line every 5 pixels of height and keeping the midpoint of each connected segment on that line, as shown in Fig. 4, where '*' marks the points after simplification.
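The sketch below shows one way to perform this per-pixel classification and the subsequent simplification; the binarization threshold is an assumption, and the plain nested loop is written for clarity rather than speed.

```python
import numpy as np

def classify_image(image, net, threshold=0.5):
    """Feed the 5x5 RGB neighbourhood of every pixel to the trained network
    and binarize the output (threshold value assumed)."""
    h, w = image.shape[:2]
    binary = np.zeros((h, w), dtype=np.uint8)
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            patch = image[y - 2:y + 3, x - 2:x + 3]      # 5x5 neighbourhood
            score = net.predict(patch.reshape(1, -1) / 255.0)[0]
            binary[y, x] = 1 if score > threshold else 0
    return binary

def simplify(binary, step=5):
    """Keep one horizontal scan line every `step` rows and replace each
    connected run of foreground pixels on it by its midpoint."""
    points = []
    for y in range(0, binary.shape[0], step):
        xs = np.flatnonzero(binary[y])
        if xs.size == 0:
            continue
        runs = np.split(xs, np.where(np.diff(xs) > 1)[0] + 1)   # connected runs
        for run in runs:
            points.append((int(run.mean()), y))                 # midpoint of the run
    return points
```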
2.3 Guide line detection
Straight-line detection is performed on the simplified binary image using the Hough transform; the points within a certain distance of each detected line are then selected and each line is refitted with least squares. The detection result is shown in Fig. 5, in which the four line segments represent the detected guide lines. This line detection method further constrains the detected lines on the basis of the Hough transform and comprises the following steps:
Step 1: Transform the points of the simplified image from the rectangular image space to the polar parameter space;
Step 2: Apply maximum suppression to the parameter space and select the points whose values exceed 0.3 times the maximum;
Step 3: Calculate the image-space pixels and the line equation corresponding to each selected point;
Step 4: Reject lines that are horizontal or nearly horizontal and lines supported by fewer than 10 points;
Step 5: Merge lines whose equation parameters are close but whose corresponding pixels are inconsistent;
Step 6: Select the points within a certain distance of each detected line and refit each line with least squares.
The detected lines are merged by taking the average of their corresponding parameters, and the result is used as the planned path.
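A compact sketch of this combined Hough and least-squares procedure is given below. The 0.3*max selection rule and the minimum of 10 supporting points follow the text; the bin sizes, the refit distance, the near-horizontal rejection band and the reduction of the maximum-suppression step to a simple threshold are assumptions.

```python
import numpy as np

def hough_least_squares(points, width, height, theta_step=np.pi / 180,
                        rho_step=1.0, rel_threshold=0.3, refit_dist=3.0):
    """Detect near-vertical lines among the simplified points, refit each with
    least squares, and average the surviving lines into one planned path."""
    points = np.asarray(points, dtype=np.float64)
    if len(points) == 0:
        return None
    thetas = np.arange(0, np.pi, theta_step)
    diag = np.hypot(width, height)
    rhos = np.arange(-diag, diag, rho_step)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int32)

    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:                                   # vote in (rho, theta) space
        r = x * cos_t + y * sin_t
        idx = np.clip(((r + diag) / rho_step).astype(int), 0, len(rhos) - 1)
        acc[idx, np.arange(len(thetas))] += 1

    lines = []
    for r_idx, t_idx in zip(*np.where(acc >= rel_threshold * acc.max())):
        rho, theta = rhos[r_idx], thetas[t_idx]
        if abs(np.cos(theta)) < 0.3:                      # near-horizontal line: reject
            continue
        dist = np.abs(points[:, 0] * np.cos(theta)
                      + points[:, 1] * np.sin(theta) - rho)
        support = points[dist < refit_dist]               # pixels belonging to this line
        if len(support) < 10:                             # too few supporting points
            continue
        a, b = np.polyfit(support[:, 1], support[:, 0], 1)  # refit x = a*y + b
        lines.append((a, b))

    if not lines:
        return None
    return tuple(np.mean(lines, axis=0))                  # averaged (a, b): planned path
```

With the earlier sketches, this could be called as hough_least_squares(simplify(binary), 175, 325).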
3. Image stitching
Image stitching merges the detection information from the front and rear cameras, providing a self-check of the camera positions and assisting in determining the tyre crane posture and formulating the correction strategy.
The stitching method keeps the front camera image unchanged, rotates the rear camera image by 180 degrees, and places it directly below the front camera image with a gap of 2*l. A new coordinate system is established with the upper-left corner of the stitched image as the origin and the equations of the two planned paths are re-established (as shown in Fig. 6). The parameters of the two planned paths are transformed into the polar parameter space (as shown in Fig. 7), expressed as (ρ1,θ1) and (ρ2,θ2), and the Euclidean distance between the two points is calculated; if the distance exceeds a set value, the positions of the front and rear cameras have shifted and recalibration is required, otherwise the two planned-path equations are merged into one.
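The sketch below illustrates this stitching and self-check step, with each planned path represented as (a, b) in x = a*y + b within its own camera image; the default image size, the tolerance on the polar distance and the merging by simple parameter averaging are assumptions.

```python
import numpy as np

def stitch_and_check(front_line, rear_line, image_width=175, image_height=325,
                     gap=50.0, max_polar_distance=10.0):
    """Express both planned paths in the stitched coordinate frame, compare
    them in (rho, theta) space, and either merge them or report that the
    cameras need recalibration. `gap` corresponds to 2*l."""
    def to_polar(a, b):
        # x = a*y + b rewritten as x*cos(theta) + y*sin(theta) = rho
        norm = np.hypot(1.0, a)
        return b / norm, np.arctan2(-a, 1.0)

    rho1, theta1 = to_polar(*front_line)       # front image is kept unchanged

    # The rear image is rotated by 180 degrees and placed image_height + gap
    # rows lower, so (x, y) maps to (W - 1 - x, image_height + gap + H - 1 - y).
    a2, b2 = rear_line
    y_old = np.array([0.0, image_height - 1.0])
    x_old = a2 * y_old + b2
    x_new = (image_width - 1) - x_old
    y_new = image_height + gap + (image_height - 1) - y_old
    a2n, b2n = np.polyfit(y_new, x_new, 1)     # rear path in stitched coordinates
    rho2, theta2 = to_polar(a2n, b2n)

    distance = np.hypot(rho1 - rho2, theta1 - theta2)
    if distance > max_polar_distance:
        return None                            # camera positions changed: recalibrate
    return ((front_line[0] + a2n) / 2.0,       # merged planned path (a, b)
            (front_line[1] + b2n) / 2.0)
```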
4. Formulating the correction strategy
The correction strategy is formulated according to the positional relationship between the center of the guiding side and the planned path, and the angle between the planned path and the longitudinal midline of the image. First define the value range of the angle between the planned path and the longitudinal midline as [-90, 90), positive to the right of the image midline and negative to the left (a decision sketch follows the list below).
(1) If the intersection of the planned path and the longitudinal midline of the stitched image lies above the center of the stitched image, the left and right tires move synchronously;
(2) If the intersection lies at or below the center of the stitched image and the angle is positive, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right;
(3) If the intersection lies at or below the center of the stitched image and the angle is negative, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
(4) If the planned path is parallel to the longitudinal midline of the image: when the planned path lies to the right of the midline, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right; when it lies to the left, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
(5) If the planned path coincides completely with the longitudinal midline of the image, the left and right tires move at the same speed.
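The five rules can be condensed into the decision function sketched below, whose return values would be mapped by the PLC to wheel speed commands; the argument names and the convention of passing None for a path parallel to the midline are illustrative assumptions.

```python
def correction_command(intersection_y, path_angle_deg, image_center_y,
                       path_offset_x=0.0):
    """Return 'sync' (equal tire speeds), 'deflect_right' (left tires faster)
    or 'deflect_left' (right tires faster) according to rules (1)-(5)."""
    if path_angle_deg is None or intersection_y is None:
        # Planned path parallel to the longitudinal midline: rules (4) and (5).
        if path_offset_x == 0.0:
            return 'sync'
        return 'deflect_right' if path_offset_x > 0 else 'deflect_left'
    if intersection_y < image_center_y:          # intersection above the centre: rule (1)
        return 'sync'
    # Intersection at or below the centre: rules (2) and (3).
    if path_angle_deg > 0:
        return 'deflect_right'
    if path_angle_deg < 0:
        return 'deflect_left'
    return 'sync'                                # angle of zero: already aligned
```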
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative example", "example", "specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the present invention; the scope of the invention is defined by the claims and their equivalents.

Claims (6)

1. A machine-vision-based automatic deviation correction method for a tyre crane running gear, characterized in that it comprises the following steps:
(a) selecting one side of the tyre crane running gear as the guiding side, installing one camera of the same model and configuration at each of its front and rear positions, and calibrating them, wherein for the installation and calibration of the cameras the front and rear cameras are mounted at the same height at the front and rear of the guiding side with consistent angles relative to the horizontal plane, and the specific calibration steps are as follows: Step a1: position the center of the guiding side of the tyre crane on the guide line so that the angle between the line along the guiding side and the guide line is greater than 15 degrees; Step a2: the front and rear cameras each collect an image and preprocess it, and the guide line and the longitudinal midline of each image are extended so that they intersect outside the image; Step a3: measure the distances from the intersection points to the images, recorded as l1 and l2 respectively, and if the two are inconsistent adjust the cameras until they agree and record the common value as l;
(b) applying appropriate image preprocessing to the collected images to obtain images to be detected of a specified size and resolution;
(c) establishing a BP neural network and training it repeatedly on a large set of selected pixel patches;
(d) choosing the network with the lowest test error to recognize the images, the recognition consisting of feeding the RGB values of each pixel and of the pixels in its neighborhood into the neural network for classification;
(e) simplifying the recognized image into a series of discrete points, performing straight-line detection on the image using a combination of the Hough transform and least squares, merging the detected lines by taking the average of their corresponding parameters, and taking the result as the planned path;
(f) stitching the front and rear camera images, establishing a new coordinate system with the upper-left corner of the stitched image as the origin, and re-establishing the equations of the two planned paths;
(g) transforming the parameters of the two planned paths into the polar parameter space, expressed as (ρ1,θ1) and (ρ2,θ2), and calculating the Euclidean distance between the two points; if the distance exceeds a set value, the positions of the front and rear cameras have shifted and recalibration is required, otherwise merging the two planned-path equations into one;
(h) formulating the correction strategy according to the positional relationship between the center of the guiding side and the planned path, and the angle between the planned path and the longitudinal midline of the image.
2. The method according to claim 1, characterized in that the preprocessing in step (b) means reducing the resolution of the collected large-format high-definition image and cropping the region of interest, and, if the image noise is severe, also applying image filtering.
3. The method according to claim 1, characterized in that, for the establishment and training of the BP neural network described in step (c), a BP neural network with 75 input nodes and 1 output node is established, a large number of 5*5 pixel patches on the guide line and 5*5 pixel patches off the guide line are selected and used as positive and negative examples respectively to train the network, these positive and negative examples being taken from images shot under various working conditions, with the number of positive and negative examples chosen under each condition reaching a sufficient quantity.
4. The method according to claim 1, characterized in that, for the image simplification in step (e) and the line detection method combining the Hough transform and least squares, the image simplification proceeds as follows:
Step e1: on the binary image obtained from recognition, take one horizontal line every 5 pixels of height;
Step e2: take the midpoint of each connected segment on each such line to simplify the binary image;
and the line detection method combining the Hough transform and least squares is as follows:
Step 1: transform the points of the simplified image from the rectangular image space to the polar parameter space;
Step 2: apply maximum suppression to the parameter space and select the points whose values exceed 0.3 times the maximum;
Step 3: calculate the image-space pixels and the line equation corresponding to each selected point;
Step 4: reject lines that are horizontal or nearly horizontal and lines with too few corresponding pixels;
Step 5: merge lines whose equation parameters are close but whose corresponding pixels are inconsistent;
Step 6: select the points within a certain distance of each detected line and refit each line with least squares.
5. The method according to claim 1, characterized in that the image stitching in step (f) keeps the front camera image unchanged, rotates the rear camera image by 180 degrees, and places it directly below the front camera image with a gap of 2*l.
6. The method according to claim 1, characterized in that the correction strategy in step (h) is formulated as follows:
first define the value range of the angle between the planned path and the longitudinal midline of the image as [-90, 90), positive to the right of the image midline and negative to the left;
if the intersection of the planned path and the longitudinal midline of the stitched image lies above the center of the stitched image, the left and right tires move at the same speed;
if the intersection lies at or below the center of the stitched image and the angle is positive, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right;
if the intersection lies at or below the center of the stitched image and the angle is negative, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
if the planned path is parallel to the longitudinal midline of the image: when the planned path lies to the right of the midline, the left tire speed is made greater than the right tire speed so that the tyre crane deflects to the right, and when it lies to the left, the right tire speed is made greater than the left tire speed so that the tyre crane deflects to the left;
if the planned path coincides completely with the longitudinal midline of the image, the left and right tires move at the same speed.
CN201710070800.4A 2017-02-09 2017-02-09 Machine-vision-based automatic deviation correction method for a tyre crane running gear Active CN106892356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710070800.4A CN106892356B (en) 2017-02-09 2017-02-09 Machine-vision-based automatic deviation correction method for a tyre crane running gear

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710070800.4A CN106892356B (en) 2017-02-09 2017-02-09 Machine-vision-based automatic deviation correction method for a tyre crane running gear

Publications (2)

Publication Number Publication Date
CN106892356A CN106892356A (en) 2017-06-27
CN106892356B true CN106892356B (en) 2018-02-27

Family

ID=59198094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710070800.4A Active CN106892356B (en) 2017-02-09 2017-02-09 Machine-vision-based automatic deviation correction method for a tyre crane running gear

Country Status (1)

Country Link
CN (1) CN106892356B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993788B (en) * 2017-12-29 2021-10-26 西门子(中国)有限公司 Deviation rectifying method, device and system for tyre crane
CN107934789B (en) * 2018-01-17 2020-03-31 潘莲英 Intelligent container loading and unloading method based on polar coordinates
CN107973221B (en) * 2018-01-17 2020-04-21 温州铭泰工业设计有限公司 Intelligent crane based on infrared measurement
CN108549877B (en) * 2018-04-23 2022-02-18 重庆大学 Tracking robot track identification method based on neural network
CN110054089B (en) * 2019-04-29 2020-06-09 北京航天自动控制研究所 Automatic vision deviation rectifying system and method for tire crane
CN110844785B (en) * 2019-11-28 2020-12-29 重庆中星微人工智能芯片技术有限公司 Method, device, equipment and medium for generating information of tower crane boom
CN110963409B (en) * 2019-11-29 2021-02-09 北京航天自动控制研究所 Method for measuring automatic deviation rectification deviation of tire crane machine vision
CN111017727B (en) * 2019-11-29 2021-11-16 北京航天自动控制研究所 Automatic deviation-rectifying control shutdown judgment method for tire crane
CN113419526B (en) * 2021-06-17 2022-08-23 江苏大学 Automatic pond transferring system of aquaculture workboat and pose adjusting method
CN113860172A (en) * 2021-09-30 2021-12-31 广州文远知行科技有限公司 Deviation rectifying method and device, vehicle and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SU1440853A1 (en) * 1986-01-15 1988-11-30 П. П. Гил ров Running gear of overhead traveling crane
CN201280386Y (en) * 2008-09-12 2009-07-29 天津港联盟国际集装箱码头有限公司 Automatic deviation correction system of tire crane
CN102060234A (en) * 2010-10-26 2011-05-18 常州超媒体与感知技术研究所有限公司 Tire crane traveling track video correction device and method
CN201882785U (en) * 2010-02-02 2011-06-29 南通通镭软件有限公司 Position image recognition and correction system for container gantry crane
CN105438998A (en) * 2014-09-26 2016-03-30 上海海镭激光科技有限公司 Rubber-tired crane walking positioning and deviation rectifying anticollision method

Also Published As

Publication number Publication date
CN106892356A (en) 2017-06-27

Similar Documents

Publication Publication Date Title
CN106892356B (en) Machine-vision-based automatic deviation correction method for a tyre crane running gear
CN104299240B (en) Camera calibration method and system for lane shift early warning
CN105955259A (en) Monocular vision AGV accurate positioning method and system based on multi-window real-time range finding
CN107025432B (en) A kind of efficient lane detection tracking and system
CN106441286B (en) Unmanned plane tunnel cruising inspection system based on BIM technology
CN103413313B (en) The binocular vision navigation system of electrically-based robot and method
CN105700532B (en) The Intelligent Mobile Robot navigator fix control method of view-based access control model
CN104933409B (en) A kind of parking stall recognition methods based on panoramic picture dotted line feature
CN107462223A (en) Driving sight distance self-operated measuring unit and measuring method before a kind of highway is turned
CN103714538B (en) road edge detection method, device and vehicle
CN104637073B (en) It is a kind of based on the banding underground structure detection method for shining upon shadow compensation
CN103593671B (en) The wide-range lane line visible detection method worked in coordination with based on three video cameras
CN106651953A (en) Vehicle position and gesture estimation method based on traffic sign
CN105511462B (en) A kind of AGV air navigation aids of view-based access control model
CN104848851A (en) Transformer substation patrol robot based on multi-sensor data fusion picture composition and method thereof
CN106056100A (en) Vehicle auxiliary positioning method based on lane detection and object tracking
CN106774313A (en) A kind of outdoor automatic obstacle-avoiding AGV air navigation aids based on multisensor
CN107229908A (en) A kind of method for detecting lane lines
CN105447853A (en) Flight device, flight control system and flight control method
CN103386975A (en) Vehicle obstacle avoidance method and system based on machine vision
CN102682292A (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN105136153B (en) A kind of lane line exact position harvester and acquisition method
CN108364466A (en) A kind of statistical method of traffic flow based on unmanned plane traffic video
CN106584451A (en) Visual navigation based transformer substation automatic composition robot and method
CN110398979A (en) A kind of unmanned engineer operation equipment tracking method and device that view-based access control model is merged with posture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant