CN104835173B - A kind of localization method based on machine vision - Google Patents
A kind of localization method based on machine vision
- Publication number
- CN104835173B · CN201510263245.8A
- Authority
- CN
- China
- Prior art keywords
- label
- image
- positioning
- positioning label
- Prior art date
- Legal status
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/48—Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20061—Hough transform
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an AGV localization method based on machine vision. The environment is first digitized by designing positioning digital labels, and an on-board vision system acquires images containing a label. The image is then processed to identify the label's position, content, and deflection angle, which are converted, through the camera-calibration distance relation table and the related coordinate transformations, into the position and attitude of the vehicle in the environment, realizing localization of the vehicle. Unlike other machine-vision-based AGV localization methods, this method has a small recognition workload, and its positioning accuracy and speed meet AGV navigation requirements.
Description
Technical field
The present invention relates to the field of machine vision, and in particular to a localization method.
Background technology
Machine vision uses visual sensors to acquire images and an image processing system to perform measurements and judgments. It is an important branch of computer science that combines optics, mechanics, electronics, and computer hardware and software, and involves computer science, image processing, pattern recognition, artificial intelligence, signal processing, mechatronics, and other fields. Visual navigation is a technique that processes the images obtained by visual sensors to derive the carrier's pose parameters. At present, mobile robot visual navigation technology is mainly applied in four areas: mobile robot racing competitions, industrial AGVs, autonomous navigation of intelligent vehicles, and defense technology research.
Warehouse goods transport currently relies mainly on manual labor, with problems such as low work efficiency and serious waste of human resources. There is therefore an urgent need for industrial automated guided vehicles (Automated Guided Vehicle, AGV) to take over this work and improve production efficiency and resource utilization. The rapid development of machine vision provides new approaches for the automatic navigation of industrial AGVs. A machine vision navigation system for industrial AGV navigation can generally be divided into an image acquisition part, an image processing part, and a motion control part.
The main workflow of a complete machine vision navigation system is as follows:
1. The camera acquires images in real time on command, automatically adjusting exposure parameters as needed;
2. The collected data are converted into an image format and stored in the processor or computer memory;
3. The processor analyzes and recognizes the image, obtaining carrier pose information and related logic control values;
4. Based on the recognition results, the carrier is moved, stopped, and its motion errors corrected.
Content of the invention
Object of the invention: To overcome the deficiencies of the prior art, the present invention provides a localization method based on machine vision. Applied to an AGV, it solves the technical problems of low work efficiency, wasted human resources, and high cost in traditional manual transport.
Technical solution: To achieve the above object, the technical solution adopted by the present invention is as follows:
A localization method based on machine vision includes the following steps:
Step 1: Set a camera on the object to be positioned and calibrate the camera, obtaining a calibration relation table between the relative position of each pixel in the image and the actual relative position;
Step 2: Design positioning labels and place them in the environment where the object to be positioned operates; the positioning label content includes the position information and direction information of the label's own location;
Step 3: Shoot the environment with the camera to obtain an image containing a positioning label, and analyze the image to obtain the position, direction, and content of the positioning label in the image;
Step 4: Solve the relative position relationship between the image center and the positioning label in the image and, combined with the content of the positioning label, obtain the pose of the image center in the actual environment.
Further, in the present invention, the camera calibration process comprises the following steps:
Step 1: Shoot a standard calibration image;
Step 2: Uniformly select index points on the grid in the standard calibration image, and record each index point's pixel position in the standard calibration image together with its actual position;
Step 3: Establish the calibration relation table from the pixel positions and actual positions of the index points.
Further, in the present invention, the positioning label is square in outline and uses 2 kinds of color. It consists of an outer frame and multiple color blocks inside the frame; each block has one color, the frame is one color, and the label content is expressed through the combination of different color blocks.
Further, in the present invention, when analyzing the image, the image is binarized and the pixels that may belong to a positioning label are extracted according to color information. Connected-domain detection is then performed, and the positioning label is filtered out by connected-domain size, aspect ratio, position, and surrounding background, giving its position in the image. Hough-transform line detection on the positioning label yields its direction in the image; the content of the positioning label is then read.
Beneficial effects:
The present invention digitizes the environment by designing positioning labels. The camera obtains an image containing a positioning label, and image processing identifies the position, direction, and content of the label. From the position and angle relationship between the image center and the positioning label, combined with the camera's calibration relation table and the related coordinate transformations, the position and attitude information of the object in the environment are obtained, realizing localization of the AGV in the environment, which is of great significance for automating cart transport.
The present invention replaces manual methods with machine vision, exploiting the stable operation and low cost of cameras. With suitable processing routines and advanced image processing algorithms, positioning accuracy and speed can be greatly improved, reducing the dependence of warehouse goods transport on people and raising labor productivity.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is a schematic diagram of the positioning label;
Fig. 3 is a schematic diagram of the positioning label's region division;
Fig. 4 is a schematic diagram of the positioning label content;
Fig. 5 is the calibration image shot at 3 m from the target;
Fig. 6 shows the marker points on the 3 m calibration image;
Fig. 7 is the flow chart of the image processing module;
Fig. 8 shows the positioning label region images;
Fig. 9 is a schematic diagram of the positioning label angle calculation;
Figure 10 shows the four possible orientations of the positioning label after rotation.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
1 Designing the positioning label
The positioning label designed by this method, shown in Fig. 2, carries 16 bits of binary information divided into 3 parts: 10 data bits, 4 direction flag bits, and 2 check bits. The data bits represent the position of the positioning label in the environment, and the direction flag bits represent the orientation of the positioning label in the environment.
1.1 Size and shape of the positioning label
The positioning label designed by the present invention uses two colors, red and white. It is an 18 cm × 18 cm square divided into two regions: a frame region and a data region. As shown in Fig. 3, the outer hatched area is the frame region and the unhatched middle is the data region. The frame region is a red border 2.25 cm wide; the data region consists of 16 red and white squares of 3.375 cm × 3.375 cm in a 4 × 4 arrangement. After binarization, a red square represents 0 and a white square represents 1.
1.2 Content of the positioning label
The content of the positioning label is shown in Fig. 4. The 16 squares are numbered consecutively and, since the block colors differ, the data region forms a 16-bit binary message A0A1A2A3A4A5A6A7A8A9A10A11A12A13A14A15, divided into 3 parts: 10 data bits A2A4A5A6A7A9A10A11A13A14, 4 direction flag bits A0A3A12A15, and 2 check bits A1A8. The 10 data bits have 1024 different combinations and can therefore represent 1024 distinct labels. The 4 direction flag bits lie at the four vertices; one vertex is chosen whose color in every label differs from that of the other 3 vertices, so it can later uniquely determine the direction of the positioning label. In this embodiment A15 is white and A0, A3, A12 are red. The 2 check bits verify whether the 10 data bits are correct; those skilled in the art can select a specific verification scheme as needed.
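As an illustration only (not part of the patent text), the following minimal sketch shows how the 16 recognized bits might be unpacked. The bit layout follows Fig. 4; the parity scheme is an assumption, since the patent leaves the check method open:

```python
# Hypothetical decoder for the 16-bit label content described above.
DATA_BITS = [2, 4, 5, 6, 7, 9, 10, 11, 13, 14]   # 10 data bits
DIRECTION_BITS = [0, 3, 12, 15]                   # 4 direction flag bits (vertices)
CHECK_BITS = [1, 8]                               # 2 check bits

def decode_label(bits):
    """bits: list of 16 ints (0 = red, 1 = white), A0..A15 in Fig. 4 order."""
    assert len(bits) == 16
    label_id = 0
    for i in DATA_BITS:                # 10 data bits -> label id in 0..1023
        label_id = (label_id << 1) | bits[i]
    white = [i for i in DIRECTION_BITS if bits[i] == 1]
    if len(white) != 1:                # exactly one white vertex is expected
        return None
    # Assumed check scheme: two parity bits over alternating data bits
    # (the patent leaves the verification method to the implementer).
    p0 = sum(bits[i] for i in DATA_BITS[0::2]) % 2
    p1 = sum(bits[i] for i in DATA_BITS[1::2]) % 2
    if (bits[CHECK_BITS[0]], bits[CHECK_BITS[1]]) != (p0, p1):
        return None
    return label_id, white[0]          # label id and the white-vertex index
```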
1.3 Digitizing the environment
A wide-angle camera is fixed on the AGV; at a distance of 3 m from the target its field of view reaches 4 m × 3 m. Positioning labels are therefore pasted above the AGV's running region at a spacing of 3 m × 3 m. The data bits of each label correspond one-to-one with the label's coordinates in the environment, i.e. its position information, and the labels all face the same way: the position of the direction flags is uniform and the placement direction is consistent. This completes the digitization of the working environment. A photo is then taken from directly beneath a positioning label, aligned with it; the AGV's travel direction at that moment is the positive direction, and the photo captures the angular relationship between the direction flag position and the label center.
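As an illustration (not in the patent), a one-to-one mapping between label IDs and world coordinates on the 3 m grid might look like the sketch below; the row-major numbering and the grid width are assumptions:

```python
GRID_PITCH_M = 3.0   # labels are pasted every 3 m (section 1.3)
GRID_COLS = 32       # assumed layout: the 1024 possible ids as a 32 x 32 grid

def label_world_position(label_id):
    """Map a label id (0..1023) to its (X, Y) in metres, assuming the labels
    are numbered row-major across the working area."""
    row, col = divmod(label_id, GRID_COLS)
    return col * GRID_PITCH_M, row * GRID_PITCH_M
```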
2 Calibrating the camera
To ensure that the AGV obtains clear images while in motion, an industrial camera with a full-frame (global) exposure mode is used, so that the exposure time can be set programmatically. To capture as large an image area as possible, the lens is a wide-angle lens with a field angle of about 90°. The wide-angle lens causes considerable image distortion, so the actual positional relationship of 2 points cannot be obtained directly from their positions in the image. The first step of the invention is therefore to calibrate the camera and establish a relation table between pixel distance and actual distance, converting the positional relationships of points in the image into actual positional relationships.
2.1 Shooting the pictures
To make it convenient to measure the actual distance between points in the image, the photographic target is a grid-patterned wall. Because the shooting range changes with the camera-to-target distance, the invention shoots 10 pictures at equal intervals between 2.5 m and 3.4 m from the target and calibrates each of the 10 distances separately, so that in later practical use the calibration relation table corresponding to the actual shooting distance can be selected. Fig. 5 shows the calibration image shot at 3 m from the target; the horizontal and vertical center lines in the figure are used to check whether the shot image is aligned.
2.2 Marking the pictures
The width and height of each grid cell of the target are measured, and marker points are selected at a certain density, as shown by the triangles in Fig. 6.
2.3 Establishing the relation table
Taking the image center as the reference, the pixel position of each marker point in Fig. 6 is recorded. From the grid cell each marker point occupies, its actual distance from the image center is obtained; pixel positions and actual positions are paired one-to-one and recorded in the calibration relation table.
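A minimal sketch of building such a table (the function name and table layout are illustrative; positions are keyed relative to the image center as described above):

```python
def build_calibration_table(markers, cell_w_cm, cell_h_cm):
    """markers: iterable of (px, py, col, row), where (px, py) is a marker's
    pixel position referenced to the image center and (col, row) counts grid
    cells from the center. Returns {(px, py): (X, Y)} with actual positions
    in cm, as described in section 2.3."""
    return {(px, py): (col * cell_w_cm, row * cell_h_cm)
            for px, py, col, row in markers}
```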
In practical application, suppose the detected center of a positioning label in the image has pixel coordinates (x_label, y_label). The four pixels nearest to it in the calibration relation table are found: the top-left corner point (x_tl, y_tl), top-right corner point (x_tr, y_tr), bottom-left corner point (x_bl, y_bl), and bottom-right corner point (x_br, y_br); the calibration relation table gives their actual positions relative to the image center as (X_tl, Y_tl), (X_tr, Y_tr), (X_bl, Y_bl), (X_br, Y_br). From the pixel position of the label center relative to these four nearest pixels, its actual position relative to the image center, (X_label, Y_label), is obtained by geometric proportion.
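The geometric-proportion lookup amounts to bilinear interpolation between the four nearest calibrated points. A sketch, assuming the table built above forms a regular grid that encloses the queried pixel:

```python
def lookup_actual_position(x, y, table):
    """Interpolate the actual position (relative to the image center) of
    pixel (x, y) from the four nearest calibrated points."""
    xs = sorted({px for px, _ in table})
    ys = sorted({py for _, py in table})
    x0 = max(px for px in xs if px <= x)
    x1 = min(px for px in xs if px >= x)
    y0 = max(py for py in ys if py <= y)
    y1 = min(py for py in ys if py >= y)
    tx = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    ty = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)

    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    top = lerp(table[(x0, y0)], table[(x1, y0)], tx)     # along the top edge
    bottom = lerp(table[(x0, y1)], table[(x1, y1)], tx)  # along the bottom edge
    return lerp(top, bottom, ty)
```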
3 Positioning label recognition and processing
The invention obtains an image containing a positioning label and, using digital image analysis and processing techniques, computes from the position and direction of the label in the image the true bearing of the image center (the AGV position) relative to the positioning label. Recognizing the label content determines exactly which positioning label it is, and hence the actual coordinate position corresponding to that label; combining the two yields the position and direction of the AGV in the actual environment. The flow chart of the image processing module is shown in Fig. 7.
3.1 Preprocessing
First, the wide-angle camera, aimed at the area above the running region, shoots a picture containing a positioning label. Extracting the red pixel information from the picture effectively binarizes the image.
The binarized image is then eroded and dilated, which effectively filters background noise and improves the smoothness of the label edges.
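As an illustration (not from the patent), this preprocessing could be sketched with OpenCV as follows; the HSV thresholds are assumptions, since the patent gives no values:

```python
import cv2
import numpy as np

def preprocess(bgr):
    """Binarize by red-pixel extraction, then erode and dilate to clean noise."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so two ranges are combined.
    red1 = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    binary = cv2.bitwise_or(red1, red2)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=1)    # erode first
    binary = cv2.dilate(binary, kernel, iterations=1)   # then dilate
    return binary
```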
3.2 Recognizing the positioning label information
Theoretical basis: image recognition. The first step of image recognition is to extract effective image features. Here we mainly perform line detection using color information and the Hough transform, then compare the detected image with the positioning label image to obtain the recognition result.
3.2.1 Recognizing the positioning label position
Connected-domain detection is performed on the binary picture after background-noise filtering, giving the position, width, and height of each connected domain in the image. Whether a connected domain is a positioning label is judged mainly by the following criteria (see the sketch after this list):
1. Size of the connected domain. The positioning label size is fixed, so once the camera is mounted, the size of the label in the image should lie within certain limits;
2. Aspect ratio of the connected domain. The label's own aspect ratio is 1:1, but since the invention uses a wide-angle lens and distortion in the image edge regions is large, the aspect ratio is allowed to range from 0.3 to 1.7;
3. Position of the label. Because the wide-angle lens distorts the image edges heavily, a connected domain appearing at the edge of the image is not recognized as a positioning label; otherwise the content recognition rate would suffer;
4. Background around the label. The wall around a positioning label is not red, so in the binary image the label should be surrounded by a black background. Accordingly, for connected domains satisfying the preceding 3 criteria, a background check is performed: a box is drawn about 4 pixels beyond the connected domain on each side, and if black pixels account for more than 98% of the box pixels outside the connected domain, the domain is recognized as a positioning label; otherwise it is not.
Further, from the width, height, and vertex positions of the connected domain, the position of the label center in the image, (x_label, y_label), is determined by geometric proportion.
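A sketch of this filter, continuing the OpenCV example above; the size and margin thresholds are assumptions, while the 0.3-1.7 aspect range and 98% background test come from the text:

```python
import cv2
import numpy as np

def find_label_candidates(binary, min_area=2000, max_area=60000, margin=40):
    """Return centers (x_label, y_label) of connected domains that pass the
    four criteria above."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    h_img, w_img = binary.shape
    found = []
    for i in range(1, n):                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if not (min_area <= area <= max_area):   # criterion 1: size
            continue
        if not (0.3 <= w / h <= 1.7):            # criterion 2: aspect ratio
            continue
        if (x < margin or y < margin or
                x + w > w_img - margin or y + h > h_img - margin):
            continue                             # criterion 3: not at the edge
        # Criterion 4: the box ~4 px beyond the domain must be >98% black.
        x0, y0 = max(x - 4, 0), max(y - 4, 0)
        x1, y1 = min(x + w + 4, w_img), min(y + h + 4, h_img)
        box = binary[y0:y1, x0:x1]
        outside = box[labels[y0:y1, x0:x1] != i]
        if np.mean(outside == 0) > 0.98:
            found.append(tuple(centroids[i]))    # (x_label, y_label)
    return found
```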
3.2.2 Recognizing the positioning label orientation
The position recognition above yields the connected domains in the image that may be positioning labels, together with each connected domain's width W_L and height H_L and the label center coordinates (x_label, y_label). Centered on the label center (x_label, y_label), a region of width 2*W_L and height 2*H_L is selected, as shown in Fig. 8(a). The binary image within this region is processed further to identify the direction of the label in the image, with the following specific operations.
Hough-transform line detection is performed on the region containing the label; the resulting lines should be mutually parallel or perpendicular. After the lines that do not satisfy this requirement are filtered out, the edge lines of the positioning label are obtained.
By observing the direction flags of the positioning label and comparing against the photo originally shot when the environment was digitized: if the direction flag position has rotated relative to the label center, it can be judged that the current direction of the AGV in the environment is inconsistent with the defined positive direction.
The specific AGV angle is obtained by observing how much the deflection angle of the positioning label has changed and back-calculating the deflection angle of the AGV.
As shown in Fig. 9, for uniformity of computation the label edge line on the right side of the direction flag in Fig. 4 is taken as the reference line L. The angle between the reference line L and the vertical direction of the picture is 90° − α; rotating counterclockwise by α about the label center makes the label edges parallel or perpendicular to the picture edges. But because the positioning label carries direction information — of the 4 direction flag bits A0A3A12A15 only A15 is white while the other three squares are red — the four situations shown in Figure 10 can arise after this rotation. To make the label orientation fully correct, i.e. consistent with the actual placement of the direction flags, the label must be rotated by a further angle θ; for cases (a) (b) (c) (d) in Figure 10, θ is 0°, 90°, 180°, and 270° respectively. That is, rotating the label in Fig. 9 by α + θ ensures its direction is correct. Back-calculating, the AGV has deflected by α + θ relative to the positive direction, and the deflection is counterclockwise.
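A sketch of the angle estimation with OpenCV's probabilistic Hough transform; the Canny/Hough parameters and the mapping from the white corner to θ are illustrative assumptions:

```python
import cv2
import numpy as np

def label_rotation_alpha(region):
    """Estimate the in-image rotation alpha of the label from its edge lines.
    Folding every line angle into [0, 90) makes the parallel and perpendicular
    edges of the square agree on one value."""
    edges = cv2.Canny(region, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=region.shape[1] // 2, maxLineGap=5)
    if lines is None:
        return None
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 90.0
              for x1, y1, x2, y2 in lines[:, 0]]
    return float(np.median(angles))   # counterclockwise rotation alpha

def direction_theta(white_corner):
    """white_corner: index in (0, 3, 12, 15) of the single white flag found
    after the alpha rotation. The mapping to theta is an assumption; the
    patent states only that theta is 0/90/180/270 degrees for the cases
    (a)-(d) of Figure 10."""
    return {15: 0.0, 3: 90.0, 0: 180.0, 12: 270.0}[white_corner]
```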
3.2.3 Recognizing the label content
Using the computed angle α, the region image containing the positioning label is rotated, and it is enlarged to 9 times the original to improve recognition accuracy, as shown in Fig. 8(b). In the rotated, enlarged region image, binarization by red is performed, followed by several dilation-erosion operations that fill in the blank squares inside the label, as shown in Fig. 8(c). Connected-domain detection in Fig. 8(c) then gives the position of the label in the enlarged region image, the label width W_NL, and the label height H_NL.
The label content is read in Fig. 8(b) at the label position detected in Fig. 8(c). The positioning label used in the invention has a border 2.25 cm wide and data squares 3.375 cm wide, so each row or column of the label is partitioned in the ratio 2:3:3:3:3:2. Using the enlarged label width W_NL and height H_NL computed above, the contents of the 16 data blocks are identified at this ratio. The method is illustrated with the 7th data block (2nd row, 3rd column); a code sketch follows the list:
1. Determine each square's width and height: by the ratio above, each data block is (3/16)*W_NL wide and (3/16)*H_NL high.
2. Determine the position of the 7th square in the label: the center of the 7th square lies (2/16 + 6/16 + (3/16)/2)*W_NL, i.e. (19/32)*W_NL, from the left edge of the label, and (2/16 + 3/16 + (3/16)/2)*H_NL, i.e. (13/32)*H_NL, from the top edge.
3. Because the resolving power of the camera is limited, judging the color of adjacent red and white areas carries some error; in addition, since the image may be distorted, the positioning label is not necessarily a perfect square. Therefore only the central area of each square is examined when identifying it; the width and height of the central area span from 1/5 to 3/5 of the square's width and height.
4. Check each pixel in the central area obtained above for redness; if the number of non-red pixels exceeds a certain proportion of the total, the 7th data block is identified as "1", otherwise as "0".
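A sketch of the block-reading step; the 0.5 decision proportion is an assumption (the patent says only "a certain proportion"):

```python
import numpy as np

def read_block(binary, label_x, label_y, w_nl, h_nl, row, col,
               red_threshold=0.5):
    """Read data block (row, col), each in 0..3, of the axis-aligned label
    whose top-left corner is (label_x, label_y) and size is w_nl x h_nl.
    binary: image where red pixels are nonzero. Returns 1 (white) or 0 (red)."""
    # Block center from the 2:3:3:3:3:2 partition (border = 2/16, block = 3/16)
    cx = label_x + (2 + 3 * col + 1.5) / 16.0 * w_nl
    cy = label_y + (2 + 3 * row + 1.5) / 16.0 * h_nl
    # Sample only a central window (within the 1/5..3/5 span noted above)
    bw, bh = (3 / 16.0) * w_nl, (3 / 16.0) * h_nl
    x0, x1 = int(cx - bw / 5), int(cx + bw / 5)
    y0, y1 = int(cy - bh / 5), int(cy + bh / 5)
    patch = binary[y0:y1, x0:x1]
    red_fraction = np.mean(patch > 0)
    return 0 if red_fraction >= red_threshold else 1
```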
4 Computing the relative position relationship
The triangle in Fig. 9 represents the center of the image. From the image center coordinates and the label center coordinates (x_label, y_label), the angle β between the line joining the two and the vertical direction is computed.
After the reference line L is rotated counterclockwise by the combined angle α + θ, so that the positioning label has rotated to the correct orientation, a further counterclockwise rotation by β makes the reference line L coincide with the line from the image center M to the label center. The angle of the image center M relative to the label center is therefore α + β + θ.
From the image center coordinates and the positioning label center coordinates (x_label, y_label), the pixel distance between the 2 points is computed and converted, using the camera calibration relation table, into an actual distance m. Combined with the angle α + β + θ of the image center relative to the label computed above, the positional relationship is converted into rectangular coordinates, giving the actual position of the AGV relative to the positioning label. The actual coordinate position of the label center relative to the image center, (X_label, Y_label), is looked up from the positioning label content in the calibration relation table, and the actual position of the AGV in the working environment, (X_AGV, Y_AGV), is obtained by formula (1):

X_AGV = X_label + m × sin(α + β + θ)
Y_AGV = Y_label + m × cos(α + β + θ)    (1)

In summary, the position (X_AGV, Y_AGV) of the AGV and its deflection angle α + θ are obtained.
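Putting the pieces together as a sketch (illustrative; the pixel-to-distance conversion is passed in as a callable standing in for the calibration relation table lookup):

```python
import math

def agv_pose(image_center, label_center_px, label_world,
             alpha, beta, theta, pixel_to_metres):
    """Combine the recognized quantities into the AGV pose per formula (1).
    image_center, label_center_px: pixel coordinates; label_world: the
    label's actual (X_label, Y_label); angles in degrees."""
    dx = label_center_px[0] - image_center[0]
    dy = label_center_px[1] - image_center[1]
    m = pixel_to_metres(math.hypot(dx, dy))    # actual distance m
    gamma = math.radians(alpha + beta + theta)
    x_label, y_label = label_world
    x_agv = x_label + m * math.sin(gamma)      # formula (1)
    y_agv = y_label + m * math.cos(gamma)
    heading = alpha + theta                    # deflection angle, counterclockwise
    return (x_agv, y_agv), heading
```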
The above is only a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (4)
- 1. A localization method based on machine vision, characterized by including the following steps: Step 1: set a camera on the object to be positioned and calibrate the camera, obtaining a calibration relation table between the pixel position and the actual position of each pixel in the image; Step 2: design positioning labels and place them in the environment where the object to be positioned operates, the positioning label content including the position information and direction information of the label itself; Step 3: shoot the environment with the camera, obtain an image containing a positioning label, and analyze the image to obtain the position, direction, and content of the positioning label in the image; Step 4: solve the relative position relationship between the image center and the positioning label in the image and, combined with the content of the positioning label, obtain the position of the image center in the actual environment; in Step 4, the change of the image center angle is found by observing the deflection angle of the positioning label and back-calculating the deflection angle of the image center, i.e. obtaining the angle γ of the image center relative to the label center; from the image center coordinates and the label center coordinates (x_label, y_label), the pixel distance between the 2 points is computed and converted, using the camera calibration relation table, into an actual distance m; combined with the angle γ, the positional relationship is converted into rectangular coordinates, giving the actual position of the AGV relative to the positioning label; the actual coordinate position of the label center relative to the image center, (X_label, Y_label), is looked up from the positioning label content in the calibration relation table, and the actual position of the image center, (X_AGV, Y_AGV), is obtained from the following formula:

X_AGV = X_label + m × sin γ
Y_AGV = Y_label + m × cos γ

in Step 1, the camera is set on the object to be positioned and calibrated, obtaining the calibration relation table between the pixel position and the actual position of each pixel in the image, specifically including: taking the image center as the reference, record the pixel position of each marker point in the image and the actual position of each marker point from the image center, pair the pixel positions with the actual positions one by one, and obtain the calibration relation table referenced to the image center.
- 2. The localization method based on machine vision according to claim 1, characterized in that the calibration process of the camera includes the following steps: Step 1: shoot a standard calibration image; Step 2: uniformly select index points on the grid in the standard calibration image, recording the pixel position of each index point in the standard calibration image and its actual position; Step 3: establish the calibration relation table from the pixel positions and actual positions of the index points.
- 3. The localization method based on machine vision according to claim 1, characterized in that the positioning label is square in outline with 2 kinds of color on it, and consists of an outer frame and multiple color blocks inside the frame; each color block has one color, the outer frame is one color, and the label content is expressed through the combination of different color blocks.
- 4. The localization method based on machine vision according to claim 3, characterized in that when analyzing the image, the image is binarized and the pixels that may belong to a positioning label are extracted according to color information; connected-domain detection is performed, and the positioning label is filtered out by connected-domain size, aspect ratio, position, and surrounding background, obtaining its position in the image; Hough-transform line detection is performed on the positioning label to obtain its direction in the image; then the content of the positioning label is read.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510263245.8A CN104835173B (en) | 2015-05-21 | 2015-05-21 | A kind of localization method based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104835173A CN104835173A (en) | 2015-08-12 |
CN104835173B true CN104835173B (en) | 2018-04-24 |
Family
ID=53813038
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510263245.8A Active CN104835173B (en) | 2015-05-21 | 2015-05-21 | A kind of localization method based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104835173B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105243665A (en) * | 2015-10-10 | 2016-01-13 | 中国科学院深圳先进技术研究院 | Robot biped positioning method and apparatus |
US10173324B2 (en) * | 2015-11-16 | 2019-01-08 | Abb Schweiz Ag | Facilitating robot positioning |
CN106225787B (en) * | 2016-07-29 | 2019-03-29 | 北方工业大学 | Unmanned aerial vehicle visual positioning method |
CN106500714B (en) * | 2016-09-22 | 2019-11-29 | 福建网龙计算机网络信息技术有限公司 | A kind of robot navigation method and system based on video |
CN106595634A (en) * | 2016-11-30 | 2017-04-26 | 深圳市有光图像科技有限公司 | Method for recognizing mobile robot by comparing images and mobile robot |
CN109308072A (en) * | 2017-07-28 | 2019-02-05 | 杭州海康机器人技术有限公司 | The Transmission Connection method and AGV of automated guided vehicle AGV |
CN108280853A (en) * | 2018-01-11 | 2018-07-13 | 深圳市易成自动驾驶技术有限公司 | Vehicle-mounted vision positioning method, device and computer readable storage medium |
WO2019154435A1 (en) * | 2018-05-31 | 2019-08-15 | 上海快仓智能科技有限公司 | Mapping method, image acquisition and processing system, and positioning method |
CN110006420B (en) * | 2018-05-31 | 2024-04-23 | 上海快仓智能科技有限公司 | Picture construction method, image acquisition and processing system and positioning method |
CN112446916B (en) * | 2019-09-02 | 2024-09-20 | 北京京东乾石科技有限公司 | Method and device for determining parking position of unmanned vehicle |
CN110658215B (en) * | 2019-09-30 | 2022-04-22 | 武汉纺织大学 | PCB automatic splicing detection method and device based on machine vision |
CN110887488A (en) * | 2019-11-18 | 2020-03-17 | 天津大学 | Unmanned rolling machine positioning method |
CN111273052A (en) * | 2020-03-03 | 2020-06-12 | 浙江省特种设备科学研究院 | Escalator handrail speed measurement method based on machine vision |
CN112902843B (en) * | 2021-02-04 | 2022-12-09 | 北京创源微致软件有限公司 | Label attaching effect detection method |
CN113554591B (en) * | 2021-06-08 | 2023-09-01 | 联宝(合肥)电子科技有限公司 | Label positioning method and device |
CN113343962B (en) * | 2021-08-09 | 2021-10-29 | 山东华力机电有限公司 | Visual perception-based multi-AGV trolley working area maximization implementation method |
CN113781566A (en) * | 2021-09-16 | 2021-12-10 | 北京清飞科技有限公司 | Positioning method and system for automatic image acquisition trolley based on high-speed camera vision |
CN113758423B (en) * | 2021-11-10 | 2022-02-15 | 风脉能源(武汉)股份有限公司 | Method for determining position of image acquisition equipment based on image inner scale |
CN115872018B (en) * | 2022-12-16 | 2024-06-25 | 河南埃尔森智能科技有限公司 | Electronic tray labeling correction system and method based on 3D visual sensing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102773862A (en) * | 2012-07-31 | 2012-11-14 | 山东大学 | Quick and accurate locating system used for indoor mobile robot and working method thereof |
CN103400373A (en) * | 2013-07-13 | 2013-11-20 | 西安科技大学 | Method for automatically identifying and positioning coordinates of image point of artificial mark in camera calibration control field |
CN103729892A (en) * | 2013-06-20 | 2014-04-16 | 深圳市金溢科技有限公司 | Vehicle positioning method and device and processor |
CN103994762A (en) * | 2014-04-21 | 2014-08-20 | 刘冰冰 | Mobile robot localization method based on data matrix code |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8488001B2 (en) * | 2008-12-10 | 2013-07-16 | Honeywell International Inc. | Semi-automatic relative calibration method for master slave camera control |
- 2015-05-21: application CN201510263245.8A filed in China; granted as patent CN104835173B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN104835173A (en) | 2015-08-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||