CN107169923A - Image positioning method, map-building method, device and robot - Google Patents
Image positioning method, map-building method, device and robot
- Publication number
- CN107169923A (application CN201710407062.8A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- image
- robot
- stroboscopic
- drawing method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/60—Rotation of a whole image or part thereof
- G06T3/608—Skewing or deskewing, e.g. by two-pass or three-pass rotation
-
- G06T3/18—
Abstract
The invention discloses an image positioning method, a map-building method, a device, and a robot, which solve the problem of deviation in conventional image positioning. The image positioning method includes: establishing a real-world coordinate system according to the distortion pattern of the video images captured by a fixed camera; and calibrating an image coordinate system against the real-world coordinate system.
Description
Technical field
The present invention relates to the technical field of image positioning, and in particular to a map-building method, an image positioning method, a device, and a robot.
Background technology
Image positioning refers to computing an object's real-world coordinates in its physical environment from its coordinates in an image, so as to meet the requirements of localization and navigation. An image positioning process generally comprises three parts: first, map building, i.e. determining the correspondence between image coordinates and real-world coordinates; second, detecting the object's image coordinates; third, computing the object's real-world coordinates from its image coordinates using that correspondence.
A prior-art map-building process proceeds as follows: a real-world coordinate system is first established in the physical environment, and the image coordinate system is then formed by scaling that real-world coordinate system down. However, because the camera distorts the images it captures, the real-world coordinates derived from an object's image coordinates at a given moment deviate from the object's true real-world coordinates at that moment, which causes positioning errors.
Summary of the invention
In view of this, embodiments of the invention provide an image positioning method, a map-building method, a device, and a robot, to solve the problem that, during image positioning, distortion in the camera's images causes a mismatch between image coordinates and real-world coordinates.
The invention provides a map-building method, including: establishing a real-world coordinate system according to the distortion pattern of the video images captured by a fixed camera; and calibrating an image coordinate system against the real-world coordinate system.
In one embodiment, the video image exhibits concentric-circle distortion, and the real-world coordinate system is a concentric-circle coordinate system.
In one embodiment, the concentric-circle coordinate system comprises equally spaced concentric-circle coordinate lines.
In one embodiment, the video image exhibits barrel distortion, and the real-world coordinate system is a grid coordinate system.
In one embodiment, the grid coordinate system comprises equally spaced straight coordinate lines along the x-axis and y-axis.
In one embodiment, calibrating the image coordinate system against the real-world coordinate system includes: capturing, with the fixed camera, a frame of video containing the real-world coordinate system; and tracing the real-world coordinate system in that frame to obtain the image coordinate system.
The invention also provides an image positioning method for a robot on which a strobe marker is fixed, including: pre-storing the image coordinate system obtained by the map-building method above; capturing the video images taken by the fixed camera; detecting the robot's position in the video images using a strobe three-frame difference method; and applying the image coordinate system to the image containing the robot's position, scanning it, and computing the robot's real-world coordinates from the scan result.
In one embodiment, the strobe marker is a marker that emits flickering visible light.
In one embodiment, the strobe marker appears alternately bright and dark in adjacent frames.
In one embodiment, the strobe marker is a strobing LED.
The invention further provides an image positioning device for a robot on which a strobe marker is fixed, the positioning device including: a storage module for pre-storing the image coordinate system obtained by the map-building method above; an acquisition module for capturing the video images taken by the fixed camera; a detection module for detecting the robot's position in the video images using the strobe three-frame difference method; and a computing module for applying the image coordinate system to the image containing the robot's position, scanning it, and computing the robot's real-world coordinates from the scan result.
In one embodiment, the strobe marker is a marker that emits flickering visible light.
In one embodiment, the strobe marker appears alternately bright and dark in adjacent frames.
In one embodiment, the strobe marker is a strobing LED.
According to the image positioning method, map-building method, device, and robot provided by the invention, establishing a real-world coordinate system that matches the distortion pattern of the fixed camera, together with the image coordinate system calibrated against it, attenuates the positioning error caused by camera image distortion and ensures the accuracy of the subsequent positioning process.
Brief description of the drawings
Fig. 1 shows the flowchart of the map-building method provided by the invention.
Fig. 2A shows the real-world coordinate system provided by one embodiment of the invention.
Fig. 2B shows the image coordinate system corresponding to the real-world coordinate system of Fig. 2A.
Fig. 2C is a schematic diagram of the robot's position in the image coordinate system of Fig. 2B.
Fig. 3A shows the real-world coordinate system provided by another embodiment of the invention.
Fig. 3B shows the image coordinate system corresponding to the real-world coordinate system of Fig. 3A.
Fig. 3C is a schematic diagram of the robot's position in the image coordinate system of Fig. 3B.
Fig. 4 shows the flowchart of the robot image positioning method provided by one embodiment of the invention.
Fig. 5 shows the structural block diagram of the robot image positioning device provided by one embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the possible embodiments of the invention. All other embodiments obtained from them by those of ordinary skill in the art without creative effort fall within the protection scope of the invention.
The invention provides a map-building method for use in an image positioning process. Fig. 1 shows the flowchart of the map-building process in the image positioning method provided by the invention. As can be seen, the map-building process 100 includes:
Step S101: establish a real-world coordinate system according to the distortion pattern of the video images captured by a fixed camera. The fixed camera here may be any of a standard lens (45°-55° field of view), a wide-angle lens (70° field of view), an ultra-wide-angle lens (90° field of view), or a fisheye lens (180° field of view); the invention places no restriction on the type of camera.
In general, the distortion pattern of the video images captured by a fixed camera falls into two classes according to the camera's mounting angle:
First class: when the fixed camera's principal optical axis points vertically downward, the captured image exhibits concentric-circle distortion; that is, taking the camera's position in the image as the centre, all coordinate points lying on the same concentric circle are distorted identically. In this case, a concentric-ring real-world coordinate system can be established.
Referring to Fig. 2A, the concentric-ring real-world coordinate system may take the projection of the fixed camera on the ground as the coordinate origin, take any two mutually perpendicular straight lines through the origin as the x-axis and y-axis, and draw equally spaced concentric-circle coordinate lines centred on the origin.
Second class: when the fixed camera's principal optical axis points obliquely downward, the captured video image exhibits barrel distortion; that is, the two end points of any line segment perpendicularly bisected by the projection of the principal axis on the image are distorted identically. In this case, a grid-type real-world coordinate system can be established.
Refering to Fig. 3 A, grid type reality coordinate system can be specifically using fixing camera projection on the ground as
The origin of coordinates, with the y-axis that is projected as of fixing camera primary optical axis, then x-axis is through the origin of coordinates and one vertical with y-axis
Straight line is determined, straight line coordinate line is drawn at equal intervals in x-axis and y-axis respectively.
Step S102: calibrate an image coordinate system against the real-world coordinate system. Calibration against the real-world coordinate system means establishing the image coordinate system according to the mapping of the real-world coordinate system in the image.
One implementation of the calibration is: capture, with the fixed camera, a frame of video containing the real-world coordinate system, then trace the real-world coordinate system in that frame to form the image coordinate system. The tracing can be done, for example, in MATLAB.
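The tracing step can be sketched as follows; the function name, the data layout, and the sample pixel positions are illustrative assumptions for this sketch, not details taken from the patent.

```python
import numpy as np

def build_image_coordinate_system(traced_lines):
    """traced_lines maps each coordinate line's real-world value (in metres)
    to the pixel points traced along that line in the captured frame."""
    return {value: np.asarray(points, dtype=float)
            for value, points in traced_lines.items()}

# Hypothetical trace: coordinate circles at 1 m and 2 m, each marked
# at a few pixel positions in the captured frame.
coord_sys = build_image_coordinate_system({
    1.0: [(320, 180), (400, 240), (320, 300), (240, 240)],
    2.0: [(320, 120), (460, 240), (320, 360), (180, 240)],
})
print(sorted(coord_sys))  # → [1.0, 2.0]
```

The stored pixel traces play the role of the image coordinate system: later scans compare a detected pixel position against them.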
In this way, the real-world coordinate system of Fig. 2A yields the image coordinate system of Fig. 2B, and correspondingly the real-world coordinate system of Fig. 3A yields the image coordinate system of Fig. 3B.
According to the map-building method provided by the invention for the image positioning process, establishing a real-world coordinate system that matches the distortion pattern of the fixed camera, together with the image coordinate system calibrated against it, attenuates the positioning error caused by image distortion and ensures the accuracy of the subsequent positioning process.
The invention also provides an image positioning method for a robot; the image coordinate system used by this method is obtained by the map-building method of Fig. 1. The method's flowchart is shown in Fig. 4.
Before the robot is positioned with this method, the fixed camera's coverage should be made to cover the plane in which the robot operates, and a strobe marker should be fixed on the robot. A strobe marker is a marker that emits flickering visible light, such as a strobing LED. The strobe marker is preferably mounted on top of the robot, so that whatever the relative position of the robot and the fixed camera (facing it, facing away from it, or at an angle to it), the camera can photograph the marker.
As can be seen from Fig. 4, the robot image positioning method 400 includes:
Step S401: pre-store the image coordinate system obtained by the map-building method of Fig. 1.
Step S402: capture the video images taken by the fixed camera.
Step S403: detect the robot's position in the video images using the strobe three-frame difference method.
The strobe three-frame difference method detects the robot's position by combining the traditional three-frame difference method with a strobe marker. The strobe marker is needed because the traditional three-frame difference method computes by subtracting adjacent frames: after subtraction, pixels whose content is identical in the two adjacent frames become dark regions, while pixels whose content differs become bright regions, which reveals the image position of the changed pixels. When detecting the robot's position, however, it must be ensured that the robot does not move across the three frames; the robot's own pixels then subtract to dark regions, and its position cannot be detected. The strobe marker on the robot is therefore used to produce the needed difference: ideally the marker appears alternately bright and dark in adjacent frames, so that the marker's position identifies the robot's position.
The computation of the strobe three-frame difference method is: (1) capture the images A, B, C taken by the monitoring camera at times t0, t1, t2 respectively; (2) difference each pair of adjacent frames, i.e. deltaBA = B - A and deltaCB = C - B; then AND the two results, i.e. resultABC = deltaBA & deltaCB. The bright region where resultABC = 1 is the position of the LED in the image, and an image with resultABC = 1 is an image of the robot's current position: when the highlighted positions in deltaBA and deltaCB coincide, the LED has not moved, and only then is the computed position accurate.
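Under the assumption of 8-bit grayscale frames and a simple brightness threshold (both assumptions of this sketch, not specified in the patent), the computation above can be written as:

```python
import numpy as np

def strobe_three_frame_diff(A, B, C, thresh=40):
    """Return the pixel position of the strobing marker, or None if absent."""
    # Difference the adjacent frame pairs: the strobing LED is bright in one
    # frame and dark in the next, so it lights up in both difference images.
    deltaBA = np.abs(B.astype(int) - A.astype(int)) > thresh
    deltaCB = np.abs(C.astype(int) - B.astype(int)) > thresh
    # AND the two masks: only pixels that changed in both pairs remain,
    # i.e. a flickering marker that stayed in place across the three frames.
    resultABC = deltaBA & deltaCB
    ys, xs = np.nonzero(resultABC)
    if xs.size == 0:
        return None
    return int(round(xs.mean())), int(round(ys.mean()))

# Synthetic 8x8 frames: an LED at (x=5, y=2) that is on, off, on.
A = np.zeros((8, 8), dtype=np.uint8); A[2, 5] = 255
B = np.zeros((8, 8), dtype=np.uint8)
C = A.copy()
print(strobe_three_frame_diff(A, B, C))  # → (5, 2)
```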
Those skilled in the art will understand that, in practical applications, computational accuracy can be improved further by adding statistics on top of the strobe three-frame difference method, for example by accumulating 8 three-frame difference results: difference computations are performed on the 8 frame triples A, B, C; B, C, D; ...; H, I, J, and the bright region that appears most often across the 8 results is taken as the region of interest, i.e. the robot's position. A threshold may of course also be set: a region is taken as the region of interest when the number of times it appears highlighted exceeds the threshold. A further advantage of adding the statistical step is that interference from other moving objects is filtered out by the statistics, so the robot's position is detected accurately.
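The statistical refinement can be sketched as a vote count over successive frame triples; the 10-frame window, the threshold values, and the synthetic moving object are assumptions of this sketch.

```python
import numpy as np

def vote_strobe_position(frames, thresh=40, min_hits=5):
    """Accumulate strobe three-frame difference masks over successive
    frame triples and keep pixels bright in at least min_hits triples."""
    votes = np.zeros(frames[0].shape, dtype=int)
    for a, b, c in zip(frames, frames[1:], frames[2:]):
        dba = np.abs(b.astype(int) - a.astype(int)) > thresh
        dcb = np.abs(c.astype(int) - b.astype(int)) > thresh
        votes += dba & dcb
    ys, xs = np.nonzero(votes >= min_hits)
    return list(zip(xs.tolist(), ys.tolist()))

# Ten 8x8 frames: a marker at (x=3, y=3) strobing every frame, plus a
# moving object that is bright at a different pixel in each frame.
frames = []
for i in range(10):
    f = np.zeros((8, 8), dtype=np.uint8)
    if i % 2 == 0:
        f[3, 3] = 255        # marker on in even frames, off in odd frames
    f[6, i % 8] = 255        # moving object never stays in one place
    frames.append(f)
print(vote_strobe_position(frames))  # → [(3, 3)]
```

Each intermediate position of the moving object collects at most one vote, so the vote threshold filters it out, while the stationary strobing marker accumulates a vote from every triple.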
Step S404: apply the image coordinate system to the image containing the robot's position, scan it, and compute the robot's real-world coordinates from the scan result.
Two specific examples of the execution of step S404 follow.
Example 1: the pre-stored image coordinate system is as shown in Fig. 2B, and the spacing of the concentric-circle coordinate lines in the corresponding real-world coordinate system is d = 1 metre. Applying the image coordinate system to the image containing the robot's position gives the image of Fig. 2C, in which point A marks the robot's position. The execution of step S404 then includes: obtain by scanning the angle between the robot's position and the x-axis of the image coordinate system, i.e. the heading angle θ, for example 30°; the heading angle in the image coordinate system is also the heading angle in the real-world coordinate system.
Scanning pixel by pixel from the origin of the image coordinate system towards the robot, count the concentric-circle coordinate lines crossed and the pixels traversed. When the scan reaches the robot's pixel, record the number n of coordinate circles counted so far, for example 5, and compute the number q of pixels traversed from the 5th coordinate circle to the robot's position, for example 15. When the count of coordinate circles reaches 6, compute the total number m of pixels traversed between the 5th and 6th coordinate circles, for example 30. The distance from the robot's position in the real-world coordinate system to the coordinate origin is then n × d + (d/m) × q, i.e. 5 × 1 + (1/30) × 15 = 5.5 metres.
In this way, the robot's coordinates in the real-world coordinate system, (5.5 × cos 30°, 5.5 × sin 30°), are obtained.
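The arithmetic of this example can be checked with a short computation; the variable names mirror the quantities n, q, m, d, and θ above.

```python
import math

d = 1.0                   # spacing of the concentric coordinate circles, metres
n, q, m = 5, 15, 30       # circles crossed; pixels past the 5th circle; pixels between the 5th and 6th
theta = math.radians(30)  # heading angle read off the scan

r = n * d + (d / m) * q   # radial distance from the origin: 5.5 m
x, y = r * math.cos(theta), r * math.sin(theta)
print(round(r, 2), round(x, 3), round(y, 3))  # → 5.5 4.763 2.75
```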
Example 2: the pre-stored image coordinate system is as shown in Fig. 3B, and the spacing of the grid coordinate lines in the corresponding real-world coordinate system is d = 1 metre. Applying the image coordinate system to the image containing the robot's position gives the image of Fig. 3C, in which point B marks the robot's position. The execution of step S404 then includes: scanning pixel by pixel, row by row, from the origin of the image coordinate system towards the robot, count the grid-line intersections crossed and the pixels traversed. When the scan reaches the robot's pixel, for example in the 1000th pixel column, record the number n_x of grid-line intersections counted so far, for example 3, and compute the number q_x of pixels traversed from the 3rd intersection to the 1000th column, for example 50. When the count of intersections reaches 4, compute the total number m_x of pixels traversed between the 3rd and 4th intersections, for example 100. The robot's x-axis coordinate in the real-world coordinate system is then n_x × d + (d/m_x) × q_x, i.e. 3 × 1 + (1/100) × 50 = 3.5 metres.
The robot's y-axis coordinate in the real-world coordinate system, n_y × d + (d/m_y) × q_y, is obtained by the same computation, where n_y is the number of grid-line intersections crossed in the y direction from the origin of the image coordinate system to the robot; q_y is the number of pixels traversed from the n_y-th intersection to the robot; and m_y is the total number of pixels scanned from the n_y-th intersection to the (n_y + 1)-th intersection.
In this way, the robot's position in the real-world coordinate system is obtained.
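The per-axis computation in this example reduces to one small formula; the y-axis counts below are hypothetical, since the patent gives example numbers only for the x-axis.

```python
def grid_axis_coordinate(n, q, m, d=1.0):
    """Real-world coordinate along one axis: n full grid cells of width d,
    plus the fraction q/m of the next cell, all counts taken from the scan."""
    return n * d + (d / m) * q

x = grid_axis_coordinate(n=3, q=50, m=100)  # values from the example: 3.5 m
y = grid_axis_coordinate(n=2, q=30, m=120)  # hypothetical y-axis counts: 2.25 m
print(x, y)  # → 3.5 2.25
```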
According to the robot image positioning method provided by the invention, an ordinary camera suffices to position the robot accurately.
Fig. 5 shows the structural block diagram of the robot image positioning device provided by one embodiment of the invention. A strobe marker is fixed on the robot. As can be seen, the positioning device 50 includes:
Storage module 51, for pre-storing the image coordinate system obtained by the map-building method of Fig. 1.
Acquisition module 52, for capturing the video images taken by the fixed camera.
Detection module 53, for detecting the robot's position in the video images using the strobe three-frame difference method.
Computing module 54, for applying the image coordinate system to the image containing the robot's position, scanning it, and computing the robot's real-world coordinates from the scan result.
It should be understood that the modules described for device 50 correspond to the steps of method 400 described with reference to Fig. 4. The operations and features described above for method 400 therefore apply equally to device 50 and the modules it contains, and the repeated content is not described again here.
The robot image positioning device provided by the invention, working together with an ordinary camera, achieves accurate positioning of the robot; it may of course also be embedded in an ordinary camera as part of that camera.
The foregoing is merely preferred embodiments of the invention and is not intended to limit it; any modification, equivalent substitution, and the like made within the spirit and principles of the invention shall fall within the scope of its protection.
Claims (11)
1. A map-building method, characterized by comprising:
establishing a real-world coordinate system according to the distortion pattern of the video images captured by a fixed camera; and
calibrating an image coordinate system against the real-world coordinate system.
2. The map-building method of claim 1, characterized in that the video image exhibits concentric-circle distortion and the real-world coordinate system is a concentric-circle coordinate system.
3. The map-building method of claim 2, characterized in that the concentric-circle coordinate system comprises equally spaced concentric-circle coordinate lines.
4. The map-building method of claim 1, characterized in that the video image exhibits barrel distortion and the real-world coordinate system is a grid coordinate system.
5. The map-building method of claim 4, characterized in that the grid coordinate system comprises equally spaced straight coordinate lines along the x-axis and y-axis.
6. The map-building method of claim 1, characterized in that calibrating the image coordinate system against the real-world coordinate system comprises:
capturing, with the fixed camera, a frame of video containing the real-world coordinate system; and
tracing the real-world coordinate system in the frame containing it to obtain the image coordinate system.
7. An image positioning method for a robot, characterized by comprising:
pre-storing the image coordinate system obtained by the map-building method of any one of claims 1-6;
capturing the video images taken by the fixed camera;
detecting the robot's position in the video images using a strobe three-frame difference method; and
applying the image coordinate system to the image containing the robot's position, scanning it, and computing the robot's real-world coordinates from the scan result.
8. The image positioning method of claim 7, characterized in that a strobe marker is fixed on the robot.
9. The image positioning method of claim 8, characterized in that the strobe marker appears alternately bright and dark in adjacent frames.
10. An image positioning device for a robot on which a strobe marker is fixed, characterized in that the positioning device comprises:
a storage module for pre-storing the image coordinate system obtained by the map-building method of any one of claims 1-6;
an acquisition module for capturing the video images taken by the fixed camera;
a detection module for detecting the robot's position in the video images using a strobe three-frame difference method; and
a computing module for applying the image coordinate system to the image containing the robot's position, scanning it, and computing the robot's real-world coordinates from the scan result.
11. The image positioning device of claim 10, characterized in that the strobe marker is a strobing LED.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710407062.8A CN107169923A (en) | 2017-06-01 | 2017-06-01 | Image positioning method, map-building method, device and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710407062.8A CN107169923A (en) | 2017-06-01 | 2017-06-01 | Image positioning method, map-building method, device and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107169923A true CN107169923A (en) | 2017-09-15 |
Family
ID=59824197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710407062.8A Pending CN107169923A (en) | 2017-06-01 | 2017-06-01 | Image positioning method, map-building method, device and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169923A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110473256A (en) * | 2019-07-18 | 2019-11-19 | 中国第一汽车股份有限公司 | A kind of vehicle positioning method and system |
CN111504270A (en) * | 2020-06-16 | 2020-08-07 | 常州市盈能电气有限公司 | Robot positioning device |
CN111986553A (en) * | 2020-08-19 | 2020-11-24 | 炬星科技(深圳)有限公司 | Method, device and storage medium for map association based on semantic label |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102141398A (en) * | 2010-12-28 | 2011-08-03 | 北京航空航天大学 | Monocular vision-based method for measuring positions and postures of multiple robots |
CN103234537A (en) * | 2013-01-24 | 2013-08-07 | 上海市上海中学 | Method for positioning automatic-navigation machinery vehicle in warehouse |
CN103279949A (en) * | 2013-05-09 | 2013-09-04 | 浙江大学 | Operation method of self-positioning robot-based multi-camera parameter automatic calibration system |
CN103440643A (en) * | 2013-08-07 | 2013-12-11 | 河南科技大学 | Single-linear-array camera calibration method |
CN104776832A (en) * | 2015-04-16 | 2015-07-15 | 浪潮软件集团有限公司 | Method, set top box and system for positioning objects in space |
- 2017-06-01: application CN201710407062.8A filed; published as CN107169923A (en); status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102141398A (en) * | 2010-12-28 | 2011-08-03 | 北京航空航天大学 | Monocular vision-based method for measuring positions and postures of multiple robots |
CN103234537A (en) * | 2013-01-24 | 2013-08-07 | 上海市上海中学 | Method for positioning automatic-navigation machinery vehicle in warehouse |
CN103279949A (en) * | 2013-05-09 | 2013-09-04 | 浙江大学 | Operation method of self-positioning robot-based multi-camera parameter automatic calibration system |
CN103440643A (en) * | 2013-08-07 | 2013-12-11 | 河南科技大学 | Single-linear-array camera calibration method |
CN104776832A (en) * | 2015-04-16 | 2015-07-15 | 浪潮软件集团有限公司 | Method, set top box and system for positioning objects in space |
Non-Patent Citations (2)
Title |
---|
Lu Quan et al.: "Real-time distortion correction method based on on-site concentric-circle calibration", Opto-Electronic Engineering *
Tian Yongtao et al.: "A simple method for determining the image distortion center based on concentric circles", Computer Engineering *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110473256A (en) * | 2019-07-18 | 2019-11-19 | 中国第一汽车股份有限公司 | A kind of vehicle positioning method and system |
CN111504270A (en) * | 2020-06-16 | 2020-08-07 | 常州市盈能电气有限公司 | Robot positioning device |
CN111986553A (en) * | 2020-08-19 | 2020-11-24 | 炬星科技(深圳)有限公司 | Method, device and storage medium for map association based on semantic label |
CN111986553B (en) * | 2020-08-19 | 2022-07-26 | 炬星科技(深圳)有限公司 | Method, device and storage medium for map association based on semantic label |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103517041B (en) | Based on real time panoramic method for supervising and the device of polyphaser rotation sweep | |
CN104142157B (en) | A kind of scaling method, device and equipment | |
CA2078556C (en) | Computer assisted video surveying and method therefor | |
JP5586765B2 (en) | Camera calibration result verification apparatus and method | |
CN104299215B (en) | The image split-joint method that a kind of characteristic point is demarcated and matched | |
CN103033132B (en) | Plane survey method and device based on monocular vision | |
CN106981081A (en) | A kind of degree of plainness for wall surface detection method based on extraction of depth information | |
CN103852060B (en) | A kind of based on single visual visible images distance-finding method felt | |
CN108734744A (en) | A kind of remote big field-of-view binocular scaling method based on total powerstation | |
GB2429057A (en) | A transparent camera calibration tool employing a colour filter | |
CN107169923A (en) | A kind of image position method, device for building drawing method and robot | |
CN104867113B (en) | The method and system of perspective image distortion correction | |
JP2008140370A (en) | Stereo camera intrusion detection system | |
US20130113897A1 (en) | Process and arrangement for determining the position of a measuring point in geometrical space | |
CN102609941A (en) | Three-dimensional registering method based on ToF (Time-of-Flight) depth camera | |
CN107592922A (en) | Method for implementing operation to ground | |
CN104361603B (en) | Gun camera image target designating method and system | |
CN101002069A (en) | Method of preparing a composite image with non-uniform resolution | |
JP5079547B2 (en) | Camera calibration apparatus and camera calibration method | |
CN104463899A (en) | Target object detecting and monitoring method and device | |
CN106600647A (en) | Binocular visual multi-line projection structured light calibration method | |
CN106803913A (en) | A kind of detection method and its device of the action that taken the floor for Auto-Sensing student | |
CN107295230A (en) | A kind of miniature object movement detection device and method based on thermal infrared imager | |
CN110146030A (en) | Side slope surface DEFORMATION MONITORING SYSTEM and method based on gridiron pattern notation | |
CN206611521U (en) | A kind of vehicle environment identifying system and omni-directional visual module based on multisensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170915 |