CN115546318B - Automatic high-speed trajectory calibration method - Google Patents
- Publication number
- CN115546318B (patent) · CN202211473467.9A (application)
- Authority
- CN
- China
- Prior art keywords
- camera
- coordinate system
- point
- coordinates
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/60—Rotation of a whole image or part thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
An automatic high-speed trajectory calibration method comprises the following steps: S1, fixing a sensor on a load platform, solving the binocular camera model, reconstructing three-dimensional point coordinates in the binocular spatial coordinate system, and establishing a camera geometric model; S2, fixing a laser sight to the load platform and collecting three-dimensional point coordinates of the laser ray at a plurality of different depths; S3, fitting a spatial line equation of the laser ray based on the RANSAC algorithm; S4, collecting the three-dimensional point coordinates of the image aiming point and the laser aiming point; S5, calculating the rotation angles of the horizontal and vertical axes of the load platform from the three-dimensional coordinates; and S6, transmitting the rotation angles to the load platform to complete automatic tracking and aiming of the launching device. The method reduces the skill required of launch operators, locks the target point position accurately and quickly even in harsh scenes, and enables tasks to be completed efficiently.
Description
Technical Field
The invention relates to the technical field of robots, in particular to an automatic high-speed trajectory calibration method.
Background
In daily life, beyond the scenery visible directly in front of us, we often need to see things that are far away, but naked-eye observation has limits: distant objects are hard to make out. For simple observation, a telescope can magnify the object and bring distant scenes closer. For launching tasks, however, a target must be aimed at precisely, so auxiliary tools such as a sighting telescope are needed. A sighting telescope is an auxiliary sighting device generally mounted on a launching device in military defence; both optical and night-vision sighting telescopes have become indispensable military sighting equipment. In military training and other preparatory tasks, all kinds of launching equipment require aiming devices, which are critical for accurate aiming and efficient execution of missions. How to aim at and lock a target point position accurately and efficiently is therefore an urgent problem to be solved.
Manually aiming at a distant target through a sighting telescope is the most common and convenient traditional aiming mode, and in specific scenes a skilled operator can quickly finish a launching task this way. However, each operator masters aiming skills to a different degree, experience varies, and some personnel do not necessarily hold the corresponding professional certificates. As launching devices are continually updated, more precise and complex equipment places ever higher demands on professional operators, and daily aiming tasks face the following difficulties:
(1) Manual aiming makes high demands on the operator's physical condition; in tasks with harsh environments it is difficult to hold the aiming posture stable for a long time, which degrades aiming precision;
(2) When the target point changes, the operator must re-aim and re-estimate, so the task cannot be finished efficiently and quickly.
Disclosure of Invention
To address the demands that aiming places on operators and on the environment described in the background art, and to quickly complete trajectory estimation in space, the invention provides an automatic high-speed trajectory calibration method, comprising the following specific steps:
S1, fixing a sensor on a load platform, solving the binocular camera model, reconstructing three-dimensional point coordinates in the binocular spatial coordinate system, and establishing a camera geometric model;
S2, fixing a laser sight to the load platform and collecting three-dimensional point coordinates of the laser ray at a plurality of different depths;
S3, fitting a spatial line equation of the laser ray based on the RANSAC algorithm;
S4, collecting the three-dimensional point coordinates of the image aiming point and the laser aiming point;
S5, calculating the rotation angles of the horizontal and vertical axes of the load platform from the three-dimensional coordinates;
and S6, transmitting the rotation angles to the load platform to complete automatic tracking and aiming of the launching device.
Specifically, step S1 is as follows:
S11, obtaining the geometric model of a monocular camera through coordinate-system conversion, and calculating the camera intrinsic parameters;
S12, obtaining the binocular camera geometric model from the camera intrinsic parameters; the binocular geometric model also needs to take into account the extrinsic parameters, i.e., the rotation-translation relationship between the left and right cameras.
Specifically, step S11 is as follows:
S111, transforming the world coordinate system $O_w\text{-}X_wY_wZ_w$ into the camera coordinate system $O_c\text{-}X_cY_cZ_c$ by a rigid-body transformation. The world coordinate system is the reference coordinate system for the position of a real object in three-dimensional space; the camera coordinate system is used as the spatial coordinate system, with the left camera optical center as the origin of the world coordinates. The camera coordinate system takes the camera's own optical center as the origin, with the Z axis parallel to the camera optical axis. The conversion is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

S112, projecting the camera coordinate system $O_c\text{-}X_cY_cZ_c$ into the image coordinate system $o\text{-}xy$ by perspective projection. The image coordinate system is established with reference to the two-dimensional image captured by the camera and indicates the position of the measured object in the image; it comprises continuous image coordinates and/or spatial image coordinates. Its origin lies at the intersection of the camera optical axis and the imaging plane, in mm. With focal length $f$, the perspective projection is:

$$x = f\,\frac{X_c}{Z_c},\qquad y = f\,\frac{Y_c}{Z_c}$$

S113, discretizing the image coordinate system $o\text{-}xy$ into the pixel coordinate system $o_p\text{-}uv$. The pixel coordinate system is a discrete image coordinate system with its origin at the upper-left corner of the image, in pixels. The conversion is:

$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0$$

where $d_x$ is the actual width of a unit pixel on the camera CCD in the x-axis direction and $d_y$ is the actual width of a unit pixel in the y-axis direction; the pixel coordinates correspond to the discretization of the x and y axes. In homogeneous coordinates the above formula is written in matrix form as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The inverse relationship can be written as:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} d_x & 0 & -u_0\,d_x \\ 0 & d_y & -v_0\,d_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
Specifically, the extrinsic parameter calculation method is as follows:
Assume that the world coordinate system coincides with the left camera coordinate system. Let $O_l\text{-}X_lY_lZ_l$ denote the left camera coordinate system with image coordinate system $o_l\text{-}x_ly_l$ and focal length $f_l$, and let $O_r\text{-}X_rY_rZ_r$ denote the right camera coordinate system with image coordinate system $o_r\text{-}x_ry_r$ and focal length $f_r$. Let P be any point in space, with projection points $p_l(x_l, y_l)$ and $p_r(x_r, y_r)$ on the left and right cameras respectively.

From the camera perspective transformation model, the following relations can be obtained:

$$x_l = f_l\,\frac{X}{Z},\qquad y_l = f_l\,\frac{Y}{Z} \tag{1}$$

$$x_r = f_r\,\frac{X_r}{Z_r},\qquad y_r = f_r\,\frac{Y_r}{Z_r} \tag{2}$$

Assume the rotation matrix is $R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}$ and the translation vector is $T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^{T}$; then the transformation relationship is:

$$\begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \tag{3}$$

From (1), (2), and (3), the correspondence between the pixel points of the left and right cameras is:

$$x_r = f_r\,\frac{r_1 X + r_2 Y + r_3 Z + t_x}{r_7 X + r_8 Y + r_9 Z + t_z},\qquad y_r = f_r\,\frac{r_4 X + r_5 Y + r_6 Z + t_y}{r_7 X + r_8 Y + r_9 Z + t_z}$$

Thus the three-dimensional spatial point $(X, Y, Z)$ in the world coordinate system can be expressed as formula (4), realizing the world-coordinate calculation of any point in the cameras' common field of view:

$$X = \frac{Z\,x_l}{f_l},\qquad Y = \frac{Z\,y_l}{f_l},\qquad Z = \frac{f_l\,(f_r\,t_x - x_r\,t_z)}{x_r\,(r_7 x_l + r_8 y_l + r_9 f_l) - f_r\,(r_1 x_l + r_2 y_l + r_3 f_l)} \tag{4}$$
Specifically, step S3 specifically includes:
S31, randomly selecting the minimum set of sample point coordinates needed to estimate the spatial line model (for line fitting, the minimum data set is 2 point coordinates), and initializing the number of points within the error range, i.e., the optimal inner point count N, the model threshold t, and the optimal iteration count K;
s32, calculating a spatial straight line model by using the minimum data set;
s33, substituting all data into the space straight line model, counting the number of inner points and coordinates thereof by calculating the distance from the points to the straight line, and taking the coordinates as an optimal inner point coordinate set;
S34, judging whether the current iteration count ck is greater than or equal to the optimal iteration count K; if so, entering step S36 directly, and if not, entering step S35;
S35, judging whether the current inner point count CN is greater than the initialized optimal inner point count N; if so, assigning CN to N as the new optimal inner point count and returning to step S31, and if not, entering step S32;
and S36, substituting the optimal inner point coordinate set into a least square fitting algorithm, and solving an optimal space linear equation.
Specifically, step S4 is as follows: measurement is carried out within the image vision system and the image aiming point coordinate P_view is collected; then, based on the optimal spatial line model from step S3, the depth value z of the current target point is substituted to obtain the laser aiming point P_laser at that depth; at this moment, the two three-dimensional points P_view and P_laser in the world coordinate system of the binocular system are obtained, and the horizontal and vertical angles between the two points in space are calculated through the spatial coordinate conversion relationship.
Specifically, step S5 is as follows:
The laser aiming point is rotated to the target aiming point in two steps, splitting the angle into a transverse component δH and a vertical component δV; the distance between the aiming points P_view and P_laser can be obtained by binocular solving and is set as the depth z;
The rotation angles of the load platform can then be calculated in two steps:

$$\delta H = \arctan\frac{x_{view} - x_{laser}}{z},\qquad \delta V = \arctan\frac{y_{view} - y_{laser}}{z}$$
the invention has the beneficial effects that: the method and the device can reduce the requirement on transmitting personnel, can accurately and quickly lock the target point position in a severe scene, and realize efficient and quick completion of tasks.
Drawings
Fig. 1 and fig. 2 are structural diagrams of an automatic high-speed ballistic calibration method according to the present invention.
Fig. 3 is a connection diagram of unit modules of an automatic high-speed trajectory calibration method according to the present invention.
Fig. 4 is a schematic diagram of laser sight index point acquisition.
FIG. 5 is a diagram of a calibration point acquisition and detection process.
Fig. 6 is a three-dimensional index point acquisition visualization.
Fig. 7 is a RANSAC spatial line fitting flowchart.
FIG. 8 is a best interior plot diagram for fitting a straight line after RANSAC screening.
FIG. 9 is a schematic diagram illustrating the calculation of the angle between the target point and the aiming point.
Detailed Description
As shown in fig. 1-3, an automatic high-speed ballistic calibration method includes the following steps:
S1, fixing a sensor on a load platform, solving the binocular camera model, reconstructing three-dimensional point coordinates in the binocular spatial coordinate system, and establishing a camera geometric model; the specific steps are as follows:
S11, obtaining the geometric model of a monocular camera through coordinate-system conversion, and calculating the camera intrinsic parameters;
S111, transforming the world coordinate system $O_w\text{-}X_wY_wZ_w$ into the camera coordinate system $O_c\text{-}X_cY_cZ_c$ by a rigid-body transformation. The world coordinate system is the reference coordinate system for the position of a real object in three-dimensional space; the camera coordinate system is used as the spatial coordinate system, and for convenience of calculation the left camera optical center is used as the origin of the world coordinates. The camera coordinate system takes the camera's own optical center as the origin, with the Z axis parallel to the camera optical axis. The conversion is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

S112, projecting the camera coordinate system $O_c\text{-}X_cY_cZ_c$ into the image coordinate system $o\text{-}xy$ by perspective projection. The image coordinate system is established with reference to the two-dimensional image captured by the camera and indicates the position of the measured object in the image; it comprises continuous image coordinates and/or spatial image coordinates. Its origin lies at the intersection of the camera optical axis and the imaging plane, in mm. With focal length $f$, the perspective projection is:

$$x = f\,\frac{X_c}{Z_c},\qquad y = f\,\frac{Y_c}{Z_c}$$

S113, discretizing the image coordinate system $o\text{-}xy$ into the pixel coordinate system $o_p\text{-}uv$. The pixel coordinate system is a discrete image coordinate system with its origin at the upper-left corner of the image, in pixels. The conversion is:

$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0$$

where $d_x$ is the actual width of a unit pixel on the camera CCD in the x-axis direction and $d_y$ is the actual width of a unit pixel in the y-axis direction (the size of a single CCD or CMOS sensor element); the pixel coordinates correspond to the discretization of the x and y axes. In homogeneous coordinates the above formula is written in matrix form as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The inverse relationship can be written as:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} d_x & 0 & -u_0\,d_x \\ 0 & d_y & -v_0\,d_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
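The monocular chain of steps S111–S113 (rigid transform, perspective projection, discretization) can be sketched as follows; the numeric values of f, d_x, d_y, the principal point, R, and t are illustrative assumptions, not calibrated parameters from the patent:

```python
import numpy as np

# Illustrative intrinsic/extrinsic values (assumptions, not calibration results).
f = 8.0                  # focal length in mm
dx, dy = 0.005, 0.005    # physical width of one pixel on the CCD (mm)
u0, v0 = 640.0, 512.0    # principal point (pixels)
R = np.eye(3)            # world -> camera rotation (identity: frames coincide)
t = np.zeros(3)          # world -> camera translation

def world_to_pixel(Pw):
    """Apply S111 (rigid transform), S112 (perspective), S113 (discretization)."""
    Xc, Yc, Zc = R @ Pw + t            # S111: camera coordinates
    x, y = f * Xc / Zc, f * Yc / Zc    # S112: image coordinates (mm)
    u, v = x / dx + u0, y / dy + v0    # S113: pixel coordinates
    return np.array([u, v])

uv = world_to_pixel(np.array([0.1, 0.05, 2.0]))  # point 2 m in front of the camera
```

Inverting S113 with the matrix above recovers (x, y) from (u, v), which is the first step of the binocular reconstruction that follows.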
S12, obtaining the binocular camera geometric model from the camera intrinsic parameters. The binocular geometric model also needs to consider the extrinsic parameters, namely the rotation-translation relationship between the left and right cameras, calculated as follows:
Assume that the world coordinate system coincides with the left camera coordinate system. Let $O_l\text{-}X_lY_lZ_l$ denote the left camera coordinate system with image coordinate system $o_l\text{-}x_ly_l$ and focal length $f_l$, and let $O_r\text{-}X_rY_rZ_r$ denote the right camera coordinate system with image coordinate system $o_r\text{-}x_ry_r$ and focal length $f_r$. Let P be any point in space, with projection points $p_l(x_l, y_l)$ and $p_r(x_r, y_r)$ on the left and right cameras respectively; the binocular camera geometric model is shown in fig. 3.

From the camera perspective transformation model, the following relations can be obtained:

$$x_l = f_l\,\frac{X}{Z},\qquad y_l = f_l\,\frac{Y}{Z} \tag{1}$$

$$x_r = f_r\,\frac{X_r}{Z_r},\qquad y_r = f_r\,\frac{Y_r}{Z_r} \tag{2}$$

Assume the rotation matrix is $R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}$ and the translation vector is $T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^{T}$; then the transformation relationship is:

$$\begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \tag{3}$$

From (1), (2), and (3), the correspondence between the pixel points of the left and right cameras is:

$$x_r = f_r\,\frac{r_1 X + r_2 Y + r_3 Z + t_x}{r_7 X + r_8 Y + r_9 Z + t_z},\qquad y_r = f_r\,\frac{r_4 X + r_5 Y + r_6 Z + t_y}{r_7 X + r_8 Y + r_9 Z + t_z}$$

Thus the three-dimensional spatial point $(X, Y, Z)$ in the world coordinate system can be expressed as formula (4), realizing the world-coordinate calculation of any point in the cameras' common field of view:

$$X = \frac{Z\,x_l}{f_l},\qquad Y = \frac{Z\,y_l}{f_l},\qquad Z = \frac{f_l\,(f_r\,t_x - x_r\,t_z)}{x_r\,(r_7 x_l + r_8 y_l + r_9 f_l) - f_r\,(r_1 x_l + r_2 y_l + r_3 f_l)} \tag{4}$$
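A minimal sketch of the triangulation in formula (4): given the left/right image projections of a point and calibrated values of f_l, f_r, R, and T (the numbers below are illustrative assumptions), it returns the world coordinates in the left-camera frame.

```python
import numpy as np

def triangulate(xl, yl, xr, fl, fr, R, T):
    """Formula (4): world point (X, Y, Z) in the left-camera (world) frame."""
    r1, r2, r3 = R[0]          # first row of the rotation matrix
    r7, r8, r9 = R[2]          # third row of the rotation matrix
    tx, tz = T[0], T[2]
    Z = fl * (fr * tx - xr * tz) / (
        xr * (r7 * xl + r8 * yl + r9 * fl) - fr * (r1 * xl + r2 * yl + r3 * fl))
    return np.array([Z * xl / fl, Z * yl / fl, Z])

# Rectified sanity check: R = I, T = (-b, 0, 0) reduces (4) to Z = f*b/(xl - xr).
P = triangulate(0.8, 0.4, 0.4, 8.0, 8.0, np.eye(3), np.array([-0.1, 0.0, 0.0]))
```

The rectified case reproducing the familiar disparity relation is a quick check on the signs in (4).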
S2, fixing the laser sight to the load platform and collecting three-dimensional point coordinates of the laser ray at a plurality of different depths; the specific steps are as follows:
As shown in fig. 4, the laser sight is fixed to the load platform and the platform angle is adjusted so that the emitted laser beam falls within reach of the calibration operator. The laser spot is then imaged on a receiving whiteboard (or white cardboard), and the three-dimensional point coordinates of the laser spots P1, P2, …, Pn at different distances are acquired by the binocular camera. In this example, the laser point data acquisition and detection process is shown in fig. 5 (each pair of left and right images is one group of spatial points; the black positions are outliers), and the obtained laser-sight calibration points are visualized as shown in fig. 6: the series of points obtained in three-dimensional space lie essentially on the same spatial straight line.
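The per-frame spot detection in this acquisition step can be sketched as a brightness-centroid search in each image of the stereo pair; the threshold and the synthetic image below are illustrative assumptions (the patent does not specify the detector), and the resulting left/right pixel pair would then be triangulated into a 3D point.

```python
import numpy as np

def spot_centroid(img, thresh=200):
    """Sub-pixel (u, v) centroid of pixels at or above the brightness threshold."""
    ys, xs = np.nonzero(img >= thresh)
    if xs.size == 0:
        return None                  # no spot visible: board out of view
    return xs.mean(), ys.mean()      # centroid in pixel coordinates (u, v)

left = np.zeros((480, 640), np.uint8)
left[100:103, 200:203] = 255         # synthetic 3x3 laser spot for the sketch
u, v = spot_centroid(left)
```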
S3, fitting the spatial line equation of the laser ray based on the RANSAC algorithm; the specific steps are as follows:
Assuming that some samples in the data set carry large noise, the RANSAC algorithm divides the whole sample data set into correct data (inliers, sample data the model can describe) and abnormal data (outliers, sample data far from the normal range that cannot fit the mathematical model), which may arise from erroneous measurements, erroneous assumptions, erroneous calculations, and the like. Based on the RANSAC algorithm, the spatial line equation of the laser ray is fitted by repeatedly selecting subsets of the data to estimate a model, iterating until a model considered good enough is estimated. The specific implementation is shown in fig. 7 and divided into the following steps:
S31, randomly selecting the minimum set of sample point coordinates needed to estimate the spatial line model (for line fitting, the minimum data set is 2 point coordinates), and initializing the number of points within the error range, i.e., the optimal inner point count N, the model threshold t, and the optimal iteration count K;
s32, calculating a spatial straight line model by using the minimum data set;
s33, substituting all data into the space straight line model, counting the number of inner points and coordinates thereof by calculating the distance from the points to the straight line, and taking the coordinates as an optimal inner point coordinate set;
S34, judging whether the current iteration count ck is greater than or equal to the optimal iteration count K; if so, entering step S36 directly, and if not, entering step S35;
S35, judging whether the current inner point count CN is greater than the initialized optimal inner point count N; if so, assigning CN to N as the new optimal inner point count and returning to step S31, and if not, entering step S32;
and S36, substituting the optimal inner point coordinate set into a least square fitting algorithm, and solving an optimal space linear equation.
Fig. 8 shows the three-dimensional visualization effect of the interior points screened out from the collection space points by the RANSAC algorithm.
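Steps S31–S36 can be sketched as follows; the threshold t, iteration count K, and synthetic data are illustrative assumptions, and the least-squares refit of step S36 is realized as the principal direction (via SVD) of the best inner-point set.

```python
import numpy as np

def point_line_dist(P, p0, d):
    """Perpendicular distance of each row of P to the line p0 + s*d (d unit)."""
    v = P - p0
    return np.linalg.norm(v - np.outer(v @ d, d), axis=1)

def ransac_line(P, t=0.01, K=200, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), bool)
    for _ in range(K):                                  # S34: up to K iterations
        a, b = P[rng.choice(len(P), 2, replace=False)]  # S31: minimal set, 2 points
        if np.allclose(a, b):
            continue
        d = (b - a) / np.linalg.norm(b - a)             # S32: candidate line model
        inliers = point_line_dist(P, a, d) < t          # S33: inner points by distance
        if inliers.sum() > best.sum():                  # S35: keep the best inner-point set
            best = inliers
    Q = P[best]
    p0 = Q.mean(axis=0)                                 # S36: least-squares refit:
    d = np.linalg.svd(Q - p0)[2][0]                     #      principal direction via SVD
    return p0, d, best

# Synthetic check: 30 exact line points plus 5 far-away outliers.
d_true = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)
pts = np.vstack([np.outer(np.linspace(0, 1, 30), d_true),
                 np.random.default_rng(1).uniform(5, 6, (5, 3))])
p0, d_est, inliers = ransac_line(pts)
```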
S4, collecting the three-dimensional point coordinates of the image aiming point and the laser aiming point. Specifically, measurement is carried out within the image vision system and the image aiming point coordinate P_view is collected. Then, based on the optimal spatial line model from step S3, the depth value z of the current target point is substituted to obtain the laser aiming point P_laser at that depth. At this moment, the two three-dimensional points P_view and P_laser in the world coordinate system of the binocular system are obtained, and the horizontal and vertical angles between the two points in space are calculated through the spatial coordinate conversion relationship.
S5, calculating the rotation angles of the horizontal and vertical axes of the load platform through the three-dimensional coordinates;
With the image aiming point P_view and the laser aiming point P_laser as in fig. 9, the laser aiming point is now rotated to the target aiming point (the real position to be aimed at) in two steps, the angle being divided into a transverse component δH and a component δV perpendicular to it. The distance between P_view and P_laser can be obtained by binocular solving (set as the depth z); the calculation is illustrated in fig. 9.
The rotation angles of the load platform can be calculated in two steps:

$$\delta H = \arctan\frac{x_{view} - x_{laser}}{z},\qquad \delta V = \arctan\frac{y_{view} - y_{laser}}{z}$$
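Given P_view, P_laser, and the depth z from the binocular solution, step S5 reduces to two arctangents; this sketch assumes the transverse and vertical offsets are the x and y components of the two points in the world frame (an illustrative convention, not stated explicitly in the patent).

```python
import numpy as np

def platform_angles(P_view, P_laser, z):
    """Two-step rotation of the load platform: transverse dH, vertical dV (degrees)."""
    dH = np.degrees(np.arctan2(P_view[0] - P_laser[0], z))  # horizontal correction
    dV = np.degrees(np.arctan2(P_view[1] - P_laser[1], z))  # vertical correction
    return dH, dV

dH, dV = platform_angles(np.array([0.5, 0.0, 5.0]),
                         np.array([0.0, 0.0, 5.0]), z=5.0)
```

The two angles are sent to the platform axes in sequence, which is the hand-off to step S6.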
and S6, transmitting the rotation angle of the load platform to complete automatic tracking and aiming of the transmitting device.
The above description covers only the preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or change made by a person skilled in the art according to the technical solution and inventive concept of the present invention, within the technical scope disclosed by the present invention, shall fall within the scope of protection of the present invention.
Claims (7)
1. An automatic high-speed trajectory calibration method, characterized by comprising the following steps:
S1, fixing a sensor on a load platform, solving the binocular camera model, reconstructing three-dimensional point coordinates in the binocular spatial coordinate system, and establishing a camera geometric model;
S2, fixing a laser sight to the load platform and collecting three-dimensional point coordinates of the laser ray at a plurality of different depths;
S3, fitting a spatial line equation of the laser ray based on the RANSAC algorithm;
S4, collecting the three-dimensional point coordinates of the image aiming point and the laser aiming point;
S5, calculating the rotation angles of the horizontal and vertical axes of the load platform from the three-dimensional coordinates;
and S6, transmitting the rotation angles to the load platform to complete automatic tracking and aiming of the launching device.
2. The automatic high-speed trajectory calibration method according to claim 1, wherein step S1 is specifically as follows:
S11, obtaining the geometric model of a monocular camera through coordinate-system conversion, and calculating the camera intrinsic parameters;
S12, obtaining the binocular camera geometric model from the camera intrinsic parameters; the binocular geometric model also needs to take into account the extrinsic parameters, i.e., the rotation-translation relationship between the left and right cameras.
3. The automatic high-speed trajectory calibration method according to claim 2, wherein step S11 is specifically as follows:
S111, transforming the world coordinate system $O_w\text{-}X_wY_wZ_w$ into the camera coordinate system $O_c\text{-}X_cY_cZ_c$ by a rigid-body transformation, wherein the world coordinate system is the reference coordinate system for the position of a real object in three-dimensional space, the camera coordinate system is used as the spatial coordinate system with the left camera optical center as the origin of the world coordinates, and the camera coordinate system takes the camera's own optical center as the origin with the Z axis parallel to the camera optical axis; the conversion is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

S112, projecting the camera coordinate system $O_c\text{-}X_cY_cZ_c$ into the image coordinate system $o\text{-}xy$ by perspective projection, wherein the image coordinate system is established with reference to the two-dimensional image captured by the camera and indicates the position of the measured object in the image, comprising continuous image coordinates and/or spatial image coordinates, and its origin lies at the intersection of the camera optical axis and the imaging plane, in mm; with focal length $f$, the perspective projection is:

$$x = f\,\frac{X_c}{Z_c},\qquad y = f\,\frac{Y_c}{Z_c}$$

S113, discretizing the image coordinate system $o\text{-}xy$ into the pixel coordinate system $o_p\text{-}uv$, wherein the pixel coordinate system is a discrete image coordinate system with its origin at the upper-left corner of the image, in pixels; the conversion is:

$$u = \frac{x}{d_x} + u_0,\qquad v = \frac{y}{d_y} + v_0$$

wherein $d_x$ represents the actual width of a unit pixel on the camera CCD in the x-axis direction and $d_y$ represents the actual width of a unit pixel in the y-axis direction, the pixel coordinates corresponding to the discretization of the x and y axes; in homogeneous coordinates the above formula is written in matrix form as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The inverse relationship can be written as:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} d_x & 0 & -u_0\,d_x \\ 0 & d_y & -v_0\,d_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
4. The automatic high-speed trajectory calibration method according to claim 1, wherein the extrinsic parameter calculation method is as follows:
assume that the world coordinate system coincides with the left camera coordinate system, let $O_l\text{-}X_lY_lZ_l$ denote the left camera coordinate system with image coordinate system $o_l\text{-}x_ly_l$ and focal length $f_l$, let $O_r\text{-}X_rY_rZ_r$ denote the right camera coordinate system with image coordinate system $o_r\text{-}x_ry_r$ and focal length $f_r$, and let P be any point in space with projection points $p_l(x_l, y_l)$ and $p_r(x_r, y_r)$ on the left and right cameras respectively;

from the camera perspective transformation model, the following relations can be obtained:

$$x_l = f_l\,\frac{X}{Z},\qquad y_l = f_l\,\frac{Y}{Z} \tag{1}$$

$$x_r = f_r\,\frac{X_r}{Z_r},\qquad y_r = f_r\,\frac{Y_r}{Z_r} \tag{2}$$

assume the rotation matrix is $R = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix}$ and the translation vector is $T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^{T}$; then the transformation relationship is:

$$\begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \tag{3}$$

from (1), (2), and (3), the correspondence between the pixel points of the left and right cameras is:

$$x_r = f_r\,\frac{r_1 X + r_2 Y + r_3 Z + t_x}{r_7 X + r_8 Y + r_9 Z + t_z},\qquad y_r = f_r\,\frac{r_4 X + r_5 Y + r_6 Z + t_y}{r_7 X + r_8 Y + r_9 Z + t_z}$$

thus the three-dimensional spatial point $(X, Y, Z)$ in the world coordinate system can be expressed as formula (4), realizing the world-coordinate calculation of any point in the cameras' common field of view:

$$X = \frac{Z\,x_l}{f_l},\qquad Y = \frac{Z\,y_l}{f_l},\qquad Z = \frac{f_l\,(f_r\,t_x - x_r\,t_z)}{x_r\,(r_7 x_l + r_8 y_l + r_9 f_l) - f_r\,(r_1 x_l + r_2 y_l + r_3 f_l)} \tag{4}$$
5. The automatic high-speed trajectory calibration method according to claim 1, wherein step S3 specifically comprises:
S31, randomly selecting the minimum set of sample point coordinates needed to estimate the spatial line model (for line fitting, the minimum data set is 2 point coordinates), and initializing the number of points within the error range, i.e., the optimal inner point count N, the model threshold t, and the optimal iteration count K;
s32, calculating a spatial straight line model by using the minimum data set;
s33, substituting all data into the space straight line model, counting the number of inner points and coordinates thereof by calculating the distance from the points to the straight line, and taking the coordinates as an optimal inner point coordinate set;
S34, judging whether the current iteration count ck is greater than or equal to the optimal iteration count K; if so, entering step S36 directly, and if not, entering step S35;
S35, judging whether the current inner point count CN is greater than the initialized optimal inner point count N; if so, assigning CN to N as the new optimal inner point count and returning to step S31, and if not, entering step S32;
and S36, substituting the optimal inner point coordinate set into a least square fitting algorithm, and solving an optimal space linear equation.
6. The automatic high-speed trajectory calibration method according to claim 1, wherein step S4 specifically comprises: measuring within the image vision system and collecting the image aiming point coordinate P_view; then, based on the optimal spatial line model from step S3, substituting the depth value z of the current target point to obtain the laser aiming point P_laser at that depth; at this moment, the two three-dimensional points P_view and P_laser in the world coordinate system of the binocular system are obtained, and the horizontal and vertical angles between the two points in space are calculated through the spatial coordinate conversion relationship.
7. The automatic high-speed trajectory calibration method according to claim 6, wherein step S5 specifically comprises:
rotating the laser aiming point to the target aiming point in two steps, dividing the angle into a transverse component δH and a vertical component δV, wherein the distance between the aiming points P_view and P_laser can be obtained by binocular solving and is set as the depth z;
the rotation angles of the load platform can be calculated in two steps:

$$\delta H = \arctan\frac{x_{view} - x_{laser}}{z},\qquad \delta V = \arctan\frac{y_{view} - y_{laser}}{z}$$
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211473467.9A CN115546318B (en) | 2022-11-23 | 2022-11-23 | Automatic high-speed trajectory calibration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211473467.9A CN115546318B (en) | 2022-11-23 | 2022-11-23 | Automatic high-speed trajectory calibration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115546318A CN115546318A (en) | 2022-12-30 |
CN115546318B true CN115546318B (en) | 2023-04-07 |
Family
ID=84719980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211473467.9A Active CN115546318B (en) | 2022-11-23 | 2022-11-23 | Automatic high-speed trajectory calibration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115546318B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116718109A * | 2023-02-10 | 2023-09-08 | 深圳市中图仪器股份有限公司 | Target capturing method based on binocular camera |
| CN116823937B * | 2023-08-28 | 2024-02-23 | 成都飞机工业(集团)有限责任公司 | High-precision quick aiming method for plane horizontal point based on visual guidance |
Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103105858A * | 2012-12-29 | 2013-05-15 | 上海安维尔信息科技有限公司 | Method capable of amplifying and tracking goal in master-slave mode between fixed camera and pan tilt zoom camera |
| CN103679741A * | 2013-12-30 | 2014-03-26 | 北京建筑大学 | Method for automatically registering cloud data of laser dots based on three-dimensional line characters |
| CN106327532A * | 2016-08-31 | 2017-01-11 | 北京天睿空间科技股份有限公司 | Three-dimensional registering method for single image |
| WO2017195801A1 * | 2016-05-13 | 2017-11-16 | Olympus Corporation | Calibration device, calibration method, optical device, imaging device, projection device, measurement system and measurement method |
| CN107741175A * | 2017-10-21 | 2018-02-27 | 聚鑫智能科技(武汉)股份有限公司 | A kind of artificial intelligence fine sight method and system |
| CN110142805A * | 2019-05-22 | 2019-08-20 | 武汉爱速达机器人科技有限公司 | A kind of robot end's calibration method based on laser radar |
| CN110942477A * | 2019-11-21 | 2020-03-31 | 大连理工大学 | Method for depth map fusion by using binocular camera and laser radar |
| CN111356893A * | 2019-02-28 | 2020-06-30 | 深圳市大疆创新科技有限公司 | Shooting aiming control method and device for movable platform and readable storage medium |
| CN111803842A * | 2020-08-08 | 2020-10-23 | 应急管理部上海消防研究所 | Automatic aiming device of fire monitor |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113013781B * | 2021-02-05 | 2022-04-01 | 安阳一都网络科技有限公司 | Laser emission and dynamic calibration device, method, equipment and medium based on image processing |
Non-Patent Citations (2)

| Title |
|---|
| Research on high-speed real-time measurement method of target motion parameters based on binocular vision; Yuan Songyu; China Master's Theses Full-text Database; 2018-02-28; full text * |
| Research on key technologies of moving target detection based on fusion of laser and visual information; Chen Ming; China Master's Theses Full-text Database; 2020-07-31; full text * |
Also Published As

| Publication number | Publication date |
|---|---|
| CN115546318A | 2022-12-30 |
Similar Documents

| Publication | Title |
|---|---|
| CN115546318B | Automatic high-speed trajectory calibration method |
| CN110296691B | IMU calibration-fused binocular stereo vision measurement method and system |
| CN106934809B | Unmanned aerial vehicle aerial autonomous refueling rapid docking navigation method based on binocular vision |
| CN109559355B | Multi-camera global calibration device and method without public view field based on camera set |
| CN111563878B | Space target positioning method |
| CN107741175B | A kind of artificial intelligence fine sight method |
| CN107248178A | A kind of fisheye camera scaling method based on distortion parameter |
| CN109087355B | Monocular camera pose measuring device and method based on iterative updating |
| CN110889873A | Target positioning method and device, electronic equipment and storage medium |
| CN112949478A | Target detection method based on holder camera |
| US20220230348A1 | Method and apparatus for determining a three-dimensional position and pose of a fiducial marker |
| CN105306922A | Method and device for obtaining depth camera reference diagram |
| RU2695141C2 | Method of automatic adjustment of zero lines of sighting of optoelectronic channels of sighting of armored vehicles |
| Dolereit et al. | Underwater stereo calibration utilizing virtual object points |
| TWI502162B | Twin image guiding-tracking shooting system and method |
| Dolereit et al. | Converting underwater imaging into imaging in air |
| CN112556657B | Multi-view vision measurement system for flight motion parameters of separating body in vacuum environment |
| CN114241059B | Synchronous calibration method for camera and light source in photometric stereo vision system |
| CN113324538B | Cooperative target remote high-precision six-degree-of-freedom pose measurement method |
| CN111504255B | Three-dimensional alignment precision automatic measuring device and method based on machine vision |
| CN114415155B | Position calibration method for single-point laser range finder and visible light camera |
| CN112584041A | Image identification dynamic deviation rectifying method |
| CN112432594A | Machine vision six-degree-of-freedom measurement method based on physical decoupling |
| Li et al. | Method for horizontal alignment deviation measurement using binocular camera without common target |
| Maddalena et al. | Innovations on underwater stereoscopy: the new developments of the TV-Trackmeter |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| CB02 | Change of applicant information |
| GR01 | Patent grant |

CB02 details:

- Address after: 35th Floor, Building A1, Phase I, Zhongan Chuanggu Science and Technology Park, No. 900, Wangjiang West Road, High-tech Zone, Hefei City, Anhui Province, 230000
- Applicant after: Zhongke Xingtu Measurement and Control Technology Co.,Ltd.
- Address before: 35th Floor, Building A1, Phase I, Zhongan Chuanggu Science and Technology Park, No. 900, Wangjiang West Road, High-tech Zone, Hefei City, Anhui Province, 230000
- Applicant before: Zhongke Xingtu measurement and control technology (Hefei) Co.,Ltd.