CN116892940A - Target structure diagram positioning method, device and medium based on sensor detection - Google Patents


Publication number
CN116892940A
CN116892940A
Authority
CN
China
Prior art keywords
coordinate system
target
sensor
structure diagram
world
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310866323.8A
Other languages
Chinese (zh)
Inventor
陈宇
蔡亚
刘晓黎
李润童
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Zhongke Shengu Technology Development Co ltd
Original Assignee
Hefei Zhongke Shengu Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Zhongke Shengu Technology Development Co ltd filed Critical Hefei Zhongke Shengu Technology Development Co ltd
Priority to CN202310866323.8A priority Critical patent/CN116892940A/en
Publication of CN116892940A publication Critical patent/CN116892940A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to the technical field of robot positioning, and in particular to a method, device and medium for positioning a target on a structure diagram based on sensor detection. Traditional structure-diagram positioning generally needs to scan the entire space to be positioned to form a world map, comprehensively match the world map against the structure diagram, and finally map the target's coordinates in the sensor coordinate system O₂ onto the structure diagram. The present application only needs to acquire image data around the coordinate origin and two reference points: after the origins are superposed, the coordinate systems are iteratively adjusted so that the reference points in the world coordinate system O₁ and the sensor coordinate system O₂ coincide, which completes the positioning. This saves the large time cost of comprehensive mapping, reduces the amount of computation, and on this basis the target can be positioned and updated in real time after entering the space to be positioned. The uncertainty under this positioning framework is expressed through an information matrix, so the system can converge to an accurate position even if the user's initial guess of the robot's position has a large error.

Description

Target structure diagram positioning method, device and medium based on sensor detection
Technical Field
The application relates to the technical field of robot positioning, in particular to a method, a device and a medium for positioning a target structure diagram based on sensor detection.
Background
Accurate positioning is a key technology for realizing flexible automation. Modern automation requires robust and accurate positioning of mobile robots in complex scenes. Some of the most advanced positioning techniques in industrial environments use onboard safety lidar sensors and rely on maps that need to be built in advance, typically by solving the so-called simultaneous localization and mapping (SLAM) problem. However, in many cases acquiring these maps can be cumbersome, because it requires tedious and time-consuming preliminary operations that increase deployment time and cost for the robot manufacturer. In complex environments, expert operators are often required to ensure the consistency of the generated map and its usability by the robot. The most common approach to robotic mapping is to use occupancy grid maps, which typically suffer from local distortion and pixelation, as well as unusual color conventions and encodings that make them difficult for unskilled users to read and understand.
CAD building floor plans are an interesting bridge between an accurate, sensor-based, robot-centric map representation and a human-friendly description of an indoor environment. They are common in everyday life and can easily be manipulated and extended using modern CAD software. They represent the unchangeable structures of a building and are therefore an abstract representation of the environment that can be used flexibly, independently of the actual furnishing, constituting a natural means of intuitive communication with navigation tools. Although building plans could serve as an aid to robot positioning, little work has been done to create accurate and robust positioning systems for such maps.
Disclosure of Invention
The application discloses a method, a device and a medium for positioning a target on a structure diagram based on sensor detection, which can position the target on the structure diagram of a space to be positioned after the target enters that space.
In order to achieve the above purpose, on the one hand, a method for positioning a target on a structure diagram based on sensor detection is provided, which specifically comprises the following steps:
The target is placed at an arbitrary position in the space to be positioned; a structure diagram of the space to be positioned is obtained, an origin is selected in the structure diagram, and a world coordinate system O₁ is generated;
The target generates a world map from the image data scanned by its sensor in the space to be positioned, and the origin of the world coordinate system O₁ is used as the origin of the world map to generate a sensor coordinate system O₂;
In the world coordinate system O₁ and the sensor coordinate system O₂, two identical reference points are selected respectively; the same reference points are determined after image recognition by a trained convolutional neural network;
After the origins of the two coordinate systems are superposed, the alignment of the two reference points is taken as the iteration target, an alignment association transformation is performed on the two coordinate systems, and a conversion equation for the target's coordinates in the world coordinate system O₁ is obtained;
An information matrix is acquired according to the coordinate conversion equation, and the information matrix is adjusted to enhance positioning robustness.
Traditional structure-diagram positioning generally needs to scan the entire space to be positioned to form a world map, comprehensively match the world map against the structure diagram, and finally map the target's coordinates in the sensor coordinate system O₂ onto the structure diagram. The advantage of this embodiment is that only image data around the coordinate origin and the two reference points needs to be acquired; with the origins superposed, the coordinate systems are iteratively adjusted so that the reference points in the world coordinate system O₁ and the sensor coordinate system O₂ coincide, after which the target can be positioned on the structure diagram. This saves the large time cost of comprehensive mapping, reduces the amount of computation, and on this basis the target can be positioned and updated in real time after entering the space to be positioned, without additional preparation time or procedures.
Further, the specific method for generating the world coordinate system O₁ is as follows:
In the space to be positioned, a point in the environment is selected taking into account the target's future actual motion, and is marked in the target control system as the origin of the world coordinate system O₁, with the X-axis and Y-axis directions designated;
The structure diagram of the space to be positioned is uploaded to the target control system, encoded there into a binary image with a preset resolution, and added into the world coordinate system O₁.
Further, the specific method for selecting the same two reference points in the world coordinate system O₁ and the sensor coordinate system O₂ is as follows:
In the sensor coordinate system O₂, each scan S of the sensor is regarded as a sequence of 2D Cartesian points (s_i)_i;
s and m are selected from the two coordinate systems as the two associated reference points, and they must meet the following conditions:
(1) ||z_iter s - m|| ≤ δ_iter;
(2) m has an image (structure diagram) normal n̂_m;
(3) n̂_m · (z_R,iter n̂_s) ≥ 0, where n̂_s is the scan normal of s;
where z_iter denotes the rigid-body transformation z after each iteration, δ_iter is a constant, and z_R,iter n̂_s is the transformed scan normal. The effect of condition (3) is to force the relative angle between the structure diagram normal n̂_m and the transformed scan normal z_R,iter n̂_s to be no more than 90 degrees. For each endpoint s, we first choose the occupied pixel on the binary image nearest to z_iter s as the candidate pixel; if the pixel satisfies the conditions, the association is added to the association set, otherwise another candidate is selected by ray tracing along the beam direction.
The advantage of this embodiment is that the selection conditions for the reference points exclude a large number of pixels unsuitable as reference points, reducing the cost of invalid associations and improving overall positioning efficiency while maintaining positioning accuracy.
Further, the rigid-body transformation between the world coordinate system O₁ and the sensor coordinate system O₂ is estimated by solving a nonlinear optimization; the target's coordinates in the world coordinate system O₁ at time t are denoted z_t:
z_t = argmin_z Σ κ(||z s - m||_Σ(z;s,m))
where ||z s - m||_Σ(z;s,m) is the Mahalanobis distance, z is a rigid-body transformation, κ is a robust kernel that limits the impact of false associations, and Σ(z; s, m) is a two-dimensional variant of the covariance matrix, in which R_s and R_m are the 2D rotation matrices aligned with the scan normal at s and the structure diagram normal at m, respectively; z_R is the rotation matrix of the rigid-body transformation, and v and η are covariance terms that weight each association along the correlation normal.
The advantage of this embodiment is that, through extensive derivation and experiments, the obtained conversion formula is applicable to the conversion between any structure diagram and world map, and the parameters and matrices in the formula reflect the conversion process and its errors, facilitating subsequent understanding, fault diagnosis and optimization.
Further, the information matrix Ω_t is estimated as:
Ω_t = Σ JᵀJ, with J = ∂f(z ⊞ v; s, m)/∂v evaluated at v = 0,
where f(z; s, m) = Λ(z; s, m)ᵀ(z s - m), Λ(z; s, m) is the lower-triangular Cholesky factor of Ω(z; s, m), and the symbol ⊞ denotes the composition operator on 2D rigid-body transformations of the Lie group, v being a rigid-body transformation;
By increasing the matrix coefficients of fixed environment information, the weight of the prior information used for the target pose is increased, falling into a local optimum is avoided, and positioning robustness is enhanced.
The advantage of this embodiment is that the information matrix expresses the uncertainty under this positioning framework, and the system can converge to an accurate position even if the user's initial guess of the robot position has a large error. The system's information matrix Ω_t is obtained from the estimate of the target coordinates z_t. Based on the proposed information-matrix expression, the matrix coefficients of fixed environment information such as walls are increased, so that this prior information carries a larger weight when estimating the robot's pose; the positioning framework then does not fall into a local optimum, and the robustness and accuracy of the positioning method are enhanced.
Preferably, the object is a mobile robot.
Preferably, the sensor is a lidar.
Preferably, the structure diagram is a CAD building plan.
The advantage of this embodiment is that fitting the map generated by the lidar onto the floor plan overcomes the lack of features in the CAD drawing; users no longer need special training to read and understand a grid map, and the robot can be guided to complete positioning, navigation and other functions according to the detailed information of the CAD floor plan, generated in the control system, that contains the lidar information.
In order to achieve the above object, another aspect provides a target structure diagram positioning device based on sensor detection, comprising: an information acquisition module, a coordinate system generation module, a coordinate system anchoring module and a robustness optimization module;
The information acquisition module acquires the structure diagram of the space to be positioned where the target is located, and acquires the image data of the space to be positioned scanned by the target's sensor;
The coordinate system generation module generates a world coordinate system O₁ according to the structure diagram, and generates a sensor coordinate system O₂ from the image data of the space to be positioned;
The coordinate system anchoring module selects two identical reference points in the world coordinate system O₁ and the sensor coordinate system O₂ respectively; after the origins of the two coordinate systems are superposed, the alignment of the two reference points is taken as the iteration target, an alignment association transformation is performed on the two coordinate systems, and a conversion equation for the target's coordinates in the world coordinate system O₁ is obtained;
The robustness optimization module acquires an information matrix according to the coordinate conversion equation, and adjusts the information matrix to enhance positioning robustness.
In order to achieve the above object, another aspect provides a storage medium, where a plurality of instructions are stored, where the instructions are adapted to be loaded by a processor to perform the above method for locating a target structure based on sensor detection.
Additional advantages, objects, and features of the application will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
The drawings of the present application are described below.
Fig. 1 is a schematic diagram of a positioning process according to the present application.
Detailed Description
The application is further described below with reference to the drawings and examples.
Example 1:
a target structure diagram positioning method based on sensor detection is shown in figure 1, and the specific method is as follows:
s1, acquiring a structure diagram of a space to be positioned where a target is located, and acquiring image data of the space to be positioned scanned by a target sensor;
in particular, the target is a mobile robot, but may be any other device or structure that has a mobile function and requires positioning.
In particular, the sensor is a lidar, but may of course be other sensors that may be used for mapping, such as cameras, sonar detection devices, etc.
Specifically, the structure diagram is a CADN building plan diagram, and of course, the structure diagram can also be a plan structure diagram of other formats such as JPG and the like.
S2, generating a world coordinate system O according to the structure diagram 1 Generating a sensor coordinate system O from image data of the space to be determined 2
Specifically, the method for generating the world coordinate system O₁ is as follows:
S21, in the space to be positioned, selecting a point in the environment taking into account the target's future actual motion, and marking it in the target control system as the origin of the world coordinate system O₁, with the X-axis and Y-axis directions designated;
S22, uploading the structure diagram of the space to be positioned to the target control system, encoding it there into a binary image with a preset resolution, and adding the binary image into the world coordinate system O₁.
S3, in the world coordinate system O₁ and the sensor coordinate system O₂, selecting two identical reference points respectively; after the origins of the two coordinate systems are superposed, the alignment of the two reference points is taken as the iteration target, an alignment association transformation is performed on the two coordinate systems, and a conversion equation for the target's coordinates in the world coordinate system O₁ is obtained;
Specifically, the method for selecting the same two reference points in the world coordinate system O₁ and the sensor coordinate system O₂ is as follows:
S31, in the sensor coordinate system O₂, each scan S of the sensor is regarded as a sequence of 2D Cartesian points (s_i)_i;
S32, s and m are selected from the two coordinate systems as the two associated reference points, and they must meet the following conditions:
(1) ||z_iter s - m|| ≤ δ_iter;
(2) m has an image (structure diagram) normal n̂_m;
(3) n̂_m · (z_R,iter n̂_s) ≥ 0, where n̂_s is the scan normal of s;
where z_iter denotes the rigid-body transformation z after each iteration, δ_iter is a constant, and z_R,iter n̂_s is the transformed scan normal. The effect of condition (3) is to force the relative angle between the structure diagram normal n̂_m and the transformed scan normal z_R,iter n̂_s to be no more than 90 degrees. For each endpoint s, we first choose the occupied pixel on the binary image nearest to z_iter s as the candidate pixel; if the pixel satisfies the conditions, the association is added to the association set, otherwise another candidate is selected by ray tracing along the beam direction.
In this embodiment, these criteria may reject many pixels, but they reduce the cost of invalid associations and preserve the efficiency of the system.
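The candidate-selection loop of step S32 can be sketched as follows. This is a hedged approximation: the function name and data layout are invented for illustration, and the ray-tracing fallback along the beam direction is simplified here to trying the next-nearest occupied pixel. A zero normal row stands for "no image normal available".

```python
import numpy as np

def find_association(s_pt, n_s, occupied, normals, R_iter, t_iter, delta_iter):
    """Associate scan endpoint s_pt (in O2) with a map point m (in O1).

    occupied : (N, 2) world coordinates of occupied pixels of the binary map
    normals  : (N, 2) image normals at those pixels (zero row = no normal)
    R_iter, t_iter : current iterate of the rigid-body transform z_iter
    Returns the associated map point, or None if no candidate qualifies."""
    s_w = R_iter @ s_pt + t_iter          # z_iter * s
    n_w = R_iter @ n_s                    # transformed scan normal
    dists = np.linalg.norm(occupied - s_w, axis=1)
    for i in np.argsort(dists):           # nearest occupied pixel first
        if dists[i] > delta_iter:         # condition (1) fails for all remaining candidates
            break
        n_m = normals[i]
        if not np.any(n_m):               # condition (2): m must carry an image normal
            continue
        if np.dot(n_m, n_w) >= 0.0:       # condition (3): relative angle <= 90 degrees
            return occupied[i]
    return None                           # no valid association for this endpoint

# Toy map: a horizontal wall on the x-axis, image normals pointing into free space (+y).
occupied = np.array([[0., 0.], [1., 0.], [2., 0.]])
normals  = np.array([[0., 1.], [0., 1.], [0., 1.]])
m = find_association(np.array([1.0, 0.1]), np.array([0., 1.]),
                     occupied, normals, np.eye(2), np.zeros(2), delta_iter=0.5)
print(m)   # the endpoint associates with the wall point (1, 0)
```

Flipping the scan normal to (0, -1) makes condition (3) fail for every wall pixel, so the endpoint is left unassociated rather than forced onto a wrong wall.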
Specifically, two associated reference points are designated in the sensor coordinate system O₂; the images of the two coordinate systems and the two reference points designated in O₂ are input into a trained CNN model, and the CNN model finds the corresponding two associated reference points in the world coordinate system O₁ according to the features near the two associated reference points in the sensor coordinate system O₂.
Specifically, the rigid-body transformation between the world coordinate system O₁ and the sensor coordinate system O₂ is estimated by solving a nonlinear optimization; the target's coordinates in the world coordinate system O₁ at time t are denoted z_t:
z_t = argmin_z Σ κ(||z s - m||_Σ(z;s,m))
where ||z s - m||_Σ(z;s,m) is the Mahalanobis distance, z is a rigid-body transformation, κ is a robust kernel that limits the impact of false associations, and Σ(z; s, m) is a two-dimensional variant of the covariance matrix, in which R_s and R_m are the 2D rotation matrices aligned with the scan normal at s and the structure diagram normal at m, respectively; z_R is the rotation matrix of the rigid-body transformation, and v and η are the covariance terms, greater than zero, that weight each association along the correlation normal. For definiteness, we always take the image normal as pointing from the occupied pixel toward the free pixels, and the scan normal as pointing toward the origin of the lidar coordinate system O₂. By this method we anchor the 2D lidar map measured in real time onto the plan map, thereby completing the positioning based on the CAD map.
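A minimal Gauss-Newton sketch of this robust optimization, under simplifying assumptions: the associations (s_i, m_i) are taken as given, a Huber weight stands in for the robust kernel κ, and an isotropic covariance replaces the full Σ(z; s, m); the names huber_weight and estimate_transform are illustrative, not from the patent.

```python
import numpy as np

def huber_weight(r, k=1.0):
    """Robust kernel weight: quadratic near zero, linear in the tails,
    limiting the influence of false associations."""
    a = np.linalg.norm(r)
    return 1.0 if a <= k else k / a

def estimate_transform(scan, mapped, iters=20):
    """Gauss-Newton estimate of z = (x, y, theta) minimising
    sum_i kappa(||z*s_i - m_i||) over associated pairs (s_i, m_i)."""
    x = np.zeros(3)
    for _ in range(iters):
        c, s_ = np.cos(x[2]), np.sin(x[2])
        R = np.array([[c, -s_], [s_, c]])
        H, b = np.zeros((3, 3)), np.zeros(3)
        for s_pt, m_pt in zip(scan, mapped):
            r = R @ s_pt + x[:2] - m_pt                       # residual z*s - m
            J = np.zeros((2, 3))                              # d r / d (x, y, theta)
            J[:, :2] = np.eye(2)
            J[:, 2] = [-s_ * s_pt[0] - c * s_pt[1],
                        c * s_pt[0] - s_ * s_pt[1]]
            w = huber_weight(r)
            H += w * J.T @ J
            b += w * J.T @ r
        dx = np.linalg.solve(H, -b)
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Demo: recover a known pose (x, y, theta) = (0.5, -0.2, 0.3).
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
mapped = np.array([[0., 0.], [1., 0.], [2., 1.], [0., 2.], [1.5, 1.5]])
scan = (mapped - t_true) @ R_true        # s_i = R^T (m_i - t), so z*s_i = m_i
z = estimate_transform(scan, mapped)
print(np.round(z, 4))                    # approximately [0.5, -0.2, 0.3]
```

With zero-residual correspondences the iteration converges to the exact pose; in the patented method the same scheme is repeated after each re-association step.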
S4, acquiring an information matrix according to the coordinate conversion equation, and adjusting the information matrix to enhance positioning robustness.
Specifically, the information matrix Ω_t is estimated as:
Ω_t = Σ JᵀJ, with J = ∂f(z ⊞ v; s, m)/∂v evaluated at v = 0,
where f(z; s, m) = Λ(z; s, m)ᵀ(z s - m), Λ(z; s, m) is the lower-triangular Cholesky factor of Ω(z; s, m), and the symbol ⊞ denotes the composition operator on 2D rigid-body transformations of the Lie group, v being a rigid-body transformation.
Based on the proposed information-matrix expression, the matrix coefficients of fixed environment information such as walls are increased, so that this prior information carries a larger weight when estimating the robot's pose; the positioning framework then does not fall into a local optimum, and the robustness and accuracy of the positioning method are enhanced.
In the above association method, the lidar map is registered onto the plan view of the environment through continuous iteration, thereby completing the positioning. When a user runs the positioning system for the first time, the approximate pose of the mobile robot at the start of its task must be measured in the world coordinate system and input into the system; the system then automatically calculates the accurate pose of the robot and performs the subsequent operations within this framework. In subsequent positioning and other tasks this operation does not need to be repeated. The CAD plan lets the user easily understand the map and guide the robot to complete positioning, navigation and other functions according to its detailed information.
Example 2:
a target structure map locating device based on sensor detection, comprising: the system comprises an information acquisition module, a coordinate system generation module, a coordinate system anchoring module and a robustness optimization module;
the information acquisition module acquires a structure diagram of a space to be positioned where a target is located and acquires image data of the space to be positioned, which is scanned by the target sensor;
the coordinate system generation module generates a world coordinate system O according to the structure diagram 1 Generating a sensor coordinate system O from image data of the space to be determined 2
The coordinate system anchoring module is arranged in the world coordinate system O 1 And a sensor coordinate system O 2 In the method, two identical reference points are selected respectively, after the origins of the two coordinate systems are overlapped, the two reference points are aligned to serve as iteration targets, alignment association transformation is carried out on the two coordinate systems, and the targets are obtained in the world coordinate system O 1 Coordinate conversion equation in (a);
the robustness optimization module acquires an information matrix according to a coordinate conversion equation, and adjusts the information matrix to enhance positioning robustness.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the application without departing from the spirit and scope of the application, which is intended to be covered by the claims.

Claims (10)

1. The target structure diagram positioning method based on sensor detection is characterized by comprising the following steps of:
the target is randomly positioned in the space to be positioned, a structure diagram of the space to be positioned is obtained, an origin is selected in the structure diagram, and a world coordinate system O is generated 1
The target generates a world map through the image data scanned by the sensor in the space to be positioned, and coordinates the worldIs O of 1 Is used as the origin of the world map to generate a sensor coordinate system O 2
In world coordinate system O 1 And a sensor coordinate system O 2 Two identical reference points are selected respectively; the same reference point is judged after image recognition through a trained convolutional neural network;
after the origins of the two coordinate systems are overlapped, the two reference points are aligned to serve as iteration targets, alignment association transformation is carried out on the two coordinate systems, and the target in the world coordinate system O is obtained 1 Coordinate conversion equation in (a);
and acquiring an information matrix according to the coordinate conversion equation, and adjusting the information matrix to enhance positioning robustness.
2. The target structure diagram positioning method based on sensor detection according to claim 1, wherein the specific method for generating the world coordinate system O₁ is as follows:
In the space to be positioned, a point in the environment is selected taking into account the target's future actual motion, and is marked in the target control system as the origin of the world coordinate system O₁, with the X-axis and Y-axis directions designated;
The structure diagram of the space to be positioned is uploaded to the target control system, encoded there into a binary image with a preset resolution, and added into the world coordinate system O₁.
3. The target structure diagram positioning method based on sensor detection according to claim 2, wherein the specific method for selecting the same two reference points in the world coordinate system O₁ and the sensor coordinate system O₂ is as follows:
In the sensor coordinate system O₂, each scan S of the sensor is regarded as a sequence of 2D Cartesian points (s_i)_i;
s and m are selected from the two coordinate systems as the two associated reference points, and they must meet the following conditions:
(1) ||z_iter s - m|| ≤ δ_iter;
(2) m has an image (structure diagram) normal n̂_m;
(3) n̂_m · (z_R,iter n̂_s) ≥ 0, where n̂_s is the scan normal of s;
where z_iter denotes the rigid-body transformation z after each iteration, δ_iter is a constant, and z_R,iter n̂_s is the transformed scan normal. The effect of condition (3) is to force the relative angle between the structure diagram normal n̂_m and the transformed scan normal z_R,iter n̂_s to be no more than 90 degrees. For each endpoint s, we first choose the occupied pixel on the binary image nearest to z_iter s as the candidate pixel; if the pixel satisfies the conditions, the association is added to the association set, otherwise another candidate is selected by ray tracing along the beam direction.
4. The method for locating a target structure diagram based on sensor detection according to claim 1, wherein the transformation between the world coordinate system O1 and the sensor coordinate system O2 is estimated by solving a nonlinear optimization; the target coordinates in the world coordinate system O1 at time t, denoted z_t, are given by:

z_t = argmin_z Σ_{(s,m)} κ( ||z·s − m||_{Σ(z;s,m)} )

wherein ||z·s − m||_{Σ(z;s,m)} is the Mahalanobis distance, z is the rigid-body transformation, κ is a robust kernel that limits the impact of false associations, and Σ(z; s, m) is the two-dimensional covariance matrix of the association, expressed as:

Σ(z; s, m) = z_R · R_s · diag(ν, η) · R_s^T · z_R^T + R_m · diag(ν, η) · R_m^T

wherein R_s and R_m are 2D rotation matrices aligned with the scan normal at s and the structure-diagram normal at m, respectively; z_R is the rotation matrix of the rigid-body transformation; and ν and η are covariance terms weighting each association along and across the association normal.
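The covariance and the resulting Mahalanobis distance can be sketched as follows. The exact composition of Σ is reconstructed from the claim's description (normal-aligned rotations R_s and R_m, weights ν and η, with R_s mapped through z_R), so treat the precise form as an assumption; function names are hypothetical.

```python
import numpy as np

def rot2(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def association_cov(z_R, theta_s, theta_m, v, eta):
    """Two-dimensional covariance Sigma(z; s, m): weights v (along the
    normal) and eta (across it), expressed in the scan-normal frame R_s
    (mapped through z_R) and the structure-diagram-normal frame R_m."""
    D = np.diag([v, eta])
    R_s, R_m = rot2(theta_s), rot2(theta_m)
    return z_R @ R_s @ D @ R_s.T @ z_R.T + R_m @ D @ R_m.T

def mahalanobis(z_R, z_t, s, m, Sigma):
    """||z*s - m|| in the metric induced by Sigma."""
    r = z_R @ s + z_t - m
    return float(np.sqrt(r @ np.linalg.solve(Sigma, r)))
```

With identity rotations and ν = η = 1, Σ reduces to 2I and a unit residual scores 1/√2, showing how the covariance down-weights the raw Euclidean error.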
5. The method for locating a target structure diagram based on sensor detection as claimed in claim 4, wherein the information matrix Ω_t is estimated as:

Ω_t = Σ_{(s,m)} J^T · J, with J = ∂ f(z_t ⊞ v; s, m) / ∂v |_{v=0}

wherein f(z; s, m) = Λ(z; s, m)^T (z·s − m), Λ(z; s, m) is the lower-triangular Cholesky factor of Ω(z; s, m) = Σ(z; s, m)^{-1}, the symbol ⊞ denotes the composition operator on two-dimensional rigid-body transformations (the Lie group SE(2)), and v is a local rigid-body perturbation;
by increasing the matrix coefficients associated with fixed environment information, the weight of this prior information in the target pose estimate is increased, trapping in a local optimum is avoided, and positioning robustness is enhanced.
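The Gauss-Newton accumulation Ω_t = Σ J^T J can be approximated with numeric Jacobians of the whitened residuals, as in this sketch. The ⊞ parameterization and function names are assumptions chosen for illustration.

```python
import numpy as np

def boxplus(z, v):
    """SE(2) box-plus: perturb pose z = (x, y, theta) by a local
    increment v = (dx, dy, dtheta) expressed in the pose frame."""
    x, y, th = z
    dx, dy, dth = v
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * dx - s * dy, y + s * dx + c * dy, th + dth])

def information_matrix(z, residual_fns, eps=1e-6):
    """Approximate Omega_t = sum_k J_k^T J_k, where J_k is the numeric
    Jacobian of the whitened residual f_k(z [+] v) taken at v = 0."""
    omega = np.zeros((3, 3))
    for f in residual_fns:
        f0 = np.asarray(f(z), dtype=float)
        J = np.zeros((f0.size, 3))
        for i in range(3):
            dv = np.zeros(3)
            dv[i] = eps
            J[:, i] = (np.asarray(f(boxplus(z, dv)), dtype=float) - f0) / eps
        omega += J.T @ J
    return omega
```

Inflating selected entries of the returned matrix, as the claim describes for fixed environment information, raises the weight of the prior in the subsequent pose update.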
6. The method for locating a structure of a target based on sensor detection of claim 1, wherein the target is a mobile robot.
7. The method for locating a structure of a target based on detection by a sensor according to claim 1, wherein the sensor is a lidar.
8. The method for locating a target structure diagram based on sensor detection according to claim 1, wherein the structure diagram is a CAD building plan.
9. A target structure diagram locating device based on sensor detection, comprising: an information acquisition module, a coordinate system generation module, a coordinate system anchoring module, and a robustness optimization module;
the information acquisition module acquires a structure diagram of the space to be positioned where the target is located and acquires image data of the space to be positioned scanned by the target's sensor;
the coordinate system generation module generates a world coordinate system O1 from the structure diagram and generates a sensor coordinate system O2 from the image data of the space to be positioned;
the coordinate system anchoring module selects two identical reference points in the world coordinate system O1 and the sensor coordinate system O2 respectively, and after making the origins of the two coordinate systems coincide, aligns the two reference points as the iteration target, performs alignment-association transformation on the two coordinate systems, and obtains the coordinate conversion equation of the target in the world coordinate system O1;
the robustness optimization module obtains an information matrix from the coordinate conversion equation and adjusts the information matrix to enhance positioning robustness.
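The four-module device can be summarized as a thin pipeline, sketched below with hypothetical class and callable names; the anchoring and optimization steps are injected as callables rather than implemented, since their internals are given by the method claims above.

```python
class TargetLocalizer:
    """Minimal sketch of the claim-9 device: acquisition, coordinate-system
    generation/anchoring, and robustness optimization chained into one
    localization step (all names hypothetical)."""

    def __init__(self, plan_image, alignment, optimizer):
        self.plan_image = plan_image  # binary structure-diagram image in O1
        self.alignment = alignment    # scan-to-map association + transform solver
        self.optimizer = optimizer    # information-matrix adjustment step

    def localize(self, scan_points):
        # coordinate-system anchoring: align sensor frame O2 to world frame O1
        z = self.alignment(self.plan_image, scan_points)
        # robustness optimization: adjust the information weighting before output
        return self.optimizer(z)
```

Keeping the modules as injected callables mirrors the claim's module decomposition and makes each stage replaceable in isolation.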
10. A storage medium storing instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 8.
CN202310866323.8A 2023-07-14 2023-07-14 Target structure diagram positioning method, device and medium based on sensor detection Pending CN116892940A (en)

Publications (1)

Publication Number Publication Date
CN116892940A true CN116892940A (en) 2023-10-17

Family

ID=88313242



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination