CN116342899A - Target detection positioning method, device, equipment and storage medium


Info

Publication number
CN116342899A
CN116342899A (application number CN202211635138.XA)
Authority
CN
China
Prior art keywords
point cloud
target
cloud data
determining
bounding box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211635138.XA
Other languages
Chinese (zh)
Inventor
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kyland Technology Co Ltd
Original Assignee
Kyland Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kyland Technology Co Ltd filed Critical Kyland Technology Co Ltd
Priority to CN202211635138.XA priority Critical patent/CN116342899A/en
Publication of CN116342899A publication Critical patent/CN116342899A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a target detection positioning method, a device, equipment and a storage medium, wherein the method comprises the following steps: acquiring point cloud data captured by a laser radar on a current tower crane area; clustering the point cloud data to obtain target point cloud clusters corresponding to each target class; and obtaining bounding boxes corresponding to the targets according to the target point cloud clusters, so as to determine the position information of the targets in the current tower crane area. The prior art is aimed only at the automatic driving scene and does not support the tower crane scene, and at the current stage a preset frame is often used for detecting a target, which causes inaccurate target detection positioning; the present method avoids both of these problems.

Description

Target detection positioning method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a method, an apparatus, a device, and a storage medium for detecting and positioning a target.
Background
At present, the building industry is developing rapidly and many kinds of construction equipment are widely applied. In the building construction process, the tower crane is an important piece of hoisting equipment on the construction site, mainly used for hoisting materials such as steel bars and steel pipes; because of its high lifting height and high operating efficiency, it is widely used in construction. Precisely because of the high lifting height of the tower crane, in a complex working environment such as a construction site the safety problems of tower crane construction have gradually become prominent, and in order to detect and warn in time of target objects entering the range of the tower crane arm on the construction site, these target objects need to be detected.
Conventional target detection and positioning methods are generally applied to road-surface data; the algorithms are usually implemented without considering height information, and the evaluation index is basically computed on two-dimensional bounding boxes projected from a bird's-eye view. Therefore, existing model development uses some means to extract features of the three-dimensional point cloud and project them into a two-dimensional image, and the target is perceived in the two-dimensional image, for example with the PointPillars algorithm. It can be seen that current target detection algorithms are aimed only at the autopilot scenario, not the tower crane scenario. In addition, current target detection algorithms detect targets in the form of preset frames, so the target detection positioning is inaccurate.
Disclosure of Invention
The embodiment of the invention provides a target detection positioning method, device, equipment and storage medium, which realize the detection and positioning of targets in a tower crane scene and improve the accuracy of target detection.
In a first aspect, an embodiment of the present invention provides a target detection positioning method, including:
acquiring point cloud data captured by a laser radar on a current tower crane area;
clustering the point cloud data to obtain target point cloud clusters corresponding to each target class;
and obtaining bounding boxes corresponding to the targets according to the target point cloud clusters, so as to determine the position information of the targets in the current tower crane region.
In a second aspect, an embodiment of the present invention provides an object detection positioning device, including:
the data acquisition module is used for acquiring point cloud data captured by the laser radar on the current tower crane area;
the point cloud cluster determining module is used for carrying out clustering processing on the point cloud data to obtain target point cloud clusters corresponding to each target category;
and the position information determining module is used for obtaining bounding boxes corresponding to the targets according to the target point cloud clusters, so as to determine the position information of the targets in the current tower crane region.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object detection positioning method according to any one of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium, where computer instructions are stored, where the computer instructions are configured to cause a processor to execute the method for detecting and positioning an object according to any one of the embodiments of the present invention.
The embodiment of the invention provides a target detection positioning method, a device, equipment and a storage medium, wherein the method comprises the following steps: acquiring point cloud data captured by a laser radar on a current tower crane area; clustering the point cloud data to obtain target point cloud clusters corresponding to each target class; and obtaining bounding boxes corresponding to the targets according to the target point cloud clusters to determine the position information of the targets in the current tower crane region. The prior art is aimed only at the automatic driving scene and does not support the tower crane scene, and often uses a discriminant mode, namely detecting targets through preset frames, which makes target detection positioning inaccurate. The technical scheme instead captures point cloud data of the current tower crane area; because the point cloud data comprise three-dimensional coordinates, the target positioning information determined on the basis of the point cloud data can contain height information. Whereas the prior art first performs feature extraction on the three-dimensional point cloud, projects it into a two-dimensional image and perceives the target in the two-dimensional image, which makes detection and positioning inaccurate, the technical scheme determines three-dimensional coordinates directly from the point cloud data; it is therefore suitable for the tower crane scene, in which the target height information must be known, and can clearly express the scene of the tower crane area, so that the accuracy of the height information is improved.
In addition, compared with the discriminant mode in the prior art of detecting targets with frames preset in advance, the technical scheme adopts a generative mode of determining the bounding box corresponding to each target based on the point cloud clusters, which improves the accuracy of target detection.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a target detection positioning method according to a first embodiment of the present invention;
fig. 2 is a flow chart of a target detection positioning method according to a second embodiment of the present invention;
Fig. 2a is a flowchart illustrating an execution of a target detection positioning method in an application scenario according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a target detecting and positioning device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "original," "target," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Considering that the prior art is generally used for automatic driving services, the application scene is basically long and narrow road-surface data, the adopted algorithms are largely independent of height, and the evaluation index is basically the intersection-over-union computed on two-dimensional bounding boxes projected from a bird's-eye view. At model development time, features are first extracted from the point cloud data and projected into a two-dimensional image, and the target is perceived in the two-dimensional image. In addition, a discriminant mode is currently adopted to detect objects, namely, a plurality of three-dimensional frames are pre-established in space, and a network then identifies whether objects exist in those three-dimensional frames. This approach can lead to inaccurate target detection positioning. Therefore, a target detection positioning method is needed to solve the above-mentioned problems.
Example 1
Fig. 1 is a schematic flow chart of a target detection positioning method according to an embodiment of the present invention. The method is applicable to performing target detection and positioning in a tower crane scenario; it may be performed by a target detection positioning device, which may be implemented in the form of hardware and/or software and is generally integrated in an electronic device.
As shown in fig. 1, the method for detecting and positioning a target provided in the first embodiment may specifically include the following steps:
S110, acquiring point cloud data captured by a laser radar on a current tower crane area.
The current tower crane area can be understood as the area in which targets are to be identified and positioned. For the tower crane scene, when the tower crane is operating, the position information of each target needs to be known, as well as its height, contour information and the like, so as to avoid affecting the normal operation of the tower crane. Point cloud data can be understood as a massive set of points sampled from target surfaces; point cloud data can reflect the real situation of an object's surface with high precision. The point cloud data can be obtained by real-time measurement with the laser radar, and contain rich information such as three-dimensional coordinates, colors, classification values, intensity values and time. In this embodiment, the installation position of the lidar and the type of the lidar are not particularly limited, so long as point cloud capture of the entire current tower crane area can be realized. Specifically, point cloud data captured by the laser radar on the current tower crane area are obtained. The real scene can be restored through the point cloud data, and high-precision point cloud data can reconstruct the real world, so that the scene of the current tower crane area can be clearly depicted.
It should be noted that point cloud data differ from image data: image data can present a good picture within tens of milliseconds, whereas point cloud data need to be accumulated over a longer time to achieve a good presentation effect. The integration time can therefore be set as required; as an example, the integration time is set to two seconds.
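The accumulation over the integration window can be sketched as follows; `accumulate_frames` is a hypothetical helper name, and the frame format (one (N, 3) xyz array per lidar sweep) is an assumption, not something the patent specifies:

```python
import numpy as np

def accumulate_frames(frames):
    """Stack successive lidar frames captured during the integration
    window (the text uses about two seconds) into one denser cloud.
    Each frame is an (N_i, 3) array of xyz points."""
    return np.vstack(frames)
```

For a 10 Hz lidar and a two-second window, roughly 20 frames would be stacked before the downstream clustering step.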
And S120, clustering the point cloud data to obtain target point cloud clusters corresponding to each target category.
In this embodiment, target detection and positioning are performed in a generative manner, unlike the prior art, which uses a discriminative manner. In this step, after a large amount of point cloud data is acquired, processing proceeds from the perspective of semantic segmentation, and each point is assigned a category. Semantic segmentation is a fundamental task in computer vision: the visual input is divided into different semantically interpretable categories, and the task here is to assign each point to a category. Illustratively, semantic segmentation determines whether a point belongs to a building, a vehicle or another object. In this embodiment, a semantic segmentation model may be trained in advance, and the point cloud data are input into the model so that each point receives a category. The semantic segmentation model can adopt a KPFCNN model, a network with state-of-the-art accuracy on semantic segmentation tasks, which can ensure the output precision. Although the output accuracy of the KPFCNN model is already high, it may be further improved by filtering the output result.
In this embodiment, after the category corresponding to each point is obtained, the points may be clustered according to their categories to form point cloud clusters. Points corresponding to the same category are clustered according to a set clustering algorithm into point cloud clusters, and a unique identification code is assigned to each point cloud cluster. Clustering partitions a data set into different classes or clusters according to some specific criterion, such that the similarity of data objects within the same cluster is as large as possible while the difference between data objects in different clusters is as large as possible; that is, after clustering, data of the same class are gathered together as much as possible and data of different classes are separated as much as possible. By way of example, the set clustering algorithm may be the density-based clustering algorithm DBSCAN (Density-Based Spatial Clustering of Applications with Noise). Unlike partitioning and hierarchical clustering methods, DBSCAN defines a cluster as the largest set of density-connected points, can partition areas of sufficiently high density into clusters, and can find clusters of arbitrary shape.
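The per-category DBSCAN step can be sketched as follows. The patent names DBSCAN but no implementation; using scikit-learn, the helper name `cluster_by_category` and the `eps`/`min_samples` defaults are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_category(points, labels, eps=0.5, min_samples=5):
    """Cluster a point cloud per semantic category with DBSCAN.

    points: (N, 3) array of xyz coordinates.
    labels: (N,) array of per-point category ids from semantic segmentation.
    Returns a dict mapping (category, cluster_id) -> point indices;
    DBSCAN marks outliers with cluster id -1 and they are dropped.
    """
    clusters = {}
    for cat in np.unique(labels):
        idx = np.flatnonzero(labels == cat)
        cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[idx])
        for cid in np.unique(cluster_ids):
            if cid == -1:  # noise points are discarded
                continue
            clusters[(int(cat), int(cid))] = idx[cluster_ids == cid]
    return clusters
```

Each key of the returned dict plays the role of the unique identification code assigned to a point cloud cluster in the text.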
In this embodiment, after each point cloud cluster is obtained, the point cloud clusters are respectively matched against the corresponding template point clouds to further verify their accuracy. The template point cloud can be understood as standard point cloud data. The matching algorithm may be set according to the practical situation, for example the iterative closest point (ICP, Iterative Closest Point) algorithm. Specifically, the point cloud data in each point cloud cluster are matched with the reference point cloud data to obtain a matching score for each point. If the matching score is greater than the set threshold, the point may be retained in the point cloud cluster; if the matching score is less than or equal to the set threshold, the point may be removed from the cluster. By updating each point cloud cluster through this matching, the most accurate point cloud cluster corresponding to each class can be determined. In this embodiment, the most accurate point cloud cluster obtained by this processing is recorded as the target point cloud cluster.
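As a minimal sketch of the score-and-filter step, the following scores each cluster point by its nearest-neighbor distance to the template. A full ICP pass would first align the cluster to the template; this sketch assumes they are already roughly registered, and the function name, the exponential score and the `scale` parameter are all assumptions:

```python
import numpy as np

def filter_by_template(cluster_pts, template_pts, threshold=0.5, scale=1.0):
    """Keep only cluster points whose match score against the template
    exceeds the threshold. Score = exp(-d/scale), where d is the
    distance to the nearest template point (1.0 for a perfect match,
    approaching 0 for a point far from the template)."""
    # pairwise distances: (N_cluster, N_template)
    d = np.linalg.norm(cluster_pts[:, None, :] - template_pts[None, :, :], axis=2)
    scores = np.exp(-d.min(axis=1) / scale)
    return cluster_pts[scores > threshold]
```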
S130, according to the cloud clusters of each target point, acquiring bounding boxes corresponding to each target to determine the position information of the target in the current tower crane area.
In the prior art, a discriminant mode is adopted to detect targets: a plurality of three-dimensional frames are pre-established in space, and a network then identifies whether objects exist in those frames, which can cause inaccurate target detection and positioning. In this embodiment, in order to improve the accuracy of target detection positioning, a generative mode is adopted instead, namely, the bounding box corresponding to each target is actively generated from its point cloud cluster.
It should be appreciated that the current tower crane area may include one or more targets; no particular limitation is made here. In this step, the position information of each target in the current tower crane area needs to be determined; once it is determined, collisions with the target and the like can be avoided during operation. The set feature extraction method is principal component analysis (Principal Components Analysis, PCA). In this embodiment, in order to determine the position information of a target in the current tower crane area, the point cloud data contained in its target point cloud cluster may be analyzed to obtain the minimum bounding box of the target and the vertex information of that bounding box; based on these, the position of the target in the current tower crane area can be accurately determined. A bounding box, also called a minimum bounding volume, is obtained by an algorithm for solving the optimal bounding space of a set of discrete points; the basic idea is to replace a complex geometric object approximately with a slightly larger geometric body of simpler properties (the bounding box). The minimum oriented bounding box (Oriented Bounding Box, OBB) is a common bounding box type: it is the smallest cuboid that contains the object and is arbitrarily oriented with respect to the coordinate axes. The OBB depends on the geometry of the object itself, and the box is not necessarily perpendicular to the coordinate axes; its biggest feature is this arbitrary orientation, which makes it possible to enclose the object as closely as possible according to its shape characteristics. In this embodiment, the minimum bounding box adopts the OBB type.
After determining each target point cloud cluster, an OBB bounding box may be derived based on PCA analysis.
Specifically, the determination process of the OBB bounding box can be expressed as follows: the three principal directions of the point cloud cluster are obtained by the PCA analysis method; the covariance is calculated to obtain a covariance matrix, and the eigenvalues and eigenvectors of the covariance matrix are solved, the eigenvectors being the principal directions. Using the obtained principal directions and centroid, the input point cloud is transformed to the origin so that the principal directions coincide with the coordinate axes, and the bounding box of the point cloud transformed to the origin is established. The principal directions and bounding box of the input point cloud are then obtained by applying the inverse of the transformation that moved the input point cloud to the origin.
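The PCA step above (covariance matrix, eigenvectors as principal directions, transform to the origin) can be sketched in numpy as follows; the function names are assumptions:

```python
import numpy as np

def pca_frame(points):
    """Return (R, centroid): the columns of R are the three principal
    directions (eigenvectors of the covariance matrix), sorted by
    decreasing eigenvalue."""
    centroid = points.mean(axis=0)
    cov = np.cov(points - centroid, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return eigvecs[:, order], centroid

def to_canonical(points, R, centroid):
    """Transform the cloud to the origin with its principal directions
    aligned to the coordinate axes."""
    return (points - centroid) @ R
```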
The minimum bounding box vertices are generally computed as follows: after the input point cloud is transformed to the origin, the minimum and maximum coordinates of the transformed point cloud on each axis are obtained, and the eight vertices can be expressed as: (max.x, max.y, max.z), (max.x, max.y, min.z), (max.x, min.y, max.z), (max.x, min.y, min.z), (min.x, max.y, max.z), (min.x, max.y, min.z), (min.x, min.y, max.z), (min.x, min.y, min.z), where max.x denotes the maximum value of the transformed point cloud in the x-axis direction, min.x the minimum value in the x-axis direction, max.y the maximum value in the y-axis direction, min.y the minimum value in the y-axis direction, max.z the maximum value in the z-axis direction, and min.z the minimum value in the z-axis direction. The range determined by these coordinate values is the bounding box of the transformed point cloud, and these are also the vertex coordinates of the bounding box of the original input point cloud after the transformation. The obtained bounding box coordinates are then inversely transformed back into the coordinate system of the input point cloud, yielding the bounding box vertex coordinates of the original input point cloud.
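The vertex computation and inverse transformation above can be sketched as follows, taking the principal directions R and the centroid from the PCA step as inputs; the function name is an assumption:

```python
import numpy as np

def obb_vertices(points, R, centroid):
    """Eight OBB vertices of `points` in the original coordinate frame.

    The cloud is first moved into the canonical (axis-aligned) frame,
    the per-axis min/max are read off, and the resulting box corners
    are transformed back with the inverse transformation."""
    canon = (points - centroid) @ R
    mn, mx = canon.min(axis=0), canon.max(axis=0)
    # all eight combinations of per-axis min/max
    corners = np.array([[x, y, z]
                        for x in (mn[0], mx[0])
                        for y in (mn[1], mx[1])
                        for z in (mn[2], mx[2])])
    return corners @ R.T + centroid   # inverse of the canonical transform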
After the OBB bounding box and the vertex coordinates of the OBB bounding box corresponding to the cloud cluster of each target point are determined, the position information of each target in the current tower crane area is determined.
The embodiment of the invention provides a target detection positioning method, which comprises the following steps: firstly, acquiring point cloud data captured by a laser radar on a current tower crane area; then clustering is carried out on the point cloud data to obtain target point cloud clusters corresponding to each target category; and finally, according to the cloud clusters of each target point, acquiring bounding boxes corresponding to each target to determine the position information of the target in the current tower crane area. Compared with the prior art, the method is characterized in that the three-dimensional point cloud is firstly subjected to characteristic extraction and projected into a two-dimensional image, the target is perceived in the two-dimensional image to cause inaccurate detection and positioning, the three-dimensional coordinate is determined by adopting the point cloud data in the technical scheme, the method can be suitable for the condition that the target height information is required to be known in the tower crane scene, and the scene of the tower crane region can be clearly expressed, so that the accuracy of the height information is improved. In addition, compared with the prior art that a discriminant mode of setting a preset frame in advance is adopted for target detection, the method and the device for detecting targets in the technical scheme adopt a generation mode of determining the corresponding bounding box of the targets based on the point cloud clusters, and accuracy of target detection is improved.
Example two
Fig. 2 is a flow chart of a target detection positioning method provided by a second embodiment of the present invention, where the embodiment is further optimized in the foregoing embodiment, and in the present embodiment, clustering processing is further performed on the point cloud data to obtain a target point cloud cluster corresponding to each target class, which is specifically: inputting the point cloud data into a pre-trained semantic segmentation model to obtain category information of each point cloud data; and clustering the point cloud data according to the category information to obtain target point cloud clusters corresponding to the target categories.
And, further, according to each target point Yun Cu, obtaining a bounding box corresponding to each target to determine the position information of the target in the current tower crane region is embodied as follows: extracting characteristics of point cloud data in each target point cloud cluster according to a principal component analysis method, and determining bounding boxes and bounding box vertex coordinates corresponding to each target; and determining the position information of the target in the current tower crane region according to the bounding box and the vertex coordinates of the bounding box.
As shown in fig. 2, the second embodiment provides a target detection positioning method, which specifically includes the following steps:
S210, acquiring point cloud data captured by a laser radar on a current tower crane area.
Specifically, the point cloud data capturing is carried out on the current tower crane area in real time through a laser radar.
S220, inputting the point cloud data into a pre-trained semantic segmentation model to obtain category information of the point cloud data.
Wherein, the pre-trained semantic segmentation model can adopt a KPFCNN model. The training process of the semantic segmentation model may be training based on a point cloud data sample set containing labeled classes. In the present embodiment, the training process of the semantic division model is not particularly limited. The method comprises the steps of firstly constructing an initial semantic segmentation model, inputting a point cloud data sample into the semantic segmentation model, obtaining a marked sample calculation loss value of the category of the point cloud data corresponding to the point cloud data, updating the semantic segmentation model according to the loss value, updating the semantic segmentation model through continuous iteration, and taking the updated semantic segmentation model as a trained semantic segmentation model when the iteration termination condition is met.
In this embodiment, the point cloud data is classified based on the trained semantic segmentation model. Specifically, the point cloud data are input into a pre-trained semantic segmentation model, and category information corresponding to each point cloud data is obtained. The class of the cloud data of a certain point is a building, the class of the cloud data of a certain point is a tool car, and the class of the cloud data of a certain point is a person.
And S230, clustering the cloud data of each point according to the category information to obtain target point cloud clusters corresponding to each target category.
In this step, as an example, the clustering algorithm adopts the DBSCAN algorithm. The classified point cloud data are clustered using DBSCAN, a density-based clustering algorithm with outlier denoising; after clustering, associated point cloud data are grouped into the same cluster, and point cloud data belonging to the same cluster are assigned the same identification code (ID). Specifically, point cloud data with the same category information are clustered to obtain a point cloud cluster corresponding to each target category; matching calculation is then performed on the point cloud data in each point cloud cluster, unmatched point cloud data are removed from the clusters, and the accurate target point cloud clusters are determined.
Further, clustering each point cloud data according to the category information to obtain the target point cloud cluster corresponding to each target category includes:
And a1, clustering point cloud data belonging to the same category information according to a density clustering algorithm to obtain initial point cloud clusters corresponding to each target category.
Specifically, the density clustering algorithm adopts the DBSCAN algorithm, and point cloud data with the same category information are clustered using DBSCAN. It can be understood that each category information corresponds to one point cloud cluster; in this embodiment, the point cloud cluster obtained through the clustering process is referred to as an initial point cloud cluster. Each initial point cloud cluster is assigned an ID; that is, point cloud data in the same initial point cloud cluster share the same ID.
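The density clustering of step a1 can be illustrated with a compact, simplified re-implementation of the DBSCAN core logic (growing clusters from core points); a production system would typically use an optimized library implementation instead, and the `eps`/`min_pts` values below are illustrative.

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=5):
    """Simplified DBSCAN: returns one cluster ID per point, -1 = noise.
    Clusters are grown from core points (>= min_pts neighbors within eps)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue
        labels[i] = cluster
        seeds = list(neighbors[i])               # start growing a new cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # j is also a core point
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (20, 3)),
                 rng.normal(5, 0.1, (20, 3)),
                 [[20.0, 20.0, 20.0]]])          # one stray outlier
labels = dbscan(pts, eps=1.0, min_pts=4)
print(labels)
```

The two dense blobs receive two distinct cluster IDs, and the isolated point is left as noise (ID -1), matching the outlier-denoising behavior described above.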
b1, respectively matching the point cloud data in each initial point cloud cluster with the reference point cloud data to obtain the matching score of each point cloud data.
Specifically, the point cloud data with the same ID are matched against a model respectively, and each point cloud cluster is screened so that each point cloud cluster finally retains the best-matched point cloud data. The reference point cloud data may be understood as standard template point cloud data acquired in advance. For each initial point cloud cluster, the point cloud data in the cluster are matched with the reference point cloud data. The matching algorithm adopts the ICP (Iterative Closest Point) algorithm, a registration method between two point sets: given two point sets A and B, the algorithm iteratively estimates the transformation that aligns them and evaluates how well A and B overlap. Specifically, for each initial point cloud cluster, the point cloud data in the cluster are matched with the reference point cloud data corresponding to that cluster to obtain a matching score for each point cloud data.
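A full implementation would run ICP registration between each cluster and its reference template; as a simplified, hypothetical stand-in, the sketch below assumes the cluster is already roughly aligned with the template and scores each point by its distance to the nearest reference point, mapped into (0, 1].

```python
import numpy as np

def matching_scores(cluster_pts, reference_pts, scale=1.0):
    """Per-point matching score against a reference template.
    Each point's score is exp(-d/scale), where d is its distance to the
    nearest reference point: 1.0 means a perfect match, near 0 a stray point."""
    d = np.linalg.norm(cluster_pts[:, None, :] - reference_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return np.exp(-nearest / scale)

# Illustrative template and cluster (not real calibration data).
reference = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
cluster = np.array([[0.05, 0.0, 0.0],     # close to the template
                    [1.0, 1.0, 0.02],     # close to the template
                    [4.0, 4.0, 4.0]])     # far away, low score
scores = matching_scores(cluster, reference)
print(scores.round(3))
```

Points lying on the template score near 1.0, while the stray point scores near 0 and will be rejected by the threshold step that follows.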
And c1, updating the initial point cloud cluster according to the matching score, and determining the target point cloud cluster corresponding to each target category.
In the present embodiment, a score threshold is set in advance. The matching score of the point cloud data may be greater than, equal to, or less than the score threshold. When the matching score of the point cloud data is less than or equal to the score threshold, the point cloud data do not meet the characteristics of the point cloud cluster. Accordingly, the initial point cloud cluster can be updated: point cloud data meeting the condition are retained, and point cloud data not meeting the condition are removed from the cluster; the updated initial point cloud cluster is then determined as the target point cloud cluster. It should be noted that each of the categories determined above corresponds to its own target point cloud cluster.
Further, updating the initial point cloud cluster according to the matching score, and determining the target point cloud cluster corresponding to each category, including:
and c11, if the matching score is larger than the set score threshold, keeping the point cloud data in the initial point cloud cluster corresponding to the target class.
Specifically, if the matching score of the point cloud data is greater than the set score threshold, it indicates that the point cloud data meet the characteristics of the point cloud cluster, and the point cloud data are retained in the initial point cloud cluster.
And c12, if the matching score is smaller than or equal to the set score threshold, eliminating the point cloud data from the initial point cloud cluster corresponding to the target class.
Specifically, if the matching score of the point cloud data is smaller than or equal to the set score threshold, it indicates that the point cloud data do not meet the characteristics of the point cloud cluster, and the point cloud data are removed from the initial point cloud cluster.
And c13, determining the updated initial point cloud clusters as target point cloud clusters corresponding to the target categories.
Specifically, the updated initial point cloud clusters are determined as the target point cloud clusters; each updated initial point cloud cluster yields a corresponding target point cloud cluster.
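Steps c11 to c13 amount to a per-point threshold filter on the matching scores; a minimal sketch (the threshold value is illustrative):

```python
import numpy as np

def update_cluster(cluster_pts, scores, score_threshold=0.5):
    """Keep points whose matching score exceeds the threshold (step c11)
    and remove the rest (step c12); the filtered result is the target
    point cloud cluster (step c13)."""
    keep = scores > score_threshold
    return cluster_pts[keep]

pts = np.array([[0.0, 0, 0], [1, 1, 0], [9, 9, 9]])
scores = np.array([0.95, 0.98, 0.003])   # illustrative per-point scores
target_cluster = update_cluster(pts, scores, score_threshold=0.5)
print(len(target_cluster))  # 2 points retained
```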
S240, extracting characteristics of point cloud data in each target point cloud cluster according to a principal component analysis method, and determining bounding boxes and bounding box vertex coordinates corresponding to each target.
The OBB (oriented bounding box) is generated by performing PCA analysis on the points of the object surface to obtain the feature vectors, i.e., the principal axes of the OBB. In this embodiment, feature extraction is performed on the point cloud data in each target point cloud cluster according to the principal component analysis method: the feature extraction adopts a covariance calculation, and the eigenvectors of the covariance matrix are used as the main directions corresponding to each target point cloud cluster. The centroid corresponding to each target point cloud cluster is obtained from the point cloud data in the cluster. It can be understood that the eigenvectors of a target point cloud cluster form a main direction coordinate system whose origin is the centroid of the point cloud. Note that the boundary values calculated with a standard min/max function are boundary values in the reference coordinate system, not values in the main direction coordinate system of the point cloud. Therefore, the point cloud data need to be converted from the main direction coordinate system into the reference coordinate system. After the conversion, the three main directions of the point cloud coincide with the coordinate axes of the reference coordinate system and the centroid coincides with the origin of the reference coordinate system; in this state, the boundary values along the three main directions of the point cloud are accurate.
The bounding box and bounding box vertex coordinates are obtained in the reference coordinate system and then inversely transformed into the main direction coordinate system, yielding the bounding box and bounding box vertex coordinates corresponding to each target.
Further, feature extraction is performed on point cloud data in each target point cloud cluster according to a principal component analysis method, and bounding boxes and bounding box vertex coordinates corresponding to each target are determined, including:
And a2, extracting features from the point cloud data in each target point cloud cluster according to a principal component analysis method, and determining the main direction and centroid corresponding to each target point cloud cluster.
In this step, covariance calculation is performed on the point cloud data in each target point cloud cluster to determine a covariance matrix; the eigenvectors of the covariance matrix are used as the main directions corresponding to the target point cloud cluster. The centroid is determined from the point cloud data in each target point cloud cluster.
In this embodiment, the coordinate system corresponding to the target point cloud cluster is denoted as a main direction coordinate system, and the centroid is used as the origin of the main direction coordinate system, and the main direction is used as a coordinate axis.
Further, feature extraction is performed on point cloud data in each target point cloud cluster according to a principal component analysis method, and a principal direction and a centroid corresponding to each target point cloud cluster are determined, including:
a21, calculating point cloud data in each target point cloud cluster, and determining a centroid corresponding to each target point cloud cluster.
In the present embodiment, the manner of determining the centroid is not particularly limited; for example, it may be calculated using a correlation function in the Point Cloud Library (PCL), or computed directly as the mean of all point cloud coordinates.
and a22, performing covariance calculation on the point cloud data in each target point cloud cluster, and determining a covariance matrix corresponding to each target point cloud cluster.
Specifically, for each target point cloud cluster, covariance calculation is performed on the point cloud data contained in the target point cloud cluster, and a covariance matrix is obtained.
a23, determining the eigenvectors of each covariance matrix as the main directions corresponding to each target point cloud cluster.
Specifically, the eigenvalue and eigenvector of the covariance matrix are obtained, and the eigenvector is used as the main direction corresponding to the target point cloud cluster.
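Steps a21 to a23 can be sketched with NumPy: the centroid is the mean of the points, the covariance matrix is computed from the centered points, and its eigenvectors (sorted by eigenvalue) give the main directions. The synthetic elongated cluster below is illustrative.

```python
import numpy as np

def principal_directions(cluster_pts):
    """Return (centroid, axes): centroid is the mean of the points (a21);
    the covariance matrix of the centered points is computed (a22); its
    eigenvectors, as columns sorted by descending eigenvalue, are the
    main directions (a23)."""
    centroid = cluster_pts.mean(axis=0)
    centered = cluster_pts - centroid
    cov = centered.T @ centered / len(cluster_pts)
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric matrix, ascending order
    order = np.argsort(eigvals)[::-1]
    return centroid, eigvecs[:, order]

# Elongated synthetic cluster: the first main direction should align with x.
rng = np.random.default_rng(2)
pts = rng.normal(0, 1, (500, 3)) * np.array([10.0, 1.0, 0.1]) + np.array([5.0, 2.0, 0.0])
centroid, axes = principal_directions(pts)
print(centroid.round(1), np.abs(axes[:, 0]).round(2))
```

Since the cluster is stretched along x, the dominant eigenvector is (up to sign) close to the x axis, and the centroid is close to the point where the cluster was placed.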
And b2, converting the point cloud data into a reference coordinate system according to each main direction and centroid, and determining an initial bounding box and initial vertex coordinates in the reference coordinate system.
Specifically, converting the point cloud data into the reference coordinate system only changes the position of the object without changing its shape or size; that is, the distances between points and the angles between lines remain constant. After the point cloud data in the target point cloud cluster are converted into the reference coordinate system, the maximum and minimum coordinates of the converted point cloud data on the x, y and z axes are obtained. The eight vertices of the bounding box are then (max.x, max.y, max.z), (max.x, max.y, min.z), (max.x, min.y, max.z), (max.x, min.y, min.z), (min.x, max.y, max.z), (min.x, max.y, min.z), (min.x, min.y, max.z), (min.x, min.y, min.z), where max.x represents the maximum value of the converted point cloud data in the x-axis direction, min.x the minimum value in the x-axis direction, max.y the maximum value in the y-axis direction, min.y the minimum value in the y-axis direction, max.z the maximum value in the z-axis direction, and min.z the minimum value in the z-axis direction. The range determined by the maximum and minimum values on each coordinate axis is the bounding box of the transformed point cloud, and these vertices are also the transformed coordinates of the bounding box vertices of the original input point cloud.
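A minimal sketch of step b2, assuming the eigenvector matrix `axes` from the previous step is orthonormal: the points are expressed in the reference frame, per-axis minima and maxima are taken, and the eight bounding-box vertices are enumerated. The identity rotation and the sample points in the demo are illustrative only.

```python
import numpy as np

def aabb_vertices_in_reference_frame(cluster_pts, centroid, axes):
    """Rotate/translate the points into the reference frame, then take the
    per-axis min/max of the converted points and enumerate all eight
    combinations as the bounding-box vertices."""
    local = (cluster_pts - centroid) @ axes     # into the reference frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    corners = np.array([[x, y, z]
                        for x in (mins[0], maxs[0])
                        for y in (mins[1], maxs[1])
                        for z in (mins[2], maxs[2])])
    return local, corners

centroid = np.array([1.0, 2.0, 3.0])
axes = np.eye(3)                                # identity rotation for the sketch
pts = np.array([[0.0, 0, 0], [2, 4, 6], [1, 2, 3]])
local, corners = aabb_vertices_in_reference_frame(pts, centroid, axes)
print(corners.shape)  # (8, 3)
```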
And c2, carrying out inverse transformation on the initial bounding box and the initial vertex coordinates, and determining bounding box and bounding box vertex coordinates in a main direction coordinate system as bounding box and bounding box vertex coordinates corresponding to the target.
Specifically, the obtained bounding box coordinates are inversely converted back into a main direction coordinate system, and then the bounding box and bounding box vertex coordinates of the original input point cloud are obtained. And taking the bounding box and the bounding box vertex coordinates in the main direction coordinate system as the bounding box and the bounding box vertex coordinates corresponding to the target.
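Step c2 is the inverse of the conversion in step b2; since the eigenvector matrix is orthonormal, its inverse is its transpose. A minimal sketch with illustrative values:

```python
import numpy as np

def to_original_frame(corners_local, centroid, axes):
    """Inverse of local = (p - centroid) @ axes: for an orthonormal axes
    matrix, p = local @ axes.T + centroid."""
    return corners_local @ axes.T + centroid

centroid = np.array([1.0, 2.0, 3.0])
axes = np.eye(3)                                 # identity rotation for the sketch
corners_local = np.array([[-1.0, -2, -3], [1, 2, 3]])
corners_world = to_original_frame(corners_local, centroid, axes)
print(corners_world)
```

The round trip recovers the original extreme points, i.e. the bounding box vertices of the input point cloud in its own coordinate system.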
S250, determining the position information of the target in the current tower crane region according to the bounding box and the vertex coordinates of the bounding box.
Specifically, after the bounding box corresponding to each target is determined, the position information of the targets contained in the current tower crane area may be determined according to the bounding box and bounding box vertex coordinates of each target.
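The text does not fix how position information is derived from the bounding box; one simple, hypothetical choice is to report the center of the box, i.e. the mean of its eight vertex coordinates:

```python
import numpy as np

# Hypothetical position readout for step S250: the target's position in the
# tower crane area is taken as the bounding-box center (mean of 8 vertices).
vertices = np.array([[x, y, z]
                     for x in (0.0, 2.0)
                     for y in (0.0, 4.0)
                     for z in (0.0, 6.0)])
position = vertices.mean(axis=0)
print(position)  # [1. 2. 3.]
```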
This embodiment details the step of determining the target point cloud cluster corresponding to each target category and the step of determining target position information from each target point cloud cluster. In a generative manner, the captured point cloud data are semantically segmented and classified, and the principal component analysis method is then applied to the point cloud data of each category to determine the bounding box and bounding box vertices of each target, thereby determining the position information of the targets contained in the current tower crane area. Unlike the discriminant approach in the prior art, which judges whether a target exists within a preset frame and is therefore prone to inaccuracy, the generative approach adopted in this technical solution takes height information into account, is suitable for the tower crane scene, and can improve the accuracy of target detection.
In order to more clearly describe the target detection and positioning method provided by the embodiment of the invention, the target detection and positioning of the tower crane area in a certain practical application scene is taken as an example for explanation. Fig. 2a is a flowchart illustrating execution of a target detection positioning method in an application scenario according to a second embodiment of the present invention, where, as shown in fig. 2a, the execution steps of the target detection positioning method specifically include:
s1, acquiring point cloud data captured by a laser radar on a current tower crane area.
S2, inputting the point cloud data into a pre-trained semantic segmentation model to obtain category information of the point cloud data.
And S3, clustering the point cloud data belonging to the same category information according to a density clustering algorithm to obtain initial point cloud clusters corresponding to each target category.
And S4, respectively matching the point cloud data in each initial point cloud cluster with the reference point cloud data to obtain the matching score of each point cloud data.
And S5, if the matching score is larger than the set score threshold, keeping the point cloud data in the initial point cloud cluster corresponding to the target class.
And S6, if the matching score is smaller than or equal to the set score threshold, eliminating the point cloud data from the initial point cloud cluster corresponding to the target class.
And S7, determining the updated initial point cloud clusters as target point cloud clusters corresponding to the target categories.
And S8, calculating point cloud data in each target point cloud cluster, and determining a centroid corresponding to each target point cloud cluster.
S9, performing covariance calculation on the point cloud data in each target point cloud cluster, and determining a covariance matrix corresponding to each target point cloud cluster.
And S10, determining the eigenvectors of each covariance matrix as the main directions corresponding to the cloud clusters of each target point.
And S11, converting the point cloud data into a reference coordinate system according to each main direction and the mass center, and determining an initial bounding box and an initial vertex coordinate in the reference coordinate system.
S12, carrying out inverse transformation on the initial bounding box and the initial vertex coordinates, and determining bounding box and bounding box vertex coordinates in a main direction coordinate system as bounding box and bounding box vertex coordinates corresponding to the target.
S13, determining the position information of the target in the current tower crane region according to the bounding box and the vertex coordinates of the bounding box.
Example III
Fig. 3 is a schematic structural diagram of a target detection positioning device according to a third embodiment of the present invention, which is applicable to the situation of performing target detection positioning in a tower crane scenario, where the device may be implemented in a hardware and/or software manner and is generally integrated in an electronic device. As shown in fig. 3, the apparatus includes: a data acquisition module 31, a point cloud cluster determination module 32, and a location information determination module 33, wherein,
The data acquisition module 31 is used for acquiring point cloud data captured by the laser radar on the current tower crane region;
the point cloud cluster determining module 32 is configured to perform clustering processing on the point cloud data to obtain a target point cloud cluster corresponding to each target class;
the location information determining module 33 is configured to obtain the bounding box corresponding to each target according to the target point cloud clusters, so as to determine the position information of the targets in the current tower crane area.
The embodiment of the invention provides a target detection positioning device, which comprises: the data acquisition module 31, used for acquiring point cloud data captured by the laser radar on the current tower crane area; the point cloud cluster determining module 32, configured to perform clustering processing on the point cloud data to obtain a target point cloud cluster corresponding to each target category; and the location information determining module 33, configured to obtain the bounding box corresponding to each target according to the target point cloud clusters, so as to determine the position information of the targets in the current tower crane area. Compared with the prior art, in which features are first extracted from the three-dimensional point cloud and projected into a two-dimensional image and the target is perceived in the two-dimensional image, causing inaccurate detection and positioning, this technical solution determines three-dimensional coordinates directly from the point cloud data; it is therefore applicable to tower crane scenarios where target height information must be known and can clearly express the scene of the tower crane area, thereby improving the accuracy of the height information. In addition, compared with the discriminant approach in the prior art, in which a preset frame is set in advance for target detection, this technical solution adopts a generative approach that determines the bounding box corresponding to each target based on point cloud clusters, which improves the accuracy of target detection.
Further, the point cloud cluster determining module 32 specifically includes:
the category determining unit is used for inputting the point cloud data into a pre-trained semantic segmentation model to obtain category information of the point cloud data;
and the point cloud cluster determining unit is used for carrying out clustering processing on the point cloud data according to the category information to obtain target point cloud clusters corresponding to each target category.
Further, the point cloud cluster determining unit is specifically configured to:
clustering the point cloud data belonging to the same category information according to a density clustering algorithm to obtain initial point cloud clusters corresponding to each target category;
respectively matching point cloud data in each initial point cloud cluster with reference point cloud data to obtain matching scores of the point cloud data;
and updating the initial point cloud cluster according to the matching score, and determining the target point cloud cluster corresponding to each target category.
Further, the point cloud cluster determining unit is configured to update the initial point cloud cluster according to the matching score, and the step of determining the target point cloud cluster corresponding to each category includes:
if the matching score is larger than the set score threshold, the point cloud data is kept in the initial point cloud cluster corresponding to the target class;
if the matching score is smaller than or equal to the set score threshold, eliminating the point cloud data from the initial point cloud cluster corresponding to the target class;
And determining the updated initial point cloud clusters as target point cloud clusters corresponding to the target categories.
Further, the location information determining module 33 includes:
the bounding box determining unit is used for extracting characteristics of point cloud data in each target point cloud cluster according to a principal component analysis method and determining bounding boxes and bounding box vertex coordinates corresponding to each target;
and the position determining unit is used for determining the position information of the target in the current tower crane region according to the bounding box and the vertex coordinates of the bounding box.
Further, the bounding box determining unit is specifically configured to:
extracting characteristics of point cloud data in each target point cloud cluster according to a principal component analysis method, and determining a principal direction and a mass center corresponding to each target point cloud cluster;
according to each main direction and the mass center, converting the point cloud data into a reference coordinate system, and determining an initial bounding box and an initial vertex coordinate in the reference coordinate system;
and carrying out inverse transformation on the initial bounding box and the initial vertex coordinates, and determining bounding box and bounding box vertex coordinates in a main direction coordinate system as bounding box and bounding box vertex coordinates corresponding to the target.
Further, the bounding box determining unit is configured to perform feature extraction on point cloud data in each target point cloud cluster according to a principal component analysis method, and determine a principal direction and a centroid corresponding to each target point cloud cluster, including:
Calculating point cloud data in each target point cloud cluster, and determining a centroid corresponding to each target point cloud cluster;
performing covariance calculation on point cloud data in each target point cloud cluster, and determining a covariance matrix corresponding to each target point cloud cluster;
and determining the eigenvectors of each covariance matrix as the main directions corresponding to the cloud clusters of each target point.
The target detection and positioning device provided by the embodiment of the invention can execute the target detection and positioning method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 40 includes at least one processor 41, and a memory communicatively connected to the at least one processor 41, such as a Read Only Memory (ROM) 42, a Random Access Memory (RAM) 43, etc., in which the memory stores a computer program executable by the at least one processor, and the processor 41 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 42 or the computer program loaded from the storage unit 48 into the Random Access Memory (RAM) 43. In the RAM 43, various programs and data required for the operation of the electronic device 40 may also be stored. The processor 41, the ROM 42 and the RAM 43 are connected to each other via a bus 44. An input/output (I/O) interface 45 is also connected to bus 44.
Various components in electronic device 40 are connected to I/O interface 45, including: an input unit 46 such as a keyboard, a mouse, etc.; an output unit 47 such as various types of displays, speakers, and the like; a storage unit 48 such as a magnetic disk, an optical disk, or the like; and a communication unit 49 such as a network card, modem, wireless communication transceiver, etc. The communication unit 49 allows the electronic device 40 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 41 may be various general and/or special purpose processing components with processing and computing capabilities. Some examples of processor 41 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 41 performs the various methods and processes described above, such as the object detection positioning method.
In some embodiments, the object detection localization method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 48. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 40 via the ROM 42 and/or the communication unit 49. When the computer program is loaded into the RAM 43 and executed by the processor 41, one or more steps of the object detection positioning method described above may be performed. Alternatively, in other embodiments, the processor 41 may be configured to perform the object detection positioning method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system, which overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; the present invention imposes no limitation in this regard.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A target detection positioning method, comprising:
acquiring point cloud data captured by a laser radar over a current tower crane area;
clustering the point cloud data to obtain target point cloud clusters corresponding to each target class;
and obtaining bounding boxes corresponding to each target according to each target point cloud cluster, so as to determine the position information of the targets in the current tower crane area.
2. The method of claim 1, wherein the clustering the point cloud data to obtain target point cloud clusters corresponding to each target class comprises:
inputting the point cloud data into a pre-trained semantic segmentation model to obtain category information for each point in the point cloud data;
and clustering the point cloud data according to the category information to obtain target point cloud clusters corresponding to the target categories.
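By way of illustration only (not part of the claims), the per-point grouping step of claim 2 might be sketched as follows; the category ids (hook, suspended load) and the shape of the segmentation model's output are assumptions, since the claim does not fix them:

```python
import numpy as np

def group_points_by_category(points, labels):
    """Group an (N, 3) point cloud by per-point semantic category labels
    (the output of the segmentation model), yielding one subset per
    category ready for the clustering step of claim 2."""
    return {int(cat): points[labels == cat] for cat in np.unique(labels)}

# Hypothetical example: 5 points, two assumed categories
# (0 = hook, 1 = suspended load) produced by the segmentation model.
pts = np.array([[0.0, 0.0, 10.0],
                [0.1, 0.0, 10.1],
                [5.0, 5.0, 1.0],
                [5.1, 5.0, 1.1],
                [5.0, 5.1, 1.0]])
labels = np.array([0, 0, 1, 1, 1])
groups = group_points_by_category(pts, labels)
print(groups[0].shape, groups[1].shape)  # (2, 3) (3, 3)
```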
3. The method according to claim 2, wherein the clustering the point cloud data according to the category information to obtain target point cloud clusters corresponding to each target category includes:
clustering the point cloud data belonging to the same category of information according to a density clustering algorithm to obtain initial point cloud clusters corresponding to each target category;
respectively matching point cloud data in each initial point cloud cluster with reference point cloud data to obtain matching scores of the point cloud data;
and updating the initial point cloud cluster according to the matching score, and determining the target point cloud cluster corresponding to each target category.
4. The method of claim 3, wherein the updating the initial point cloud clusters according to the matching scores and determining the target point cloud clusters corresponding to each target category comprises:
if the matching score is greater than a set score threshold, keeping the point cloud data in the initial point cloud cluster corresponding to the target category;
if the matching score is smaller than or equal to the set score threshold, removing the point cloud data from the initial point cloud cluster of the corresponding target category;
and determining the updated initial point cloud clusters as target point cloud clusters corresponding to the target categories.
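An illustrative sketch of the density clustering and matching-score filtering recited in claims 3 and 4: the region-growing routine below is a simplified stand-in for a density clustering algorithm such as DBSCAN, and the score function is a placeholder, since the claims leave the reference point cloud data and the scoring method open:

```python
import numpy as np

def density_cluster(points, eps, min_pts):
    """Greedy region growing over an (N, 3) point cloud: a simplified
    stand-in for a density clustering algorithm such as DBSCAN.
    Returns a list of index arrays, one per cluster; groups smaller
    than min_pts are discarded as noise."""
    visited = np.zeros(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        visited[seed] = True
        frontier, members = [seed], []
        while frontier:
            i = frontier.pop()
            members.append(i)
            near = np.where(~visited &
                            (np.linalg.norm(points - points[i], axis=1) <= eps))[0]
            visited[near] = True
            frontier.extend(near.tolist())
        if len(members) >= min_pts:
            clusters.append(np.array(members))
    return clusters

def filter_cluster(points, cluster_idx, score_fn, threshold):
    """Claim 4's update rule: keep a point in the initial cluster only
    if its matching score exceeds the threshold; remove it otherwise."""
    scores = np.array([score_fn(points[i]) for i in cluster_idx])
    return cluster_idx[scores > threshold]

# Two well-separated groups plus one isolated noise point.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0],
                [10.0, 10.0, 10.0], [10.1, 10.0, 10.0], [10.2, 10.0, 10.0],
                [5.0, 5.0, 5.0]])
clusters = density_cluster(pts, eps=0.5, min_pts=2)

# Placeholder score: a real system would match against reference
# point cloud data; here every point trivially scores 1.0.
kept = filter_cluster(pts, clusters[0], lambda p: 1.0, threshold=0.5)
print(len(clusters), len(kept))  # 2 3
```

The noise point at (5, 5, 5) has no neighbor within `eps`, so its singleton group falls below `min_pts` and is dropped, mirroring the role of the density parameter in DBSCAN-style methods.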
5. The method according to claim 1, wherein the obtaining, according to each target point cloud cluster, a bounding box corresponding to each target to determine the position information of the target in the current tower crane area comprises:
extracting characteristics of point cloud data in each target point cloud cluster according to a principal component analysis method, and determining bounding boxes and bounding box vertex coordinates corresponding to each target;
and determining the position information of the target in the current tower crane area according to the bounding box and the bounding box vertex coordinates.
6. The method according to claim 5, wherein the feature extraction of the point cloud data in each of the target point cloud clusters according to the principal component analysis method, and determining bounding boxes and bounding box vertex coordinates corresponding to each target, comprises:
extracting characteristics of the point cloud data in each target point cloud cluster according to a principal component analysis method, and determining a principal direction and a centroid corresponding to each target point cloud cluster;
converting the point cloud data into a reference coordinate system according to the principal directions and the centroids, and determining an initial bounding box and initial vertex coordinates in the reference coordinate system;
and inversely transforming the initial bounding box and the initial vertex coordinates into the principal direction coordinate system, and taking the resulting bounding box and bounding box vertex coordinates as the bounding box and bounding box vertex coordinates corresponding to the target.
7. The method of claim 6, wherein the feature extraction of the point cloud data in each of the target point cloud clusters according to the principal component analysis method, and determining the principal direction and the centroid corresponding to each of the target point cloud clusters, comprises:
averaging the point cloud data in each target point cloud cluster, and determining the centroid corresponding to each target point cloud cluster;
performing covariance calculation on the point cloud data in each target point cloud cluster, and determining a covariance matrix corresponding to each target point cloud cluster;
and determining the eigenvectors of the covariance matrixes as the main directions corresponding to the target point cloud clusters.
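The principal component analysis pipeline of claims 5-7 (centroid, covariance matrix, eigenvectors as principal directions, transform to a reference frame, axis-aligned box, inverse transform of the vertices) can be sketched as follows; this is an illustrative implementation of the general technique, not the patented method itself:

```python
import numpy as np

def pca_obb(points):
    """Oriented bounding box of an (N, 3) cluster via principal component
    analysis, following the steps of claims 5-7:
      1) centroid = mean of the cluster (claim 7);
      2) covariance matrix of the centered points (claim 7);
      3) eigenvectors of the covariance = principal directions (claim 7);
      4) project into the principal frame, take the axis-aligned min/max
         box there, then inversely transform its 8 corners back (claim 6).
    Returns the (8, 3) bounding box vertex coordinates in world space."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered.T)              # 3x3 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)      # columns = principal directions
    local = centered @ eigvecs            # coordinates in principal frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    corners_local = np.array([[x, y, z]
                              for x in (lo[0], hi[0])
                              for y in (lo[1], hi[1])
                              for z in (lo[2], hi[2])])
    return corners_local @ eigvecs.T + centroid   # inverse transform

# A 2.0 x 1.0 x 0.5 box rotated 45 degrees about z and shifted:
# the recovered vertices should tightly bound the rotated points.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
box = np.array([[x, y, z] for x in (0.0, 2.0)
                          for y in (0.0, 1.0)
                          for z in (0.0, 0.5)])
pts = box @ R.T + np.array([3.0, 4.0, 5.0])
corners = pca_obb(pts)
print(corners.shape)  # (8, 3)
```

Because the eigenvectors align the box with the cluster's directions of maximal variance, the resulting bounding box is much tighter for rotated objects than a world-axis-aligned box would be.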
8. An object detection positioning device, characterized by comprising:
the data acquisition module is used for acquiring point cloud data captured by the laser radar on the current tower crane area;
the point cloud cluster determining module is used for clustering the point cloud data to obtain target point cloud clusters corresponding to each target category;
and the position information determining module is used for obtaining bounding boxes corresponding to each target according to each target point cloud cluster, so as to determine the position information of the targets in the current tower crane area.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object detection positioning method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the object detection positioning method according to any one of claims 1-7.
CN202211635138.XA 2022-12-19 2022-12-19 Target detection positioning method, device, equipment and storage medium Pending CN116342899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211635138.XA CN116342899A (en) 2022-12-19 2022-12-19 Target detection positioning method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116342899A true CN116342899A (en) 2023-06-27

Family

ID=86891877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211635138.XA Pending CN116342899A (en) 2022-12-19 2022-12-19 Target detection positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116342899A (en)

Similar Documents

Publication Publication Date Title
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
US20220383535A1 (en) Object Tracking Method and Device, Electronic Device, and Computer-Readable Storage Medium
CN111665842B (en) Indoor SLAM mapping method and system based on semantic information fusion
CN109559330B (en) Visual tracking method and device for moving target, electronic equipment and storage medium
CN111402161B (en) Denoising method, device, equipment and storage medium for point cloud obstacle
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
WO2023030062A1 (en) Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
CN115272465A (en) Object positioning method, device, autonomous mobile device and storage medium
CN116342899A (en) Target detection positioning method, device, equipment and storage medium
CN112364751B (en) Obstacle state judgment method, device, equipment and storage medium
US20200242819A1 (en) Polyline drawing device
CN113688880A (en) Obstacle map creating method based on cloud computing
CN114694138B (en) Road surface detection method, device and equipment applied to intelligent driving
CN117392000B (en) Noise removing method and device, electronic equipment and storage medium
CN116012624B (en) Positioning method, positioning device, electronic equipment, medium and automatic driving equipment
CN113012281B (en) Determination method and device for human body model, electronic equipment and storage medium
CN113033270B (en) 3D object local surface description method and device adopting auxiliary axis and storage medium
CN117889855A (en) Mobile robot positioning method, mobile robot positioning device, mobile robot positioning equipment and storage medium
CN115774844A (en) Category determination method, device, equipment and storage medium
Xiao et al. BIR-AHC: Balanced Iterative Reducing and Agglomerative Hierarchical Clustering for Stair Detection
CN116824638A (en) Dynamic object feature point detection method and device, electronic equipment and storage medium
Ji et al. Adaptive Denoising-Enhanced LiDAR Odometry for Degeneration Resilience in Diverse Terrains
CN117474998A (en) Goods shelf position detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination