CN112700474A - Collision detection method, device and computer-readable storage medium

Info

Publication number: CN112700474A
Authority
CN
China
Prior art keywords: boundary, predicted, region, collision detection, covered
Prior art date
Legal status: Granted (the legal status is an assumption, not a legal conclusion)
Application number: CN202011633914.3A
Other languages: Chinese (zh)
Other versions: CN112700474B
Inventors: 吴博文, 朱林楠, 杨林, 黄健东, 陈凌之
Current assignee: Midea Group Co Ltd; Guangdong Midea White Goods Technology Innovation Center Co Ltd (the listed assignees may be inaccurate)
Application filed by Midea Group Co Ltd and Guangdong Midea White Goods Technology Innovation Center Co Ltd
Priority application: CN202011633914.3A; granted and published as CN112700474B
Legal status: Active

Classifications

    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F 18/2135: Pattern recognition; feature extraction based on approximation criteria, e.g. principal component analysis
    • G06F 18/2321: Pattern recognition; non-hierarchical clustering using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/2433: Pattern recognition; single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
    • G06T 1/0014: Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/30241: Trajectory
    • G06T 2210/12: Bounding box
    • G06T 2210/21: Collision detection, intersection


Abstract

The invention discloses a collision detection method, a collision detection device, and a computer-readable storage medium. The collision detection method comprises the following steps: determining a current boundary of a first object; determining a predicted boundary of the first object at the next moment using the current boundary and the running speed of the first object; and judging whether the predicted boundary covers at least a partial region of a second object, so as to judge whether the first object and the second object will collide at the next moment. In this way, the accuracy of collision detection can be improved.

Description

Collision detection method, device and computer-readable storage medium
Technical Field
The present invention relates to the field of automation technologies, and in particular, to a collision detection method, a collision detection device, and a computer-readable storage medium.
Background
With the rapid development of artificial intelligence, intelligent manufacturing has gradually become a focus of attention. For example, assembly robots can effectively replace traditional, complex assembly processes, and particularly in batch production they greatly reduce manufacturing costs, improve production efficiency, and accelerate automation and intelligentization in the manufacturing field. However, because objects tend to move relative to one another, collisions may occur between different objects, and in industrial fields with high precision requirements even a very small collision accident can bring the whole production line to a standstill. Therefore, when an industrial robot plans a path for an assembly process, it needs to avoid moving or static obstacles in real time, which involves the problem of collision detection. In existing collision detection, detection speed and accuracy cannot both be achieved, and improvement is urgently needed.
Disclosure of Invention
The invention mainly solves the technical problem of providing a collision detection method, a collision detection device, and a computer-readable storage medium that can improve the accuracy and speed of collision detection.
In order to solve the above technical problem, one technical solution adopted by the invention is to provide a collision detection method, the method comprising: determining a current boundary of a first object; determining a predicted boundary of the first object at the next moment using the current boundary and the running speed of the first object; and judging whether the predicted boundary covers at least a partial region of a second object, so as to judge whether the first object and the second object will collide at the next moment.
Wherein, in response to the predicted boundary covering a partial region of the second object, judging whether the first object and the second object will collide at the next moment comprises: determining a first boundary of the covered region of the second object; determining a second boundary of the covered region of the second object using the first boundary and a prediction coefficient; and judging whether the second boundary covers at least a partial region of the first object.
Wherein, in response to the second boundary covering a partial region of the first object, judging whether the first object and the second object will collide at the next moment comprises: judging whether the minimum distance between a first covered region and a second covered region is smaller than a warning value, wherein the first covered region is the covered region of the first object and the second covered region is the covered region of the second object.
Wherein, in response to the minimum distance between the first covered region and the second covered region being smaller than the warning value, it is determined that the first object and the second object will collide at the next moment.
Wherein, in response to the minimum distance between the first covered region and the second covered region being greater than or equal to the warning value, it is determined that the first object and the second object will not collide at the next moment.
Wherein the prediction coefficient is equal to the product of the running speed of the first object and a prediction interval, the prediction interval being the interval between the current moment and the predicted next moment.
Wherein a predicted boundary coordinate value is the sum of a current boundary coordinate value and a displacement value; the predicted boundary coordinate value is the coordinate value of a boundary point on the predicted boundary, the current boundary coordinate value is the coordinate value of a boundary point on the current boundary, and the displacement value is the product of the running speed of the first object and the prediction interval, the prediction interval being the interval between the current moment and the predicted next moment.
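As a minimal numerical sketch of this relationship (the function name and the per-axis layout of the running speed are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def predicted_boundary(current_min, current_max, speed, dt):
    """Add the displacement value (running speed x prediction interval)
    to each current boundary coordinate value to obtain the predicted
    boundary coordinate values, as described above."""
    displacement = np.asarray(speed, dtype=float) * dt
    return (np.asarray(current_min, dtype=float) + displacement,
            np.asarray(current_max, dtype=float) + displacement)

# A box from (0, 0, 0) to (1, 1, 1) moving at 0.5 units/s along x,
# predicted 0.2 s ahead, shifts by 0.1 along x.
p_min, p_max = predicted_boundary([0, 0, 0], [1, 1, 1], [0.5, 0, 0], 0.2)
```

A conservative variant could instead subtract the displacement from the minimum corner and add it to the maximum corner, enlarging the box in every direction of possible motion; the sketch above follows the per-corner sum stated in the text.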
Wherein the method further comprises: acquiring all track points on the running track of the first object from the initial moment to the current moment; obtaining the coordinate values of all the track points; combining the coordinate values of all the track points into a feature vector; and processing the feature vector with a density-based clustering method that handles noise (DBSCAN) to judge whether the running track of the first object is abnormal.
Wherein, after obtaining the coordinate values of all the track points, the method comprises: removing abnormal track points using an isolation forest algorithm.
Wherein, after combining the coordinate values of all the track points into the feature vector, the method comprises: performing dimensionality reduction on the feature vector using principal component analysis.
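The three trajectory-screening steps above (isolation forest filtering, PCA dimensionality reduction, DBSCAN noise labelling) can be sketched with scikit-learn; the parameter values, function names, and the idea of comparing the current run against earlier reference runs are illustrative assumptions, not specified by the patent:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

def remove_abnormal_points(points, contamination=0.05):
    """Drop outlier track points with an isolation forest before the
    feature vector is assembled."""
    mask = IsolationForest(contamination=contamination,
                           random_state=0).fit_predict(points) == 1
    return np.asarray(points, dtype=float)[mask]

def trajectory_is_abnormal(trajectories, eps=1.0, min_samples=3):
    """Each row of `trajectories` is one feature vector built by
    concatenating the coordinates of all track points of a run.  The
    last row is the trajectory under test; earlier rows are reference
    runs.  Returns True if DBSCAN labels the last trajectory as noise."""
    X = np.asarray(trajectories, dtype=float)
    # Dimensionality reduction with principal component analysis.
    X = PCA(n_components=min(3, X.shape[0], X.shape[1])).fit_transform(X)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    return labels[-1] == -1  # -1 is DBSCAN's noise label
```

DBSCAN assigns the label -1 to samples it cannot attach to any dense cluster, which serves here as the "abnormal trajectory" signal.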
In order to solve the above technical problem, another technical solution adopted by the invention is to provide a collision detection device comprising a processor configured to execute instructions to implement any one of the collision detection methods above.
In order to solve the above technical problem, a further technical solution adopted by the invention is to provide a computer-readable storage medium for storing instructions/program data executable to implement any one of the collision detection methods above.
The beneficial effects of the invention are as follows: in contrast to the state of the art, the invention provides a collision detection method that determines a current boundary of a first object, determines a predicted boundary of the first object at the next moment using the current boundary and the running speed of the first object, and judges whether the predicted boundary covers at least a partial region of a second object, so as to judge whether the first object and the second object will collide at the next moment. Whether there is a danger of collision in the next frame is estimated using the running speed of the object in the current frame and the time interval between two adjacent frames, an advance judgment is made, and the probability of collision is reduced.
Drawings
FIG. 1 is a schematic flow chart of a collision detection method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of another collision detection method in an embodiment of the present application;
FIG. 3 is a schematic flow chart of yet another collision detection method in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a collision detection device in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solution, and effect of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
Because objects tend to move relative to one another, there may be a risk of collision between different objects. This is particularly true in industrial fields with high precision requirements, such as the field of industrial robots: when a robot plans a path, it needs to avoid moving or static obstacles in real time, and collision detection is required in this process. Collision detection typically involves at least two objects moving relative to each other, but may involve more. The present application describes the collision detection method taking two relatively moving objects as an example, without being limited thereto. For convenience of description, the two objects participating in collision detection are referred to as a first object and a second object. The two designations are relative: when one object is referred to as the first object, the other is the second object. The specific identities of the two objects are not limited here, and the first object and the second object may be interchanged.
Collision detection involves two relatively moving objects; typically one is stationary and one is moving, though of course both objects may be moving. For ease of calculation, when both objects are moving, one of them may be treated as stationary relative to the other; in that case the running speed of the moving object is the sum of the running speeds of the two objects. The collision detection method is described below taking one moving object and one stationary object as an example, without being limited thereto. The moving object may be referred to as the first object and the stationary object as the second object, or vice versa.
Referring to fig. 1, fig. 1 is a schematic flow chart of a collision detection method according to an embodiment of the present disclosure. In this embodiment, the collision detection method includes:
S110: a current boundary of the first object is determined.
Wherein the boundary of the object can be determined using an envelope algorithm.
When collision detection is performed, the geometry of an object that may collide is usually quite complex, and any position on the object may collide. This leads to a large amount of calculation and makes collision detection very complex. To simplify the collision detection method, a simple geometric body (such as a sphere, cuboid, or ellipsoid) can be used to approximate the complex object; that is, a simple bounding volume is used to enclose the original object, and if another object does not collide with the bounding volume, it does not collide with the object inside the bounding volume. An envelope algorithm can be used to obtain such a simplified bounding volume. The envelope algorithm may be any one of the axis-aligned bounding box (AABB) algorithm, the oriented bounding box (OBB) algorithm, the discrete oriented polytope (k-DOP) algorithm, or the bounding sphere algorithm; different envelope algorithms surround the original object with a bounding volume of corresponding shape. The present application explains the collision detection method taking the AABB algorithm as an example.
The current boundary of the first object is the boundary of the first object at the current moment and may be the minimum boundary capable of enclosing the first object. Spatial position data of the first object may be acquired and the minimum circumscribed cube of the first object computed, yielding the minimum bounding volume that can enclose the first object and hence the current boundary of the first object.
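A minimal sketch of computing such a boundary from acquired spatial position data (an axis-aligned box rather than a strict cube, as an illustrative simplification; the function name is not from the patent):

```python
import numpy as np

def current_boundary(points):
    """Minimum axis-aligned bounding box enclosing a point cloud:
    returns the corner points (min(x), min(y), min(z)) and
    (max(x), max(y), max(z)) used as boundary points in the text."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)
```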
S120: the predicted boundary of the first object at the next time is determined using the current boundary and the speed of travel of the first object.
S130: and judging whether the predicted boundary covers at least partial area of the second object so as to judge whether the first object and the second object collide at the next moment.
In this embodiment, the running speed of the object in the current frame and the time interval between two adjacent frames are used to estimate whether there is a risk of collision in the next frame, so that an advance judgment is made. The boundary of the object at the next moment is predicted, and whether the predicted boundary covers at least a partial region of the second object is checked to judge whether the first object and the second object will collide at the next moment. Compared with existing prediction-based detection methods, this method has a small calculation amount and high accuracy.
If the predicted boundary does not cover any region of the second object, it is determined that the first object and the second object will not collide. If the predicted boundary covers a region of the second object, the distance between the covered region of the second object and the first object must be calculated further, so as to determine whether the covered region of the second object and the first object will collide.
In the present application, when calculating the distance between two objects, a key part of each object can be selected: the part most likely to collide with the other object. The distance between the key parts of the two objects, or between the key part of one object and the whole of the other object, is then calculated; there is no need to compute the distance between the two whole objects, which further reduces the calculation amount of collision detection.
When the predicted boundary covers a partial region of the second object, the covered region of the second object is likely to collide with the first object, and the distance between the covered region of the second object and the first object needs to be calculated to determine whether they will collide. Similarly, when determining whether the covered region of the second object collides with the first object, a key region on the first object may be selected first, and it may be determined whether that key region collides with the covered region of the second object.
Specifically, a first boundary of the covered region of the second object is determined; a second boundary of the covered region of the second object is determined using the first boundary and the prediction coefficient; and it is judged whether the second boundary covers at least a partial region of the first object.
If the second boundary does not cover any region of the first object, it is determined that the covered region of the second object will not collide with the first object. If the second boundary covers a region of the first object, the distance between the covered region of the first object and the covered region of the second object must be calculated further, so as to determine whether the first object and the second object will collide.
The first boundary may be a basic boundary capable of enclosing the covered region of the second object, and the second boundary is a safety boundary expanded outward from the first boundary. The purpose of the expanded safety boundary is to first select the key region of the first object that is most likely to collide with the covered region of the second object, so as to reduce the calculation amount when computing the distance between the covered region of the second object and the first object. Otherwise, when judging whether the covered region of the second object collides with the first object, the distance from the covered region to every position on the first object would in theory have to be calculated, because any position on the first object might collide with it.
In the present application, the key region of the first object most likely to collide with the covered region of the second object is selected, and only whether this key region collides with the covered region of the second object needs to be calculated, which reduces the calculation amount of computing the distance between the covered region of the second object and the first object. Accordingly, the prediction coefficient can be set according to the required detection accuracy: if a high calculation speed and a small calculation amount are desired, the prediction coefficient can be set smaller, so that the marked key region is small and few candidate points are included; conversely, to improve accuracy, the prediction coefficient can be set larger, leaving a larger safety region around the covered region of the second object so that every possible collision can be evaluated.
The present application predicts and evaluates whether a collision will occur at the next moment. The predicted boundary of the first object is the boundary of the first object after the current boundary is displaced. The displacement value is the product of the running speed of the first object and the prediction interval, where the prediction interval is the interval between the current moment and the predicted next moment.
In one embodiment, the displacement value of the first object may be selected as the prediction coefficient for determining whether the second object collides with the first object.
Specifically, it is judged whether the minimum distance between the first covered region and the second covered region is smaller than the warning value, where the first covered region is the covered region of the first object and the second covered region is the covered region of the second object, so as to further determine whether the first object and the second object will collide.
In response to the minimum distance between the first covered region and the second covered region being smaller than the warning value, it is determined that the first object and the second object will collide at the next moment.
In response to the minimum distance between the first covered region and the second covered region being greater than or equal to the warning value, it is determined that the first object and the second object will not collide at the next moment.
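The minimum-distance test above can be sketched as follows (a brute-force pairwise distance over small point clouds; the function and parameter names are illustrative):

```python
import numpy as np

def collide_at_next_moment(first_region, second_region, warning_value):
    """True if the minimum distance between the point clouds of the
    first and second covered regions is smaller than the warning value."""
    a = np.asarray(first_region, dtype=float)
    b = np.asarray(second_region, dtype=float)
    # Pairwise Euclidean distances via broadcasting: an (n, m) matrix.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min()) < warning_value
```

For large clouds a spatial index (e.g. scipy.spatial.cKDTree) would avoid the quadratic cost, which matters for the real-time requirement discussed later in the text.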
In the above embodiment, by taking the first object and the second object in turn as the reference and applying the envelope algorithm alternately, the key parts of the first and second objects that are likely to collide (the first covered region and the second covered region) can be selected, and whether the two objects collide can be determined simply by judging whether the first covered region and the second covered region collide. In this way, the calculation amount of collision detection can be reduced while its accuracy is guaranteed.
The iteration can be repeated many times to select more accurate key parts, but this increases calculation time and delays the judgment. To balance the real-time performance, accuracy, and calculation amount of collision detection, two iterations can be chosen, selecting the key parts on the first object and on the second object that may collide, respectively.
In one embodiment, judging whether the minimum distance between the first covered region and the second covered region is smaller than the warning value comprises: acquiring point cloud data of the first covered region and of the second covered region respectively; calculating the minimum distance between the point cloud of the first covered region and the point cloud of the second covered region; and judging whether that minimum distance is smaller than the warning value.
A ToF depth camera can be used to acquire raw point cloud data of the first object and of the second object, and three-dimensional reconstruction is performed on each to obtain dense point cloud data of the first object and of the second object. Depth-camera-based three-dimensional reconstruction takes an RGB image and a depth image as input to recover a sparse point cloud model of the object; the sparse point cloud is then matched against a mathematical model to reconstruct a dense point cloud model. Each pixel value in the depth image is the distance from the corresponding point in the scene to the vertical plane in which the depth camera lies. The dense point cloud data of the first and second objects can further be downsampled to obtain sample point cloud data, and whether the first object collides with the second object is judged on the basis of the sample point clouds. Downsampling and sparsification increase the operation speed of collision detection and improve the real-time performance of positioning.
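One common way to realize the downsampling step is a voxel-grid filter (chosen here as an illustrative strategy; the patent does not fix a particular downsampling method):

```python
import numpy as np

def downsample(points, voxel_size):
    """Keep one representative point per voxel of the dense cloud,
    thinning the cloud while preserving its overall shape."""
    pts = np.asarray(points, dtype=float)
    voxel_idx = np.floor(pts / voxel_size).astype(np.int64)
    # np.unique over rows keeps the first point seen in each voxel.
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return pts[np.sort(keep)]
```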
In the above embodiment, a ToF depth camera is used to acquire 3D point cloud data of the surrounding environment and to perform dense reconstruction, the first object and the second object are positioned in real time, and the AABB envelope algorithm is used to select key parts alternately. The detection boundary is expanded using a preset collision warning coefficient, so that the amount of minimum-distance calculation during collision detection is closely tied to the distance between the objects as they move, ensuring the real-time requirement of the collision detection process.
Referring to fig. 2, fig. 2 is a schematic flow chart of another collision detection method according to an embodiment of the present disclosure. In this embodiment, the collision detection method includes:
S210: a current boundary of the first object is determined.
A ToF depth camera can be used to collect 3D point cloud data of the first object at the current moment, and the minimum circumscribed cube of the collected point cloud data of the first object is computed to obtain the current boundary.
S220: the predicted boundary of the first object at the next time is determined using the current boundary and the speed of travel of the first object.
Boundary points of the current boundary can be selected, for example the two diagonally opposite corner points of the cube, (min(x), min(y), min(z)) and (max(x), max(y), max(z)), and the minimum cube is expanded outward to obtain the predicted boundary. The coordinates of the boundary points of the predicted boundary, i.e., the two diagonally opposite corner points of the expanded cube, are: (min(x)+v·t, min(y)+v·t, min(z)+v·t) and (max(x)+v·t, max(y)+v·t, max(z)+v·t).
S230: and judging whether the predicted boundary covers at least partial area of the second object.
A ToF depth camera can be used to acquire 3D point cloud data of the second object; the point cloud coordinates of the second object are compared with the coordinates of the predicted boundary to judge whether the predicted boundary covers at least a partial region of the second object.
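The coordinate comparison in S230 amounts to an inside-the-box test; a sketch (function names are illustrative assumptions):

```python
import numpy as np

def covered_points(cloud, box_min, box_max):
    """Points of the other object's cloud that fall inside the
    predicted boundary (an axis-aligned box)."""
    pts = np.asarray(cloud, dtype=float)
    inside = np.all((pts >= np.asarray(box_min)) &
                    (pts <= np.asarray(box_max)), axis=1)
    return pts[inside]

def boundary_covers(cloud, box_min, box_max):
    """True if the predicted boundary covers at least a partial region
    of the object, i.e. contains at least one of its points."""
    return len(covered_points(cloud, box_min, box_max)) > 0
```

The points returned by `covered_points` correspond to the covered-region point set (denoted T1 in S240 below when applied to the second object).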
S240: and determining a first boundary of the second object-covered region, and determining a second boundary of the second object-covered region by using the first boundary and the prediction coefficient.
The point cloud data corresponding to the covered region of the second object is denoted T1. The minimum circumscribed cube of T1 is computed to obtain the first boundary of the covered region of the second object, and the second boundary of the covered region is determined using the first boundary and the prediction coefficient. For example, the two diagonally opposite corner points of the cube, (min(x1), min(y1), min(z1)) and (max(x1), max(y1), max(z1)), are selected as boundary points, and the minimum cube is expanded outward to obtain the second boundary. The coordinates of the boundary points of the second boundary, i.e., its two diagonally opposite corner points, are: (min(x1)+v·t, min(y1)+v·t, min(z1)+v·t) and (max(x1)+v·t, max(y1)+v·t, max(z1)+v·t), where v·t is the prediction coefficient.
S250: and judging whether the second boundary covers at least partial region of the first object.
The point cloud coordinates of the first object are compared with the coordinates of the second boundary to judge whether the second boundary covers at least a partial region of the first object.
S260: and judging whether the minimum distance between the first coated area and the second coated area is smaller than an early warning value or not.
The first covered region is the covered region of the first object, and the second covered region is the covered region of the second object.
The point cloud data corresponding to the covered region of the first object is denoted T2. The minimum distance d between T1 and T2 is calculated; if d < h, where h is the warning value, a collision will occur; otherwise, no collision will occur.
In one embodiment, the running track of the object may also be checked for anomalies before collision detection.
Abnormal noise in the position coordinates of the object from the initial moment to the current moment can be filtered out using an isolation forest algorithm to obtain cleaner track coordinates of the object. The DBSCAN density clustering algorithm is then applied to the track coordinates to judge whether the track at the current moment is abnormal. The input to DBSCAN is a feature vector composed of the coordinates of all points of the current track from the initial moment to the current moment. Since the feature vector constructed here has a large dimension, PCA can be used for dimensionality reduction. If the track is abnormal, the whole operation of the object is terminated early. Otherwise, the collected point cloud data are used to judge whether the predicted minimum distance between the two objects (for example, a vehicle body and a tire) at the next moment is smaller than the collision warning value: if it is smaller than the threshold, a collision will occur; otherwise, no collision will occur and operation continues. Whether the track is abnormal may be judged once before each collision judgment, or at predetermined intervals.
Referring to fig. 3, fig. 3 is a schematic flow chart of another collision detection method according to an embodiment of the present disclosure. In this embodiment, the collision detection method includes:
S310: acquiring all track points from the initial moment to the current moment on the running track of the first object, removing abnormal track points with an isolation forest algorithm, combining the coordinate values of the remaining track points into a feature vector, reducing the dimension of the feature vector with principal component analysis, processing the feature vector with a density-based clustering method with noise (DBSCAN), and judging whether the running track of the first object is abnormal.
S320: determining the current boundary of the first object, determining the predicted boundary of the first object at the next moment by using the current boundary and the running speed of the first object, and judging whether the predicted boundary covers at least part of the second object.
S330: determining a first boundary of the covered region of the second object, determining a second boundary of the covered region of the second object by using the first boundary and a prediction coefficient, and judging whether the second boundary covers at least part of the first object.
S340: judging whether the minimum distance between the first covered region and the second covered region is smaller than an early warning value.
The first covered region is the covered region of the first object, and the second covered region is the covered region of the second object.
In the above embodiment, a clustering algorithm is used to detect abnormal trajectories of the mechanical arm, so that abnormal motion can be responded to in advance and in real time, reducing unnecessary operations. Because the DBSCAN clustering method takes as input a feature vector composed of the coordinates of all points of the current trajectory from the initial moment to the current moment, it can detect not only sudden abnormal coordinate points but also abnormal trajectories that accumulate over time. During collision detection, the running speed of the current frame and the time interval between two adjacent frames are used to estimate whether a collision danger exists in the next frame, so that the judgment is made in advance. The combination of these functions greatly reduces the probability of collision while also reducing the amount of computation in the collision detection process.
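The advance judgment described above — each predicted boundary point is the current point plus the running speed times the prediction interval — can be sketched as follows. The axis-aligned-box test for "covers at least part of the second object" is an illustrative assumption of this sketch; the patent does not fix a particular boundary representation or coverage test.

```python
import numpy as np

def predict_boundary(current_boundary, velocity, interval):
    """Each predicted boundary coordinate = current boundary coordinate + speed * prediction interval
    (the prediction interval is the time between the current frame and the next)."""
    return np.asarray(current_boundary, dtype=float) + np.asarray(velocity, dtype=float) * interval

def covers_region(predicted_pts, region_min, region_max):
    """True if any predicted boundary point falls inside the second object's
    axis-aligned region [region_min, region_max] (an assumed representation)."""
    p = np.asarray(predicted_pts, dtype=float)
    inside = np.all((p >= region_min) & (p <= region_max), axis=1)
    return bool(inside.any())
```

For example, a boundary point at (0, 0, 0) moving at (2, 0, 0) with a 0.5 s prediction interval is extrapolated to (1, 0, 0); only if such an extrapolated point lands inside the second object's region does the method proceed to the finer-grained checks of S330 and S340.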
Referring to fig. 4, fig. 4 is a schematic structural diagram of a collision detection apparatus according to an embodiment of the present disclosure. In this embodiment, the collision detecting apparatus 10 includes a processor 11.
The processor 11 may also be referred to as a CPU (Central Processing Unit). The processor 11 may be an integrated circuit chip having signal processing capabilities. The processor 11 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor; alternatively, the processor 11 may be any conventional processor.
The collision detecting device 10 may further comprise a memory (not shown in the figures) for storing instructions and data required for the operation of the processor 11.
The processor 11 is configured to execute instructions to implement the methods provided by any of the embodiments of the collision detection method of the present application and any non-conflicting combinations thereof.
The collision detection device may be a computer device such as a server; it may be a standalone server or a server cluster.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 30 of the embodiments of the present application stores instructions/program data 31 that, when executed, implement the methods provided by any of the embodiments of the collision detection method described above and any non-conflicting combinations thereof. The instructions/program data 31 may form a program file stored in the storage medium 30 in the form of a software product, enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium 30 includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as computers, servers, mobile phones, and tablets.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present application and is not intended to limit the scope of the present application; all equivalent structural or process modifications made using the content of this specification and the drawings, or direct or indirect applications in other related technical fields, are likewise included in the scope of the present application.

Claims (11)

1. A collision detection method, characterized by comprising:
determining a current boundary of the first object;
determining a predicted boundary of the first object at a next time using the current boundary and the operating speed of the first object;
and judging whether the predicted boundary covers at least a partial region of the second object, so as to judge whether the first object and the second object collide at the next moment.
2. The collision detecting method according to claim 1,
in response to the predicted boundary covering a partial region of the second object, the judging whether the first object and the second object collide at the next moment comprises:
determining a first boundary of the covered region of the second object;
determining a second boundary of the covered region of the second object by using the first boundary and a prediction coefficient;
and judging whether the second boundary covers at least a partial region of the first object.
3. The collision detecting method according to claim 2,
in response to the second boundary covering a partial region of the first object, the judging whether the first object and the second object collide at the next moment comprises:
judging whether a minimum distance between a first covered region and a second covered region is smaller than an early warning value, wherein the first covered region is the covered region of the first object, and the second covered region is the covered region of the second object.
4. The collision detecting method according to claim 3,
determining that the first object and the second object will collide at the next moment in response to the minimum distance between the first covered region and the second covered region being smaller than the early warning value; or
determining that the first object and the second object will not collide at the next moment in response to the minimum distance between the first covered region and the second covered region being greater than or equal to the early warning value.
5. The collision detecting method according to claim 2,
the prediction coefficient is equal to a product of the operating speed of the first object and a prediction interval, which is an interval between a current time and a predicted next time.
6. The collision detecting method according to claim 1,
the predicted boundary coordinate value is the sum of the current boundary coordinate value and the displacement value; the predicted boundary coordinate value is a coordinate value of a boundary point on the predicted boundary, the current boundary coordinate value is a coordinate value of a boundary point on the current boundary, the displacement value is a product of the running speed of the first object and a predicted interval, and the predicted interval is an interval between the current time and the predicted next time.
7. The collision detection method according to claim 1, characterized in that the method further comprises:
acquiring all track points from the initial moment to the current moment on the running track of the first object;
obtaining coordinate values of all the track points;
combining the coordinate values of all the track points into a characteristic vector;
and processing the feature vector by using a density-based clustering method with noise, so as to judge whether the running track of the first object is abnormal.
8. The collision detecting method according to claim 7,
the obtaining of the coordinate values of all the track points comprises the following steps:
removing abnormal track points by using an isolation forest algorithm.
9. The collision detecting method according to claim 7,
after the coordinate values of all the track points are combined into the feature vector, the method comprises the following steps:
and performing dimensionality reduction on the feature vector by using a principal component analysis method.
10. A collision detection apparatus, characterized in that it comprises a processor for executing instructions to implement a collision detection method according to any one of claims 1-9.
11. A computer-readable storage medium for storing instructions/program data executable to implement a collision detection method according to any one of claims 1-9.
CN202011633914.3A 2020-12-31 2020-12-31 Collision detection method, apparatus, and computer-readable storage medium Active CN112700474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011633914.3A CN112700474B (en) 2020-12-31 2020-12-31 Collision detection method, apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN112700474A true CN112700474A (en) 2021-04-23
CN112700474B CN112700474B (en) 2024-08-20

Family

ID=75513692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011633914.3A Active CN112700474B (en) 2020-12-31 2020-12-31 Collision detection method, apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112700474B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461693A * 2002-05-30 2003-12-17 KUKA Roboter GmbH Method and device for preventing and controlling collisions between cooperating manipulators
JP2008189073A * 2007-02-01 2008-08-21 Toyota Motor Corp Rear window airbag device
JP2012066690A * 2010-09-24 2012-04-05 Fujitsu Ten Ltd Vehicle control system, vehicle control apparatus, and vehicle control method
US20120131595A1 * 2010-11-23 2012-05-24 Ewha University-Industry Collaboration Foundation Parallel collision detection method using load balancing and parallel distance computation method using load balancing
CN106570487A * 2016-11-10 2017-04-19 Weisen Software Technology (Shanghai) Co., Ltd. Method and device for predicting collision between objects
CN107644206A * 2017-09-20 2018-01-30 Shenzhen Shengda Machinery Design Co., Ltd. Road abnormal behavior detection device
CN108528442A * 2017-03-06 2018-09-14 GM Global Technology Operations LLC Vehicle collision prediction algorithm using radar sensors and UPA sensors
CN108714303A * 2018-05-16 2018-10-30 Shenzhen Tencent Network Information Technology Co., Ltd. Collision detection method, device, and computer-readable storage medium
CN109500811A * 2018-11-13 2019-03-22 South China University of Technology Method for a human-robot collaborative robot to actively avoid humans
CN110807806A * 2020-01-08 2020-02-18 Zhongzhixing Technology Co., Ltd. Obstacle detection method and device, storage medium, and terminal device
CN111402633A * 2020-03-23 2020-07-10 Beijing Anjie Engineering Consulting Co., Ltd. Object anti-collision method based on UWB positioning and civil engineering anti-collision system
CN111839732A * 2015-02-25 2020-10-30 MAKO Surgical Corp. Navigation system and method for reducing tracking interruptions during surgical procedures

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FRANCISZHAO: "How do people do collision checking in autonomous driving?", pages 1-2, Retrieved from the Internet <URL:https://www.zhihu.com/question/351926770> *
JULIEN BASCH et al.: "Kinetic Collision Detection Between Two Simple", Technology Inc., 10 October 2003 (2003-10-10), pages 1-19 *
孙敬荣 et al.: "Optimized collision detection algorithm based on hybrid bounding boxes and triangle intersection", Computer Engineering and Applications, vol. 54, no. 19, 31 December 2018 (2018-12-31), pages 198-203 *
曹峥 et al.: "Design and analysis of secure anti-collision search protocols for RFID", Computer Science, vol. 41, no. 4, 15 April 2014 (2014-04-15), pages 116-119 *
汪玉莹 et al.: "Research on design innovation strategies and methods of CMF", Technology Innovation and Application, vol. 2020, no. 1, 8 January 2020 (2020-01-08), pages 98-99 *

Also Published As

Publication number Publication date
CN112700474B (en) 2024-08-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant