CN114581512A - Interference detection method and system based on image space collision - Google Patents

Interference detection method and system based on image space collision

Info

Publication number
CN114581512A
Authority
CN
China
Prior art keywords
interference
main part
auxiliary part
auxiliary
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210213323.3A
Other languages
Chinese (zh)
Other versions
CN114581512B (en)
Inventor
贾康
苏琦
苏裕林
刘浩
郑帅
郭俊康
洪军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202210213323.3A priority Critical patent/CN114581512B/en
Publication of CN114581512A publication Critical patent/CN114581512A/en
Application granted granted Critical
Publication of CN114581512B publication Critical patent/CN114581512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an interference detection method and system based on image space collision. It is first assumed that a main part and an auxiliary part in an assembly model to be detected interfere in a direction D. A preliminary collision-interference detection is performed on the main part and the auxiliary part, and the interference condition between them is judged according to the separating axis principle; if interference is possible, step S3 is performed. The main part and the auxiliary part are then rendered separately with OpenGL to obtain a final stencil buffer containing the pixel values of the interference pixels where the two parts overlap, and the interference type of the main part and the auxiliary part is judged from the number of consecutive interference pixels whose pixel values are greater than the initial value +1 in the final stencil buffer. On the basis of a conservative coarse interference judgment performed with a geometric-space bounding-box method, the interference calculation is mapped onto OpenGL rendering, realizing the interference calculation between the swept volume of one part and the other parts.

Description

Interference detection method and system based on image space collision
Technical Field
The invention belongs to the technical field of assembly interference detection, and particularly relates to an interference detection method and system based on image space collision.
Background
The assembly interference matrix is one of the important information models underlying relation-matrix-based assembly sequence planning: it expresses the relative spatial motion-interference relationships between the parts of an assembly and is an important basis for judging whether a part can be assembled or disassembled. When constructing the assembly interference matrix, the required assembly interference state must be determined for each element. In general, a collision detection algorithm based on hierarchical bounding boxes is used to perform repeated detections for the clearance/interference analysis: the part is moved step by step along the detection direction with a given step length, and after each movement the clearance/interference state is evaluated to obtain the corresponding element of the interference matrix. However, for a complex assembly with a large number of parts this approach is expensive, resource-consuming and slow; it is also difficult to choose a step length that makes the discrete interference volume equivalent to the continuous one; and, in particular, owing to the expression precision of the model, hard interference cannot be reliably distinguished from mere part contact, which often leads to incorrectly identified element states.
Projection-based interference detection projects the part model along the assembly path direction; if the interference detection between parts can be converted into such projections, the detection efficiency of assembly interference can be greatly improved. In particular, with the rapid development of graphics processing units (GPUs), various image-space collision detection algorithms have been proposed. These algorithms mainly exploit the GPU's accelerated drawing and computation, and the dimensional reduction of the calculation markedly improves efficiency. However, for complex mechanical equipment, applying image-space collision detection to the assembly interference analysis of a large number of parts still involves an excessive amount of computation and cannot meet actual engineering requirements.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an interference detection method and system based on image space collision in which a geometric-space method and an image-space method are combined. On the basis of a conservative coarse interference judgment of the assembly interference using a geometric-space bounding-box method, the interference calculation is mapped onto OpenGL rendering according to the interference situation, and the interference calculation between the swept volume of a part and the other parts is realized by means of rendering-condition settings, depth testing, stencil testing and the like.
In order to achieve this purpose, the invention provides the following technical scheme: an interference detection method based on image space collision, comprising the following specific steps:
S1, assuming that the main part and the auxiliary part in the assembly model to be detected interfere in a direction D, and establishing an interference matrix relating the main part, the auxiliary part and the direction D;
S2, performing a preliminary detection of collision interference between the main part and the auxiliary part, correcting the data in the interference matrix of step S1 according to the interference condition between the main part and the auxiliary part, and proceeding to step S3 if the main part and the auxiliary part may interfere;
S3, rendering the main part and the auxiliary part separately with OpenGL, obtaining a final stencil buffer containing the pixel values of the interference pixels where the main part and the auxiliary part overlap, determining the interference result of the main part and the auxiliary part from the pixel values of the interference pixels in the final stencil buffer, correcting the data in the interference matrix of step S2 according to the interference result, and proceeding to step S4 if interference exists;
S4, taking an interference pixel in the final stencil buffer as a starting point, obtaining the pixel values of the consecutive interference pixels in the X and Y directions of the final stencil buffer, judging the interference type of the main part and the auxiliary part from the number of consecutive interference pixels whose pixel values are greater than the initial value +1, and correcting the data in the interference matrix of step S3 according to the interference-type judgment, so as to obtain a final interference matrix that accurately reflects the interference condition of the main part and the auxiliary part.
Further, the assembly model to be detected comprises a plurality of part groups, each part group comprising a main part and an auxiliary part; steps S1 to S4 are repeated to obtain the interference matrix of every part group of the assembly model to be detected in the direction D, so as to represent the interference condition of each part group in the direction D.
Further, in step S2 an AABB bounding-box algorithm is used for the preliminary detection of collision interference between the main part and the auxiliary part: when the main part lies entirely on the positive side of the auxiliary part along the direction D, the main part can move along +D without interfering with the auxiliary part; when the main part lies entirely on the negative side of the auxiliary part along the direction D, the main part can move along -D without interfering with the auxiliary part; when neither of these positional relationships holds, interference between the main part and the auxiliary part is possible and the method proceeds to step S3.
Further, in step S3 the view angle and the view volume used to render the main part and the auxiliary part in OpenGL are set as follows:
1) the view-angle settings comprise the viewpoint setting and the camera-position setting: the centre of the intersection of the bounding boxes of the main part and the auxiliary part is taken as the viewpoint; the camera is placed outside the bounding boxes of the main part and the auxiliary part, along the direction opposite to the line of sight, and the line of sight is parallel to the direction D;
2) view-volume setting: the depth of the view volume exceeds the extent of the bounding boxes of the main part and the auxiliary part in the current direction D; the height and width of the view volume equal the size of the intersection region of the bounding boxes of the main part and the auxiliary part perpendicular to the line of sight.
Further, in step S3 the specific steps for obtaining the final stencil buffer are:
1) establishing an initial stencil buffer, with all pixel values set to the initial value, and establishing a depth buffer, with all depth values set to the maximum value;
2) rendering, with OpenGL, the main part and the auxiliary part for which possible interference was detected in step S2;
3) where the depth value of the region projected by the main part onto the depth buffer is smaller than the original depth value of the corresponding region of the depth buffer, incrementing by 1 the pixel values of the interference pixels of the corresponding region of the stencil buffer while leaving the pixel values of the remaining regions unchanged, so as to obtain the stencil buffer after rendering of the main part;
4) where the depth value of the region projected by the auxiliary part onto the depth buffer is greater than the original depth value of the corresponding region of the depth buffer, incrementing by 1 the pixel values of the interference pixels of the corresponding region of the stencil buffer obtained after rendering the main part, so as to obtain the final stencil buffer.
Further, the initial value in the stencil buffer is the minimum pixel value.
Further, in step S3, when the pixel values of the interference pixels in the final stencil buffer are all equal to the initial value or to the initial value +1, the main part and the auxiliary part do not interfere;
and when the pixel value of an interference pixel in the final stencil buffer is greater than the initial value +1, the main part and the auxiliary part interfere in that region.
Further, in step S4, taking an interference pixel whose pixel value is greater than the initial value +1 in the final stencil buffer as a starting point, a consecutive search is carried out along the X and Y directions of the final stencil buffer:
when the number of consecutive interference pixels whose pixel values are greater than the initial value +1 in the X and Y directions is greater than a set tolerance value, hard interference exists between the main part and the auxiliary part;
when the number of consecutive interference pixels whose pixel values are greater than the initial value +1 in the X and Y directions is smaller than the set tolerance value, contact interference exists between the main part and the auxiliary part.
Further, in step S4 the tolerance value is determined according to the required collision detection accuracy.
The invention also discloses an interference detection system based on image space collision, comprising a preliminary detection module, an OpenGL rendering module and an interference-type judgment module, wherein:
the preliminary detection module is used to judge the interference condition of the main part and the auxiliary part using the separating axis principle and to transmit the judgment result to the OpenGL rendering module;
the OpenGL rendering module is used to render the main part and the auxiliary part separately, to obtain a final stencil buffer containing the pixel values of the interference pixels where the main part and the auxiliary part overlap, to determine the interference result of the main part and the auxiliary part from the pixel values of the interference pixels in the final stencil buffer, and, if interference exists, to transmit the interference result to the interference-type judgment module;
the interference-type judgment module is used to take an interference pixel in the final stencil buffer as a starting point, to obtain the pixel values of the consecutive interference pixels in the X and Y directions of the final stencil buffer, and to judge the interference type of the main part and the auxiliary part from the number of consecutive interference pixels whose pixel values are greater than the initial value +1.
Compared with the prior art, the invention has at least the following beneficial effects:
The interference detection method and system based on image space collision of the invention is an OpenGL-based image collision detection calculation method. It divides the assembly interference detection process into several coarse-to-fine steps and, through a hierarchical and parameterised process design, accurately identifies the interference relationship between the main part and the auxiliary part, can accurately identify contact, and has good generality, specifically:
Firstly, the traditional interference detection method applies collision detection to the discrete motion swept volumes of the parts one by one, and suffers from problems such as part contact being easily misjudged as interference, the movement step of the swept volume needing to be very small, and the amount of calculation being large. The present method works from the graphics-rendering side and constructs the equivalent swept volume by projection, which is more accurate; multiple collision detections are not needed, and the detection can be completed in a single pass;
Secondly, because of the precision limits of part modelling and of collision detection algorithms, a three-dimensional collision detection method cannot reliably identify contact interference, whereas in image space contact interference and hard interference can be accurately distinguished through a pixel threshold;
Thirdly, the invention is suitable for almost all types of complex assemblies, imposes no restriction on shape, and has wide applicability;
Fourthly, the method is based on an image algorithm, is simple to implement, and does not require the complex data structures that traditional collision detection has to organise;
Fifthly, the method judges the interference relationship quickly and with high computational efficiency.
Drawings
FIG. 1 is a schematic view of a depth test and Z-buffer of a part.
FIG. 2 is a schematic diagram of interference type identification.
FIG. 3 is a schematic diagram of image pixels corresponding to an interference rendering calculation between a shaft and a bushing.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
The invention provides an interference detection method based on image space collision, which comprises the following specific steps:
Step 1: determine the assembly model to be detected and assume that each part group in the assembly model to be detected interferes in a direction D, each part group comprising a main part and an auxiliary part; establish the interference matrices relating the main parts, the auxiliary parts and the direction D, and set all values in the interference matrices to 1, i.e. interference;
wherein a 1 in the interference matrix means that the main part and the auxiliary part are in an interference state in the direction D, and a 0 means that they are in a non-interference state;
Preferably, the assembly model to be detected is one on which rendering calculations can be performed;
Step 2: perform the preliminary detection of collision interference between the main part and the auxiliary part, judging whether they interfere according to the separating axis principle, specifically:
Step 2.1: when the main part lies entirely on the positive side of the auxiliary part along the direction D, so that moving the main part along +D cannot interfere with the auxiliary part, the content of the corresponding position of the interference matrix is changed to 0; otherwise it remains 1. Similarly, when the main part lies entirely on the negative side of the auxiliary part along the direction D, so that moving the main part along -D cannot interfere with the auxiliary part, the content of the corresponding position of the interference matrix is changed to 0; otherwise it remains 1;
Step 2.2: if a positional relationship other than those of step 2.1 occurs, interference between the main part and the auxiliary part is possible, and accurate interference detection must be carried out, i.e. the method proceeds to step 3;
Preferably, an AABB bounding-box algorithm is used for the preliminary detection of collision interference between the main part and the auxiliary part; a minimal illustration of this check is given below.
step 3, rendering the main part and the auxiliary part respectively by adopting OpenGL to obtain depth data of the main part and the auxiliary part, and obtaining a final template cache with pixel values of interference pixel points of the main part and the auxiliary part, which have interference parts, according to the depth data obtained by rendering:
step 3.1, setting a rendering view angle and a rendering view body of OpenGL, specifically:
step 3.1.1, the setting of the viewing angle comprises the setting of a viewpoint and the setting of a camera position, and the setting of the viewing angle is carried out on the interference matrix obtained in the step 2 along the direction D, specifically: taking the center of the intersection of the bounding boxes of the main part and the auxiliary part as a viewpoint; the camera position is arranged along the opposite direction of the sight line to the outer sides of the bounding boxes of the two parts, and the sight line direction is parallel to the direction D; step 3.1.2, setting a visual scene: the depth of the view body exceeds the range of the surrounding boxes of the main part and the auxiliary part in the current direction D, and the height and the width of the view body are set as the size of the intersection area of the surrounding boxes of the main part and the auxiliary part, which is perpendicular to the sight direction;
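A minimal fixed-function OpenGL sketch of the view setup of steps 3.1.1-3.1.2 is given here for a detection direction D = -X; the Box structure, the function name and the margin value are assumptions made for illustration only.

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Hypothetical axis-aligned boxes filled in by the application:
    // 'inter' is the intersection of the two parts' bounding boxes,
    // 'outer' is the box enclosing both bounding boxes.
    struct Box { double minX, minY, minZ, maxX, maxY, maxZ; };

    void setupViewMinusX(const Box& inter, const Box& outer) {
        // Viewpoint: centre of the bounding-box intersection.
        const double cx = 0.5 * (inter.minX + inter.maxX);
        const double cy = 0.5 * (inter.minY + inter.maxY);
        const double cz = 0.5 * (inter.minZ + inter.maxZ);
        const double margin = 1.0;   // assumed clearance placing the camera outside the outer box

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(outer.maxX + margin, cy, cz,   // camera outside the outer box, on the +X side
                  cx, cy, cz,                    // looking at the viewpoint, i.e. along -X
                  0.0, 0.0, 1.0);                // +Z chosen as the up direction

        // View volume: cross-section equal to the intersection box, depth covering
        // the whole outer box along the current direction D (here the X extent).
        const double halfY = 0.5 * (inter.maxY - inter.minY);
        const double halfZ = 0.5 * (inter.maxZ - inter.minZ);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-halfY, halfY, -halfZ, halfZ, 0.0, (outer.maxX + margin) - outer.minX);
        glMatrixMode(GL_MODELVIEW);
    }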
step 3.2, as shown in fig. 1, establishing an initial template cache, making the pixel values in the template cache all initial values, where the initial values are minimum pixel values, rendering the main part and the auxiliary part which are screened in step 2 by using OpenGL set in step 3.1, to obtain a final template cache, where the final template cache includes pixel values of interference pixel points of the main part and the auxiliary part which have interference parts:
step 3.2.1, establishing a depth cache, wherein the depth values in the depth cache are all maximum values;
step 3.2.2, rendering the master part according to the view point and the sight direction determined in the step 3.1, wherein when the depth value of the area projected on the depth cache by the master part is smaller than the original depth value of the corresponding area on the depth cache, the pixel values of interference pixel points projected on the corresponding area on the template cache by the master part are all +1, and the pixel values of the interference pixel points in the other areas are unchanged, so that the template cache after the master part is rendered is obtained;
step 3.2.3, rendering the auxiliary parts according to the view point and the sight direction determined in the step 3.1, and when the depth value of the area projected by the auxiliary parts on the depth cache is greater than the original depth value of the corresponding area on the depth cache, projecting the auxiliary parts on the template cache rendered by the main parts, wherein the pixel values of interference pixel points in the corresponding area on the template cache are all plus 1, so as to obtain a final template cache;
preferably, in the final template cache, a region with a pixel value as an initial value corresponds to a non-main part rendering region, a region with a pixel value greater than or equal to the initial value +1 corresponds to a main part rendering region, and a region with a template cache value greater than the initial value +2 in the main part rendering region corresponds to an auxiliary part and main part shielding region;
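As an illustration only of steps 3.2.1-3.2.3, the two rendering passes that build the final stencil buffer can be expressed with fixed-function OpenGL state as follows; drawMainPart() and drawAuxPart() are hypothetical callbacks standing in for the application's own geometry submission.

    #include <GL/gl.h>

    void drawMainPart();   // hypothetical: submits the main part's triangles
    void drawAuxPart();    // hypothetical: submits the auxiliary part's triangles

    void buildInterferenceStencil() {
        // Step 3.2.1: depth buffer at its maximum value, stencil buffer at the initial value 0.
        glClearDepth(1.0);
        glClearStencil(0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_CULL_FACE);

        // Step 3.2.2: render the front faces of the main part; wherever its depth is
        // smaller than the stored depth, the stencil value of that pixel is incremented by 1.
        glCullFace(GL_BACK);
        glDepthFunc(GL_LESS);
        glStencilFunc(GL_ALWAYS, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawMainPart();

        // Step 3.2.3: render the back faces of the auxiliary part without writing depth;
        // where its depth is greater than the depth left by the main part, i.e. it lies
        // behind the main part's front surface along the view direction, the stencil value
        // is incremented again. The stencil test restricts this pass to pixels already
        // covered by the main part (stored stencil value >= 1).
        glDepthMask(GL_FALSE);
        glCullFace(GL_FRONT);
        glDepthFunc(GL_GREATER);
        glStencilFunc(GL_LESS, 0, 0xFF);      // passes where 0 < stored stencil value
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawAuxPart();

        // Restore state for later rendering.
        glDepthMask(GL_TRUE);
        glDisable(GL_CULL_FACE);
        glDepthFunc(GL_LESS);
    }

A pixel that ends up with a stencil value of 2 or more therefore lies behind the front boundary of the main part and in front of the back boundary of the auxiliary part along the view direction, which is exactly the condition queried in step 4.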
step 4, inquiring the pixel value of the interference pixel point in the final template cache to obtain the interference result of the main part and the auxiliary part:
step 4.1, if the pixel values of the interference pixel points in the final template cache are all initial values or the initial values are +1, the main part and the auxiliary part are not interfered, and the content of the corresponding position in the interference matrix is changed into 0;
step 4.2, if the pixel value of the interference pixel point in the final template cache is larger than the initial value +1, the main part and the auxiliary part interfere in the region, and the interference type needs to be judged;
step 5, if there is interference between the main part and the auxiliary part, obtaining interference pixel points with pixel values larger than the initial value +1 in the final template cache, respectively obtaining pixel values of continuous interference pixel points in the X and Y directions by taking one of the interference pixel points as a starting point, and judging interference types, specifically:
step 5.1, when the number of interference pixel points with pixel values larger than the initial value +1 in the X direction and the Y direction is larger than a set tolerance value, hard interference exists between the main part and the auxiliary part, the hard interference is regarded as interference, and the content of the corresponding position in the interference matrix is changed into 1;
as shown in fig. 2, when the tolerance value is set to be 2, the number of interference pixel points with continuous pixel values of 2 in the X and Y directions is greater than the tolerance value 2 at the same time, which indicates that the interference exceeds the tolerance value range and belongs to hard interference;
and 5.2, when the number of interference pixel points with the pixel values larger than the initial value +1 in the X direction and the Y direction is smaller than the set tolerance value, contact interference exists between the main part and the auxiliary part, the contact interference is regarded as no interference, and the content of the corresponding position in the interference matrix is changed to be 0.
Preferably, the tolerance value is determined according to the requirement of collision detection precision;
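The run-length classification of step 5 could be implemented along the following lines; the function name, the row-major layout of the read-back stencil image and the tolerance parameter are illustrative assumptions.

    #include <cstdint>
    #include <vector>

    // 'stencil' holds width*height read-back stencil values in row-major order;
    // 'initial' is the initial stencil value (0 above); 'tolerance' is the pixel
    // tolerance of step 5. Returns true for hard interference, false otherwise
    // (contact interference or no interference at all).
    bool isHardInterference(const std::vector<uint8_t>& stencil,
                            int width, int height,
                            int initial, int tolerance) {
        auto interferes = [&](int x, int y) {
            return stencil[static_cast<size_t>(y) * width + x] > initial + 1;
        };
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                if (!interferes(x, y)) continue;
                // Count consecutive interference pixels in the X and Y directions.
                int runX = 0, runY = 0;
                while (x + runX < width  && interferes(x + runX, y)) ++runX;
                while (y + runY < height && interferes(x, y + runY)) ++runY;
                // Step 5.1: both runs exceed the tolerance -> hard interference.
                if (runX > tolerance && runY > tolerance) return true;
            }
        }
        return false;   // step 5.2: only thin or isolated runs -> contact interference
    }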
and 6, obtaining a final assembly interference matrix of the assembly body model to be detected according to the steps 1-5, wherein the assembly interference matrix can accurately judge the interference condition of a certain part on another part in a certain direction during part assembly.
And 7, repeating the steps 1-6, determining the interference condition of each part group in the assembly body to be detected in all directions of the space, and finally obtaining an accurate interference matrix for the assembly body to be detected. The interference matrix is a vital data base for tasks in actual assembly scenes such as assembly sequence planning, assembly path planning and the like.
The invention also discloses an interference detection system based on image space collision, comprising a preliminary detection module, an OpenGL rendering module and an interference-type judgment module, wherein:
the preliminary detection module is used to judge the interference condition of the main part and the auxiliary part using the separating axis principle and to transmit the judgment result to the OpenGL rendering module;
the OpenGL rendering module is used to render the main part and the auxiliary part separately, to obtain a final stencil buffer containing the pixel values of the interference pixels where the main part and the auxiliary part overlap, to determine the interference result of the main part and the auxiliary part from the pixel values of the interference pixels in the final stencil buffer, and, if interference exists, to transmit the interference result to the interference-type judgment module;
the interference-type judgment module is used to take an interference pixel in the final stencil buffer as a starting point, to obtain the pixel values of the consecutive interference pixels in the X and Y directions of the final stencil buffer, and to judge the interference type of the main part and the auxiliary part from the number of consecutive interference pixels whose pixel values are greater than the initial value +1; an interface-level sketch of these modules is given below.
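A possible module decomposition of the system is shown here only as an interface sketch; the class names, method names and data types are assumptions and do not appear in the original disclosure.

    #include <cstdint>
    #include <vector>

    struct InterferenceResult { bool interferes; bool hard; };

    // Coarse separating-axis / AABB check of the main and auxiliary parts.
    class PreliminaryDetector {
    public:
        bool mayInterfere(int directionD) const;
    };

    // Renders both parts with OpenGL and returns the final stencil buffer.
    class OpenGlRenderer {
    public:
        std::vector<uint8_t> renderStencil(int directionD) const;
    };

    // Classifies hard versus contact interference from consecutive pixel runs.
    class InterferenceClassifier {
    public:
        InterferenceResult classify(const std::vector<uint8_t>& stencil,
                                    int width, int height, int tolerance) const;
    };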
Example 1
The calculation of the interference matrix element of the shaft relative to the bushing in the -X direction is taken as an example below.
Step 1: the shaft is set as the main part and the bushing as the auxiliary part; the interference matrix relating the shaft, the bushing and the -X direction is established, and all data in the interference matrix are initialised to 1, i.e. interference;
Step 2: from the coordinate ranges of the AABB bounding boxes it is judged, according to the separating axis principle, that the shaft may interfere with the bushing when moved along the -X direction, so accurate interference detection is required;
Step 3: combining the minimum bounding-box coordinates E_{1,-X} and E_{2,-X} and the maximum bounding-box coordinates F_{1,-X} and F_{2,-X} of the shaft and the bushing in the -X direction, the view angle is set with the gluLookAt function, specifically:
The view angle is set with the following function and parameters: gluLookAt(min(E_{1,-X}, E_{2,-X}), 0, 0, max(F_{1,-X}, F_{2,-X}), 0, 0, 0, 0, 1);
where E_{i,D} denotes the minimum bounding-box coordinate of part i in direction D and F_{i,D} denotes the maximum bounding-box coordinate of part i in direction D; here the subscript 1 denotes the first part, the shaft, and 2 denotes the second part, the bushing.
The gluLookAt function is the OpenGL utility function for setting the view angle: the first three parameters are the x, y and z world coordinates of the camera position, the middle three parameters are the x, y and z world coordinates of the point at which the camera is aimed, and the last three parameters are the x, y and z components of the camera's up direction in world coordinates;
Step 4: the view volume is set with the glOrtho function, which creates an orthographic (parallel) view volume; its parameters are, in order: the left coordinate of the view volume, the right coordinate, the bottom coordinate, the top coordinate, the nearest coordinate and the farthest coordinate; specifically:
glOrtho(-P_{I,y}, -J_{I,y}, J_{I,z}, P_{I,z}, 0, P_{O,x} - J_{O,x})
where (J_O, P_O) and (J_I, P_I) denote, respectively, the outer bounding box of the shaft and bushing bounding boxes and the bounding box of their intersection; J denotes the leftmost, lowest and nearest corner coordinates of a box and P the rightmost, uppermost and farthest corner coordinates; the subscript O refers to the outer box of the main-part and auxiliary-part bounding boxes, and the subscript I to the intersection of the main-part and auxiliary-part bounding boxes. For example, P_{I,y} denotes the y coordinate of the rightmost/uppermost/farthest corner of the intersection of the main-part and auxiliary-part bounding boxes.
In this example the cross-section of the view volume is 74 mm × 74 mm;
Step 5: carry out, through OpenGL, the discrete rendering calculation of the potentially intersecting solids of the shaft and the bushing;
Step 5.1: set all depth values in the depth buffer to 1.0 and all pixel values in the stencil buffer to 0;
Step 5.2: render the shaft from the view angle set in step 3 to obtain the rendered depth buffer. The depth test function is set to "less than", the stencil operation is set to increment by 1 the pixel value of every pixel that passes the depth test, and the stencil buffer is modified accordingly;
Step 5.3: render the back faces of the bushing from the view angle set in step 3. In this pass only the bushing fragments that overlap shaft pixels pass the stencil test; where the bushing is occluded by the shaft, the pixel value at the corresponding position of the stencil buffer modified in step 5.2 is incremented by 1 and the rest remain unchanged, and an occlusion query counts the passing pixels. Step 5.4: the stencil buffer is read with glReadPixels(); when every pixel value in the stencil buffer is 0 or 1, the shaft and the bushing do not interfere. A read-back sketch is given below.
Fig. 3 shows the effect of drawing in red the pixels of the stencil buffer whose pixel value is greater than or equal to 2; the ring of red pixels in the figure means that the shaft and the bushing have an interference relationship, and the interference type must be judged further;
Step 6: the tolerance is set to k = 2. As shown in fig. 3, the interference region calculated in this embodiment is only 1 × 1 boundary pixel, and the number of consecutive interference pixels with pixel values greater than or equal to 2 along the X and Y directions of the stencil buffer is smaller than the threshold 2, so the interference type between the shaft and the bushing in this embodiment is contact interference. The content of the corresponding position of the interference matrix is changed to 0, giving the interference matrix of the shaft and the bushing; its content can serve as a data basis in many practical engineering scenarios. For example, in assembly sequence planning, the calculation of this example shows that the shaft can be disassembled in the -X direction relative to the bushing.

Claims (10)

1. An interference detection method based on image space collision, characterised by comprising the following specific steps:
S1, assuming that the main part and the auxiliary part in the assembly model to be detected interfere in a direction D, and establishing an interference matrix relating the main part, the auxiliary part and the direction D;
S2, performing a preliminary detection of collision interference between the main part and the auxiliary part, correcting the data in the interference matrix of step S1 according to the interference condition between the main part and the auxiliary part, and proceeding to step S3 if the main part and the auxiliary part may interfere;
S3, rendering the main part and the auxiliary part separately with OpenGL, obtaining a final stencil buffer containing the pixel values of the interference pixels where the main part and the auxiliary part overlap, determining the interference result of the main part and the auxiliary part from the pixel values of the interference pixels in the final stencil buffer, correcting the data in the interference matrix of step S2 according to the interference result, and proceeding to step S4 if interference exists;
S4, taking an interference pixel in the final stencil buffer as a starting point, obtaining the pixel values of the consecutive interference pixels in the X and Y directions of the final stencil buffer, judging the interference type of the main part and the auxiliary part from the number of consecutive interference pixels whose pixel values are greater than the initial value +1, and correcting the data in the interference matrix of step S3 according to the interference-type judgment, so as to obtain a final interference matrix that accurately reflects the interference condition of the main part and the auxiliary part.
2. The interference detection method based on image space collision according to claim 1, characterised in that the assembly model to be detected comprises a plurality of part groups, each part group comprising a main part and an auxiliary part, and steps S1 to S4 are repeated to obtain the interference matrix of every part group of the assembly model to be detected in the direction D, so as to represent the interference condition of each part group in the direction D.
3. The interference detection method based on image space collision according to claim 1, characterised in that in step S2 an AABB bounding-box algorithm is used for the preliminary detection of collision interference between the main part and the auxiliary part: when the main part lies entirely on the positive side of the auxiliary part along the direction D, the main part can move along +D without interfering with the auxiliary part; when the main part lies entirely on the negative side of the auxiliary part along the direction D, the main part can move along -D without interfering with the auxiliary part; when neither of these positional relationships holds, interference between the main part and the auxiliary part is possible, and the method proceeds to step S3.
4. The interference detection method based on image space collision according to claim 1, characterised in that in step S3 the view angle and the view volume used to render the main part and the auxiliary part in OpenGL are set as follows:
1) the view-angle settings comprise the viewpoint setting and the camera-position setting: the centre of the intersection of the bounding boxes of the main part and the auxiliary part is taken as the viewpoint; the camera is placed outside the bounding boxes of the main part and the auxiliary part, along the direction opposite to the line of sight, and the line of sight is parallel to the direction D;
2) view-volume setting: the depth of the view volume exceeds the extent of the bounding boxes of the main part and the auxiliary part in the current direction D; the height and width of the view volume equal the size of the intersection region of the bounding boxes of the main part and the auxiliary part perpendicular to the line of sight.
5. The interference detection method based on image space collision according to claim 1, characterised in that in step S3 the specific steps for obtaining the final stencil buffer are:
1) establishing an initial stencil buffer, with all pixel values set to the initial value, and establishing a depth buffer, with all depth values set to the maximum value;
2) rendering, with OpenGL, the main part and the auxiliary part for which possible interference was detected in step S2;
3) where the depth value of the region projected by the main part onto the depth buffer is smaller than the original depth value of the corresponding region of the depth buffer, incrementing by 1 the pixel values of the interference pixels of the corresponding region of the stencil buffer while leaving the pixel values of the remaining regions unchanged, so as to obtain the stencil buffer after rendering of the main part;
4) where the depth value of the region projected by the auxiliary part onto the depth buffer is greater than the original depth value of the corresponding region of the depth buffer, incrementing by 1 the pixel values of the interference pixels of the corresponding region of the stencil buffer obtained after rendering the main part, so as to obtain the final stencil buffer.
6. The interference detection method based on image space collision according to claim 5, characterised in that the initial value in the stencil buffer is the minimum pixel value.
7. The interference detection method based on image space collision according to claim 1, characterised in that in step S3, when the pixel values of the interference pixels in the final stencil buffer are all equal to the initial value or to the initial value +1, the main part and the auxiliary part do not interfere;
and when the pixel value of an interference pixel in the final stencil buffer is greater than the initial value +1, the main part and the auxiliary part interfere in that region.
8. The interference detection method based on image space collision according to claim 1, characterised in that in step S4, taking an interference pixel whose pixel value is greater than the initial value +1 in the final stencil buffer as a starting point, a consecutive search is carried out along the X and Y directions of the final stencil buffer:
when the number of consecutive interference pixels whose pixel values are greater than the initial value +1 in the X and Y directions is greater than a set tolerance value, hard interference exists between the main part and the auxiliary part;
when the number of consecutive interference pixels whose pixel values are greater than the initial value +1 in the X and Y directions is smaller than the set tolerance value, contact interference exists between the main part and the auxiliary part.
9. The interference detection method based on image space collision according to claim 8, characterised in that in step S4 the tolerance value is determined according to the required collision detection accuracy.
10. An interference detection system based on image space collision, characterised by comprising a preliminary detection module, an OpenGL rendering module and an interference-type judgment module, wherein:
the preliminary detection module is used to judge the interference condition of the main part and the auxiliary part using the separating axis principle and to transmit the judgment result to the OpenGL rendering module;
the OpenGL rendering module is used to render the main part and the auxiliary part separately, to obtain a final stencil buffer containing the pixel values of the interference pixels where the main part and the auxiliary part overlap, to determine the interference result of the main part and the auxiliary part from the pixel values of the interference pixels in the final stencil buffer, and, if interference exists, to transmit the interference result to the interference-type judgment module;
the interference-type judgment module is used to take an interference pixel in the final stencil buffer as a starting point, to obtain the pixel values of the consecutive interference pixels in the X and Y directions of the final stencil buffer, and to judge the interference type of the main part and the auxiliary part from the number of consecutive interference pixels whose pixel values are greater than the initial value +1.
CN202210213323.3A 2022-03-04 2022-03-04 Interference detection method and system based on image space collision Active CN114581512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210213323.3A CN114581512B (en) 2022-03-04 2022-03-04 Interference detection method and system based on image space collision


Publications (2)

Publication Number Publication Date
CN114581512A true CN114581512A (en) 2022-06-03
CN114581512B CN114581512B (en) 2024-02-23

Family

ID=81778424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210213323.3A Active CN114581512B (en) 2022-03-04 2022-03-04 Interference detection method and system based on image space collision

Country Status (1)

Country Link
CN (1) CN114581512B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116989717A (en) * 2023-09-26 2023-11-03 玛斯特轻量化科技(天津)有限公司 Product interference detection method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020180748A1 (en) * 2001-01-17 2002-12-05 Voicu Popescu Methods and apparatus for rendering images using 3D warping techniques
CN102975082A (en) * 2012-10-29 2013-03-20 上海工程技术大学 Non-interference tool path detection method based on image information assisted multi-shaft processing
CN105469406A (en) * 2015-11-30 2016-04-06 东北大学 Bounding box and space partitioning-based virtual object collision detection method
CN108898676A (en) * 2018-06-19 2018-11-27 青岛理工大学 Method and system for detecting collision and shielding between virtual and real objects
CN113203385A (en) * 2021-04-16 2021-08-03 上海交通大学 Non-interference five-axis scanning track generation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李普; 孙长乐; 熊伟; 王海涛: "A collision detection algorithm based on semi-transparent colour overlay and depth values" (一种基于半透明颜色叠加与深度值的碰撞检测算法), 计算机科学 (Computer Science), no. 1, 15 June 2018 (2018-06-15) *
邹益胜; 丁国富; 周晓莉; 何邕; 贾美薇: "A collision detection algorithm based on image space" (一种基于图像空间的碰撞检测算法), 系统仿真学报 (Journal of System Simulation), no. 05, 8 May 2011 (2011-05-08) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116989717A (en) * 2023-09-26 2023-11-03 玛斯特轻量化科技(天津)有限公司 Product interference detection method and device
CN116989717B (en) * 2023-09-26 2024-01-12 玛斯特轻量化科技(天津)有限公司 Product interference detection method and device

Also Published As

Publication number Publication date
CN114581512B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
Liang et al. Rangeioudet: Range image based real-time 3d object detector optimized by intersection over union
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
JP7205613B2 (en) Image processing device, image processing method and program
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
US11442162B2 (en) Millimeter wave radar modeling-based method for object visibility determination
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN102047189A (en) Cutting process simulation display device, method for displaying cutting process simulation, and cutting process simulation display program
CN205451195U (en) Real -time three -dimensional some cloud system that rebuilds based on many cameras
CN111753739B (en) Object detection method, device, equipment and storage medium
CN113034593A (en) 6D pose marking method and system and storage medium
CN109791607A (en) It is detected from a series of images of video camera by homography matrix and identifying object
CN105303554B (en) The 3D method for reconstructing and device of a kind of image characteristic point
CN114792416A (en) Target detection method and device
CN114581512A (en) Interference detection method and system based on image space collision
WO2023004559A1 (en) Editable free-viewpoint video using a layered neural representation
US6518966B1 (en) Method and device for collision detection and recording medium recorded with collision detection method
US11348261B2 (en) Method for processing three-dimensional point cloud data
Giosan et al. Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information
PP et al. Efficient 3D visual hull reconstruction based on marching cube algorithm
CN114882046B (en) Panoramic segmentation method, device, equipment and medium for three-dimensional point cloud data
CN114935316A (en) Standard depth image generation method based on optical tracking and monocular vision
JPH05314244A (en) Three-dimensional information extracting method
CN116402857B (en) Moving target cross-lens tracking method based on three-dimensional calibration
CN118196356B (en) Reconstruction analysis method and system for irregular object image based on point cloud
JPH11328445A (en) Device and method for deciding collision and medium where collision deciding method is recorded

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant