CN115393620A - Part posture recognition method of light pen type three-coordinate measurement system and light pen structure - Google Patents

Part posture recognition method of light pen type three-coordinate measurement system and light pen structure

Info

Publication number
CN115393620A
CN115393620A (application CN202211148489.8A; granted as CN115393620B)
Authority
CN
China
Prior art keywords
graph
posture
matching
image
template
Prior art date
Legal status
Granted
Application number
CN202211148489.8A
Other languages
Chinese (zh)
Other versions
CN115393620B (en)
Inventor
张鹏
苗洋洋
单东日
王晓芳
贺冬梅
邹文凯
Current Assignee
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202211148489.8A priority Critical patent/CN115393620B/en
Publication of CN115393620A publication Critical patent/CN115393620A/en
Application granted granted Critical
Publication of CN115393620B publication Critical patent/CN115393620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/42 Document-oriented image-based pattern recognition based on the type of document
    • G06V30/422 Technical drawings; Geographical maps
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention relates to the technical field of posture recognition, and in particular to a part posture recognition method and a light pen structure for a light pen type three-coordinate measuring system. The method loads a part CAD model; renders virtual posture graphs of the part; extracts the region of interest (ROI); constructs a virtual observation view angle sphere to obtain a part posture graph library; and groups the graphs in the library according to a similarity strategy. A light pen with a miniature camera then photographs the part to obtain a real part image, whose ROI is extracted. After coarse matching and fine matching, matching similarity values are obtained; the graph with the largest similarity value is the optimal matching graph, and its coordinates are used to adjust the posture of the part CAD model so that it is consistent with the shooting posture of the light pen. The invention improves the measuring efficiency and accuracy of the light pen type three-coordinate measuring system; the light pen can complete part of the computer-side measuring operations, adapts better to the measuring environment, meets operators' needs, and improves measuring efficiency.

Description

Part posture recognition method of light pen type three-coordinate measurement system and light pen structure
Technical Field
The invention relates to the technical field of posture recognition, and in particular to a part posture recognition method and a light pen structure for a light pen type three-coordinate measuring system.
Background
The light pen type three-coordinate measuring system is simple to operate, easy to carry, and highly accurate, and it meets the needs of industrial field measurement and of measuring complex and large-size parts. An important function of such a system is that the part under test can be combined with its CAD design drawing, so the measurement result is compared directly with the CAD design value, which significantly improves measuring efficiency and accuracy.
Existing light pen type three-coordinate measuring systems are inconvenient when interacting with the part CAD model: an operator must manually adjust the posture of the CAD model many times, and the adjustment result is inaccurate. In addition, the light pen structure of existing systems is simple, inconvenient for measuring large-size parts, and yields low measuring efficiency.
Therefore, the invention provides a part posture recognition method and a light pen structure for a light pen type three-coordinate measuring system to solve these problems.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides a part posture recognition method for a light pen type three-coordinate measuring system, and a light pen structure.
A part posture recognition method for a light pen type three-coordinate measuring system, characterized by comprising the following steps:
Offline stage: S1, acquiring a CAD model of the part to be measured and loading it by means of the OpenGL library;
S2, rendering virtual posture graphs of the part with a realistic display effect using perspective projection;
S3, extracting the region of interest (ROI) from each generated virtual posture graph of the part;
S4, continuously changing the observation view angle and constructing a uniform, omnidirectional virtual observation view angle sphere, to obtain a part posture graph library containing a large number of part postures;
S5, grouping the graphs according to a similarity strategy, ready to be called in the online stage;
Online stage: S6, photographing the part with a light pen equipped with a miniature camera to obtain a real part image, extracting the ROI of the real image, and sending the ROI-extracted real image to a computer;
S7, the computer coarsely matching the graphs of the part posture graph library with the real ROI image;
S8, finely matching the real ROI image with the graphs obtained by coarse matching;
S9, obtaining the matching similarity values, taking the graph with the largest similarity value as the optimal matching graph, and adjusting the posture of the part CAD model according to the coordinates of the optimal matching graph so that it is consistent with the shooting posture of the light pen.
Further, in order to better implement the invention, the part model rendering in S2 is performed as follows: after S1, a 2D virtual posture graph of the part is generated. OpenGL renders the 2D virtual posture graph through model-view transformation, projection transformation, perspective division, and viewport transformation, and uses the depth information of each rendered point to judge whether the point faces the observation view angle, thereby determining whether the current rendered point is displayed on the screen.
Further, in order to better implement the invention, the part model posture at a given observation view angle is displayed on the screen, and a virtual posture graph of the part is obtained by saving the current screen image; continuously changing the observation view angle yields a large number of part posture graphs. To keep the sizes of the posture graphs consistent, a virtual observation sphere is constructed whose center coincides with the origin of the part model coordinate system, the discrete points of the sphere being the observation view points. To ensure the uniformity of the spherical observation view points, the sphere is sampled uniformly, and the coordinates of the uniform sampling points are generated by the Fibonacci grid method.
Further, in order to better implement the present invention, the specific ROI extraction method in S3 is:
s31, after a part model posture graph is stored, reading the graph immediately, copying the graph and converting the copied graph into a gray graph;
s32, carrying out threshold processing on the gray image, setting a proper gray pixel threshold, setting the pixel value larger than the threshold to be 0, namely black, setting other pixel values to be 255, namely white, and carrying out threshold processing on the gray image to obtain a black and white image of a black-and-white image;
s33, marking a connected domain of the graph in the black and white graph, wherein the connected domain is a pixel collection which has the same pixel value and is formed by adjacent pixels, and for the part posture graph, only a part model exists in the graph, so that the largest connected domain is the part model;
s34, judging the size of the connected domain according to the area of the connected domain, making a maximum connected domain external rectangle according to the maximum connected domain initial point coordinate and the width and height information, cutting the original graph in the maximum connected domain external rectangle area, and replacing the original graph, so that the ROI of the graph can be immediately extracted when each part posture graph is generated.
Further, in order to better implement the invention, the coarse matching in S7 is performed as follows: the computer calculates the aspect ratio r of each graph in the part posture graph library, sorts the r values from small to large, and divides the library graphs into m groups by r value. It then calculates the r value of the real part image and compares it with the maximum and minimum r values of each of the m groups. Suppose the real image's r value falls within the r range of the i-th group. Because coarse matching relies only on the r value, it inevitably produces a large error, especially when the real image's r value is close to the maximum or minimum r value of the i-th group. Therefore, to reduce the error caused by coarse matching, the real image is matched against the three groups i-1, i, and i+1; in particular, when i = 1 the real image is matched against groups 1 and 2, and when i = m it is matched against groups m-1 and m. Coarse matching thus screens the posture graph library down to 2 or 3 groups of graphs to be matched.
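A minimal sketch of this aspect-ratio screening (the group count m, the r values, and the boundary handling are illustrative; the patent does not specify how ties at group boundaries are broken):

```python
def coarse_match_groups(library_ratios, real_ratio, m):
    """Split library aspect ratios into m groups sorted by r, then return
    the 2 or 3 candidate groups (i-1, i, i+1) around the group i whose
    r range covers the real image's r, as described for step S7."""
    order = sorted(range(len(library_ratios)), key=lambda k: library_ratios[k])
    size = -(-len(order) // m)                 # ceiling division: group size
    groups = [order[g*size:(g+1)*size] for g in range(m)]
    groups = [g for g in groups if g]
    # find the group whose [min r, max r] range reaches real_ratio
    i = len(groups) - 1
    for g, idxs in enumerate(groups):
        hi = library_ratios[idxs[-1]]          # group max (sorted order)
        if real_ratio <= hi or g == len(groups) - 1:
            i = g
            break
    # keep groups i-1, i, i+1 (only 2 groups when i is at either end)
    keep = range(max(0, i - 1), min(len(groups), i + 2))
    return [groups[g] for g in keep]
```

Keeping the neighboring groups is exactly the error-tolerance measure the text describes: a real r near a group boundary is still guaranteed to meet its true match in the fine-matching stage.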
Further, in order to better implement the invention, the fine matching in S8 is performed as follows: template matching is performed between the real image and every graph in the 2 or 3 groups obtained by coarse matching. Template matching slides the template image over the input image; at each pixel, the similarity between the template and the input-image region it covers is calculated and stored in a result matrix. The maximum value in the result matrix marks the pixel of the input image most similar to the template image; taking that pixel as the starting point, the region of the template image's size is the region of the input image most similar to the template.
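The sliding-window search can be sketched with zero-mean normalized cross-correlation, which is the similarity OpenCV's cv2.matchTemplate computes with TM_CCOEFF_NORMED; the pure-NumPy version below is an illustrative stand-in, not the patent's own code:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Slide `template` over `image`; each result cell holds the similarity
    (zero-mean NCC, in [-1, 1]) between the template and the covered region."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(np.float64)
    t = t - t.mean()
    tn = np.sqrt((t * t).sum())
    result = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(result.shape[0]):
        for x in range(result.shape[1]):
            w = image[y:y+th, x:x+tw].astype(np.float64)
            w = w - w.mean()
            wn = np.sqrt((w * w).sum())
            if tn > 0 and wn > 0:
                result[y, x] = (w * t).sum() / (wn * tn)
    return result

def best_location(result: np.ndarray):
    """Starting point of the region most similar to the template."""
    return np.unravel_index(np.argmax(result), result.shape)
```

The maximum of the result matrix marks the starting point of the most similar region, exactly as the text describes.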
Further, in order to better implement the invention, S9 is specifically as follows: a graph obtained by coarse matching from the part posture graph library serves as the template image, and the ROI image of the real part is the input image. Because the parts in the template image and the input image are the content to be matched, and after ROI extraction the part size equals the image size, the ROI image of the real part and the library graphs are adjusted to a uniform size, so each template match requires only one calculation. Each match produces one similarity value between -1 and 1, stored in a result matrix; since the input image and the template image have the same size, each match's result matrix holds only one value. The maximum value is then found among the result matrices, and the posture graph corresponding to it is the optimal matching graph. The file name of the optimal matching graph is read to obtain its coordinate values, which are substituted into the perspective projection to render and adjust the posture of the CAD model, so that it is consistent with the shooting posture of the light pen.
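Since S9 resizes the real ROI image and every library graph to one common size, each template match reduces to a single similarity value and the optimal graph is simply the argmax over the library. A sketch under those assumptions (the nearest-neighbor resize is an illustrative stand-in for cv2.resize):

```python
import numpy as np

def resize_nn(img: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor resize to size x size."""
    h, w = img.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[np.ix_(ys, xs)]

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean NCC of two same-size images: one value in [-1, 1],
    i.e. a 1x1 'result matrix' as described for S9."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(real_roi, library, size=64):
    """Index of the library posture graph most similar to the real ROI."""
    q = resize_nn(real_roi, size)
    scores = [similarity(q, resize_nn(g, size)) for g in library]
    return int(np.argmax(scores))
```

In the full system the returned index identifies a file whose name encodes the view-point coordinates used to re-pose the CAD model.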
A light pen structure applied to part posture recognition in a light pen type three-coordinate measuring system, comprising a panel, characterized in that:
the panel is made of CNC finish-machined alloy. The front of the panel has a convex cavity with 2 horizontally symmetric round holes, and the rest of the panel front has 6 vertically symmetric round holes; 8 light-emitting diodes are fixed in the 8 round holes and connected by wires on the back of the panel. The back of the panel is connected to a rear shell. A connecting rod is screwed into the threaded hole in the middle of the bottom of the panel, and a measuring head is threaded onto the bottom of the connecting rod.
Further, in order to better implement the invention, the connecting rod is made of CNC finish-machined alloy; it is cylindrical, with a thread at one end and a threaded hole at the other. The rear shell is a carbon fiber structure with a cavity containing a lithium battery, a Raspberry Pi, a miniature camera, a DC step-down module, a power switch, and a charging socket. The top of the rear shell is an inclined plane with two platforms: the left platform carries a touch screen, and the right platform carries a key group of 4 vertically arranged keys; the touch screen is connected to the Raspberry Pi in the cavity through a flat cable. The left side has a round hole with 4 threaded holes around it; a fan on the inner side of the round hole is fixed to the left face of the cavity with screws. The right side of the rear shell has a rectangular opening with 3 rows of round holes beside it. A handle is screwed to the middle of the back of the rear shell; the back also has two large threaded holes, upper left and upper right, through which the power switch and the charging socket, respectively, are screwed into the cavity. The back further has 4 medium threaded holes through which long screws fix it to the panel, and 4 small threaded holes through which the Raspberry Pi is fixed to the rear shell. The bottom of the rear shell has a rectangular opening.
Further, in order to better implement the invention, the lithium battery is a 12 V DC power supply for the light pen. The power switch serves as the main switch of the circuit, which is divided into 3 branches: the 1st branch connects the fan; the 2nd branch connects the 8 light-emitting diodes in series; the 3rd branch connects the DC step-down module and the Raspberry Pi, which in turn connects the miniature camera, the touch screen, and the key group. The DC step-down module converts the 12 V supply to 5 V to power the Raspberry Pi. The touch screen is used to select measuring elements and to display measuring results and pictures; the key group assists the touch screen with selection and confirmation. The rectangular opening at the bottom of the rear shell exposes the lens of the miniature camera, which is placed obliquely for convenient shooting of the part under test. The rectangular opening on the right side of the rear shell exposes the expansion interfaces of the Raspberry Pi, including the network port and 4 USB ports.
The beneficial effects of the invention are:
the fast, accurate, automatic CAD-model-based posture recognition method improves the measuring efficiency and accuracy of the light pen type three-coordinate measuring system. The light pen is an independent wireless design that can complete part of the measuring operations; equipped with a touch screen, a miniature camera, and a lithium battery, it adapts better to the measuring environment, meets operators' needs, and improves measuring efficiency.
Drawings
FIG. 1 is a flow chart of part pose recognition according to the present invention;
FIG. 2 is a diagram of the model rendering process of the present invention;
FIG. 3 is a perspective projection model diagram of the present invention;
FIG. 4 is a uniform sampling point plot for observing a spherical surface in accordance with the present invention;
FIG. 5 is a flowchart of ROI extraction according to the present invention;
FIG. 6 is a comparison chart of ROI extraction according to the present invention;
FIG. 7 is a block diagram of an optical pen-type three-coordinate measuring system according to the present invention;
FIG. 8 is a circuit diagram of an optical pen according to the present invention;
FIG. 9 is a perspective view of a light pen of the present invention;
FIG. 10 is a side view of a light pen of the present invention;
FIG. 11 is a chart of the gesture recognition results of the present invention;
In FIG. 9:
1: light-emitting diode; 2: panel; 3: connecting rod; 4: measuring head; 5: rear shell; 6: Raspberry Pi expansion interface; 7: handle; 8: heat dissipation holes; 9: convex cavity.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "disposed," "connected," and "connected to" are to be construed broadly: for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected, indirectly connected through an intermediate medium, or internally connected between two elements. The specific meanings of the above terms in the invention can be understood by those skilled in the art on a case-by-case basis.
Fig. 1-11 illustrate an embodiment of the present invention, which is a method for recognizing the part posture of an optical pen type three-coordinate measuring system and an optical pen structure.
As shown in FIG. 1, the CAD-model-based part posture recognition method of the light pen type three-coordinate measuring system is divided into two stages: an offline stage and an online stage. 1) In the offline stage, a CAD model of the part to be measured is obtained and loaded by means of the OpenGL library; virtual posture graphs of the part with a realistic display effect are rendered by perspective projection; the ROI of each generated virtual posture graph is extracted; and the observation view angle is continuously changed to construct a uniform, omnidirectional virtual observation view angle sphere, yielding a part posture graph library containing a large number of part postures, whose graphs are grouped by a similarity strategy in preparation for the online stage. 2) In the online stage, a light pen with a miniature camera photographs the part to obtain a real image; the ROI of the real image is extracted, and the image is sent to a computer; the computer coarsely matches the library graphs with the real ROI image and then finely matches the real ROI image with the coarsely matched graphs to obtain similarity values; the graph with the largest similarity value is the optimal matching graph, and its coordinates are used to adjust the posture of the part CAD model so that it matches the shooting posture of the light pen.
As shown in FIG. 2, the part model is rendered as follows. Because the CAD model and a 2D image differ in dimensionality, it is difficult to match a single 2D image directly with the CAD model, so the idea of this embodiment is to generate a large number of 2D images from the CAD model and perform 2D-2D matching with the real image. First, the CAD model of the part under test is obtained and rendered by means of the OpenGL library to generate 2D virtual posture graphs of the part. OpenGL renders a 2D virtual posture graph through model-view transformation, projection transformation, perspective division, and viewport transformation.
The model-view transformation comprises two parts, the model transformation and the view transformation. The model transformation transforms the model from its initial local space V_local to world space, a larger spatial extent; it is implemented by the model matrix M_model, which comprises a translation matrix T, a scaling matrix S, and a rotation matrix R. The view transformation transforms the model from world space to observation space and is implemented by the view matrix M_view. The observation space is the space seen from the observation view angle, the result of transforming world space into the observer's field of view, so that the model is viewed from the observation coordinates, i.e. the observer's perspective. The model matrix M_model can be expressed as:
M_model = R * S * T    (1)
In the observation space the model appears in front of the observation view angle, but the visible range, i.e. the visual space, also called the clipping space V_clip, must still be determined. The projection transformation defines the clipping space and determines how the model is displayed: the part of the model lying within the clipping space can be shown on the screen. The projection matrix M_projection projects the observation space to the clipping space; coordinates within the clipping space range are converted to the normalized device coordinate system and displayed on the computer screen. This series of processes is called projection, and the projection matrix maps 3D coordinates to 2D coordinates. The formula transforming point coordinates from local space to clipping space is:
V_clip = M_projection * M_view * M_model * V_local    (2)
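Equations (1) and (2) can be checked numerically; the identity view matrix and the OpenGL-style perspective matrix below are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4); m[:3, 3] = (tx, ty, tz); return m

def scaling(s):
    m = np.eye(4); m[0, 0] = m[1, 1] = m[2, 2] = s; return m

def rotation_z(a):
    c, s = np.cos(a), np.sin(a)
    m = np.eye(4); m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
    return m

def perspective(fovy, aspect, near, far):
    """Standard OpenGL-style perspective matrix (gluPerspective form)."""
    f = 1.0 / np.tan(fovy / 2)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = f / aspect, f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# Eq. (1): M_model = R * S * T; Eq. (2): V_clip = P * V * M * v_local
M_model = rotation_z(0.0) @ scaling(1.0) @ translation(0, 0, -5)
M_view = np.eye(4)                     # assumed: camera at the origin
M_proj = perspective(np.pi / 3, 1.0, 0.1, 100.0)
v_local = np.array([0.0, 0.0, 0.0, 1.0])
v_clip = M_proj @ M_view @ M_model @ v_local
```

Note that the w component of v_clip equals the point's distance in front of the camera, which is exactly what the perspective division of the next paragraph divides by.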
the perspective projection can display the visual effect of the part model closest to reality, and has the characteristics of disappearing feeling, distance feeling, small and large distances and the like. Setting a point P on the model 1 The homogeneous coordinate is (x) 1 ,y 1 ,z 1 W), during perspective projection, the perspective projection after the model is transformed to the clipping space modifies P 1 The value of w for the point coordinates is such that the farther away from the viewing angle, the larger the w component of the point coordinates. In the last step of perspective projection, after perspective division is carried out on the space coordinates of the cutting scissors, the coordinates are transformed to the coordinates of the standardized equipment. The perspective division divides coordinates of all points in the clipping space by w components of the coordinates, and the vertex coordinates are smaller as the distance from the observation visual angle is larger, so that the visual effect of large and small distances is simulated. Point P 1 Obtaining point P after perspective division 2 ,P 2 The coordinates are given by:
P_2 = (x_1/w, y_1/w, z_1/w, 1)    (3)
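The perspective division described here is a one-liner; the toy values below (not patent-specified) show the near-large far-small effect, where a point with the same lateral offset but a larger w lands closer to the screen center:

```python
def perspective_divide(clip):
    """(x, y, z, w) -> (x/w, y/w, z/w, 1): clip space to normalized device coords."""
    x, y, z, w = clip
    return (x / w, y / w, z / w, 1.0)

near_pt = perspective_divide((2.0, 2.0, 1.0, 2.0))    # small w: near point
far_pt = perspective_divide((2.0, 2.0, 8.0, 10.0))    # large w: far point
```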
the clipping space defined by the perspective projection is in a quadrangular frustum pyramid shape, and the perspective projection model is composed of an observation visual angle, an observation plane and a part CAD model as shown in FIG. 3. The model is located in the range of the cutting space, the cutting space is formed by a quadrangular frustum pyramid area between two cross sections, and the model is mapped on an observation plane under an observation visual angle.
Finally, the viewport transformation is carried out: the normalized device coordinates on the observation plane are adapted and rendered to screen space through scaling and translation, and the depth information of each rendered point determines whether the point faces the observation view angle and thus whether it is displayed on the screen.
The part model is rendered through the OpenGL library, the posture of the part model at a given observation view angle is displayed on the screen, and the current screen image is saved to obtain a virtual posture graph of the part. Continuously changing the observation view angle yields a large number of part posture graphs. To keep the sizes of the posture graphs consistent, a virtual observation sphere is constructed: its center coincides with the origin of the part model coordinate system, and a suitable observation distance is chosen as the sphere radius; a suitable radius should fully display the different postures of the model without making the display area too large. The observation sphere of this embodiment is shown in FIG. 4, where the discrete points of the sphere are the observation view points. To ensure the uniformity of the observation view points, the sphere is sampled uniformly, and the coordinates of the uniform sampling points are generated by the Fibonacci grid method. With the radius set to 1 and N points sampled in total, the coordinates (x_n, y_n, z_n) of the n-th point are given by the following equations:
z_n = (2n - 1)/N - 1    (4)
x_n = sqrt(1 - z_n^2) * cos(2*pi*n*phi)    (5)
y_n = sqrt(1 - z_n^2) * sin(2*pi*n*phi)    (6)
where the constant phi = (sqrt(5) - 1)/2 is the golden ratio.
By the properties of the sphere, when the radius is R the coordinates of the n-th point are (R*x_n, R*y_n, R*z_n).
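The Fibonacci grid sampling described above can be sketched as follows (the golden-ratio constant is the standard choice for Fibonacci lattices; exact indexing conventions vary between implementations, so treat this as illustrative):

```python
import math

def fibonacci_sphere(N: int, R: float = 1.0):
    """Return N roughly uniform observation view points on a sphere of radius R."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0          # golden ratio constant
    points = []
    for n in range(1, N + 1):
        z = (2.0 * n - 1.0) / N - 1.0           # z_n spread evenly in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - z * z))    # radius of the z-slice circle
        theta = 2.0 * math.pi * n * phi         # golden-angle longitude step
        points.append((R * r * math.cos(theta),
                       R * r * math.sin(theta),
                       R * z))
    return points

view_points = fibonacci_sphere(1000, R=300.0)   # R: assumed observation distance
```

Each returned point is a camera position from which one posture graph would be rendered and saved.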
The graph rendered at the n-th observation view point is named with the regular pattern "x_n_y_n_z_n.bmp", for example "7.817366_6.235438_0.090000.bmp". The coordinate values of the observation view point are thus saved in the file name, so in the online stage the file name is read directly to obtain the coordinate values and restore the model posture, which is fast and efficient.
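The file-name convention can be sketched as a pair of helpers; the 6-decimal format matches the example name above, while the rest is an illustrative implementation:

```python
def pose_filename(x: float, y: float, z: float) -> str:
    """Encode an observation view-point coordinate as 'x_y_z.bmp'."""
    return f"{x:.6f}_{y:.6f}_{z:.6f}.bmp"

def parse_pose_filename(name: str):
    """Recover (x, y, z) from a file name produced by pose_filename."""
    stem = name.rsplit(".bmp", 1)[0]
    x, y, z = (float(p) for p in stem.split("_"))
    return x, y, z
```

In the online stage only the winning graph's name needs parsing, so no separate coordinate database is required.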
When the model is rendered on the screen, the display area is much larger than the size the model actually needs, so that all postures of the model remain observable; for a single image this clearly wastes a lot of storage space, and all the more so when a large number of part posture graphs are generated. Meanwhile, the number of observation-sphere sampling points affects the accuracy and speed of subsequent posture recognition: the more sampling points, the higher the recognition accuracy and the slower the recognition speed, so a suitable number of sampling points should be chosen for each project. Most existing CAD-model-based posture recognition methods must store a large number of model posture graphs and therefore occupy a large amount of storage space; with multiple models the problem becomes more prominent, and such methods are hard to deploy in storage-limited environments such as embedded systems. Taking the common lossless BMP format as an example, 1000 BMP images of 700 x 700 pixels occupy 1393 MB of storage.
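A back-of-envelope check of the storage figure, assuming uncompressed 24-bit BMPs with a 54-byte header and 4-byte row alignment (an assumption about the files used; actual sizes can differ slightly, which is consistent with the roughly 1393 MB reported):

```python
def bmp_size_bytes(width: int, height: int, header: int = 54) -> int:
    """Approximate file size of a 24-bit uncompressed BMP."""
    row = width * 3
    row += (-row) % 4          # each pixel row is padded to a 4-byte boundary
    return header + row * height

# 1000 pose graphs of 700 x 700 pixels: on the order of 1.4 GB before ROI extraction
total_mib = 1000 * bmp_size_bytes(700, 700) / 2**20
```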
To reduce the storage occupied by the part posture graphs, and also to cut the computational cost of the online stage and speed up posture recognition, this embodiment uses an ROI algorithm to extract the pixel region actually used in each part posture graph. First, as soon as a part posture graph is saved, it is read back and copied, and the copy is converted to grayscale. The grayscale image is then thresholded with a suitable gray-level threshold: pixel values above the threshold are set to 0 (black) and all others to 255 (white), yielding a black-and-white image with the part shown in white on a black background. Connected components of the image are then labeled; a connected component is a set of adjacent pixels sharing the same pixel value. An image often contains several connected components, but in the part posture graphs of this embodiment only the part model is present, so the largest connected component is the part model. The components are compared by area; the bounding rectangle of the largest component is constructed from its starting-point coordinates and its width and height, the original graph is cropped to that rectangle, and the crop replaces the original. In this way the ROI is extracted immediately as each part posture graph is generated, saving a great deal of storage. The ROI extraction process is shown in fig. 5, and a before-and-after comparison for a part posture graph is shown in fig. 6.
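The threshold-label-crop pipeline above can be sketched without any imaging library. This is a pure-Python illustration on a binary image represented as a list of lists (1 = part pixel); a production version would use OpenCV's connected-component and bounding-rectangle routines instead, and the function name is illustrative:

```python
from collections import deque

def largest_component_roi(binary):
    """Return (x, y, width, height) of the bounding rectangle of the
    largest 4-connected region of 1-pixels, or None if the image is empty.
    Mirrors the ROI step: label components, keep the largest, take its box."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    best, best_area = None, 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                # breadth-first search over one connected component
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                ys, xs, area = [], [], 0
                while q:
                    y, x = q.popleft()
                    area += 1
                    ys.append(y); xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area > best_area:
                    best_area = area
                    best = (min(xs), min(ys),
                            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
    return best
```

Cropping is then `[row[x:x + width] for row in image[y:y + height]]`, which replaces the original graph as described.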
With this method, a part posture graphic library consisting of a large number of posture graphs under different observation view angles is obtained. To verify the effect of the ROI extraction algorithm, this embodiment designed an experiment recording the storage occupied by posture graphic libraries of different sizes before and after ROI extraction, and computing the compression rate. In the experiment, the images before ROI extraction are 700 x 700 pixel BMP images, ROI extraction does not change the file format, and the part model used is shown in fig. 6. For a posture library of N graphs, let the storage occupied before ROI extraction be S_1 and the storage occupied after ROI extraction be S_2; the compression rate r_S is then given by:
$$r_S = \frac{S_1 - S_2}{S_1} \times 100\%$$
The compression rates were computed for N of 500, 1000, 2500 and 5000, as shown in Table 1; the results show that the ROI algorithm of this embodiment reduces the storage space by about 90%. In a storage-limited environment with multiple models, the graphic library can be further compressed into a zip archive with compression software and decompressed only when needed, saving still more space. This embodiment used Bandizip with the compression level set to normal and the format set to zip; the storage sizes of the compressed ROI libraries are also shown in Table 1.
TABLE 1 compression ratios for different numbers of graphic libraries
(Table 1 appears as an image in the original publication; it lists, for each library size N, the storage occupied before and after ROI extraction and after zip compression, together with the compression rates.)
To avoid the large amount of time wasted by traversing the entire part posture graphic library during online matching, this embodiment groups the posture library by similarity in the offline stage, for calling in the online stage. The aspect ratio of each graph in the library is computed; with width w and height h, the aspect ratio r is given by:
$$r = \frac{w}{h}$$
All graphic files of the library are collected in a vector container together with their corresponding r values, quick-sorted from small to large on r with the sort algorithm, and graphs with similar r values are divided into groups, giving m groups in total.
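The sort-then-split grouping can be sketched as follows. This assumes simple equal-size consecutive groups over the sorted r values; the patent does not specify the splitting rule, so the group-size choice here is an illustrative assumption, as is the function name:

```python
def group_by_aspect_ratio(files_with_sizes, m):
    """Sort pose images by aspect ratio r = w / h and split them into m
    consecutive groups. `files_with_sizes` is a list of
    (filename, width, height) tuples; each group is a list of (r, filename)
    pairs with r ascending."""
    entries = sorted(
        ((w / h, name) for name, w, h in files_with_sizes),
        key=lambda e: e[0],
    )
    size = -(-len(entries) // m)  # ceiling division: entries per group
    return [entries[i:i + size] for i in range(0, len(entries), size)]
```

Each group's minimum and maximum r (its first and last entry) are exactly the per-group bounds that the online coarse-matching step compares against.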
In the online stage, the operator holds the light pen, selects the posture recognition function, photographs the part with the camera, previews the shot on the touch screen, chooses a suitable angle, and presses a key to finish shooting. The Raspberry Pi processes the photographed part image and extracts its ROI by the same procedure as in the offline stage; with the Raspberry Pi and the computer on the same Wi-Fi network, the ROI image is sent to the computer, which computes its aspect ratio r. Coarse matching then proceeds as follows: the r value of the photographed image is compared against the maximum and minimum r values of each of the m groups in the library. Suppose it falls within the r range of group i. Because coarse matching compares only r values, it inevitably introduces error, especially when the r value of the photographed image lies near the maximum or minimum of group i. To reduce this error, the photographed image is therefore matched against 3 groups: i-1, i and i+1. In the boundary cases, when i is 1 the image is matched against the 2 groups 1 and 2, and when i is m it is matched against the 2 groups m-1 and m. Coarse matching thus screens the posture graphic library down to the 2 or 3 groups of graphs to be matched, and is then complete.
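The neighbour-group selection with its boundary cases reduces to clamping i-1 and i+1 to the valid range; a minimal sketch (function name illustrative):

```python
def candidate_groups(i, m):
    """Given that the photographed image's aspect ratio falls in group i
    (1-based) of m groups, return the group indices to pass to fine
    matching: {i-1, i, i+1}, clipped at both ends of the library."""
    low = max(1, i - 1)
    high = min(m, i + 1)
    return list(range(low, high + 1))
```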
In the fine matching process, the photographed image is template-matched against every graph in the 2 or 3 groups obtained by coarse matching. Template matching slides the template image over the input image; at each pixel, the similarity between the template and the region of the input image it covers is computed and stored in a result matrix, whose maximum value identifies the pixel in the input image most similar to the template. Taking that most similar pixel as the starting point, the region of the template's size is the region of the input image most similar to the template. Let the input image be I with width W and height H, the template image be T with width w and height h, and the result matrix be R; then R has width W - w + 1 and height H - h + 1. With the template size as the search box, (x, y) denotes the coordinates in I of the top-left element of the current search box, and (x', y') denotes the element coordinates within T and within the region of I framed by the box. The template matching of this embodiment uses the normalized correlation coefficient matching method: a perfect match yields 1, a perfect negative-correlation match yields -1, and a complete mismatch yields 0. The formula of the normalized correlation coefficient matching method is:
$$R(x,y) = \frac{\sum_{x',y'} T'(x',y')\, I'(x+x',\, y+y')}{\sqrt{\sum_{x',y'} T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',\, y+y')^2}}$$
where T'(x', y') and I'(x + x', y + y') are given by:
$$T'(x',y') = T(x',y') - \frac{1}{w \cdot h}\sum_{x'',y''} T(x'',y'')$$

$$I'(x+x',\,y+y') = I(x+x',\,y+y') - \frac{1}{w \cdot h}\sum_{x'',y''} I(x+x'',\,y+y'')$$
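For the equal-size case used by this method (library graphs and the photographed ROI are resized to the same dimensions, so the result matrix holds a single value), the normalized correlation coefficient reduces to mean-centring both images and dividing their correlation by the product of their norms. A pure-Python sketch; in practice OpenCV's `cv2.matchTemplate` with `cv2.TM_CCOEFF_NORMED` computes the same quantity, and the function name here is illustrative:

```python
import math

def ncc(template, image):
    """Normalized correlation coefficient of two equal-size grayscale
    images given as lists of lists: 1 for a perfect match, -1 for a
    perfect negative-correlation match, 0 for a complete mismatch."""
    t = [v for row in template for v in row]
    i = [v for row in image for v in row]
    mt, mi = sum(t) / len(t), sum(i) / len(i)
    tc = [v - mt for v in t]   # mean-centred template, T'
    ic = [v - mi for v in i]   # mean-centred image, I'
    num = sum(a * b for a, b in zip(tc, ic))
    den = math.sqrt(sum(a * a for a in tc) * sum(b * b for b in ic))
    return num / den if den else 0.0
```

The best match is then simply the library graph maximizing this value over all candidate groups.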
the method comprises the steps of taking a graph obtained by rough matching of a part posture graph library as a template image, taking a part real object ROI (region of interest) graph as an input image, wherein the size of the template image is the size of a template because parts in the template image and the input image are contents to be matched, and the size of the parts in the image after ROI extraction is the size of the image. The part real object ROI image and the image library image are adjusted to be uniform in size, so that the template matching can be completed once only by calculating the template once, and the time required by calculating the similarity of the template in an online stage in a sliding mode for multiple times is greatly saved. And obtaining a similarity value between-1 and 1 every time of matching, storing the similarity value in a result matrix, wherein the input image and the template image have the same size, so that only one value exists in each result matrix of matching every time, finding the maximum value from a plurality of result matrices, taking the posture graph corresponding to the maximum value as the optimal matching graph, reading the file name of the optimal matching graph to obtain the coordinate value of the optimal matching graph, bringing the optimal coordinate value into perspective projection, rendering and adjusting the posture of the CAD model, and enabling the posture of the CAD model to be consistent with the shooting posture of the light pen. The gesture recognition result is shown in fig. 
11, the CPU used in the experiment of this embodiment is AMD R7-6800H 3.2ghz, pcle4.0 solid state disk, the number of gesture graphics libraries is 1000, the libraries are divided into 30 groups, raspberry groups 4B are selected on the light pen for testing, and the time from image shooting by the light pen to gesture recognition by the part CAD model in the gesture recognition method of the light pen type three-coordinate measurement system based on the CAD model is about 1.5s, which can meet the measurement use requirement.
In this embodiment, the light pen type three-coordinate measuring system consists of a computer, a CCD camera and a light pen. Infrared light-emitting diodes on the pen body serve as detection targets; the camera photographs the light pen to collect images, which are transmitted to the computer for the relevant computation, finally yielding the measurement result. The composition of the light pen type three-coordinate measuring system is shown in fig. 7.
The light pen of existing light pen type three-coordinate measuring systems consists mainly of a panel and the light-emitting points on it; most display and operation are handled by the computer, so the operator must switch back and forth between the part under test and the computer. When measuring large parts, the operator has to walk between the part and the computer, further reducing measurement efficiency. Few international companies address this problem well, development of the light pen has been limited, China started late in the field of light pen type three-coordinate measurement, and existing light pen structures are relatively simple. This embodiment therefore designs a light pen structure for light pen type three-coordinate measuring systems, especially large-scale ones, equipped with a touch screen, a miniature camera and so on; moving some measurement operations to the light pen end can greatly improve measurement efficiency and save measurement time.
The light pen of this embodiment comprises a panel, a connecting rod, a measuring head, a rear shell, light-emitting diodes, a lithium battery, a Raspberry Pi, a touch screen, a miniature camera, a DC step-down module, a handle, a key group, a power switch and a charging socket.
The connecting rod is made of CNC finish machined alloy materials, is cylindrical, and is provided with a thread at one end and a threaded hole at the other end;
The panel is made of CNC finish-machined alloy material. The front of the panel is provided with a convex cavity, and the panel has 8 round holes for fixing the 8 light-emitting diodes, of which 2 are horizontally and symmetrically distributed on the convex cavity and the other 6 are vertically and symmetrically distributed. The back of the panel is provided with 4 small bosses, vertically and symmetrically distributed, each with a threaded hole in its middle, and the 8 light-emitting diodes are connected by wires on the back of the panel. A threaded hole is provided in the middle of the bottom of the panel; the threaded end of the connecting rod is connected to the panel through this hole, and the measuring head is connected to the connecting rod through its own threaded end;
The rear shell is of carbon fiber construction with a cavity; the components in the cavity include the lithium battery, Raspberry Pi, miniature camera, DC step-down module, power switch and charging socket. The top end of the rear shell is an inclined surface bearing two platforms: the touch screen is on the left platform, a key group of 4 vertically arranged keys is on the right platform, and the touch screen is connected to the Raspberry Pi in the cavity through a flat cable. A round hole is provided on the left side with 4 threaded holes around it, and a fan on the inner side of the round hole is fixed to the left face of the cavity with screws. The right side of the rear shell has a rectangular opening with 3 rows of round holes beside it. A handle is attached by screws to the middle of the back of the rear shell; the back has two large threaded holes at the upper left and upper right, the power switch being screwed into the cavity through the upper-left hole and the charging socket through the upper-right one. The back also has 4 medium threaded holes, fixed to the panel by long screws, and 4 small threaded holes by which the Raspberry Pi is fixed to the rear shell. The bottom of the rear shell has a rectangular opening.
As shown in fig. 8, the light pen is powered by the lithium battery, with the power switch as the main circuit switch. The circuit is divided into 3 branches: the 1st branch connects the fan; the 2nd branch connects the 8 light-emitting diodes, which are wired in series; the 3rd branch connects the DC step-down module and the Raspberry Pi, and the Raspberry Pi is connected to the miniature camera, the touch screen and the key group.
The components of the light pen structure in this embodiment function as follows: 1) the lithium battery is a 12 V DC power supply for the light pen, making it a wireless device that can measure freely and is more convenient to use; 2) the power switch controls the closing of the circuit, and a green LED on the switch lights up when powered to indicate the current state of the light pen; 3) the fan dissipates heat for the light pen; 4) the 8 light-emitting diodes form the feature points of the system, and their light is infrared, which facilitates image processing; 5) the DC step-down module converts the 12 V supply to 5 V to power the Raspberry Pi; 6) the miniature camera photographs parts and serves augmented-reality display, measurement navigation, part posture recognition and so on; 7) the touch screen selects measurement elements, displays measurement results, displays measurement pictures and so on; 8) the key group assists the touch screen with operations such as selection and confirmation; 9) the Raspberry Pi, as a microcomputer, is the control center of the light pen: it handles measurement tasks, exchanges data wirelessly with the host, processes the miniature camera's images, transmits pictures to the touch screen and receives key-group input; 10) the handle is for holding, making the light pen convenient to use; 11) the panel and connecting rod are finish-machined alloy with small machining error and high precision, improving the measurement accuracy of the light pen; the alloy material resists deformation, so shape errors change very little over long-term use; 12) the rear shell and handle are made of carbon fiber, light in weight and high in strength, reducing the weight of the light pen; 13) the touch screen surface is inclined for convenient viewing and operation during measurement; 14) the rectangular opening at the bottom of the rear shell exposes the lens of the miniature camera, which is mounted at an angle for convenient photographing of the part under test; 15) the rectangular opening on the right side of the rear shell exposes the expansion interfaces of the Raspberry Pi, including a network port and 4 USB ports; 16) the charging socket charges the lithium battery; 17) the 3 rows of holes on the right side of the rear shell are for heat dissipation.
The method of using the light pen is as follows: 1) the light pen must be used together with a CCD camera and a computer; the CCD camera is calibrated before use to determine its parameters; 2) the light pen is calibrated: the 8 light-emitting diodes on the panel are calibrated to obtain their coordinates in the light pen coordinate system, and the measuring ball of the measuring head is calibrated to obtain its coordinates in the light pen coordinate system; 3) during measurement, the power switch is pressed, the handle is held, and the measuring ball contacts the part under test; the CCD camera photographs the light pen to obtain an image containing the 8 light-emitting points, the coordinates of the diodes are computed from the CCD camera's calibration parameters and the light pen's calibration coordinates, and from these the coordinates of the measuring ball are obtained; 4) during measurement, measurement elements such as lines and surfaces can be selected on the touch screen, and measurement starts when the confirm key is pressed or touched; 5) during measurement, the measurement navigation function can be selected on the touch screen, and measurements are performed in order following the navigation pictures it displays; 6) during measurement, the posture recognition function can be selected: the part CAD model is imported, a coordinate system is established for the part, the posture recognition function is selected and the part is photographed, and the posture of the CAD model is immediately adjusted to agree with the posture in the photographed image; 7) the lithium battery should be kept sufficiently charged.
After a part coordinate system is established for the part, the light pen type three-coordinate measuring system can compare the part against its CAD design values; keeping the part posture consistent with the CAD model posture during measurement both helps the operator observe and makes direct comparison with the design values convenient. At the start of measurement, the imported posture of the part CAD model often differs greatly from the posture in which the part is placed; the part posture recognition method of this embodiment can then adjust the model posture at once. Likewise, during measurement, when the model posture differs greatly between the previous and the current measurement point, the method of this embodiment can adjust the model posture immediately.
Finally, the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them; any modifications or equivalent substitutions made to these technical solutions by persons skilled in the art that do not depart from their spirit and scope shall be covered by the claims of the present invention.

Claims (10)

1. A part posture recognition method of an optical pen type three-coordinate measurement system is characterized by comprising the following steps:
an off-line stage: s1, obtaining a CAD model of a part to be tested, and loading the CAD model of the part by means of an OpenGL library;
s2, rendering a virtual posture graph of the part with a realistic display effect by using perspective projection;
s3, extracting a region of interest ROI from the generated virtual gesture graph of the part;
s4, continuously changing the observation visual angle, and constructing a uniform and omnibearing virtual observation visual angle spherical surface to obtain a large number of part posture graphic libraries containing various postures of parts;
s5, grouping graphs according to a similarity strategy for calling at an online stage;
an online stage: s6, shooting the part by using a light pen with a miniature camera to obtain a part real object image, extracting a real object image ROI (region of interest), and sending the real object image with the ROI extracted to a computer;
s7, roughly matching the part posture graphic library graph with the real object ROI graph by the computer;
s8, performing fine matching on the object ROI image and the image obtained by coarse matching;
and S9, obtaining a matching similarity value, taking the graph with the maximum similarity value as an optimal matching graph, and adjusting the part CAD model posture according to the coordinates of the optimal matching graph so that the part CAD model posture is consistent with the shooting posture of the light pen.
2. The method of recognizing the attitude of a part in an optical pen-type three-coordinate measuring system according to claim 1, wherein:
in S2, the specific method for rendering the part model comprises: after S1, OpenGL renders a 2D virtual posture graph of the part by subjecting it to model-view transformation, projection transformation, perspective division and viewport transformation, and whether each rendered point faces the observation view angle is judged from its depth information, thereby determining whether the current rendered point is displayed on the screen.
3. The method for recognizing the attitude of a part of an optical pen-type three-coordinate measuring system according to claim 2, wherein:
the method comprises the steps of displaying part model postures under a certain observation visual angle on a screen, storing current screen graphics to obtain a part virtual posture graphic, continuously changing the observation visual angle to obtain a large number of part posture graphics, constructing a virtual observation spherical surface for ensuring the consistent size of the part posture graphics, enabling the spherical center to be superposed with the original point of a part model coordinate system, enabling spherical discrete points to be observation visual angle points, enabling the spherical surface to be uniformly sampled for ensuring the uniformity of the spherical observation visual angle points, and generating uniform sampling point coordinates by adopting a Fibonacci grid method.
4. The method for recognizing the attitude of a part of an optical pen-type three-coordinate measuring system according to claim 1, wherein:
the specific ROI extraction method in S3 comprises the following steps:
s31, after a part model posture graph is stored, reading the graph immediately, copying the graph and converting the copied graph into a gray graph;
s32, thresholding the grayscale image with a suitable gray-level threshold: pixel values above the threshold are set to 0 (black) and all others to 255 (white), yielding a black-and-white image with the part shown in white on a black background;
s33, marking a connected domain of the graph in the black and white graph, wherein the connected domain is a pixel collection which has the same pixel value and is formed by adjacent pixels, and for the part posture graph, only a part model exists in the graph, so that the largest connected domain is the part model;
s34, judging the size of the connected domain according to the area of the connected domain, making a maximum connected domain external rectangle according to the maximum connected domain initial point coordinate and the width and height information, cutting the original graph in the maximum connected domain external rectangle area, and replacing the original graph, so that the ROI of the graph can be immediately extracted when each part posture graph is generated.
5. The method for recognizing the attitude of a part of an optical pen-type three-coordinate measuring system according to claim 1, wherein:
the specific method for coarse matching in S7 comprises: the computer first computes the aspect ratio r of each graph in the part posture graphic library, sorts the r values from small to large, and divides the library graphs into m groups according to r; it then computes the r value of the photographed part image and compares it against the maximum and minimum r values of each of the m groups; suppose the r value of the photographed image falls within the r range of group i; because coarse matching compares only r values, large errors are unavoidable, especially when the r value of the photographed image is close to the maximum or minimum of group i; therefore, to reduce the error introduced by coarse matching, the photographed image is matched against the 3 groups i-1, i and i+1; in particular, when i is 1 it is matched against the 2 groups 1 and 2, and when i is m it is matched against the 2 groups m-1 and m; coarse matching thus screens the posture graphic library and retains the 2 or 3 groups of graphs to be matched, completing the coarse match.
6. The method for recognizing the attitude of a part of an optical pen-type three-coordinate measuring system according to claim 1, wherein:
the specific method for fine matching in S8 comprises: the photographed image is template-matched against every graph in the 2 or 3 groups obtained by coarse matching; template matching slides the template image over the input image, and at each pixel the similarity between the template and the region of the input image it covers is computed and stored in a result matrix; the maximum value in the result matrix identifies the pixel in the input image most similar to the template image, and taking that most similar pixel as the starting point, the region of the template's size is the region of the input image most similar to the template image.
7. The method for recognizing the attitude of a part of an optical pen-type three-coordinate measuring system according to claim 1, wherein:
the S9 is specifically that a graph obtained by rough matching of a part posture graph library is used as a template image, a part real object ROI graph is used as an input image, the part real object ROI graph is the size of the template because the template image and a part in the input image are contents to be matched, and the size of the part in the image is the size of the image after ROI extraction, so that the size of the template image is the size of the template, the part real object ROI graph and the graph in the graph library are adjusted to be in a uniform size, template matching can be completed once by the template only through calculation, a similarity value between-1 and 1 is obtained once through matching, the similarity value is stored in a result matrix, and because the size of the input image is the same as that of the template image, only one value exists in each matched result matrix, the maximum value is found from a plurality of result matrices, the posture graph corresponding to the maximum value is the optimal matching graph, the file name of the optimal matching graph is read, the coordinate value of the optimal matching graph is obtained, the optimal matching graph is brought into projection, the rendering and the posture of a CAD model is adjusted, so that the posture of the CAD model is consistent with the coordinate value of the shooting by an optical pen.
8. A light pen structure for part posture recognition of a light pen type three-coordinate measuring system, comprising a panel, characterized in that:
the panel is of CNC finish-machined alloy material; the front of the panel is provided with a convex cavity, on which 2 horizontally and symmetrically distributed round holes are arranged, and 6 further vertically and symmetrically distributed round holes are arranged elsewhere on the front of the panel; the 8 round holes fix 8 light-emitting diodes, which are connected by wires on the back of the panel; the back of the panel is connected with a rear shell; a connecting rod is connected into the threaded hole in the middle of the bottom of the panel, and a measuring head is threaded onto the bottom of the connecting rod.
9. The optical pen structure for optical pen three coordinate measurement system part pose recognition according to claim 8, wherein:
the connecting rod is made of CNC finish machining alloy materials, is cylindrical, and is provided with a thread at one end and a threaded hole at the other end;
the rear shell is of carbon fiber construction with a cavity, the components in the cavity including a lithium battery, a Raspberry Pi, a miniature camera, a DC step-down module, a power switch and a charging socket; the top end of the rear shell is an inclined surface bearing two platforms, with a touch screen on the left platform and a key group of 4 vertically arranged keys on the right platform, the touch screen being connected to the Raspberry Pi in the cavity through a flat cable; a round hole is provided on the left side with 4 threaded holes around it, and a fan on the inner side of the round hole is fixed to the left face of the cavity with screws; the right side of the rear shell has a rectangular opening with 3 rows of round holes beside it; a handle is attached by screws to the middle of the back of the rear shell; the back has two large threaded holes at the upper left and upper right, the power switch being screwed into the cavity through the upper-left hole and the charging socket through the upper-right one; the back also has 4 medium threaded holes fixed to the panel by long screws, and 4 small threaded holes by which the Raspberry Pi is fixed to the rear shell; the bottom of the rear shell has a rectangular opening.
10. The light pen structure for part posture recognition in a light pen type three-coordinate measurement system according to claim 9, wherein:
the lithium battery is a 12 V DC power supply used to power the light pen; the power switch serves as the main switch of the circuit, which divides into 3 branches: the first branch feeds the fan, the second branch feeds 8 light-emitting diodes connected in series, and the third branch feeds the DC buck module and the Raspberry Pi, the Raspberry Pi in turn being connected to the miniature camera, the touch screen and the key group;
the DC buck module converts the 12 V supply to 5 V to power the Raspberry Pi; the touch screen is used to select measurement elements, display measurement results and show the measurement image; the key group assists the touch screen in selection and confirmation; the rectangular opening at the bottom of the rear shell exposes the lens of the miniature camera, which is mounted at an angle so that the part to be measured can be photographed conveniently;
the rectangular opening on the right side of the rear shell exposes the expansion interfaces of the Raspberry Pi, including an Ethernet port and 4 USB ports.
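As an illustration only (not part of the claims), the three-branch power topology recited in claim 10 can be summarized in a small data model. The component names and the 12 V to 5 V conversion are taken from the claim text; the dictionary structure itself is a hypothetical sketch, not an implementation disclosed by the patent:

```python
# Hypothetical model of the light pen circuit described in claim 10.
# The 12 V lithium battery feeds a master power switch that splits into
# three branches: the fan, eight series-connected LEDs, and a DC buck
# module stepping 12 V down to 5 V for the Raspberry Pi, which in turn
# drives the miniature camera, the touch screen and the key group.
BATTERY_VOLTAGE_V = 12.0  # lithium battery output, per claim 10
BUCK_OUTPUT_V = 5.0       # buck module output feeding the Raspberry Pi

branches = {
    "branch_1": ("fan",),
    "branch_2": ("led",) * 8,  # 8 LEDs wired in series
    "branch_3": ("dc_buck_module", "raspberry_pi"),
}

# Devices downstream of the Raspberry Pi on branch 3.
pi_peripherals = ("miniature_camera", "touch_screen", "key_group")

print(len(branches), len(branches["branch_2"]), BUCK_OUTPUT_V)  # → 3 8 5.0
```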
CN202211148489.8A 2022-09-21 2022-09-21 Light pen type three-coordinate measuring system part gesture recognition method and light pen Active CN115393620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211148489.8A CN115393620B (en) 2022-09-21 2022-09-21 Light pen type three-coordinate measuring system part gesture recognition method and light pen


Publications (2)

Publication Number Publication Date
CN115393620A true CN115393620A (en) 2022-11-25
CN115393620B CN115393620B (en) 2023-07-14

Family

ID=84125802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211148489.8A Active CN115393620B (en) 2022-09-21 2022-09-21 Light pen type three-coordinate measuring system part gesture recognition method and light pen

Country Status (1)

Country Link
CN (1) CN115393620B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1570547A (en) * 2004-04-23 2005-01-26 天津大学 Light pen type portable three dimensional coordinates measuring system
CN106845354A (en) * 2016-12-23 2017-06-13 中国科学院自动化研究所 Partial view base construction method, part positioning grasping means and device
CN109373895A (en) * 2018-10-18 2019-02-22 九江精密测试技术研究所 A kind of light pen measuring system light pen
CN109446895A (en) * 2018-09-18 2019-03-08 中国汽车技术研究中心有限公司 A kind of pedestrian recognition method based on human body head feature
CN109636854A (en) * 2018-12-18 2019-04-16 重庆邮电大学 A kind of augmented reality three-dimensional Tracing Registration method based on LINE-MOD template matching
CN109887030A (en) * 2019-01-23 2019-06-14 浙江大学 Texture-free metal parts image position and posture detection method based on the sparse template of CAD
WO2022040983A1 (en) * 2020-08-26 2022-03-03 南京翱翔智能制造科技有限公司 Real-time registration method based on projection marking of cad model and machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xie Zexiao et al.: "Light pen type dual-camera three-dimensional coordinate vision measurement system", Optical Technique (《光学技术》), vol. 38, no. 4, pp. 459-464 *

Also Published As

Publication number Publication date
CN115393620B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN109859296B (en) Training method of SMPL parameter prediction model, server and storage medium
JP4677536B1 (en) 3D object recognition apparatus and 3D object recognition method
CN108537881B (en) Face model processing method and device and storage medium thereof
KR100443552B1 (en) System and method for embodying virtual reality
JP6740033B2 (en) Information processing device, measurement system, information processing method, and program
Filip et al. Bidirectional texture function modeling: A state of the art survey
CN109285215A (en) A kind of human 3d model method for reconstructing, device and storage medium
JP5093053B2 (en) Electronic camera
CN108230395A (en) Stereoscopic image is calibrated and image processing method, device, storage medium and electronic equipment
CN113689578B (en) Human body data set generation method and device
CN109271023B (en) Selection method based on three-dimensional object outline free-hand gesture action expression
CN111460937B (en) Facial feature point positioning method and device, terminal equipment and storage medium
JP4552431B2 (en) Image collation apparatus, image collation method, and image collation program
CN112950759B (en) Three-dimensional house model construction method and device based on house panoramic image
CN105378573B (en) The computational methods of information processor, examination scope
CN118102044A (en) Point cloud data generation method, device, equipment and medium based on 3D Gaussian splatter
CN112197708B (en) Measuring method and device, electronic device and storage medium
CN115393620A (en) Part posture recognition method of light pen type three-coordinate measurement system and light pen structure
Vietti et al. Development of a low-cost and portable device for Reflectance Transformation Imaging
CN109166176B (en) Three-dimensional face image generation method and device
KR102662058B1 (en) An apparatus and method for generating 3 dimension spatial modeling data using a plurality of 2 dimension images acquired at different locations, and a program therefor
CN112462948B (en) Calibration method and device based on deviation of user gesture control by depth camera
CN112150527B (en) Measurement method and device, electronic equipment and storage medium
CN111429570B (en) Method and system for realizing modeling function based on 3D camera scanning
CN113496142A (en) Method and device for measuring volume of logistics piece

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant