CN117367308A - Three-dimensional full-field strain measurement method and application thereof in mechanical equipment - Google Patents


Info

Publication number
CN117367308A
Authority
CN
China
Prior art keywords
image
point
information
detected
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311589300.3A
Other languages
Chinese (zh)
Inventor
盛林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Special Equipment Safety Supervision Inspection Institute of Jiangsu Province
Original Assignee
Special Equipment Safety Supervision Inspection Institute of Jiangsu Province
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Special Equipment Safety Supervision Inspection Institute of Jiangsu Province filed Critical Special Equipment Safety Supervision Inspection Institute of Jiangsu Province
Priority to CN202311589300.3A priority Critical patent/CN117367308A/en
Publication of CN117367308A publication Critical patent/CN117367308A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/16 Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Abstract

The invention discloses a three-dimensional full-field strain measurement method and its application in mechanical equipment. The scheme ingeniously acquires surface images of an object to be detected in different states through multi-directional image acquisition, selects reference points in the images and locates them on the basis of the surface images. By combining the reference points across a plurality of images of the object in a given state, the pitch angle of each image acquisition unit at the moment of acquisition is corrected. The three-dimensional coordinate positions of all reference points are then determined in a constructed virtual three-dimensional coordinate system according to the relative position information of the image acquisition units, completing the contour inversion of the relevant part of the object to be detected. Strain information of the object is obtained by comparing its contours before and after the strained state.

Description

Three-dimensional full-field strain measurement method and application thereof in mechanical equipment
Technical Field
The invention relates to the technical fields of measurement and equipment monitoring, in particular to a three-dimensional full-field strain measurement method and its application in mechanical equipment.
Background
Strain is the local relative deformation of an object under external force, a non-uniform temperature field, or other factors. In mechanical design and manufacturing, strain evaluation and monitoring is one means of assessing the operational reliability of a product, especially for equipment subject to large external forces or strong vibration during operation, such as cranes, carriers and transfer devices. Once such equipment is installed on a working site, it exhibits varying strain conditions owing to the working environment, the working object, and performance degradation of its parts. In current strain measurement, a common approach is to combine binocular vision with structured light for three-dimensional reconstruction and then analyse the strain condition of the detected object, as in the Chinese patent applications "Three-dimensional deformation measuring system, three-dimensional deformation measuring method, three-dimensional deformation measuring device and storage medium" (application No. 202011424019.0) and "Three-dimensional deformation measuring method and three-dimensional deformation measuring device" (202110925790.4). These schemes depend heavily on a reference object; in particular, when grating fringes are used as the reference, certain requirements are imposed on the detection environment: a device projecting the grating fringes must be well positioned relative to the object to be detected, and a camera must acquire images at the corresponding reflection position. This is highly inconvenient for the implementation and application on large equipment working outdoors, such as cranes. How to perform strain measurement on outdoor or large-scale equipment without interfering with its work is therefore a research subject of great practical significance.
Disclosure of Invention
In view of the above, the present invention aims to provide a three-dimensional full-field strain measurement method that is reliable to implement, flexible to apply and yields results of good reference value, together with its application in mechanical equipment.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a three-dimensional full field strain measurement method, comprising:
S01, acquiring a surface image of an object to be detected in a first state from at least one direction, and generating at least one first image;
S02, selecting a reference point in the generated first image, and generating first reference point information;
S03, enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
S04, acquiring a surface image of the object to be detected in a second state from at least one direction, and generating at least one second image;
S05, selecting a reference point from the generated second image, and generating second reference point information;
S06, carrying out three-dimensional contour inversion on the object to be detected according to the first reference point information and at least one first image to generate first contour data;
S07, carrying out three-dimensional contour inversion on the object to be detected according to the second reference point information and at least one second image to generate second contour data;
S08, comparing the first contour data with the second contour data according to preset conditions to obtain strain information of the object to be detected after the first state enters the second state.
As a possible implementation, in S01 and S04 the surface images of the object to be detected in the first state and the second state are acquired by at least three image acquisition units, which capture the surface of the object from at least three different directions, so that at least three first images and at least three second images are generated;
wherein the first images generated in S01 are assembled to form a first image group;
the second images generated in S04 are assembled to form a second image group;
when each first image and second image is generated, a unique image ID corresponding to that image is generated.
As a preferred implementation choice, in S02 and S05 of the present solution, the surface of the object to be detected preferably carries feature structures or feature marks for selecting reference points, and these feature structures or feature marks constitute a plurality of feature points;
the image ranges corresponding to the first image and the second image at least cover more than three characteristic points on the surface of the object to be detected;
when a feature point is selected as a reference point, the correspondingly generated first reference point information and/or second reference point information carries the corresponding feature point ID, and different feature points have different ID numbers.
As a preferred implementation choice, in S01 and S04 of this embodiment, when the surface images of the object to be detected in the first state and the second state are acquired from at least one direction, the relative positions of the different image acquisition units, together with their posture data and working parameters at the moment of acquisition, are recorded;
the first image and the second image each have a feature point located at the centre of the image frame;
and when the first image and the second image are generated, corresponding generation of association information is carried out, wherein the association information comprises position data, posture data, working parameters and generation time of an image acquisition unit for generating the corresponding first image and the second image.
As a preferred implementation choice, in S02 of the present embodiment, selecting a reference point in the generated first image and generating the first reference point information preferably comprises:
acquiring all the generated first images, positioning the feature points in the first images, generating first feature point positioning information, and then carrying out ID assignment on the feature points;
the first feature point positioning information is used for marking the position information of the different feature points in the first image, and is associated with the image ID of the first image and the feature point ID corresponding to each feature point;
in addition, the same feature point is assigned the same feature point ID across different first images; the first feature point positioning information and the feature point IDs from all the first images are then collected to generate the first reference point information.
As a preferred implementation choice, in S05 of the present embodiment, selecting a reference point in the generated second image and generating the second reference point information comprises:
acquiring all the generated second images, positioning the feature points in the second images, and generating second feature point positioning information; comparing and matching the second feature point positioning information with the first feature point positioning information corresponding to the first reference point information; when the same feature point exists, assigning its ID in the second image by reference to the first reference point information, and when a different feature point exists, assigning it a new ID; and then collecting the second feature point positioning information and the feature point IDs from all the second images to generate the second reference point information;
The second feature point positioning information is used for marking position information of different feature points in the second image, and is associated with an image ID of the second image and a feature point ID corresponding to the feature points.
As a preferred implementation option, the solution S06 preferably comprises:
A01, acquiring the first image group, the first reference point information, and the association information corresponding to the first images in the first image group;
A02, constructing a virtual three-dimensional coordinate system;
A03, extracting a feature point ID from the first reference point information and setting it as feature point A; based on the extracted feature point ID, extracting from the first reference point information the 3 pieces of first feature point positioning information associated with that ID; and, based on these 3 pieces of first feature point positioning information, extracting from the first image group the 3 corresponding first images and the association information corresponding to those first images;
A04, according to the association information of the first images extracted in A03, establishing the corresponding image acquisition units in the virtual three-dimensional coordinate system in the form of virtual point locations;
A05, acquiring the feature point ID corresponding to the feature point located at the centre of the frame in each first image extracted in A03, and setting it as feature point B;
A06, calculating, by combining the first feature point positioning information of the first image corresponding to feature point A and feature point B, the displacement vector that moves feature point A to the centre of the first image frame; adjusting the posture data of the image acquisition unit corresponding to the first image according to the displacement vector and the association information, to obtain the image-shooting pitch angle data the unit would have if feature point A were located at the centre of the frame; and repeating this step to obtain such pitch angle data for all the image acquisition units corresponding to the 3 first images extracted in A03;
A07, taking the virtual point location of each image acquisition unit in the virtual three-dimensional coordinate system as a starting point and the pitch angle data processed in A06 as the emission angle, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point A in the virtual three-dimensional coordinate system;
A08, repeating A03-A07 to complete the three-dimensional position confirmation of all feature points in the virtual three-dimensional coordinate system, and then virtually connecting adjacent feature points to generate the first contour data, thereby realizing the three-dimensional contour inversion of the object to be detected in the first state.
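Step A06 above virtually re-aims each acquisition unit so that the chosen feature point sits at the frame centre, turning a pixel displacement into an angular correction. A minimal sketch of that idea, assuming a simple linear pixels-to-angle model with a hypothetical field of view and resolution (a real camera would need intrinsic calibration; none of these values come from the patent), might look like this:

```python
def corrected_angles(pitch_deg, yaw_deg, feature_px, centre_px,
                     fov_deg=(60.0, 40.0), resolution=(1920, 1080)):
    """Sketch of the idea in step A06: virtually re-aim the camera so a
    feature sits at the frame centre, yielding corrected shooting angles.
    Assumes a linear pixels-to-angle model (small-angle approximation)
    and an image y axis that points downward; the field of view and
    resolution defaults are hypothetical, not values from the patent."""
    deg_per_px_x = fov_deg[0] / resolution[0]
    deg_per_px_y = fov_deg[1] / resolution[1]
    du = feature_px[0] - centre_px[0]   # displacement vector, in pixels
    dv = feature_px[1] - centre_px[1]
    # centring the feature tilts the optical axis by the angular
    # equivalent of the pixel displacement
    return (pitch_deg - dv * deg_per_px_y, yaw_deg + du * deg_per_px_x)

# a feature 270 px above the frame centre raises a 10 degree pitch to 20 degrees
pitch, yaw = corrected_angles(10.0, 0.0, (960, 270), (960, 540))
```

The corrected pitch (and, in general, yaw) is what step A07 then uses as the emission angle of the virtual ray from that acquisition unit's virtual point location.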
As a preferred implementation option, the solution S07 preferably includes:
B01, acquiring the second image group, the second reference point information, and the association information corresponding to the second images in the second image group;
B02, constructing a virtual three-dimensional coordinate system;
B03, extracting a feature point ID from the second reference point information and setting it as feature point C; based on the extracted feature point ID, extracting from the second reference point information the 3 pieces of second feature point positioning information associated with that ID; and, based on these 3 pieces of second feature point positioning information, extracting from the second image group the 3 corresponding second images and the association information corresponding to those second images;
B04, according to the association information of the second images extracted in B03, establishing the corresponding image acquisition units in the virtual three-dimensional coordinate system in the form of virtual point locations;
B05, acquiring the feature point ID corresponding to the feature point located at the centre of the frame in each second image extracted in B03, and setting it as feature point D;
B06, calculating, by combining the second feature point positioning information of the second image corresponding to feature point C and feature point D, the displacement vector that moves feature point C to the centre of the second image frame; adjusting the posture data of the image acquisition unit corresponding to the second image according to the displacement vector and the association information, to obtain the image-shooting pitch angle data the unit would have if feature point C were located at the centre of the frame; and repeating this step to obtain such pitch angle data for all the image acquisition units corresponding to the 3 second images extracted in B03;
B07, taking the virtual point location of each image acquisition unit in the virtual three-dimensional coordinate system as a starting point and the pitch angle data processed in B06 as the emission angle, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point C in the virtual three-dimensional coordinate system;
B08, repeating B03-B07 to complete the three-dimensional position confirmation of all feature points in the virtual three-dimensional coordinate system, and then virtually connecting adjacent feature points to generate the second contour data, thereby realizing the three-dimensional contour inversion of the object to be detected in the second state.
As a preferred implementation choice, comparing the first contour data with the second contour data according to the preset conditions in S08 of the present embodiment, to obtain the strain information of the object to be detected after it enters the second state from the first state, comprises one of the following:
(1) overlapping and matching the first contour data and the second contour data to obtain the overlapping region with the highest degree of overlap, and then marking the non-overlapping part between the second contour data and the first contour data to obtain the strain information of the object to be detected after it enters the second state from the first state;
(2) selecting from the first contour data or the second contour data 3 feature points that do not lie on the same virtual straight line as datum points, superimposing the corresponding datum feature points of the first contour data and the second contour data, and marking the non-overlapping part between the second contour data and the first contour data to obtain the strain information of the object to be detected after it enters the second state from the first state.
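Comparison option (2) above can be sketched as follows. For brevity the sketch aligns the two contours on the centroid of the three datum feature points (translation only; a full implementation would also solve for the rotation, e.g. with the Kabsch algorithm) and then reports every feature point whose position no longer overlaps. All names, coordinates and the tolerance are illustrative assumptions.

```python
import math

def strain_displacements(first, second, datum_ids, tol=1e-6):
    """Sketch of comparison option (2): superimpose two contours on shared
    datum feature points, then flag the points that no longer overlap.
    first/second: dict feature_id -> (x, y, z) from the contour inversion.
    Alignment here is translation-only (datum centroid matching); a full
    version would also recover rotation, e.g. via the Kabsch algorithm."""
    def centroid(pts, ids):
        chosen = [pts[i] for i in ids]
        n = len(chosen)
        return tuple(sum(p[k] for p in chosen) / n for k in range(3))

    c1, c2 = centroid(first, datum_ids), centroid(second, datum_ids)
    shift = tuple(a - b for a, b in zip(c1, c2))   # move second onto first
    report = {}
    for fid in first.keys() & second.keys():
        aligned = tuple(c + s for c, s in zip(second[fid], shift))
        d = math.dist(first[fid], aligned)
        if d > tol:
            report[fid] = d    # non-overlapping part: displacement magnitude
    return report

first = {"F1": (0, 0, 0), "F2": (1, 0, 0), "F3": (0, 1, 0), "F4": (0.5, 0.5, 0)}
# second state: whole contour rigidly translated, F4 additionally deflected in z
second = {k: (x + 2, y, z) for k, (x, y, z) in first.items()}
second["F4"] = (2.5, 0.5, -0.1)
report = strain_displacements(first, second, ["F1", "F2", "F3"])
```

The rigid translation common to all points is absorbed by the datum alignment, so only the genuinely strained point F4 is reported, with its deflection magnitude.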
Based on the above, the present invention also provides a three-dimensional full field strain measurement system, which includes:
the image shooting units are used for acquiring a surface image of the object to be detected in a first state from at least one direction and generating at least one first image, and for acquiring a surface image of the object to be detected in a second state from at least one direction and generating at least one second image;
the image positioning unit is used for selecting a reference point in the generated first image and generating first reference point information, and for selecting a reference point in the generated second image and generating second reference point information;
the strain applying unit is used for enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
the contour construction unit is used for carrying out three-dimensional contour inversion on the object to be detected according to the first reference point information and at least one first image to generate first contour data, and for carrying out three-dimensional contour inversion according to the second reference point information and at least one second image to generate second contour data;
and the data matching unit is used for comparing the first contour data with the second contour data according to preset conditions to obtain strain information of the object to be detected after the first state enters the second state.
Based on the above, the invention also provides a mechanical equipment strain measurement method for measuring the strain of mechanical equipment such as a crane, which comprises the three-dimensional full-field strain measurement method; the object to be detected is a part whose outer surface is exposed on the outside of the mechanical equipment.
Based on the foregoing, the present invention further provides a computer readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored, where the at least one instruction, at least one program, code set, or instruction set is loaded by a processor and executed to implement a three-dimensional full-field strain measurement method as described above or a mechanical device strain measurement method as described above.
By adopting the above technical scheme, compared with the prior art, the invention has the following beneficial effects: surface images of the object to be detected in different states are acquired through multi-directional image acquisition; reference points are selected in the images and located on the basis of the surface images; by combining the reference points across a plurality of images of the object in a given state, the pitch angle of each image acquisition unit at the moment of acquisition is corrected; and finally the three-dimensional coordinate positions of all the reference points (the feature points) are determined in a constructed virtual three-dimensional coordinate system according to the relative position information of the image acquisition units. The contour inversion of the part of the object requiring strain measurement is thereby completed, and the strain information of the object is obtained by comparing its contours before and after the strained state.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a simplified implementation of the method of the present invention;
FIG. 2 is a schematic representation of a simplified implementation of the method of the present invention for crane beam strain monitoring;
FIG. 3 is a schematic flow chart of a simplified implementation of the method of the present invention for generating first profile data;
FIG. 4 is a schematic diagram of the pitch angle data adjustment of the image acquisition unit according to the method of the present invention;
FIG. 5 is a schematic diagram of the principle by which the method of the present invention locates the position of a feature point in the virtual three-dimensional coordinate system using three image acquisition units;
FIG. 6 is a schematic flow chart of a simplified implementation of the method of the present invention for generating second profile data;
FIG. 7 is a schematic illustration of first contour data and second contour data generated by the method of the present invention, in which only the feature-point strain condition of a single side of the object to be detected is shown;
FIG. 8 is one of the schematic unit module connection diagrams of the system of the present invention;
FIG. 9 is a second schematic diagram of a simplified unit module connection of the system of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is specifically noted that the following examples are only for illustrating the present invention, but do not limit the scope of the present invention. Likewise, the following examples are only some, but not all, of the examples of the present invention, and all other examples, which a person of ordinary skill in the art would obtain without making any inventive effort, are within the scope of the present invention.
As shown in fig. 1, the method for measuring the three-dimensional full-field strain according to the embodiment includes:
S01, acquiring a surface image of an object to be detected in a first state from at least one direction, and generating at least one first image;
S02, selecting a reference point in the generated first image, and generating first reference point information;
S03, enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
S04, acquiring a surface image of the object to be detected in a second state from at least one direction, and generating at least one second image;
S05, selecting a reference point from the generated second image, and generating second reference point information;
S06, carrying out three-dimensional contour inversion on the object to be detected according to the first reference point information and at least one first image to generate first contour data;
S07, carrying out three-dimensional contour inversion on the object to be detected according to the second reference point information and at least one second image to generate second contour data;
S08, comparing the first contour data with the second contour data according to preset conditions to obtain strain information of the object to be detected after the first state enters the second state.
In this scheme, the number of surface images acquired of the object to be detected in the first state and the second state is directly and positively correlated with the inversion precision of the contour data.
When the reference points on the object to be detected are located from a single image, judgment of the object's three-dimensional structure is limited: in most cases only a rough hierarchical relationship between two structures can be established, and errors remain in judging the depth of the picture. Consequently, when the first contour data and the second contour data are each inverted from only one first image and one second image, the accuracy or fineness of the strain information is relatively low; the strain condition is then judged mostly from the displacement changes of the corresponding structures within the image frame.
When the reference points are located from two images, the judgment precision of the three-dimensional structure is higher than with a single image: the two first images can each judge, from a different acquisition angle, the hierarchical relationship between the structures covered by the images, but a certain error still remains in the picture depth. When the first contour data and the second contour data are inverted from two first images and two second images respectively, the constructed contour data have a certain reliable precision on the two-dimensional level, but considerable uncertainty remains in judging strain in three-dimensional space: only one ray for judging the position of a reference point can be formed by combining the acquisition direction angle of each image with the reference point, and even where two such rays intersect, the position of the point in three-dimensional space is difficult to judge reliably.
In geometric mathematics, when the positions and relative positions of three reference points (A, B, C) are known and a fourth reference point (D) is constructed in space, rays AD, BD and CD can be established; with the ray emission angles known, the three-dimensional coordinates of D in space can be deduced from the relative position distances of the three reference points (A, B, C) and the ray emission angles. Therefore, when the object to be detected is imaged from three different direction angles, the three-dimensional coordinates of the feature points on the object can be obtained from the positional relationship among the image acquisition units and the pitch angles of image acquisition, better reflecting the contour data of the object in the first state and the second state.
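The geometric reasoning above, locating a point from three known ray origins and emission angles, amounts to intersecting three rays. A minimal, illustrative sketch (the names are assumptions; the patent derives ray directions from the corrected pitch angles, whereas this sketch takes unit direction vectors directly) computes the least-squares intersection, since measured rays rarely meet exactly:

```python
import math

def triangulate(rays):
    """rays: list of (origin, unit_direction) pairs in 3D.
    Returns the least-squares intersection: the point minimising the
    summed squared distance to all rays, by solving the 3x3 normal
    equations (sum_i (I - d_i d_i^T)) p = sum_i (I - d_i d_i^T) o_i
    with Cramer's rule."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        for r in range(3):
            for c in range(3):
                m = (1.0 if r == c else 0.0) - d[r] * d[c]
                A[r][c] += m
                b[r] += m * o[c]
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det3(A)
    p = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for r in range(3):
            Ak[r][k] = b[r]
        p.append(det3(Ak) / D)
    return tuple(p)

# three acquisition-unit positions, all aimed at the same feature point
target = (1.0, 2.0, 3.0)
origins = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
rays = []
for o in origins:
    v = tuple(t - a for t, a in zip(target, o))
    n = math.sqrt(sum(x * x for x in v))
    rays.append((o, tuple(x / n for x in v)))
p = triangulate(rays)
```

Because the three acquisition units are not collinear with the target, the normal-equation matrix is invertible and the recovered point matches the feature point's true coordinates.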
To better illustrate this embodiment, image acquisition from at least three direction angles is taken as the starting point. A bridge crane is taken as an example of the object to be detected: strain measurement is performed on the cross beam of the bridge crane, and the strain condition of the beam during operation is measured, as illustrated in fig. 2.
Further, in the present embodiments S01 and S04, the surface image acquisition of the object to be detected in the first state and the second state is performed by at least three image acquisition units, which respectively perform surface image acquisition of the object to be detected from at least three directions, and the generated first image and the generated second image are at least three.
For convenience of data collection, retrieval and identification, the first images generated in S01 are assembled to form a first image group; the second images generated in S04 are assembled to form a second image group; and when each first image and second image is generated, a unique image ID corresponding to that image is generated.
Accordingly, as a preferred implementation choice, in S02 and S05 of the present embodiment, the surface of the object to be detected preferably carries feature structures or feature marks for selecting reference points, and these feature structures or feature marks constitute a plurality of feature points.
In order to facilitate three-dimensional contour inversion and improve the accuracy of the first contour data and the second contour data, the image ranges corresponding to the first image and the second image in the scheme at least cover more than three characteristic points on the surface of the object to be detected.
When a feature point is selected as a reference point, the correspondingly generated first reference point information and/or second reference point information carries the corresponding feature point ID, and different feature points have different ID numbers.
In addition, as a preferred implementation choice, in S01 and S04 of the present solution, when the surface images of the object to be detected in the first state and the second state are acquired from at least one direction, the relative positions of the different image acquisition units, together with their posture data and working parameters at the moment of acquisition, are recorded. The posture data of an image acquisition unit comprise the angle between the horizontal plane (or a preset reference plane) and the virtual ray formed between the unit and the centre point of its frame when acquiring the surface image, i.e. the pitch angle of the unit during image acquisition. The working parameters comprise parameters related to image sharpness and brightness, such as exposure and aperture settings.
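The pitch angle defined above, the angle between the horizontal plane and the ray from the acquisition unit to its frame-centre point, can be computed directly from that ray's direction vector. A one-function sketch (the function name and axis convention are assumptions, not from the patent):

```python
import math

def pitch_angle_deg(direction):
    """Pitch angle as defined above: the angle between the horizontal
    (z = 0) plane and the ray from the image acquisition unit to its
    frame-centre point. direction is that ray's 3D vector; the z axis
    is assumed vertical. Positive result means the unit is aimed upward."""
    dx, dy, dz = direction
    return math.degrees(math.asin(dz / math.sqrt(dx * dx + dy * dy + dz * dz)))

# a ray rising one unit per unit of horizontal travel pitches 45 degrees up
angle = pitch_angle_deg((1.0, 0.0, 1.0))
```

This is the quantity recorded per acquisition unit in the association information and later corrected in steps A06/B06.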
In order to facilitate the positioning of feature points and the deduction of the positions of other feature points in the same image, in this scheme each of the first image and the second image has one feature point located at the center of its picture.
Meanwhile, in order to conveniently trace the working condition and time of the object to be detected when each surface image is acquired, and thus trace its working state, association information is generated along with each first image and second image; the association information comprises the position data, posture data and working parameters of the image acquisition unit that generated the corresponding first image or second image, together with the generation time.
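As an illustrative sketch (not terminology fixed by the scheme), the association information described above can be held in a simple record; all field and class names here are our own assumptions:

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class CaptureRecord:
    """Hypothetical record bundling an image with its association information:
    acquisition-unit position, posture data (pitch angle), working parameters
    (e.g. exposure, aperture) and generation time, as described above."""
    image_id: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique image ID
    unit_position: tuple = (0.0, 0.0, 0.0)  # relative position of the acquisition unit
    pitch_deg: float = 0.0                  # posture data: pitch angle at capture time
    working_params: dict = field(default_factory=dict)  # exposure, aperture, ...
    generated_at: float = field(default_factory=time.time)  # generation time

rec = CaptureRecord(unit_position=(1.0, 0.0, 0.5), pitch_deg=12.5,
                    working_params={"exposure": "1/250", "aperture": "f/4"})
assert len(rec.image_id) == 32  # uuid4 hex digest
```

A real system would populate such records at capture time and key them by `image_id` so the contour-construction steps can retrieve the pose and parameters of the unit that produced each image.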
Regarding the generation of the first reference point information, as a preferred implementation, in S02 of the present embodiment selecting the reference point in the generated first image includes:
acquiring all the generated first images, positioning the feature points in the first images, generating first feature point positioning information, then carrying out ID assignment on the feature points,
the first characteristic point positioning information is used for marking position information of different characteristic points in the first image and is associated with an image ID of the first image and a characteristic point ID corresponding to the characteristic points;
In addition, the same feature point appearing in different first images is assigned the same feature point ID; the first feature point positioning information and the feature point IDs in all the first images are then collected to generate the first reference point information.
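The assembly of first reference point information might be sketched as follows. The notion of a detectable `marker_code` identifying each physical feature mark, and all names used, are our assumptions; the scheme only requires that the same feature point receive the same ID across images:

```python
def build_first_reference_info(detections):
    """Assign consistent feature-point IDs across first images and collect
    first feature point positioning information.
    `detections` maps image_id -> {marker_code: (x, y)} where marker_code
    identifies a physical feature mark on the object surface."""
    id_of = {}           # marker_code -> feature point ID (shared across images)
    reference_info = []  # collected first feature point positioning information
    for image_id, marks in detections.items():
        for code, (x, y) in marks.items():
            fid = id_of.setdefault(code, len(id_of) + 1)  # same mark -> same ID
            reference_info.append({"image_id": image_id,   # associated image ID
                                   "feature_id": fid,      # feature point ID
                                   "position": (x, y)})    # position in the image
    return id_of, reference_info

ids, info = build_first_reference_info({
    "img1": {"M7": (120, 80), "M9": (300, 95)},
    "img2": {"M9": (40, 60)},
})
assert ids["M9"] == 2 and len(info) == 3  # M9 keeps ID 2 in both images
```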
Regarding the generation of the second reference point information, as a preferred implementation, in S05 of the present embodiment selecting the reference point in the generated second image includes:
acquiring all the generated second images, positioning the feature points in the second images, and generating second feature point positioning information; comparing and matching the second feature point positioning information with the first feature point positioning information corresponding to the first reference point information; when the same feature point exists, carrying out ID assignment for that feature point in the second image with reference to the first reference point information, and when a different feature point exists, carrying out new ID assignment for that feature point in the second image; and collecting the second feature point positioning information and the feature point IDs in all the second images to generate the second reference point information;
the second feature point positioning information is used for marking position information of different feature points in the second image, and is associated with an image ID of the second image and a feature point ID corresponding to the feature points.
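Continuing the sketch above, matching second-image feature points against the first reference information — reusing IDs for matched feature points and assigning fresh IDs to new ones — might look like this (the `marker_code` matching mechanism is our assumption; in practice matching could be done by template or descriptor comparison):

```python
def build_second_reference_info(second_detections, first_ids):
    """Match second-image feature points against the first reference info:
    reuse the feature point ID when the same mark appears, assign a fresh ID
    otherwise. `first_ids` maps marker_code -> feature point ID from S02."""
    ids = dict(first_ids)
    next_id = max(ids.values(), default=0) + 1
    second_info = []  # collected second feature point positioning information
    for image_id, marks in second_detections.items():
        for code, (x, y) in marks.items():
            if code not in ids:       # a feature not present in the first state
                ids[code] = next_id
                next_id += 1
            second_info.append({"image_id": image_id,
                                "feature_id": ids[code],
                                "position": (x, y)})
    return ids, second_info
```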
As shown in fig. 3 and fig. 4, as a preferred implementation, S06 of the present scheme preferably includes:
a01, acquiring a first image group, first reference point information and associated information corresponding to a first image in the first image group;
a02, constructing a virtual three-dimensional coordinate system;
a03, extracting a characteristic point ID from the first reference point information, setting the characteristic point ID as a characteristic point A, extracting 3 first characteristic point positioning information associated with the characteristic point ID from the first reference point information based on the extracted characteristic point ID, and extracting 3 first images corresponding to the characteristic point ID and associated information corresponding to the first images from the first image group based on the 3 first characteristic point positioning information;
A04, according to the association information of the first images extracted in A03, establishing the corresponding image acquisition units in the three-dimensional coordinate system in the form of virtual point locations;
A05, acquiring the feature point ID corresponding to the feature point located at the center of the picture in each first image extracted in A03, and setting it as feature point B;
A06, calculating, by combining the first feature point information of the first image corresponding to feature point A and feature point B, a displacement vector for moving feature point A to the center of the first image picture; adjusting the posture data of the image acquisition unit corresponding to the first image according to the displacement vector and the association information, to obtain the image shooting pitch angle data of the image acquisition unit when feature point A is located at the center of the first image picture; and repeating the above to obtain the image shooting pitch angle data of all corresponding image acquisition units when feature point A is located at the center of each of the 3 first images extracted in A03 (as shown in fig. 4);
A07, taking the virtual point locations corresponding to the image acquisition units in the virtual three-dimensional coordinate system as starting points and the image shooting pitch angle data processed in A06 as emission angles, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point A in the virtual three-dimensional coordinate system (as shown in fig. 5);
a08, repeating A03-A07 to finish three-dimensional position confirmation of all the characteristic points in the virtual three-dimensional coordinate system, and then carrying out virtual connection on adjacent characteristic points to generate first contour data, thereby realizing three-dimensional contour inversion of the object to be detected in the first state.
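As an illustrative sketch of step A06 (not a formula prescribed by the patent), the vertical component of the displacement vector can be converted into an adjusted pitch angle under a simple pinhole-camera assumption; the field-of-view parameter and function name are our own:

```python
import math

def adjusted_pitch_deg(pitch_deg, dy_px, image_h_px, vfov_deg):
    """Estimate the pitch angle the acquisition unit would have if the target
    feature point sat at the picture center. `dy_px` is the vertical component
    of the displacement vector (positive = feature above center); `vfov_deg`
    is the vertical field of view. A pinhole model is assumed."""
    # focal length in pixels from the vertical field of view
    f_px = (image_h_px / 2) / math.tan(math.radians(vfov_deg) / 2)
    # tilt the unit up (or down) by the angle subtended by dy_px
    return pitch_deg + math.degrees(math.atan2(dy_px, f_px))

# A feature 200 px above center in a 1000-px-tall image, 40-degree vertical FOV:
angle = adjusted_pitch_deg(10.0, 200, 1000, 40.0)
assert 18.2 < angle < 18.4
```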
In this scheme, when the three-dimensional coordinates of a feature point are confirmed in the virtual three-dimensional coordinate system, the feature point is first moved to the center of the image picture; at that moment, a ray whose track passes through the feature point can be established directly from the position of the acquisition end and the pitch angle of the image acquisition unit. Since the feature point is unique, the rays established from two image acquisition units (taking their positions as starting points after the pitch angles are adjusted) intersect at the feature point; however, two lines can only define a two-dimensional plane, so in order to establish the coordinate system more reliably and further confirm the position of the feature point, a third image acquisition unit is introduced to establish a third ray. Owing to the unique correspondence of the feature point, the three rays intersect at one point. From the association information and the recorded relative position information of the image acquisition units, the relative positions and coordinate data of the three image acquisition units in the virtual coordinate system can be obtained when they are virtually established; combining these with the corresponding pitch angle data, the rays, and hence the position of the feature point, can be established under known parameters.
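The three-ray intersection described above can be sketched numerically. Because rays built from measured angles seldom meet in a single exact point, a least-squares closest point is used here as a stand-in; that choice is our assumption, the scheme itself only states that the rays intersect:

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point nearest to a set of rays (the A07 intersection).
    origins: (n, 3) acquisition-unit positions; directions: (n, 3) ray
    direction vectors built from the adjusted pitch angles. Minimizes
    sum_i || (I - d_i d_i^T)(p - o_i) ||^2 over p."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three units aiming exactly at the known point (1, 1, 2):
target = np.array([1.0, 1.0, 2.0])
origins = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0]], float)
dirs = target - origins
p = closest_point_to_rays(origins, dirs)
assert np.allclose(p, target, atol=1e-9)
```

With exactly intersecting rays the solution is the intersection point; with noisy rays it degrades gracefully to the point minimizing the summed squared distances, which is why at least three non-coplanar rays make the determination reliable.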
In addition, a virtual triangle may be established in the virtual three-dimensional coordinate system (see fig. 5). When the lengths of the base edges of the virtual triangle and the coordinate values of its corner points are known (corner points 1, 2 and 3 correspond to the image acquisition units of the three first images), and the angles between the three oblique edges and the base surface (the surface formed by 1, 2 and 3) are also known, the position of the feature point in the virtual three-dimensional coordinate system can be obtained by geometric calculation.
Besides calculation through geometric relations, the positions of the feature points can be obtained more simply by introducing currently available three-dimensional drawing software (such as AutoCAD, SolidWorks and the like): the corresponding image acquisition units (1, 2 and 3) are modeled directly in the software at actual or proportional size, rays are then established from the pitch angle and orientation relation between the image acquisition units and the feature point, and finally the distances between the feature point and the XOZ, XOY and YOZ planes are measured directly in the software by projection (the other feature points are handled in the same way). This yields the three-dimensional coordinate values of the feature point in the coordinate system, completing the position determination of feature point A in the virtual three-dimensional coordinate system.
Similarly, as a preferred implementation, in combination with fig. 6, S07 of the present scheme includes:
b01, acquiring a second image group, second reference point information and associated information corresponding to a second image in the second image group;
b02, constructing a virtual three-dimensional coordinate system;
b03, extracting a characteristic point ID from the second reference point information, setting the characteristic point ID as a characteristic point C, extracting 3 second characteristic point positioning information related to the characteristic point ID from the second reference point information based on the extracted characteristic point ID, and extracting 3 second images corresponding to the characteristic point ID and related information corresponding to the second images from the second image group based on the 3 second characteristic point positioning information;
B04, according to the association information of the second images extracted in B03, establishing the corresponding image acquisition units in the three-dimensional coordinate system in the form of virtual point locations;
B05, acquiring the feature point ID corresponding to the feature point located at the center of the picture in each second image extracted in B03, and setting it as feature point D;
B06, calculating, by combining the second feature point information of the second image corresponding to feature point C and feature point D, a displacement vector for moving feature point C to the center of the second image picture; adjusting the posture data of the image acquisition unit corresponding to the second image according to the displacement vector and the association information, to obtain the image shooting pitch angle data of the image acquisition unit when feature point C is located at the center of the second image picture; and repeating the above to obtain the image shooting pitch angle data of all corresponding image acquisition units when feature point C is located at the center of each of the 3 second images extracted in B03;
B07, taking the virtual point locations corresponding to the image acquisition units in the virtual three-dimensional coordinate system as starting points and the image shooting pitch angle data processed in B06 as emission angles, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point C in the virtual three-dimensional coordinate system;
and B08, repeating the steps B03-B07, completing the three-dimensional position confirmation of all the characteristic points in the virtual three-dimensional coordinate system, and then carrying out virtual connection on adjacent characteristic points to generate second contour data, thereby realizing the three-dimensional contour inversion of the object to be detected in the second state.
As to the strain comparison result, fig. 7 shows the three-dimensional profiles of one surface of the object to be detected before and after switching from the first state to the second state. As a preferred implementation, in S08 of the present embodiment, comparing the first profile data with the second profile data according to the preset condition to obtain the strain information of the object to be detected after it enters the second state from the first state includes one of the following:
(1) Overlapping and matching the first contour data and the second contour data to obtain the overlapping region with the highest degree of overlap, and then marking the non-overlapping part of the second contour data relative to the first contour data, to obtain the strain information of the object to be detected after it enters the second state from the first state;
(2) Selecting, from the first contour data or the second contour data, 3 feature points that do not lie on the same virtual straight line as reference points, overlapping the corresponding reference feature points of the first contour data and the second contour data, and marking the non-overlapping part of the second contour data relative to the first contour data, to obtain the strain information of the object to be detected after it enters the second state from the first state.
In the mode of scheme (1), after the first profile data and the second profile data are overlapped for the object to be detected (such as the crane beam of the present embodiment), the region with the largest degree of overlap is taken as the unstrained part and the non-overlapping part is defined as the strained part, so that the strain information is output. This mode demands more computing hardware performance during matching and is suitable for objects whose whole body may deform under the strain force.
For an object for which it can be judged in advance which portions will not deform under the strain force, three reference points are determined directly through scheme (2); the first contour data and the second contour data are then overlapped at these reference points, and the remaining non-overlapping parts are taken as the portions deformed by the strain action, forming part of the strain information. This mode is more efficient and simpler.
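Scheme (2) can be sketched as follows. Overlapping the three reference points is implemented here with a Kabsch best-fit rigid transform, which is our assumption — the patent only requires that the reference feature points of the two contours be made to coincide, after which non-overlapping points are marked as strain:

```python
import numpy as np

def strain_by_reference_alignment(first_pts, second_pts, ref_ids, tol=1e-6):
    """Align the second contour to the first using 3 reference feature points
    (Kabsch rigid fit), then flag points whose residual displacement exceeds
    `tol` as strained. Points are dicts: feature_id -> (x, y, z)."""
    P = np.array([second_pts[i] for i in ref_ids], float)  # moving references
    Q = np.array([first_pts[i] for i in ref_ids], float)   # fixed references
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])    # avoid reflections
    R = (U @ D @ Vt).T                                     # rotation second -> first
    t = Q.mean(0) - R @ P.mean(0)                          # translation second -> first
    strained = {}
    for fid, p in second_pts.items():
        if fid in first_pts:
            moved = R @ np.asarray(p, float) + t
            d = float(np.linalg.norm(moved - np.asarray(first_pts[fid], float)))
            if d > tol:
                strained[fid] = d  # non-overlapping part -> strain magnitude
    return strained
```

As a usage illustration, rotating and translating a contour rigidly while displacing one non-reference point leaves only that point flagged, with its displacement magnitude reported.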
In this scheme, the larger the number of feature points, the higher the accuracy of the correspondingly inverted profile data.
Based on the above, the three-dimensional full-field strain measurement method of the present embodiment may also be used in strain measurement of other mechanical devices, where the object to be detected is a component whose outer surface is exposed outside the mechanical device.
As shown in fig. 8, based on the foregoing, the present embodiment further provides a three-dimensional full-field strain measurement system, which includes:
the image shooting units are used for acquiring a surface image of the object to be detected in a first state from at least one direction and generating at least one first image, and for acquiring a surface image of the object to be detected in a second state from at least one direction and generating at least one second image;
the image positioning unit is used for selecting a reference point in the generated first image and generating first reference point information, and for selecting a reference point in the generated second image and generating second reference point information;
the strain applying unit is used for enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
the contour construction unit is used for performing three-dimensional contour inversion on the object to be detected according to the first reference point information and at least one first image to generate first contour data, and for performing three-dimensional contour inversion on the object to be detected according to the second reference point information and at least one second image to generate second contour data;
And the data matching unit is used for comparing the first contour data with the second contour data according to preset conditions to obtain strain information of the object to be detected after the first state enters the second state.
As described in connection with fig. 9, the system shown in fig. 8 may further incorporate a data server to store intermediate data and processing results, which can then be multiplexed and retrieved by the other step units.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing description is only a partial embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent devices or equivalent processes using the descriptions and the drawings of the present invention or directly or indirectly applied to other related technical fields are included in the scope of the present invention.

Claims (10)

1. A three-dimensional full-field strain measurement method, comprising:
s01, acquiring a surface image of an object to be detected in a first state from at least one direction, and generating at least one first image;
s02, selecting a reference point in the generated first image, and generating first reference point information;
s03, enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
s04, acquiring a surface image of the object to be detected in a second state from at least one direction, and generating at least one second image;
s05, selecting a reference point from the generated second image, and generating second reference point information;
s06, carrying out three-dimensional contour inversion on an object to be detected according to the first reference point information and at least one first image to generate first contour data;
s07, carrying out three-dimensional contour inversion on the object to be detected according to the second reference point information and at least one second image to generate second contour data;
S08, comparing the first contour data with the second contour data according to preset conditions to obtain strain information of the object to be detected after the first state enters the second state.
2. The three-dimensional full-field strain measurement method according to claim 1, wherein in S01 and S04, the surface images of the object to be detected in the first state and the second state are acquired by at least three image acquisition units, which acquire the surface image of the object to be detected from at least three directions respectively, and at least three first images and at least three second images are generated;
wherein the first images generated in S01 are assembled to form a first image group;
the second images generated in S04 are assembled to form a second image group;
when the first image and the second image are generated, a unique image ID corresponding to the first image and the second image is generated.
3. The three-dimensional full-field strain measurement method according to claim 2, wherein in S02 and S04, the surface of the object to be detected has feature structures or feature marks for selecting reference points, the number of which is plural and which are set as feature points;
The image ranges corresponding to the first image and the second image at least cover more than three characteristic points on the surface of the object to be detected;
when the feature point is selected as the reference point, the corresponding generated first reference information and/or second reference information have corresponding feature point IDs, and ID numbers of different feature points are different.
4. The three-dimensional full-field strain measurement method according to claim 3, wherein in S01 and S04, when the surface images of the object to be detected in the first state and the second state are acquired from at least one direction, the relative positions of the image acquisition units when acquiring the surface images of the object to be detected, and the posture data and the working parameters of the image acquisition units are also recorded;
the first image and the second image are respectively provided with a characteristic point at the center of an image picture;
and when the first image and the second image are generated, corresponding generation of association information is carried out, wherein the association information comprises position data, posture data, working parameters and generation time of an image acquisition unit for generating the corresponding first image and the second image.
5. The three-dimensional full-field strain measurement method according to claim 4, wherein in S02, selecting a reference point in the generated first image, generating first reference point information includes:
Acquiring all the generated first images, positioning the feature points in the first images, generating first feature point positioning information, then carrying out ID assignment on the feature points,
the first characteristic point positioning information is used for marking position information of different characteristic points in the first image and is associated with an image ID of the first image and a characteristic point ID corresponding to the characteristic points;
in addition, the same characteristic point ID between different first images is assigned the same value, and then the first characteristic point positioning information and the characteristic point ID in all the first images are collected to generate first reference point information;
in S05, selecting a reference point in the generated second image, and generating second reference point information includes:
acquiring all the generated second images, positioning the feature points in the second images, and generating second feature point positioning information; comparing and matching the second feature point positioning information with the first feature point positioning information corresponding to the first reference point information; when the same feature point exists, carrying out ID assignment for that feature point in the second image with reference to the first reference point information, and when a different feature point exists, carrying out new ID assignment for that feature point in the second image; and collecting the second feature point positioning information and the feature point IDs in all the second images to generate the second reference point information;
The second feature point positioning information is used for marking position information of different feature points in the second image, and is associated with an image ID of the second image and a feature point ID corresponding to the feature points.
6. The three-dimensional full-field strain measurement method of claim 5, wherein S06 comprises:
a01, acquiring a first image group, first reference point information and associated information corresponding to a first image in the first image group;
a02, constructing a virtual three-dimensional coordinate system;
a03, extracting a characteristic point ID from the first reference point information, setting the characteristic point ID as a characteristic point A, extracting 3 first characteristic point positioning information associated with the characteristic point ID from the first reference point information based on the extracted characteristic point ID, and extracting 3 first images corresponding to the characteristic point ID and associated information corresponding to the first images from the first image group based on the 3 first characteristic point positioning information;
A04, according to the association information of the first images extracted in A03, establishing the corresponding image acquisition units in the three-dimensional coordinate system in the form of virtual point locations;
A05, acquiring the feature point ID corresponding to the feature point located at the center of the picture in each first image extracted in A03, and setting it as feature point B;
A06, calculating, by combining the first feature point information of the first image corresponding to feature point A and feature point B, a displacement vector for moving feature point A to the center of the first image picture; adjusting the posture data of the image acquisition unit corresponding to the first image according to the displacement vector and the association information, to obtain the image shooting pitch angle data of the image acquisition unit when feature point A is located at the center of the first image picture; and repeating the above to obtain the image shooting pitch angle data of all corresponding image acquisition units when feature point A is located at the center of each of the 3 first images extracted in A03;
A07, taking the virtual point locations corresponding to the image acquisition units in the virtual three-dimensional coordinate system as starting points and the image shooting pitch angle data processed in A06 as emission angles, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point A in the virtual three-dimensional coordinate system;
a08, repeating A03-A07 to finish three-dimensional position confirmation of all the characteristic points in the virtual three-dimensional coordinate system, and then carrying out virtual connection on adjacent characteristic points to generate first contour data, thereby realizing three-dimensional contour inversion of the object to be detected in the first state.
7. The three-dimensional full-field strain measurement method of claim 6, wherein S07 comprises:
b01, acquiring a second image group, second reference point information and associated information corresponding to a second image in the second image group;
b02, constructing a virtual three-dimensional coordinate system;
b03, extracting a characteristic point ID from the second reference point information, setting the characteristic point ID as a characteristic point C, extracting 3 second characteristic point positioning information related to the characteristic point ID from the second reference point information based on the extracted characteristic point ID, and extracting 3 second images corresponding to the characteristic point ID and related information corresponding to the second images from the second image group based on the 3 second characteristic point positioning information;
B04, according to the association information of the second images extracted in B03, establishing the corresponding image acquisition units in the three-dimensional coordinate system in the form of virtual point locations;
B05, acquiring the feature point ID corresponding to the feature point located at the center of the picture in each second image extracted in B03, and setting it as feature point D;
B06, calculating, by combining the second feature point information of the second image corresponding to feature point C and feature point D, a displacement vector for moving feature point C to the center of the second image picture; adjusting the posture data of the image acquisition unit corresponding to the second image according to the displacement vector and the association information, to obtain the image shooting pitch angle data of the image acquisition unit when feature point C is located at the center of the second image picture; and repeating the above to obtain the image shooting pitch angle data of all corresponding image acquisition units when feature point C is located at the center of each of the 3 second images extracted in B03;
B07, taking the virtual point locations corresponding to the image acquisition units in the virtual three-dimensional coordinate system as starting points and the image shooting pitch angle data processed in B06 as emission angles, establishing virtual rays to obtain 3 virtual rays; the 3 virtual rays intersect at one point, thereby completing the position determination of feature point C in the virtual three-dimensional coordinate system;
And B08, repeating the steps B03-B07, completing the three-dimensional position confirmation of all the characteristic points in the virtual three-dimensional coordinate system, and then carrying out virtual connection on adjacent characteristic points to generate second contour data, thereby realizing the three-dimensional contour inversion of the object to be detected in the second state.
8. The three-dimensional full-field strain measurement method according to claim 7, wherein comparing the first profile data with the second profile data according to the preset condition in S08 to obtain the strain information of the object to be detected after it enters the second state from the first state comprises one of the following:
(1) Overlapping and matching the first contour data and the second contour data to obtain the overlapping region with the highest degree of overlap, and then marking the non-overlapping part of the second contour data relative to the first contour data, to obtain the strain information of the object to be detected after it enters the second state from the first state;
(2) Selecting, from the first contour data or the second contour data, 3 feature points that do not lie on the same virtual straight line as reference points, overlapping the corresponding reference feature points of the first contour data and the second contour data, and marking the non-overlapping part of the second contour data relative to the first contour data, to obtain the strain information of the object to be detected after it enters the second state from the first state.
9. A three-dimensional full-field strain measurement system, comprising:
the image shooting units are used for acquiring a surface image of the object to be detected in a first state from at least one direction and generating at least one first image, and for acquiring a surface image of the object to be detected in a second state from at least one direction and generating at least one second image;
the image positioning unit is used for selecting a reference point in the generated first image to generate first reference point information, and for selecting a reference point in the generated second image to generate second reference point information;
the strain applying unit is used for enabling the object to be detected to enter a second state from the first state under the action of the strain acting force;
the contour construction unit is used for carrying out three-dimensional contour inversion on the object to be detected according to the first reference point information and at least one first image to generate first contour data, and for carrying out three-dimensional contour inversion on the object to be detected according to the second reference point information and at least one second image to generate second contour data;
and the data matching unit is used for comparing the first contour data with the second contour data according to preset conditions to obtain the strain information of the object to be detected after it enters the second state from the first state.
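The five units of claim 9 can be sketched as a minimal measurement pipeline. All class, method, and parameter names are illustrative, and the contour-inversion and matching routines are passed in as placeholders rather than implemented:

```python
from dataclasses import dataclass, field

@dataclass
class StrainMeasurementSystem:
    """Minimal sketch of the units in claim 9 (names illustrative)."""
    images: dict = field(default_factory=dict)      # image shooting unit output
    ref_points: dict = field(default_factory=dict)  # image positioning unit output
    contours: dict = field(default_factory=dict)    # contour construction output

    def shoot(self, state, surface_images):
        self.images[state] = surface_images          # >= 1 image per state

    def locate(self, state, reference_point):
        self.ref_points[state] = reference_point

    def construct_contour(self, state, invert):
        # `invert` stands in for the 3D contour-inversion routine
        self.contours[state] = invert(self.ref_points[state], self.images[state])

    def match(self, compare):
        # data matching unit: compare first and second contour data
        return compare(self.contours["first"], self.contours["second"])

system = StrainMeasurementSystem()
system.shoot("first", ["img_a"]); system.locate("first", (0, 0))
system.shoot("second", ["img_b"]); system.locate("second", (0, 0))
system.construct_contour("first", lambda ref, imgs: {"pts": 4})
system.construct_contour("second", lambda ref, imgs: {"pts": 5})
diff = system.match(lambda a, b: b["pts"] - a["pts"])
```

The strain-applying unit of the claim acts on the physical object between the two `shoot` calls, so it has no software counterpart in this sketch.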
10. A strain measurement method for mechanical equipment, used for measuring the strain of a crane, characterized in that it comprises the three-dimensional full-field strain measurement method according to any one of claims 1 to 8, and the object to be detected is a component of the mechanical equipment whose outer surface is exposed.
CN202311589300.3A 2023-11-24 2023-11-24 Three-dimensional full-field strain measurement method and application thereof in mechanical equipment Pending CN117367308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311589300.3A CN117367308A (en) 2023-11-24 2023-11-24 Three-dimensional full-field strain measurement method and application thereof in mechanical equipment


Publications (1)

Publication Number Publication Date
CN117367308A 2024-01-09

Family

ID=89398608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311589300.3A Pending CN117367308A (en) 2023-11-24 2023-11-24 Three-dimensional full-field strain measurement method and application thereof in mechanical equipment

Country Status (1)

Country Link
CN (1) CN117367308A (en)

Similar Documents

Publication Publication Date Title
Trucco et al. Model-based planning of optimal sensor placements for inspection
US8600147B2 (en) System and method for remote measurement of displacement and strain fields
EP2372648B1 (en) Optical acquisition of object shape from coded structured light
US8208029B2 (en) Method and system for calibrating camera with rectification homography of imaged parallelogram
EP2188589B1 (en) System and method for three-dimensional measurement of the shape of material objects
EP0526938A2 (en) Method and apparatus for determining the distance between an image and an object
Kim et al. A camera calibration method using concentric circles for vision applications
JP2015147256A (en) Robot, robot system, control device, and control method
JP2004127239A (en) Method and system for calibrating multiple cameras using calibration object
JP2011174879A (en) Apparatus and method of estimating position and orientation
Chen et al. Color and depth data fusion using an RGB‐D sensor for inexpensive and contactless dynamic displacement‐field measurement
Ahmadabadian et al. Clustering and selecting vantage images in a low-cost system for 3D reconstruction of texture-less objects
Shmuel et al. Active vision: 3d from an image sequence
Bergström et al. Virtual projective shape matching in targetless CAD-based close-range photogrammetry for efficient estimation of specific deviations
CN110398215A (en) Image processing apparatus and method, system, article manufacturing method, storage medium
CN116129037A (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
Kent et al. Ridge curves and shape analysis.
Yamauchi et al. Calibration of a structured light system by observing planar object from unknown viewpoints
Zhang et al. Freight train gauge-exceeding detection based on three-dimensional stereo vision measurement
US8102516B2 (en) Test method for compound-eye distance measuring apparatus, test apparatus, and chart used for the same
CN117367308A (en) Three-dimensional full-field strain measurement method and application thereof in mechanical equipment
CN109741389A (en) One kind being based on the matched sectional perspective matching process of region base
CN113160416A (en) Speckle imaging device and method for coal flow detection
Mosnier et al. A New Method for Projector Calibration Based on Visual Servoing.
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination