CN116524030A - Reconstruction method and system for digital twin crane under swinging condition - Google Patents

Reconstruction method and system for digital twin crane under swinging condition

Info

Publication number
CN116524030A
CN116524030A (application CN202310796498.6A)
Authority
CN
China
Prior art keywords
crane
points
feature point
cameras
swing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310796498.6A
Other languages
Chinese (zh)
Other versions
CN116524030B (en)
Inventor
贾蒙
冯文静
王金波
郭晋飞
曹文平
刘玉成
朱婉毓
刘烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinxiang University
Original Assignee
Xinxiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinxiang University filed Critical Xinxiang University
Priority to CN202310796498.6A priority Critical patent/CN116524030B/en
Publication of CN116524030A publication Critical patent/CN116524030A/en
Application granted granted Critical
Publication of CN116524030B publication Critical patent/CN116524030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention relates to a reconstruction method and system for a digital twin crane under swinging conditions. Cameras collect visual images of the environment; a reference object in the environment is extracted from the images and a visual space model is established; an operation target is extracted from the images; and the crane's own swing parameters are calculated from the reference object to reconstruct the digital twin image. The corresponding feedback can therefore be represented accurately in the virtual environment, improving the realism of digital twin experimental teaching.

Description

Reconstruction method and system for digital twin crane under swinging condition
Technical Field
The invention belongs to the field of crane swing amplitude measurement, and particularly relates to a crane swing measurement control system.
Background
Industrial construction in China has gone through the development stages of mechanization, automation and digitalization; production processes and factory management efficiency have developed rapidly and contributed greatly to China's industry and urban development. In recent years, with the continuous advancement of smart cities, smart industry, Digital China and similar projects, social development has placed higher demands on plant managers. At present, the industry as a whole remains a labor-intensive traditional industry with a low level of modernization, suffering from long construction cycles, high resource and energy consumption, low production efficiency and low technological content. Under the tide of Industry 4.0, an intelligent crane operation education platform based on digital twin technology is constructed: crane operation teaching and research measures are perfected on the basis of digital information, automatic control, equipment, communication transmission and AI intelligent analysis models, and a virtualized, abstracted digital twin model of crane operation is established, comprehensively raising the engineering practice level of related majors such as mechanical design and manufacture and automation.
In practice, a crane sways while lifting a heavy object. Although countermeasures such as counterweights are applied, slight swinging is unavoidable, being determined by the crane's operating mode and physical structure. To simulate realistic feedback in a digital twin environment model, the swing generated when the crane lifts a real object in the physical environment must be measured, so that the corresponding feedback can be represented accurately in the virtual environment and the realism of the digital twin improved.
In the prior art, crane swing can be measured by adding positioning sensors, but the cost is high. Machine-vision sway measurement also exists, but the photographed subject is usually the crane itself, so measurement accuracy depends on image resolution and small sways are hard to detect. Mounting a camera on the crane and inferring its swing from preset markers captured by the camera has also been proposed, but setting up the markers is cumbersome and unsuited to engineering-site operation, and the markers themselves must meet strict requirements or accuracy drops sharply. A technical scheme that is convenient for field use on the crane and can determine the swing parameters accurately is therefore urgently needed.
Disclosure of Invention
To solve one or more of the above technical problems, the following is proposed:
A reconstruction method for a digital twin crane under swinging conditions.
Pre-arrangement: two cameras, denoted C1 and C2, are mounted in the crane cab to acquire images; a third camera, denoted C3, is mounted on the crane base. The three cameras acquire visual images synchronously.
step 1: visual-based reference extraction and operation target extraction
1-1: extracting image feature points
1-2: if a feature point and its surrounding neighborhood satisfy formula (1), the feature point is retained as a reference object feature point candidate;
if a feature point and its surrounding neighborhood satisfy formula (2), the feature point is retained as an operation target feature point candidate;
where (x, y) are pixel coordinates. The reference object feature point candidate sets obtained in this way for the three cameras are denoted R1, R2 and R3, and the operation target feature point candidate sets are denoted O1, O2 and O3.
1-3: the feature points in the reference object candidate sets of the three cameras are matched by the feature point algorithm, giving matched feature point sets M1, M2 and M3.
A feature point p1 with coordinates (x1, y1) is randomly selected from M1, and a feature point p2 with coordinates (x2, y2) is randomly selected from M2. The Gaussian-weighted sums of their surrounding neighborhood pixels are computed separately according to formula (3), where G denotes a Gaussian convolution template and N(p1), N(p2) denote the neighborhoods around p1 and p2.
If condition (4) holds, p1 and its corresponding points in the other two sets M2 and M3 are added to the reference object feature point set F; otherwise, if condition (5) holds, p2 and its corresponding points in M1 and M3 are added to F.
By analogy, a feature point is randomly selected from M1 and another from M3; the neighborhood sums are computed according to formula (3), compared using conditions (4) and (5), and the selected feature point together with its two corresponding points is added to F. Likewise, a feature point is randomly selected from M2 and another from M3, the neighborhood sums are computed according to formula (3), compared using conditions (4) and (5), and the selected feature point together with its two corresponding points is added to F.
Step 2: the swing parameters during crane operation are calculated from the coordinates, obtained in step 1, of the reference object feature points in the images.
In step 1-3, the method further comprises sequentially inputting each group of corresponding feature points from the operation target candidate sets O1, O2 and O3, together with their neighborhoods, into a neural network classifier to obtain an optimized operation target feature point set S.
The method further comprises step 3: mapping the target into the digital twin environment according to the operation target feature point set S.
The neural network classifier is a nonlinear binary neural network classifier.
According to the coordinate change of the reference object characteristic points in the images of the three cameras, the actual position change of the three cameras can be calculated, and according to the actual position change of the three cameras, the swing parameters of the crane can be calculated.
A reconstruction system for the crane swing condition, implementing the method, comprises three cameras, a site processor and a server.
The two cameras are arranged in the cab of the crane; the other camera is arranged on the crane base.
The operation target is contained in a common field of view of the three cameras.
The on-site processor is used for calculating the swing parameters according to the steps 1 and 2.
The server is used for generating digital twin images.
The invention has the following technical effects:
1. The invention provides a crane swing measurement method that uses cameras to collect visual images of the environment. A reference object in the environment is extracted from the images, a visual space model is established, and the crane's own swing parameters are calculated from the reference object. Because cameras are already required in the digital twinning process, no additional hardware is needed, and no extra sensors or markers have to be installed on the engineering site, so the method adapts to complex engineering sites.
2. The reference object extraction method and the operation target extraction method are specifically optimized: a plurality of feature points are extracted from the visual image and then resampled and assembled into optimized sets, so that the feature point set of the background reference object can be obtained accurately in a complex engineering environment without preset markers, and the position change of the cameras can be accurately inferred in reverse. The swing parameters of the crane can thus be calculated on complex engineering sites.
3. The construction of the operation target feature point set is optimized, forming a real mapping among the operation target position, the crane position offset and the swing parameters. In the digital twin environment, when the crane is at a given offset position, the corresponding swing parameters are calculated from the operation target position according to the obtained mapping, achieving the effect of simulating the real operating environment.
Detailed Description
The invention describes a crane swing measurement method and performs reconstruction on its basis, comprising a vision-based reference object extraction method, an operation target extraction method and a reference-object-based crane swing measurement method.
The crane swing measurement method uses cameras to collect visual images of the environment: a reference object in the environment is extracted from the images and a visual space model is established; an operation target is extracted from the images; and the crane's own swing parameters are calculated from the reference object.
Two cameras, denoted C1 and C2, are mounted in the crane cab to acquire images; a third camera, denoted C3, is mounted on the crane base. The three cameras acquire visual images synchronously, and the operation target should be included in their common field of view. C1 and C2 participate in extracting the reference object, calculating the crane's own swing parameters and extracting the operation target; C3 participates only in extracting the operation target.
The system also comprises a site processor connected to the three cameras, which processes the images they capture, calculates the swing parameters and sends them to the server. The site images are also transmitted to the server for generating the twin images.
The system also comprises a server which is used for receiving the data sent by the site processor and realizing the generation and display of the twin images.
Wherein the algorithm implemented in the site processor comprises:
step 1: the visual-based reference extraction method operates the target extraction method.
And extracting a plurality of characteristic points from the visual image, and registering the characteristic points in the images shot by the three cameras.
1. Extracting feature points
Feature points are extracted from the visual image. A feature point is a pixel in the image that corresponds to a predefined feature. Classical feature point extraction algorithms include Harris, SIFT and SURF. Preferably, the invention adopts the SURF algorithm to extract feature points and records the coordinate pair (x, y) of each feature point.
2. Feature point resampling
Because the feature points are generated by a fixed feature extraction algorithm, they do not necessarily match the actual requirements. The invention uses feature points to extract and locate the reference object and the operation target, so the feature points are resampled for these two types of targets.
If a feature point and its surrounding neighborhood satisfy formula (1), the feature point is retained as a reference object feature point candidate.
If a feature point and its surrounding neighborhood satisfy formula (2), the feature point is retained as an operation target feature point candidate.
The neighborhood size is preferably 13 x 13; |·| denotes the absolute value and (x, y) are the pixel coordinates. A feature point may be both a reference object feature point candidate and an operation target feature point candidate.
The reference object feature point candidate sets in the three cameras are denoted R1, R2 and R3, and the operation target feature point candidate sets are denoted O1, O2 and O3.
In this example, the motion of the operation target relative to the ground is mainly vertical motion perpendicular to the ground, while the reference object is stationary relative to the ground. Formulas (1) and (2) perform a preliminary screening of the image feature points so that they better conform to these objective conditions, improving calculation efficiency and measurement performance.
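Since formulas (1) and (2) are reproduced only as images in the source, the exact screening criteria are unknown. The sketch below substitutes an assumed criterion consistent with the stated rationale: a reference-object point's 13 x 13 neighborhood is essentially unchanged between consecutive frames, while an operation-target point's neighborhood reappears vertically shifted. All function names, the threshold tau and the frame handling are illustrative assumptions, not the patent's own formulas.

```python
def neighborhood(img, x, y, r=6):
    # 13 x 13 neighborhood (r = 6) around (x, y); img is a 2-D list of gray values
    return [img[y + dy][x + dx] for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

def is_reference_candidate(prev, curr, x, y, tau=5.0, r=6):
    # Assumed stand-in for formula (1): a point on a stationary reference
    # object keeps an almost unchanged neighborhood between frames.
    a, b = neighborhood(prev, x, y, r), neighborhood(curr, x, y, r)
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a) < tau

def is_target_candidate(prev, curr, x, y, tau=5.0, r=6, max_dy=3):
    # Assumed stand-in for formula (2): a point on the operation target moves
    # mainly vertically, so its old neighborhood reappears shifted in y.
    a = neighborhood(prev, x, y, r)
    for dy in range(-max_dy, max_dy + 1):
        if dy == 0:
            continue
        b = neighborhood(curr, x, y + dy, r)
        if sum(abs(p - q) for p, q in zip(a, b)) / len(a) < tau:
            return True
    return False
```

A point passing both checks would remain in both candidate sets, matching the remark that a feature point may be both a reference and an operation target candidate.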
3. And optimizing the operation target characteristic point set and the reference object characteristic point set.
As described above, one feature point may exist in both the operation target candidate set and the reference object candidate set. The operation target is dynamic relative to the ground while the reference object is stationary, which is what allows the movement of the load to be distinguished from the swinging of the crane itself. The two feature point sets extracted above therefore require further optimization.
3.1 optimization of the reference feature point set.
The feature points in the reference object candidate sets of the three cameras are matched by a feature point algorithm (the SURF matching algorithm in this example), giving matched feature point sets M1, M2 and M3. The three sets contain the same number of elements, and each feature point in any one set has a unique matching counterpart in the corresponding sets.
A feature point p1 with coordinates (x1, y1) is randomly selected from M1, and a feature point p2 with coordinates (x2, y2) is randomly selected from M2. The Gaussian-weighted sums of their surrounding neighborhood pixels are computed separately according to formula (3), where G denotes a Gaussian convolution template, whose role is to reduce the interference of local pixel noise (e.g. salt-and-pepper noise) on the measurement, and N(p1), N(p2) denote the neighborhoods around p1 and p2.
If condition (4) holds, p1 and its corresponding points in the other two sets M2 and M3 are added to the reference object feature point set F.
Otherwise, if condition (5) holds, p2 and its corresponding points in M1 and M3 are added to F.
Similarly, a feature point is randomly selected from M1 and another from M3; their neighborhood sums are computed according to formula (3) and compared using conditions (4) and (5), and the selected feature point together with its two corresponding points is added to F.
By analogy, a feature point is randomly selected from M2 and another from M3; their neighborhood sums are computed according to formula (3) and compared using conditions (4) and (5), and the selected feature point together with its two corresponding points is added to F.
The above steps are repeated until enough feature point groups have been added to F; preferably, the stopping condition in this example is 128 groups of feature points.
If the number of feature points extracted in step 1 is insufficient and F cannot reach 128 groups, the calculation parameters of the feature point extraction algorithm are readjusted until the required number is met.
The feature point groups contained in the set F extracted by the above steps are relatively stable; as the optimized reference object feature point set, they improve calculation efficiency and measurement performance.
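Formula (3) and the selection conditions (4)/(5) also appear only as images in the source. The sketch below assumes formula (3) is a Gaussian-weighted sum of the neighborhood pixels, matching the stated noise-suppression role of the template G, and, as a placeholder for conditions (4)/(5), keeps whichever point has the larger weighted sum; the patent's real conditions may differ, and all names here are illustrative.

```python
import math

def gaussian_kernel(r=6, sigma=2.0):
    # (2r+1) x (2r+1) Gaussian convolution template G, normalised to sum to 1
    k = [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
          for dx in range(-r, r + 1)] for dy in range(-r, r + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def weighted_sum(img, x, y, G, r=6):
    # Assumed form of formula (3): Gaussian-weighted sum of the neighborhood
    # around (x, y); the weighting suppresses salt-and-pepper-style noise.
    return sum(G[dy + r][dx + r] * img[y + dy][x + dx]
               for dy in range(-r, r + 1) for dx in range(-r, r + 1))

def select_anchor(img1, p1, img2, p2, G, r=6):
    # Placeholder for conditions (4)/(5): keep the point whose weighted
    # neighborhood sum is larger (the exact rule is an image in the source).
    s1 = weighted_sum(img1, p1[0], p1[1], G, r)
    s2 = weighted_sum(img2, p2[0], p2[1], G, r)
    return p1 if s1 >= s2 else p2
```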
3.2 optimization of operation target feature Point set
An image of the operation target is captured separately with camera C3, its feature points are obtained with the SURF feature point extraction algorithm and screened manually, yielding a feature point set denoted T.
Multiple images of the operation target are captured repeatedly, and the manually labelled feature points are added to T.
A nonlinear binary neural network classifier is built, which is defined as follows.
In formula (6), the initial weight is 1 and the initial bias is 0; the intermediate vector is 256-dimensional; e is the base of the natural logarithm.
In formula (7), the initial weights are 1 and the layer is 256-dimensional.
In formula (8), the initial weights and bias act on a 1024-dimensional input; x(u, v) denotes the pixel value at coordinates (u, v) in the vicinity of a feature point. 1024 pixels are randomly selected from the 32 x 32 neighborhood around the feature point and substituted into formula (8) to obtain the first-layer output.
That output is further substituted into formula (7), and the result of formula (7) into formula (6).
Let y denote the sample ground-truth value corresponding to the input.
The minimum of the cost function in formula (9) is computed; the BP algorithm can be used to solve iteratively, starting from the initial values, for the parameter values of the weights and biases.
The neural network classifier judges, from a feature point and its neighborhood, the probability that the point belongs to the operation target.
For each group of corresponding feature points in the operation target candidate sets O1, O2 and O3, each point and its neighborhood are input in turn into the neural network classifier, giving three outputs denoted q1, q2 and q3.
If condition (10) holds, i.e. all three outputs exceed a threshold, the group of feature points is added to the set S.
Every group of feature points in O1, O2 and O3 is tested in turn, and the groups satisfying condition (10) are retained in S.
The set S is the optimized operation target feature point set. After screening by the neural network classifier, feature points extracted in the previous step that do not lie on the operation target are removed, and the position of the operation target in the image is obtained.
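Formulas (6)-(10) are given only as images; the text states the layer widths (a 1024-dimensional input from a 32 x 32 pixel neighborhood, a 256-dimensional middle layer, a scalar probability output), sigmoid-style nonlinearities involving e, a cost function minimized by BP, and fixed initial values. A minimal sketch under those assumptions, with squared-error cost and all names illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes follow the description: 1024 pixel inputs -> 256 hidden -> 1 output.
W1, b1 = rng.normal(0, 0.1, (256, 1024)), np.zeros(256)
W2, b2 = rng.normal(0, 0.1, (1, 256)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)      # assumed form of formulas (8)/(7)
    p = sigmoid(W2 @ h + b2)[0]   # assumed form of formula (6): P(target)
    return p, h

def train_step(x, y, lr=0.5):
    # One BP step on a squared-error cost, assumed for formula (9)
    global W1, b1, W2, b2
    p, h = forward(x)
    d2 = (p - y) * p * (1 - p)           # output-layer delta
    d1 = (W2[0] * d2) * h * (1 - h)      # hidden-layer delta
    W2 = W2 - lr * d2 * h[None, :]; b2 = b2 - lr * d2
    W1 = W1 - lr * np.outer(d1, x); b1 = b1 - lr * d1
```

In use, each candidate feature point's 1024 sampled neighborhood pixels would be fed to forward(), and groups whose three camera outputs all exceed the threshold of condition (10) would be kept in S.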
(II) Step 2: measuring the physical offset and swing parameters of the crane.
The physical offset and swing parameters during crane operation are calculated from the coordinates, obtained in step 1, of the reference object feature points in the images.
The physical offset of the crane is the offset of the cab relative to the base, and can be calculated from the physical world coordinates of both: the physical world coordinates of the cab are obtained through cameras C1 and C2, and those of the base through camera C3.
The relative physical offsets among cameras C1, C2 and C3 are obtained by prior calibration and expressed as formula (11).
On the left of formula (11) are the physical camera coordinates of the three cameras (in homogeneous form), i.e. the offset of an object in physical space relative to the camera itself. On the right are the physical world coordinates (in homogeneous form), i.e. the offset of the object relative to the origin of the physical world coordinate system. R and t are the rotation and offset of the camera coordinate system origin relative to the physical world coordinate system origin.
For example, when the origin of the physical world coordinate system is chosen to coincide with camera C1, the rotation of C1 is the identity and its offset is zero, while the rotations and offsets of C2 and C3 are obtained by calibration.
Before the crane operates, the reference object feature point set F is obtained as described above. The image coordinates of one group of feature points in the three cameras are denoted (u1, v1), (u2, v2) and (u3, v3).
The linear camera model gives the conversion between image coordinates and camera coordinates as formula (12), where the left side is the homogeneous expression of the image coordinates above, K is the camera's intrinsic parameter matrix, obtained in advance and determined by the properties of the lens and imaging element, the right side contains the homogeneous expression of the physical camera coordinates of formula (11), and s is a scale factor related to the physical units.
Before the crane starts working, the coordinates of a reference object feature point in the camera frame can be calculated from its image coordinates in C1 and C2 combined with formula (12), and its coordinates in the physical world coordinate system then follow from formula (11).
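The two transformations can be sketched directly from the text: formula (11) maps world coordinates to camera coordinates through a rotation R and offset t, and formula (12) maps camera coordinates to image coordinates through the intrinsic matrix K and scale factor s. The symbol names and the back-projection helper below are chosen for illustration; the patent's own notation is in the omitted images.

```python
import numpy as np

def project(K, R, t, Xw):
    # Formula (11): camera coordinates Xc = R @ Xw + t, then formula (12):
    # s * [u, v, 1]^T = K @ Xc, with s equal to the camera-frame depth.
    Xc = R @ Xw + t
    uvw = K @ Xc
    return uvw[:2] / uvw[2]          # image coordinates (u, v)

def back_project(K, R, t, uv, depth):
    # Invert the model for a known scale factor s (= depth): recover the
    # camera-frame point from the image point, then the world-frame point.
    Xc = depth * np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    return R.T @ (Xc - t)
```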
When the crane operates, the camera coordinate systems change because of the crane's movement, but the image coordinates can still be obtained by step 1 and the world coordinates are unchanged. Combining formulas (11) and (12) over a sufficient number of feature point groups, the offset parameters in formula (11) can be calculated by linear least squares.
Camera C3 is fixed on the base and is not affected by the crane's swing. Subtracting its offset relative to the world coordinate system before operation from its offset during operation yields the positional offset of the crane body during operation.
Cameras C1 and C2 are fixed on the cab and are affected by the crane's swing, so their offsets can be used to calculate the swing parameters. As before, their offset parameters in formula (11) are calculated by linear least squares from formulas (11) and (12), and the positional offset of the cab is obtained by averaging the offset changes of C1 and C2.
The swing parameters are then obtained from the difference between the positional offset of the cab and the positional offset of the crane body.
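The final combination step can be sketched as below; the argument names and vector representation of the offsets are assumptions for illustration.

```python
import numpy as np

def swing_parameters(t1_pre, t1_run, t2_pre, t2_run, t3_pre, t3_run):
    # t*_pre: camera offsets before operation, t*_run: during operation,
    # each solved from formulas (11)/(12) by linear least squares.
    cab = ((np.asarray(t1_run) - np.asarray(t1_pre))
           + (np.asarray(t2_run) - np.asarray(t2_pre))) / 2.0   # C1, C2 average
    base = np.asarray(t3_run) - np.asarray(t3_pre)              # crane body (C3)
    return cab - base   # residual cab motion relative to the body = swing
```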
(III) Step 3: measuring the position of the operation target.
The operation target feature point set S is obtained as described above.
According to formulas (11) and (12), the world coordinates of the feature points on the operation target can be calculated from their image coordinates in the three cameras, which completes the measurement of the operation target position. A real mapping among the operation target position, the crane position offset and the swing parameters is thereby formed.
In the digital twin environment, when the crane is at a given offset position, the corresponding swing parameters are calculated from the operation target position according to the obtained mapping, achieving the effect of simulating the real operating environment.
Image measurement method    Positional offset error    Swing parameter error
Harris                      1.33%                      2.61%
SURF                        1.05%                      2.19%
The invention               0.57%                      1.05%
With the crane swing measurement method of the invention, the swing parameters of the crane in the physical environment are measured and realistic feedback is simulated in the digital twin environment model, improving the realism of digital twin experimental teaching. The experimental results show that the proposed method has higher measurement accuracy than the classical image-based methods and can therefore fit the swing parameters of the physical environment more accurately.

Claims (10)

1. A reconstruction method for a digital twin crane swing condition, characterized by:
pre-arrangement: two cameras, denoted C1 and C2, are mounted in the crane cab to acquire images; a third camera, denoted C3, is mounted on the crane base; the three cameras acquire visual images synchronously;
step 1: vision-based reference object extraction and operation target extraction;
1-1: extracting image feature points;
1-2: if a feature point and its surrounding neighborhood satisfy formula (1), retaining the feature point as a reference object feature point candidate;
if a feature point and its surrounding neighborhood satisfy formula (2), retaining the feature point as an operation target feature point candidate;
where (x, y) are pixel coordinates; the reference object feature point candidate sets thus obtained in the three cameras are denoted R1, R2 and R3, and the operation target feature point candidate sets are denoted O1, O2 and O3;
1-3: matching the feature points in the reference object candidate sets of the three cameras according to the feature point algorithm, obtaining matched feature point sets M1, M2 and M3;
randomly selecting a feature point p1 with coordinates (x1, y1) from M1 and a feature point p2 with coordinates (x2, y2) from M2, and separately calculating the Gaussian-weighted sums of their surrounding neighborhood pixels according to formula (3), where G denotes a Gaussian convolution template and N(p1), N(p2) denote the neighborhoods around p1 and p2;
if condition (4) holds, adding p1 and its corresponding points in the other two sets M2 and M3 to the reference object feature point set F; otherwise, if condition (5) holds, adding p2 and its corresponding points in M1 and M3 to F;
by analogy, randomly selecting a feature point from M1 and one from M3, calculating the neighborhood sums according to formula (3), comparing with conditions (4) and (5), and adding the selected feature point and its two corresponding points to F; randomly selecting a feature point from M2 and one from M3, calculating the neighborhood sums according to formula (3), comparing with conditions (4) and (5), and adding the selected feature point and its two corresponding points to F;
step 2: calculating the swing parameters during crane operation from the coordinates, obtained in step 1, of the reference object feature points in the images.
2. The reconstruction method for a digital twin crane under swinging conditions according to claim 1, wherein step 1-3 further comprises: sequentially inputting each group of corresponding feature points from the three operation target feature point candidate sets, together with their neighborhoods, into a neural network classifier to obtain an optimized operation target feature point set.
3. The reconstruction method for a digital twin crane under swinging conditions according to claim 2, further comprising step 3: mapping the operation target into the digital twin environment according to the operation target feature point set.
4. The reconstruction method for a digital twin crane under swinging conditions according to claim 2, wherein the neural network classifier is a nonlinear binary neural network classifier.
5. The reconstruction method for a digital twin crane under swinging conditions according to claim 2, wherein the actual position changes of the three cameras are calculated from the coordinate changes of the reference object feature points in the images of the three cameras, and the swing parameters of the crane are calculated from those position changes.
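Claim 5 derives the crane's swing parameters from the camera position changes implied by the pixel motion of the reference feature points. The patent leaves the exact formulas to the description and figures; the sketch below is one plausible small-angle pinhole-camera model, and every function name as well as the model itself is an assumption rather than the patent's method:

```python
import math

def swing_angle_from_pixel_shift(du_pixels, focal_length_pixels):
    """Estimate the camera's swing rotation (radians) from the horizontal
    pixel shift of a fixed, distant reference feature point."""
    return math.atan2(du_pixels, focal_length_pixels)

def swing_amplitude(angles):
    """Peak-to-peak swing amplitude over a sequence of angle samples."""
    return max(angles) - min(angles)

def load_displacement(angle_rad, rope_length):
    """Horizontal displacement of the suspended load for a given swing angle."""
    return rope_length * math.sin(angle_rad)
```

Tracking the angle over successive frames would then yield amplitude and, from the zero crossings, the swing period.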
6. A reconstruction system for a digital twin crane under swinging conditions, implementing the method according to any one of claims 1-5, characterized in that the system comprises three cameras, an on-site processor and a server.
7. The reconstruction system according to claim 6, characterized in that two of the cameras are arranged in the cab of the crane and the third camera is arranged on the crane base.
8. The reconstruction system according to claim 7, characterized in that the operation target lies in the common field of view of the three cameras.
9. The reconstruction system according to claim 6, characterized in that the on-site processor is configured to calculate the swing parameters according to steps 1 and 2.
10. The reconstruction system according to claim 6, characterized in that the server is configured to generate the digital twin images.
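Claims 2 and 4 call for a nonlinear binary neural network classifier that filters the operation target feature points by their neighborhoods. The patent does not specify the architecture; the following is a hypothetical minimal example (one tanh hidden layer, sigmoid output, untrained weights) shown only to illustrate the data flow, with all names invented here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PatchClassifier:
    """A minimal nonlinear binary classifier over flattened feature-point
    neighborhood patches: one hidden tanh layer, one sigmoid output unit."""

    def __init__(self, patch_size=5, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        n_in = patch_size * patch_size
        self.w1 = rng.normal(0.0, 0.1, (n_in, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def predict_proba(self, patches):
        """patches: (n, patch_size*patch_size) array of flattened neighborhoods."""
        h = np.tanh(patches @ self.w1 + self.b1)
        return sigmoid(h @ self.w2 + self.b2).ravel()

    def keep(self, patches, threshold=0.5):
        """Boolean mask of feature points whose neighborhoods pass the classifier."""
        return self.predict_proba(patches) >= threshold
```

A deployed version would of course be trained on labeled patches of true operation target points versus background clutter before its mask is used to build the optimized feature point set.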
CN202310796498.6A 2023-07-03 2023-07-03 Reconstruction method and system for digital twin crane under swinging condition Active CN116524030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310796498.6A CN116524030B (en) 2023-07-03 2023-07-03 Reconstruction method and system for digital twin crane under swinging condition

Publications (2)

Publication Number Publication Date
CN116524030A true CN116524030A (en) 2023-08-01
CN116524030B CN116524030B (en) 2023-09-01

Family

ID=87390610

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103528571A (en) * 2013-10-12 2014-01-22 上海新跃仪表厂 Monocular stereo vision relative position/pose measuring method
CN110334701A (en) * 2019-07-11 2019-10-15 郑州轻工业学院 Collecting method based on deep learning and multi-vision visual under the twin environment of number
KR20210123437A (en) * 2020-04-02 2021-10-14 한국해양대학교 산학협력단 Sloshing prediction system using digital twin
CN113868803A (en) * 2021-10-13 2021-12-31 大连理工大学 Mechanism model and dynamic data combined driven cloud-edge combined digital twinning method
US20220067229A1 (en) * 2020-09-03 2022-03-03 International Business Machines Corporation Digital twin multi-dimensional model record using photogrammetry
CN114359412A (en) * 2022-03-08 2022-04-15 盈嘉互联(北京)科技有限公司 Automatic calibration method and system for external parameters of camera facing to building digital twins
KR102468718B1 (en) * 2021-10-29 2022-11-18 (주)넥스트빅스튜디오 Method and device for providing 3d digital twin space using deep neural network
US20230081908A1 (en) * 2021-09-10 2023-03-16 Milestone Systems A/S Method of training a machine learning algorithm to identify objects or activities in video surveillance data
CN115849202A (en) * 2023-02-23 2023-03-28 河南核工旭东电气有限公司 Intelligent crane operation target identification method based on digital twin technology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NING CAI et al.: "Method for the Relative Pose Reconstruction of Hydraulic Supports Driven by Digital Twins", IEEE Sensors Journal, vol. 23, no. 5 *
LIN GENGXUAN: "Design and Implementation of an Intelligent Crane Loading and Unloading Control System Based on Digital Twins", China Master's Theses Full-text Database, Engineering Science and Technology II *
CHEN MORAN; DENG CHANGYI; ZHANG JIAN; GUO RUIFENG: "Research on 3D Detection and Interaction Algorithms for Production Lines Based on Digital Twins", Journal of Chinese Computer Systems, no. 05 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant