CN108090572B - Control method of offshore wind farm augmented reality system

Info

Publication number
CN108090572B
Authority
CN
China
Prior art keywords
target instrument
point
module
virtual
image
Prior art date
Legal status
Active
Application number
CN201711250639.5A
Other languages
Chinese (zh)
Other versions
CN108090572A (en
Inventor
赵向前
沈润杰
范玉鹏
常志明
姜浩杰
Current Assignee
Datang Guoxin Binhai Offshore Wind Power Generation Co ltd
Tongji University
Original Assignee
Datang Guoxin Binhai Offshore Wind Power Generation Co ltd
Tongji University
Priority date
Filing date
Publication date
Application filed by Datang Guoxin Binhai Offshore Wind Power Generation Co ltd, Tongji University filed Critical Datang Guoxin Binhai Offshore Wind Power Generation Co ltd
Priority to CN201711250639.5A priority Critical patent/CN108090572B/en
Publication of CN108090572A publication Critical patent/CN108090572A/en
Application granted granted Critical
Publication of CN108090572B publication Critical patent/CN108090572B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an offshore wind farm augmented reality system and a control method thereof. The system comprises: a scene acquisition module, which comprises AR equipment and is used for acquiring a target instrument in the offshore wind farm; a tracking module, which calculates coordinate conversion data of a virtual object from a virtual coordinate system to the coordinates of the target instrument or to the set direction position of the target instrument, according to the operator's viewing angle and the position of the camera in the AR equipment; and a virtual fusion and display module, which cooperates with the tracking module to merge the virtual object with the real scene, superimpose the virtual object on the target instrument in the real scene or at the set direction position of the target instrument, and finally display it to the user. By superimposing virtual objects that assist the operator onto the real scene, the system helps the operator make operation and maintenance decisions and carry out work on site, improves working efficiency, and ensures the normal operation of the wind farm.

Description

Control method of offshore wind farm augmented reality system
Technical Field
The invention belongs to the field of design and development of AR (augmented reality) related technology and to the field of visual operation and maintenance decision systems for offshore wind farms, and particularly relates to a control method of an offshore wind farm augmented reality system.
Background
Offshore wind farms generally adopt an operation and maintenance strategy that combines preventive maintenance with corrective maintenance, and the complexity of working conditions, the accessibility and safety of transportation, and the completeness of fault handling are important factors affecting operation and maintenance quality. To address problems of offshore wind farm operation and maintenance such as lack of management experience, uneven personnel quality, complex weather and operating environments, and the high risks to personnel, equipment and vessels in offshore operations, the invention designs and develops a visual operation and maintenance decision system for offshore wind farms based on AR (augmented reality) related technology. It uses technologies such as image recognition, intelligent detection and data mining to help operators locate and eliminate problems in a timely and accurate manner, and to formulate corresponding operation and maintenance strategies according to field conditions. The AR technology designs trigger points according to the actual field environment and equipment state and embeds real-time operating data into the AR display interface, so that centralized control data and the real scene coexist and a human-computer interaction experience is obtained. In the AR-based visual operation and maintenance decision system, on-site maintenance personnel identify the object to be maintained through the AR equipment they wear and attach related structural information and real-time operating data to the real scene, creating a scene in which virtual data and the real scene coexist and helping maintenance personnel make operation and maintenance decisions and carry out work on site.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and to provide an offshore wind farm augmented reality system in which virtual objects that assist the operator are superimposed onto the real scene, thereby helping the operator make operation and maintenance decisions and carry out work on site, improving the operator's working efficiency, and ensuring the normal operation of the wind farm.
In order to solve the technical problems, the invention adopts the technical scheme that:
an offshore wind farm augmented reality system, comprising:
the scene acquisition module comprises AR equipment and is used for acquiring target instruments in the offshore wind farm;
the tracking module is used for calculating coordinate conversion data of the virtual object from a virtual coordinate system to the coordinates of the target instrument or the set direction position of the target instrument according to the visual angle of an operator and the position of a camera in the AR equipment;
and the virtual fusion and display module is matched with the tracking module to merge the virtual object and the real scene, superimpose the virtual object on the target instrument in the real scene or at the set direction position of the target instrument, and finally display the virtual object to a user.
Preferably, the offshore wind farm augmented reality system further comprises a man-machine interaction module and a control module;
the human-computer interaction module comprises a camera device and/or an audio receiving device and is used for receiving gestures or voice information of an operator to determine the intention of the operator; the control module is used for controlling the virtual fusion and display module to carry out corresponding response according to the intention of the operator determined by the human-computer interaction module.
Preferably, the augmented reality system for the offshore wind farm further comprises a data storage module, wherein the data storage module stores virtual objects corresponding to each instrument of the offshore wind farm, and each virtual object comprises any one or more of a virtual internal structure of each instrument, a virtual operating posture of internal components of each instrument, operating parameter data of each instrument, a disassembly and assembly animation of each instrument, and historical information of each instrument.
Preferably, the data storage module stores three-dimensional models, each three-dimensional model is a virtual three-dimensional model which is rendered by making texture and material data according to the actual physical properties of each apparatus of the offshore wind farm, and the data storage module establishes a dynamic motion model of the machinery with dynamic motion;
and the virtual fusion and display module displays the three-dimensional model or the motion model on a target instrument in a real scene or at a set direction position of the target instrument.
Preferably, the data storage module establishes a dynamic time sequence animation for disassembling the target instrument into parts and assembling the parts into the target instrument aiming at the disassembling and assembling task in the operation and maintenance process of the offshore wind farm, and the virtual fusion and display module displays the animation on the target instrument or at the set direction position of the target instrument in a real scene so as to guide an operator to correctly execute the operation task.
Preferably, each instrument of the offshore wind farm is configured with an environmental perception sensor and a state detection sensor, the augmented reality system further comprises a communication module for acquiring detection information of each sensor, and the virtual fusion and display module displays the detection information of each sensor on each instrument in a real scene or at a set direction position of each instrument to prompt a user.
By this scheme, the user is helped to quickly learn the working state of each piece of electrical equipment while patrolling the wind farm, so that the operator can make operation and maintenance decisions and carry out work on site.
Preferably, the data storage module stores a circuit schematic diagram, a circuit connection diagram, a switch opening and closing time sequence and work history data information of each instrument, and the control module is further used for controlling corresponding information to be displayed on AR equipment worn by an operator according to an instruction of the human-computer interaction module and assisting the operator in maintenance and detection;
The communication module is in communication connection with the control module; rated working parameters of each instrument are stored in the data storage module, the state detection module detects the working state information of each instrument, and the control module determines the operating health state of each instrument according to the detection results of the sensors and the rated working parameters of each instrument; when an operating risk of an instrument is detected, the AR equipment prompts the user.
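As an illustration only, the following is a minimal sketch of how such a health-state check might look, assuming each instrument's rated parameters are stored as (low, high) ranges keyed by parameter name; the field names and the simple out-of-range rule are illustrative assumptions, not the patent's exact logic.

```python
# Minimal sketch (assumption): compare live sensor readings against stored rated
# operating ranges and flag an instrument as at-risk when any parameter is out of range.
def health_state(readings, rated):
    """readings: {param: value}; rated: {param: (low, high)} from the data storage module."""
    out_of_range = [k for k, v in readings.items()
                    if k in rated and not (rated[k][0] <= v <= rated[k][1])]
    if not out_of_range:
        return "healthy", []
    # The AR equipment would prompt the user with the offending parameters.
    return "at-risk", out_of_range
```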
Another object of the present invention is to provide a control method applied to the above-mentioned augmented reality system for an offshore wind farm, including: the method comprises the steps of acquiring a target instrument in an offshore wind farm by using a scene acquisition module, acquiring coordinate conversion data of a virtual object from a virtual coordinate system to the coordinate of the target instrument or the set direction position of the target instrument by using a tracking module, combining the virtual object and a real scene by using a virtual fusion and display module according to the coordinate conversion data, superposing the virtual object on the target instrument or the set direction position of the target instrument in the real scene, and finally displaying the virtual object to a user.
Preferably, the tracking module performs an initialization process on the target instrument acquired by the scene acquisition module: obtaining the accurate pose of the target instrument in the actual scene; when the target instrument and the operator move relatively, the tracking module carries out edge tracking on the target instrument and calculates the coordinate conversion data of the virtual object from the virtual coordinate system to the target instrument coordinate or the set direction position of the target instrument in real time.
Preferably, the data storage module stores point cloud data pre-established for a model of the target instrument, and the initialization process includes: determining the position of a target instrument in an image acquired by a camera in the AR equipment, extracting point cloud data of the target instrument in the image, and finally performing point cloud matching on the obtained point cloud data and the point cloud data stored in a data storage module to obtain an accurate transformation relation among point cloud sets;
when the target instrument and the operator move relatively, the tracking module executes the following steps:
s1, performing edge tracking, and determining the pose of the target instrument after movement;
s2, performing point cloud extraction on the new image position of the target instrument by using an SLAM algorithm;
and S3, point cloud matching, and determining the accurate pose of the target instrument.
By adopting the technical scheme, the invention has the following beneficial effects:
the augmented reality system of the offshore wind farm provided by the invention has the advantages that the virtual object for assisting the operator to work is superposed in the real scene, so that the decision and operation of the operation and maintenance problem of the operator on site are facilitated, the working efficiency of the operator is improved, and the normal work of the wind farm is ensured. An operation and maintenance strategy combining preventive maintenance and error correction maintenance is generally adopted for the offshore wind power plant, and the working condition complexity, the accessibility and safety of transportation, the completeness of fault treatment and the like of the offshore wind power plant can become important factors influencing the operation and maintenance quality. Aiming at the problems of lack of management experience, uneven personnel quality, complex weather and operation environment of an offshore wind farm, high risk of personnel, equipment and ships in offshore operation and the like of an offshore wind farm, the offshore wind farm visual operation and maintenance decision system is designed and developed based on AR (augmented reality) related technology, and the technology such as image recognition, intelligent detection, data mining and the like is utilized to help operators to timely and accurately position and eliminate problems, so that a corresponding operation and maintenance strategy is made according to the field condition. The AR technology designs trigger points according to actual field environment and equipment states, real-time operation data are embedded into an AR display interface, centralized control data and a real scene coexist, and therefore human-computer interaction body feeling is obtained. In the visual operation and maintenance decision system based on the AR, on-site maintenance personnel identify an object to be maintained through the AR equipment worn by the maintenance personnel, and attach related structural information and real-time operation data to a real scene to manufacture a scene with coexisting virtual data and the real scene, so that the maintenance personnel can be helped to make operation and maintenance problem decision and operation on site.
The following describes embodiments of the present invention in further detail with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention, are incorporated in and constitute a part of this specification, illustrate embodiments of the invention, and together with the description serve to explain the invention without unduly limiting it. It is obvious that the drawings in the following description are only some embodiments, and that a person skilled in the art can derive other drawings from them without inventive effort. In the drawings:
FIG. 1 is a diagram of the initialization process steps for tracking a target instrument according to the present invention;
FIG. 2 is a step diagram of the edge tracking process for the target instrument of the present invention.
It should be noted that the drawings and the description are not intended to limit the scope of the inventive concept in any way, but to illustrate it to a person skilled in the art with reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and the following embodiments are used for illustrating the present invention and are not intended to limit the scope of the present invention.
Embodiment 1
This embodiment provides an offshore wind farm augmented reality system, comprising:
the scene acquisition module comprises AR equipment and is used for acquiring target instruments in the offshore wind farm; the scene acquisition module is preferably an image video acquisition device arranged on a wind power plant site or an AR (augmented reality) device worn by an operator, and the operator can operate in the wind power site and also can remotely operate or train and learn on the basis of remotely watching image video information acquired by the image video acquisition device in the wind power site.
The tracking module is used for calculating coordinate conversion data of the virtual object from a virtual coordinate system to the coordinates of the target instrument or the set direction position of the target instrument according to the visual angle of an operator and the position of a camera in the AR equipment;
and the virtual fusion and display module is matched with the tracking module to merge the virtual object and the real scene, superimpose the virtual object on the target instrument in the real scene or at the set direction position of the target instrument, and finally display the virtual object to a user.
Preferably, the offshore wind farm augmented reality system further comprises a man-machine interaction module and a control module;
the human-computer interaction module comprises a camera device and/or an audio receiving device and is used for receiving gestures or voice information of an operator to determine the intention of the operator; the control module is used for controlling the virtual fusion and display module to carry out corresponding response according to the intention of the operator determined by the human-computer interaction module.
Preferably, the augmented reality system for the offshore wind farm further comprises a data storage module, wherein the data storage module stores virtual objects corresponding to each instrument of the offshore wind farm, and each virtual object comprises any one or more of a virtual internal structure of each instrument, a virtual operating posture of internal components of each instrument, operating parameter data of each instrument, a disassembly and assembly animation of each instrument, and historical information of each instrument.
Preferably, the data storage module stores three-dimensional models, each three-dimensional model is a virtual three-dimensional model which is rendered by making texture and material data according to the actual physical properties of each apparatus of the offshore wind farm, and the data storage module establishes a dynamic motion model of the machinery with dynamic motion;
and the virtual fusion and display module displays the three-dimensional model or the motion model on a target instrument in a real scene or at a set direction position of the target instrument.
Preferably, the data storage module establishes a dynamic time sequence animation for disassembling the target instrument into parts and assembling the parts into the target instrument aiming at the disassembling and assembling task in the operation and maintenance process of the offshore wind farm, and the virtual fusion and display module displays the animation on the target instrument or at the set direction position of the target instrument in a real scene so as to guide an operator to correctly execute the operation task.
Preferably, each instrument of the offshore wind farm is configured with an environmental perception sensor and a state detection sensor, the augmented reality system further comprises a communication module for acquiring detection information of each sensor, and the virtual fusion and display module displays the detection information of each sensor on each instrument in a real scene or at a set direction position of each instrument to prompt a user.
By this scheme, the user is helped to quickly learn the working state of each piece of electrical equipment while patrolling the wind farm, so that the operator can make operation and maintenance decisions and carry out work on site.
Preferably, the data storage module stores a circuit schematic diagram, a circuit connection diagram, a switch opening and closing time sequence and work history data information of each instrument, and the control module is further used for controlling corresponding information to be displayed on AR equipment worn by an operator according to an instruction of the human-computer interaction module and assisting the operator in maintenance and detection;
The communication module is in communication connection with the control module; rated working parameters of each instrument are stored in the data storage module, the state detection module detects the working state information of each instrument, and the control module determines the operating health state of each instrument according to the detection results of the sensors and the rated working parameters of each instrument; when an operating risk of an instrument is detected, the AR equipment prompts the user.
Embodiment 2
The present embodiment provides a control method applied to the augmented reality system of the offshore wind farm in the first embodiment, including: the method comprises the steps of acquiring a target instrument in an offshore wind farm by using a scene acquisition module, acquiring coordinate conversion data of a virtual object from a virtual coordinate system to the coordinate of the target instrument or the set direction position of the target instrument by using a tracking module, combining the virtual object and a real scene by using a virtual fusion and display module according to the coordinate conversion data, superposing the virtual object on the target instrument or the set direction position of the target instrument in the real scene, and finally displaying the virtual object to a user.
Preferably, the tracking module performs an initialization process on the target instrument acquired by the scene acquisition module: obtaining the accurate pose of the target instrument in the actual scene; when the target instrument and the operator move relatively, the tracking module carries out edge tracking on the target instrument and calculates the coordinate conversion data of the virtual object from the virtual coordinate system to the target instrument coordinate or the set direction position of the target instrument in real time.
Preferably, the data storage module stores point cloud data pre-established for a model of the target instrument, and the initialization process includes: determining the position of a target instrument in an image acquired by a camera in the AR equipment, extracting point cloud data of the target instrument in the image, and finally performing point cloud matching on the obtained point cloud data and the point cloud data stored in a data storage module to obtain an accurate transformation relation among point cloud sets;
when the target instrument and the operator move relatively, the tracking module executes the following steps:
s1, performing edge tracking, and determining the pose of the target instrument after movement;
s2, performing point cloud extraction on the new image position of the target instrument by using an SLAM algorithm;
and S3, point cloud matching, and determining the accurate pose of the target instrument.
Embodiment 3
In the third embodiment, the initialization process of the target instrument acquired by the scene acquisition module by the tracking module is further disclosed in detail on the basis of the second embodiment, and as shown in fig. 1, the method specifically includes the following steps:
a1001, according to a model of a known target instrument, a point cloud database of the target instrument is established in advance and stored in a data storage module;
a1002, determining the position of a target instrument in an image acquired by a camera (the camera of an AR device), and extracting point cloud data of the target instrument in the image;
and A1003, carrying out point cloud matching on the obtained point cloud data and the established point cloud database to obtain an accurate transformation relation among the point cloud sets.
Preferably, the model of the target instrument is a 3D model; a multi-view point cloud database is generated by selecting different viewing angles of the 3D model, and the pose of the target instrument under each viewing angle is recorded. The implementation can use a SLAM software library, PCL, or the like.
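A minimal sketch of building such a multi-view point cloud database is given below. It assumes Open3D as the point cloud library (the text only requires a SLAM software library or PCL), and the view sampling (rotations about one axis) and point count are illustrative assumptions.

```python
# Minimal sketch (assumptions: Open3D, uniform surface sampling, views sampled by
# rotating the model about the vertical axis). Each entry records the view pose and
# the point cloud seen from that view, as described for the multi-view database.
import numpy as np
import open3d as o3d

def build_multiview_database(mesh_path, n_views=8, n_points=5000):
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    pcd = mesh.sample_points_uniformly(number_of_points=n_points)
    pts = np.asarray(pcd.points)
    database = []
    for k in range(n_views):
        yaw = 2.0 * np.pi * k / n_views
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # rotation for this view
        view_cloud = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts @ R.T))
        database.append({"pose": R, "cloud": view_cloud})  # record the pose under each view
    return database
```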
Preferably, the image acquired by the camera is subjected to ORB matching with a pre-prepared image of the target instrument, so as to obtain the approximate position of the target instrument in the image.
Preferably, the ORB feature point extraction of the image of the target instrument prepared in advance and the matching with the image acquired by the camera include: respectively obtaining the feature points of the two images to obtain feature descriptors, and judging whether the Euclidean distance between the feature descriptors of the two images is smaller than a set threshold value, if so, judging that the two images are matched, otherwise, judging that the two images are not matched;
the ORB feature point extraction method comprises the following steps:
a1, generating a Gaussian pyramid of an image from a pre-prepared image of a target instrument;
a2, generating a DOG pyramid according to the image obtained in the step A1;
a3, carrying out spatial extreme point detection on the image obtained in the step A2 to obtain a plurality of key points which are local extreme points in a scale space and a two-dimensional image space;
a4, in the key points obtained in the step A3, taking each key point pixel p as the center, making a circle with a radius of 3, wherein the 16 pixel points on the circle are denoted p1, p2, ..., p16;
a5, defining a threshold, and calculating the pixel differences between p1, p9 and the center p; if both absolute differences are smaller than the set threshold, the point p cannot be a feature point and is removed, otherwise the point p is a candidate point that needs further judgment;
a6, if p is a candidate point, calculating the pixel differences between p1, p5, p9, p13 and the center p; if at least 3 of the four absolute differences exceed the threshold, p remains a candidate point and enters the next examination;
a7, calculating the pixel difference between the 16 points p1 to p16 and the center p, and if at least 9 of the 16 points exceed the threshold value, then p is a characteristic point;
a8, carrying out non-maximum suppression on the image: if several feature points exist in the neighborhood centered on the feature point p, calculating the score value s of each of them; p is kept only if its s value is the maximum response among all the feature points in the neighborhood;
the score calculation formula is as follows:
Figure GDA0003506464450000071
wherein p represents a pixel value of a central point, value represents a pixel value of a feature point in a field centered on p, S represents a score, and t represents a threshold; the s value of the characteristic point is the sum of absolute values of differences between 16 points and the center;
a9, taking each feature point reserved in the step A8 as the center, taking a neighborhood window of S×S, randomly selecting a pair of points in the window, comparing the pixel values of the two points, and carrying out the following binary assignment:

τ(p; x, y) = 1 if p(x) < p(y), and τ(p; x, y) = 0 otherwise

wherein p(x), p(y) are the pixel values of the random points x = (u1, v1) and y = (u2, v2), respectively;
a10, randomly selecting N pairs of random points in a window, and repeating binary assignment to obtain a feature descriptor;
a11, obtaining a 256-bit binary code for each feature point screened in the step A8.
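As an illustration of the matching described above, the following is a minimal sketch using OpenCV's ORB implementation; this is an assumption (the patent specifies the extraction steps A1 to A11 at a lower level), and because OpenCV's ORB descriptors are binary it uses Hamming rather than Euclidean distance for the threshold test. The approximate position of the target instrument is taken here as the bounding box of the matched keypoints.

```python
# Minimal sketch (assumptions: OpenCV ORB, Hamming-distance threshold, bounding box
# of matched keypoints as the approximate target position in the camera image).
import cv2
import numpy as np

def locate_target(reference_img, camera_img, dist_thresh=40):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)   # prepared image of the target
    kp_cam, des_cam = orb.detectAndCompute(camera_img, None)      # current camera frame
    if des_ref is None or des_cam is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [m for m in matcher.match(des_ref, des_cam) if m.distance < dist_thresh]
    if not matches:
        return None                                               # target not found in this frame
    pts = np.float32([kp_cam[m.trainIdx].pt for m in matches])
    return cv2.boundingRect(pts)                                  # (x, y, w, h) approximate position
```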
Preferably, step a1 includes the steps of:
a101, doubling a pre-prepared image of a target instrument to serve as a first group of first layers of a Gaussian pyramid, and carrying out Gaussian convolution on the first group of first layer images to obtain a first group of second layers, wherein the formula of the Gaussian convolution is as follows:
h(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²))
wherein, (x, y) is the coordinates of the pixel points, and σ is the standard deviation of normal distribution, preferably set to 1.6;
a102, multiplying the sigma by a proportionality coefficient k to obtain a new sigma, using the new sigma to smooth the images of the first group and the second layer, repeating the step, and finally obtaining L-layer images, wherein in the same group, the size of each layer of image is the same, but the smooth coefficients are different;
a103, performing down-sampling on the first group of last-but-third layer images with the scale factor of 2 to obtain images serving as a second group of first layers, and then performing the steps A102 and A103 to obtain a second group of L layer images;
a104, repeatedly executing the above process to obtain a total O group, wherein each group comprises L layers, and the total O x L images are obtained;
In step A2, the first group, first layer of the DOG pyramid is obtained by subtracting the first group, first layer from the first group, second layer of the Gaussian pyramid obtained in step A1, the first group, second layer of the DOG pyramid is obtained by subtracting the first group, second layer from the first group, third layer of the Gaussian pyramid, and so on, generating each differential image group by group and layer by layer; all the differential images form the differential (DOG) pyramid, that is, the image of group o, layer i of the DOG pyramid is obtained by subtracting layer i from layer i+1 of group o of the Gaussian pyramid;
step a3 further includes the steps of:
a301, in the DOG pyramid image, comparing all pixel points with 8 points in the 3x3 neighborhood;
a302, comparing each pixel point with the 2 × 9 points in the 3x3 neighborhoods of the pixels at the same position in the two adjacent layers of images;
and A303, ensuring that the key point is a local extreme point in a scale space and a two-dimensional image space.
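A minimal sketch of steps A1 to A3 is shown below: it builds one Gaussian pyramid per group, differences adjacent layers into a DOG pyramid, and keeps pixels that are extrema over their 3x3x3 scale-space neighborhood. σ = 1.6 follows the value suggested above; the group and layer counts, the grayscale input, and the use of OpenCV are assumptions.

```python
# Minimal sketch (assumptions: OpenCV blurring/resizing, grayscale input, 4 groups, 5 layers per group).
import cv2
import numpy as np

def detect_dog_extrema(img, octaves=4, layers=5, sigma=1.6, k=np.sqrt(2)):
    base = cv2.resize(img.astype(np.float32), None, fx=2, fy=2)     # A101: double the prepared image
    keypoints = []
    for o in range(octaves):
        gauss = [cv2.GaussianBlur(base, (0, 0), sigma * (k ** i)) for i in range(layers)]
        dog = [gauss[i + 1] - gauss[i] for i in range(layers - 1)]  # A2: difference-of-Gaussian layers
        for i in range(1, len(dog) - 1):                            # A3: compare with 26 neighbours
            below, cur, above = dog[i - 1], dog[i], dog[i + 1]
            for y in range(1, cur.shape[0] - 1):
                for x in range(1, cur.shape[1] - 1):
                    cube = np.stack([below[y-1:y+2, x-1:x+2],
                                     cur[y-1:y+2, x-1:x+2],
                                     above[y-1:y+2, x-1:x+2]])
                    v = cur[y, x]
                    if v == cube.max() or v == cube.min():          # extremum in scale and image space
                        keypoints.append((o, i, x, y))
        base = cv2.resize(gauss[-3], None, fx=0.5, fy=0.5)          # A103: downsample third-from-last layer
    return keypoints
```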
Preferably, the extracting of the point cloud data of the target instrument in the image includes extracting the object point cloud by using a SLAM algorithm after determining the position of the target instrument in the image;
the SLAM algorithm adopts any one algorithm of an LSD-SLAM algorithm, an ORB-SLAM algorithm, an RGBD-SLAM2 algorithm and an Elasticfusion algorithm;
preferably, the SLAM algorithm is an ORB-SLAM algorithm.
Preferably, when the monocular SLAM algorithm is adopted, the feature points extracted by the monocular SLAM are two-dimensional points, the depth information of the feature points needs to be obtained by using a triangulation method, and the point cloud data is obtained after the depth information of the feature points is obtained.
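For the monocular case, a minimal sketch of the triangulation step is given below; it assumes the camera intrinsics and the two camera poses are available from the SLAM pose estimate, and uses OpenCV's triangulatePoints.

```python
# Minimal sketch (assumptions: known 3x3 intrinsics K and 3x4 [R|t] poses for two frames;
# pts1/pts2 are the matched 2D feature points, shape Nx2).
import cv2
import numpy as np

def triangulate_features(K, pose1, pose2, pts1, pts2):
    P1 = K @ pose1                                   # projection matrix of the first frame
    P2 = K @ pose2                                   # projection matrix of the second frame
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(np.float64),
                                  pts2.T.astype(np.float64))
    pts3d = (pts4d[:3] / pts4d[3]).T                 # de-homogenize: Nx3 points with depth recovered
    return pts3d
```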
Preferably, matching the obtained point cloud data with the established point cloud database to obtain the accurate transformation relation between the point cloud sets adopts a point cloud matching algorithm to obtain accurate object pose information.
the point cloud matching algorithm comprises the following steps:
a401, point cloud feature point selection process:
a402, calculating a feature description subprocess;
a403, matching feature points, and performing coarse matching on point clouds to obtain coordinate change T and scale transformation S of the coarse matching;
and A404, iterative optimization process.
Preferably, step a401 further comprises the steps of:
a411, for each point pi in the point cloud data, inquiring all points within the radius ri of pi once, and calculating the weight:

w_ij = 1 / ||pi − pj||

wherein w_ij is the weight of any point pj in the neighborhood of the three-dimensional point pi, and pi and pj in the formula respectively represent the three-dimensional coordinate vectors of the two points;
a412, calculating a weighted covariance matrix according to the weights:

cov(pi) = Σ_{pj∈N(pi)} w_ij (pj − pi)(pj − pi)^T / Σ_{pj∈N(pi)} w_ij

wherein the superscript T denotes the transpose;
a413, calculating the eigenvalues of the covariance matrix:

λi1, λi2, λi3

arranged in descending order (λi1 ≥ λi2 ≥ λi3);
a414, setting thresholds ε1 and ε2; the points satisfying

λi2 / λi1 ≤ ε1

and

λi3 / λi2 ≤ ε2

are taken as key points;
Step A402 further includes the steps of:
a421, searching all the points in the r radius range of the key points pi meeting the step A414, and assuming that the number of the points is ni;
a422, calculating normal vectors of ni points;
a423, calculating a feature descriptor of the key point pi according to the ni points;
wherein, the features between any two points Ds and Dt and their corresponding normals ns and nt are calculated as follows:
α=V·nt;
φ=U·(Dt−Ds)/||Dt−Ds||;
θ=arctan(W·nt,U·nt);
d=||Dt-Ds||;
u, V and W respectively represent unit vectors of three coordinate axes in a three-dimensional rectangular coordinate system, wherein ns is the same as the direction of U, phi is an included angle between the U direction and a connecting line direction of Ds and Dt, alpha is an included angle between nt and the V direction, theta is an included angle between the projection of nt on a U-V plane and the U direction, d is an Euclidean distance between Ds and Dt, and alpha, phi, theta and d between any two points in the r radius field of a key point pi are calculated as characteristics of the key point pi.
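A minimal sketch of the pair feature in A423 is shown below for a single point pair; it builds the U, V, W frame at the source point as described and computes α, φ, θ and d. The θ term uses the conventional form arctan2(W·nt, U·nt); treat that (and any other deviation from the notation above) as an assumption.

```python
# Minimal sketch (assumptions: unit normals; theta computed as arctan2(W·nt, U·nt)).
import numpy as np

def pair_feature(Ds, ns, Dt, nt):
    d_vec = Dt - Ds
    d = np.linalg.norm(d_vec)                    # d: Euclidean distance between Ds and Dt
    U = ns / np.linalg.norm(ns)                  # U aligned with the source normal ns
    V = np.cross(U, d_vec / d)
    V /= np.linalg.norm(V)
    W = np.cross(U, V)                           # U, V, W: local orthonormal frame
    alpha = float(np.dot(V, nt))                 # angle term between nt and the V direction
    phi = float(np.dot(U, d_vec) / d)            # angle term between U and the Ds->Dt direction
    theta = float(np.arctan2(np.dot(W, nt), np.dot(U, nt)))
    return alpha, phi, theta, d
```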
Preferably, step a404 includes the steps of:
a441, let pi = (xi, yi, zi) and qj = (xj, yj, zj) be two 3D points in three-dimensional space; their Euclidean distance is:

d(pi, qj) = sqrt((xi − xj)² + (yi − yj)² + (zi − zj)²);
in order to solve the rotation matrix R and the coordinate transformation T of any two point cloud sets P and Q, for any characteristic point pi in the point cloud P and a characteristic point qj corresponding to pi in Q, wherein qj is Rpi + T, the least square method is used for solving the optimal solution to obtain an error E:
E = (1/N)·Σ_{i=1..N} ||qi − (R·pi + T)||²
wherein N represents the total amount of matched characteristic points in the two point clouds, and R and T which enable the error E to be minimum are solved by using a least square method;
a442, separating translation and rotation: firstly, carrying out an initial estimation of the coordinate transformation T by respectively obtaining the centers of the point sets P and Q:

μP = (1/NP)·Σ_{pi∈P} pi

μQ = (1/NQ)·Σ_{qi∈Q} qi
a443, constructing the covariance matrix of the point sets P and Q:

Σ_{P,Q} = (1/N)·Σ_{i=1..N} (pi − μP)(qi − μQ)^T

wherein (pi − μP) and (qi − μQ) represent the centered point cloud vectors, and (qi − μQ)^T is the transpose of the vector;
a444, constructing a 4×4 symmetric matrix from the covariance matrix:

Q(Σ_{P,Q}) = [ tr(Σ_{P,Q})    Δ^T
               Δ              Σ_{P,Q} + Σ_{P,Q}^T − tr(Σ_{P,Q})·I3 ]

wherein I3 is a 3x3 identity matrix, and Δ = [A23, A31, A12]^T is the column vector formed from A = Σ_{P,Q} − Σ_{P,Q}^T;
a445, calculating the eigenvalues and eigenvectors of Q(Σ_{P,Q}); the eigenvector corresponding to the largest eigenvalue is the optimal rotation quaternion qR = [q0 q1 q2 q3]^T;
A446, calculating an optimal translation vector:
qT = μQ − R(qR)·μP, wherein R(qR) is the rotation matrix corresponding to the quaternion qR;
a447, superposing the rotation matrix and the translation vector on the point cloud, and then substituting into the error formula:

E = (1/N)·Σ_{i=1..N} ||qi − (R·pi + T)||²
if the error is smaller than the set threshold, finishing the iteration, otherwise, continuously repeating the steps; after the iteration is finished, the obtained rotation matrix and translation vector are the initial position of the target instrument, and the initialization process is finished.
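The iterative optimization of A441 to A447 can be sketched as below: the closed-form best rotation is obtained from the largest eigenvector of the 4x4 matrix Q(Σ_{P,Q}) (the quaternion method), the translation from the point-set centers, and the loop repeats until the error is below a threshold. The brute-force nearest-neighbour correspondence step and the stopping values are illustrative assumptions.

```python
# Minimal sketch (assumptions: brute-force correspondences, small point sets, illustrative tolerances).
import numpy as np

def quat_to_rot(q):
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3]])

def best_transform(P, Q):
    muP, muQ = P.mean(0), Q.mean(0)                       # A442: centers of the two point sets
    S = (P - muP).T @ (Q - muQ) / len(P)                  # A443: covariance matrix Sigma_{P,Q}
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    Qm = np.zeros((4, 4))                                 # A444: 4x4 symmetric matrix Q(Sigma_{P,Q})
    Qm[0, 0] = np.trace(S)
    Qm[0, 1:] = Qm[1:, 0] = delta
    Qm[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    vals, vecs = np.linalg.eigh(Qm)
    R = quat_to_rot(vecs[:, np.argmax(vals)])             # A445: quaternion of the largest eigenvalue
    T = muQ - R @ muP                                     # A446: optimal translation vector
    return R, T

def icp(P, Q, iters=30, tol=1e-6):
    R_total, T_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        idx = np.argmin(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)  # correspondences
        R, T = best_transform(P, Q[idx])
        P = P @ R.T + T                                   # A447: apply the transform to the point set
        R_total, T_total = R @ R_total, R @ T_total + T
        if np.mean(np.linalg.norm(P - Q[idx], axis=1)) < tol:
            break                                         # error below threshold: stop iterating
    return R_total, T_total
```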
Embodiment 4
A fourth embodiment is to further describe in detail, on the basis of the second embodiment and the third embodiment, a process of performing edge tracking on the target instrument by the tracking module to calculate, in real time, coordinate conversion data of the virtual object from the virtual coordinate system to the target instrument coordinate or the target instrument set direction position when the target instrument and the operator move relatively, where:
the AR device is AR glasses worn by an operator, as shown in fig. 2, and the edge tracking process includes the following steps:
s1, performing edge tracking, and determining the pose of the target instrument after movement;
s2, performing point cloud extraction on the new image position of the target instrument by using an SLAM algorithm;
and S3, point cloud matching, and determining the accurate pose of the target instrument.
Preferably, in step S1, the method includes detecting an edge of the target device in real time to determine the position of the target device, wherein the step of detecting the edge of the target device is:
s101, performing Gaussian smoothing on an image acquired by a camera;
s102, calculating to obtain the global gradient of the image;
s103, reserving a point with the maximum local gradient for the graph, and inhibiting a non-maximum value;
s104, detecting and connecting image edges by using a double-threshold algorithm;
and S105, obtaining a new contour position of the target instrument, and updating the pose information of the target instrument.
Preferably, in step S101, the gaussian smoothing processing on the image uses a gaussian smoothing function as follows:
h(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²))
let g (x, y) be the smoothed image, and perform smoothing on the image f (x, y) by using h (x, y, σ), that is:
g(x,y)=h(x,y,σ)*f(x,y);
in step S102, the method further includes the steps of:
s1021, calculating partial derivatives f 'x (x, y) and f' y (x, y) in x and y directions using first order finite differences, thereby obtaining partial derivative matrices Gx (x, y) and Gy (x, y), as follows:
f′x(x,y)≈Gx=[f(x+1,y)-f(x,y)+f(x+1,y+1)-f(x,y+1)]/2;
f′y(x,y)≈Gy=[f(x,y+1)-f(x,y)+f(x+1,y+1)-f(x+1,y)]/2;
s1022, further averaging the finite differences to calculate the partial derivative gradients of x and y at the same point in the image, wherein the amplitude and the azimuth can be calculated by a coordinate transformation formula from rectangular coordinates to polar coordinates:
M[x,y] = sqrt(Gx(x,y)² + Gy(x,y)²);
θ[x,y]=arctan(Gx(x,y)/Gy(x,y));
wherein M [ x, y ] reflects the edge strength of the image; theta x, y reflects the direction of the edge, so that M x, y obtains the direction theta x, y of the local maximum value, and reflects the direction of the edge;
in step S103, the step of retaining the point with the largest local gradient for the graph, and suppressing the non-maximum value includes: comparing a central pixel M [ x, y ] of the field at each point with two pixels along the gradient line, and if the gradient value of M [ x, y ] is not larger than the gradient values of two adjacent pixels along the gradient line, making M [ x, y ] equal to 0, thereby obtaining a non-maximum value suppression image;
The step S104 of detecting and connecting the image edges by using the dual-threshold algorithm includes: applying two thresholds th1 and th2 to the non-maximum-suppressed image obtained in step S103, wherein th1 = 0.4·th2;
setting the gray value of the pixel with the gradient value less than th1 as 0 to obtain an image 1, then setting the gray value of the pixel with the gradient value less than th2 as 0 to obtain an image 2, and connecting the edges of the images by taking the image 1 as a supplement on the basis of the image 2;
the specific steps of connecting the edges of the images are as follows:
s1041, scanning the image 2, and when a pixel p (x, y) with non-zero gray is encountered, tracking a contour line taking p (x, y) as a starting point until an end point q (x, y) of the contour line;
s1042, considering the 8-neighborhood of the point s(x, y) in the image 1 corresponding to the position of the point q(x, y) in the image 2; if there is a non-zero pixel in the 8-neighborhood of the point s(x, y), including it in the image 2 as the point r(x, y);
s1043, repeating the above steps starting from r(x, y) until the contour cannot be continued in either image 1 or image 2;
s1044, after completing the connection of the contour lines including p (x, y), mark the contour line as visited, enter step S1041, find the next contour line, and repeat the above steps until no new contour line can be found in the image 2.
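A minimal sketch of S101 to S105 is given below; it delegates the smoothing, gradient computation, non-maximum suppression and double-threshold linking to OpenCV's Canny implementation (which performs the same pipeline internally), with th1 = 0.4·th2 as above. Taking the largest external contour as the target instrument's new contour is an illustrative assumption.

```python
# Minimal sketch (assumptions: grayscale input, OpenCV 4 Canny, largest external contour = target).
import cv2

def track_target_contour(frame_gray, th2=120):
    th1 = int(0.4 * th2)                                        # S104: th1 = 0.4 * th2
    smoothed = cv2.GaussianBlur(frame_gray, (5, 5), 1.4)        # S101: Gaussian smoothing
    edges = cv2.Canny(smoothed, th1, th2)                       # S102-S104: gradient, NMS, thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)                 # S105: new contour of the target
    return target, cv2.boundingRect(target)                     # contour and bounding box for the pose update
```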
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A control method of an augmented reality system of an offshore wind farm is characterized by comprising the following steps:
the scene acquisition module comprises AR equipment and is used for acquiring target instruments in the offshore wind farm;
the tracking module is used for calculating coordinate conversion data of the virtual object from a virtual coordinate system to the coordinates of the target instrument or the set direction position of the target instrument according to the visual angle of an operator and the position of a camera in the AR equipment;
the virtual fusion and display module is matched with the tracking module to combine the virtual object and the real scene, superimpose the virtual object on the target instrument in the real scene or at the set direction position of the target instrument and finally display the virtual object to a user;
the control method comprises the following steps: acquiring a target instrument in an offshore wind farm by using a scene acquisition module, acquiring coordinate conversion data of a virtual object from a virtual coordinate system to the coordinate of the target instrument or the set direction position of the target instrument by using a tracking module, finally merging the virtual object and a real scene by using a virtual fusion and display module according to the coordinate conversion data, superposing the virtual object on the target instrument or the set direction position of the target instrument in the real scene, and finally displaying the virtual object to a user;
the tracking module carries out an initialization process on the target instrument acquired by the scene acquisition module: obtaining the accurate pose of the target instrument in the actual scene; when the target instrument and the operator move relatively, the tracking module carries out edge tracking on the target instrument and calculates the coordinate conversion data of the virtual object from the virtual coordinate system to the target instrument coordinate or the set direction position of the target instrument in real time;
the data storage module stores point cloud data pre-established for a model of a target instrument, and the initialization process comprises the following steps: determining the position of a target instrument in an image acquired by a camera in the AR equipment, extracting point cloud data of the target instrument in the image, and finally performing point cloud matching on the obtained point cloud data and the point cloud data stored in a data storage module to obtain an accurate transformation relation among point cloud sets;
when the target instrument and the operator move relatively, the tracking module executes the following steps:
s1, performing edge tracking, and determining the pose of the target instrument after movement;
s2, performing point cloud extraction on the new image position of the target instrument by using an SLAM algorithm;
s3, point cloud matching is carried out, and the accurate pose of the target instrument is determined;
the model of the target instrument is a 3D model, a multi-view point cloud database is generated by selecting different view angles of the 3D model, and the pose of the target instrument under each view angle is recorded;
carrying out ORB matching on the image acquired by the camera and a pre-prepared image of the target instrument to obtain the approximate position of the target instrument in the image;
ORB feature point extraction is carried out on the image of a target instrument prepared in advance and is matched with the image acquired by a camera, and the method comprises the following steps: respectively obtaining feature points of the two images to obtain feature descriptors, judging whether the Euclidean distance between the feature descriptors of the two images is smaller than a set threshold value, if so, judging that the two images are matched, otherwise, judging that the two images are not matched;
the ORB feature point extraction method comprises the following steps:
a1, generating a Gaussian pyramid of an image from a pre-prepared image of a target instrument;
a2, generating a DOG pyramid according to the image obtained in the step A1;
a3, carrying out spatial extreme point detection on the image obtained in the step A2 to obtain a plurality of key points which are local extreme points in a scale space and a two-dimensional image space;
a4, in the key points obtained in the step A3, taking each key point pixel p as a center, making a circle with a radius of 3, wherein the 16 pixel points on the circle are denoted p1, p2, ..., p16;
a5, defining a threshold, and calculating the pixel differences between p1, p9 and the center p; if both absolute differences are smaller than the set threshold, the point p cannot be a feature point and is removed, otherwise the point p is a candidate point that needs further judgment;
a6, if p is a candidate point, calculating the pixel differences between p1, p5, p9, p13 and the center p; if at least 3 of the four absolute differences exceed the threshold, p remains a candidate point and enters the next examination;
a7, calculating the pixel difference between the 16 points p1 to p16 and the center p, and if at least 9 of the 16 points exceed the threshold value, then p is a characteristic point;
a8, carrying out non-maximum suppression on the image: if several feature points exist in the neighborhood centered on the feature point p, calculating the score value s of each of them; p is kept only if its s value is the maximum response among all the feature points in the neighborhood;
The score calculation formula is as follows:

S = Σ_{i=1..16} |value_i − p|

wherein p represents the pixel value of the center point, value_i represents the pixel value of the i-th circle point in the neighborhood centered on p, S represents the score, and t represents the threshold; the s value of a feature point is the sum of the absolute values of the differences between the 16 circle points and the center;
a9, taking the feature points reserved in the step A8 as the center, taking a neighborhood window of S×S, randomly selecting a pair of points in the window, comparing the pixel values of the two points, and carrying out binary assignment as follows:

τ(p; x, y) = 1 if p(x) < p(y), and τ(p; x, y) = 0 otherwise

wherein p(x), p(y) are the pixel values of the random points x = (u1, v1) and y = (u2, v2), respectively;
a10, randomly selecting N pairs of random points in a window, and repeating binary assignment to obtain a feature descriptor;
a11, obtaining a 256-bit binary code for each feature point screened in the step A8;
step a1 includes the following steps:
a101, doubling a pre-prepared image of a target instrument to serve as a first group of first layers of a Gaussian pyramid, and carrying out Gaussian convolution on the first group of first layer images to obtain a first group of second layers, wherein the formula of the Gaussian convolution is as follows:
h(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²))
wherein, (x, y) is the coordinate of the pixel point, and sigma is the standard deviation of normal distribution;
a102, multiplying the sigma by a proportionality coefficient k to obtain a new sigma, using the new sigma to smooth the images of the first group and the second layer, repeating the step, and finally obtaining L-layer images, wherein in the same group, the size of each layer of image is the same, but the smooth coefficients are different;
a103, performing down-sampling on the first group of last-but-third layer images with the scale factor of 2 to obtain images serving as a second group of first layers, and then performing the steps A102 and A103 to obtain a second group of L layer images;
a104, repeatedly executing the process to obtain a total of O groups, wherein each group comprises L layers, and the total of O x L images are obtained;
step a3 includes the following steps:
a301, in the DOG pyramid image, comparing all pixel points with 8 points in the 3x3 neighborhood;
a302, comparing each pixel point with the 2 × 9 points in the 3x3 neighborhoods of the pixels at the same position in the two adjacent layers of images;
and A303, ensuring that the key point is a local extreme point in a scale space and a two-dimensional image space.
2. The control method of the offshore wind farm augmented reality system according to claim 1, further comprising a human-computer interaction module and a control module;
the human-computer interaction module comprises a camera device and/or an audio receiving device and is used for receiving gestures or voice information of an operator to determine the intention of the operator; the control module is used for controlling the virtual fusion and display module to carry out corresponding response according to the intention of the operator determined by the human-computer interaction module.
3. The method for controlling the augmented reality system of the offshore wind farm according to claim 1 or 2, further comprising a data storage module, wherein the data storage module stores virtual objects corresponding to the instruments of the offshore wind farm, and the virtual objects comprise any one or more of virtual internal structures of the instruments, virtual operating postures of internal components of the instruments, operating parameter data of the instruments, assembly and disassembly animations of the instruments, and historical information of the instruments.
4. The method according to claim 3, wherein the data storage module stores three-dimensional models, each three-dimensional model is a virtual three-dimensional model rendered by texture and material data according to actual physical properties of each apparatus of the offshore wind farm, and the data storage module establishes a dynamic motion model of a machine with dynamic motion;
and the virtual fusion and display module displays the three-dimensional model or the motion model on a target instrument in a real scene or at a set direction position of the target instrument.
5. The method for controlling the offshore wind farm augmented reality system according to claim 3, wherein the data storage module establishes a dynamic time sequence animation for disassembling and assembling parts from and into target instruments for disassembly and assembly tasks in the offshore wind farm operation and maintenance process, and the virtual fusion and display module displays the animation on the target instruments in a real scene or at a set direction position of the target instruments so as to guide an operator to correctly execute operation tasks.
6. The method for controlling the augmented reality system of the offshore wind farm according to any one of claims 1, 2, 4 and 5, wherein each instrument of the offshore wind farm is configured with an environment sensing sensor and a state detection sensor, the augmented reality system further comprises a communication module for acquiring detection information of each sensor, and the virtual fusion and display module displays the detection information of each sensor on each instrument or at a set direction position of each instrument in a real scene to prompt a user.
7. The control method of the offshore wind farm augmented reality system according to claim 3, wherein the data storage module stores the circuit schematic diagram, circuit connection diagram, switch opening and closing timing sequence and work history data of each instrument, and the control module is further used for displaying the corresponding information on the AR equipment worn by the operator, according to instructions from the human-computer interaction module, so as to assist the operator in maintenance and inspection;
the communication module is in communication connection with the control module; the rated working parameters of each instrument are stored in the data storage module; the state detection module detects the working state information of each instrument; the control module determines the running health state of each instrument from the detection results of the sensors and the rated working parameters, and, when a running risk is detected for an instrument, the AR equipment prompts the user.
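As a sketch of the health assessment described in claim 7 (the data layout, field names and the 5% warning margin below are assumptions for illustration, not taken from the patent), the control module can compare each detected working value with the rated range stored in the data storage module and raise an AR prompt when a running risk is found:

from dataclasses import dataclass

@dataclass
class RatedRange:
    """Rated working range of one monitored quantity, as stored in the data storage module."""
    min_value: float
    max_value: float

def assess_health(readings, rated, margin=0.05):
    """Control module (sketch): classify each reading as 'normal', 'warning'
    when close to the rated limits, or 'risk' when outside them, in which
    case the AR equipment prompts the user."""
    status = {}
    for name, value in readings.items():
        r = rated[name]
        band = margin * (r.max_value - r.min_value)
        if value < r.min_value or value > r.max_value:
            status[name] = "risk"      # AR equipment prompts the user
        elif value < r.min_value + band or value > r.max_value - band:
            status[name] = "warning"
        else:
            status[name] = "normal"
    return status

# Hypothetical example: generator temperature against its rated range
print(assess_health({"generator_temp_C": 98.0},
                    {"generator_temp_C": RatedRange(10.0, 95.0)}))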
CN201711250639.5A 2017-12-01 2017-12-01 Control method of offshore wind farm augmented reality system Active CN108090572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711250639.5A CN108090572B (en) 2017-12-01 2017-12-01 Control method of offshore wind farm augmented reality system

Publications (2)

Publication Number Publication Date
CN108090572A CN108090572A (en) 2018-05-29
CN108090572B (en) 2022-05-06

Family

ID=62172495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711250639.5A Active CN108090572B (en) 2017-12-01 2017-12-01 Control method of offshore wind farm augmented reality system

Country Status (1)

Country Link
CN (1) CN108090572B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE1026509B1 (en) * 2018-08-02 2020-03-04 North China Electric Power Univ Baoding METHOD FOR DETERMINING A WIND TURBINE TARGET
CN109840882B (en) * 2018-12-24 2021-05-28 中国农业大学 Station matching method and device based on point cloud data
CN109712233B (en) * 2018-12-27 2023-07-04 华自科技股份有限公司 Pipeline fault display method, system, AR equipment and storage medium
CN109732606A (en) * 2019-02-13 2019-05-10 深圳大学 Long-range control method, device, system and the storage medium of mechanical arm
CN110031880B (en) * 2019-04-16 2020-02-21 杭州易绘科技有限公司 High-precision augmented reality method and equipment based on geographical position positioning
CN112446799B (en) * 2019-09-03 2024-03-19 全球能源互联网研究院有限公司 Power grid dispatching method and system based on AR equipment virtual interaction
CN111401154B (en) * 2020-02-29 2023-07-18 同济大学 AR-based logistics accurate auxiliary operation device for transparent distribution
CN112686399A (en) * 2020-12-24 2021-04-20 国网上海市电力公司 Distribution room fire emergency repair method and system based on augmented reality technology
CN112686254B (en) * 2020-12-31 2022-08-09 山西三友和智慧信息技术股份有限公司 Typhoon center positioning method based on infrared satellite cloud picture
CN113269832B (en) * 2021-05-31 2022-03-29 长春工程学院 Electric power operation augmented reality navigation system and method for extreme weather environment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789514B (en) * 2012-04-20 2014-10-08 青岛理工大学 Induction method of three-dimensional (3D) online induction system for mechanical equipment dismounting
CN104933718B (en) * 2015-06-23 2019-02-15 广东省智能制造研究所 A kind of physical coordinates localization method based on binocular vision
CN105158927B (en) * 2015-09-28 2018-06-26 大连楼兰科技股份有限公司 The method of part dismounting of the intelligent glasses during automobile maintenance
CN106845502B (en) * 2017-01-23 2020-07-07 东南大学 Wearable auxiliary device for equipment maintenance and visual equipment maintenance guiding method
CN107331220A (en) * 2017-09-01 2017-11-07 国网辽宁省电力有限公司锦州供电公司 Transformer O&M simulation training system and method based on augmented reality

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009036782A1 (en) * 2007-09-18 2009-03-26 Vrmedia S.R.L. Information processing apparatus and method for remote technical assistance
EP2405402A1 (en) * 2010-07-06 2012-01-11 EADS Construcciones Aeronauticas, S.A. Method and system for assembling components

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HyungGi Jo et al., "Efficient 3D Mapping with RGB-D Camera Based on Distance Dependent Update", 2016 16th International Conference on Control, Automation and Systems, 2016-10-31, pp. 873-875 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant