CN112102473A - Operation scene modeling method and system for distribution network live working robot


Info

Publication number
CN112102473A
CN112102473A (application CN202010902915.7A)
Authority
CN
China
Prior art keywords
image
modeling
modeling method
matching
scene
Prior art date: 2020-09-01
Legal status
Pending
Application number
CN202010902915.7A
Other languages
Chinese (zh)
Inventor
唐旭明
陈宇涛
单晓锋
甄武东
韩先国
侯传锦
郭祥
刘强
董二宝
王亚豪
Current Assignee
University of Science and Technology of China USTC
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
Huainan Power Supply Co of State Grid Anhui Electric Power Co Ltd
Original Assignee
University of Science and Technology of China USTC
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
Huainan Power Supply Co of State Grid Anhui Electric Power Co Ltd
Priority date: 2020-09-01
Filing date: 2020-09-01
Publication date: 2020-12-18
Application filed by University of Science and Technology of China USTC, State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd, Huainan Power Supply Co of State Grid Anhui Electric Power Co Ltd filed Critical University of Science and Technology of China USTC
Priority to CN202010902915.7A
Publication of CN112102473A

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 — Image analysis; G06T 7/10 — Segmentation; Edge detection; G06T 7/13 — Edge detection
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration; G06T 7/85 — Stereo camera calibration
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement; G06T 2207/10 — Image acquisition modality; G06T 2207/10004 — Still image; Photographic image; G06T 2207/10012 — Stereo images
    • G06T 2207/20 — Special algorithmic details; G06T 2207/20024 — Filtering details; G06T 2207/20028 — Bilateral filtering

Abstract

The embodiment of the invention provides an operation scene modeling method and system for a distribution network live working robot, belonging to the technical field of robots. The modeling method comprises the following steps: establishing a standard component library; acquiring an image of the scene; matching based on the image and the standard component library; and constructing the operation scene according to the matching result. Through this technical scheme, the method and system achieve rapid modeling of a specific operation scene by establishing a standard component library for that scene, performing component matching based on images captured on site, and finally performing three-dimensional modeling based on the matching result.

Description

Operation scene modeling method and system for distribution network live working robot
Technical Field
The invention relates to the technical field of robots, in particular to a method and a system for modeling an operation scene of a distribution network live-working robot.
Background
Three-dimensional stereometry dates back to the 1960s, when machine vision researchers began acquiring the three-dimensional dimensions of objects with vision techniques by analyzing simple geometric primitives in images (e.g., points, lines, planes, and image boundaries). With the development of computer vision technology, the relationship between the position and posture of a real-world object and its image information can be analyzed comprehensively in terms of mathematical geometry, and computer vision has gradually found genuine application in actual production.
In the 1990s, as industry moved toward high technology, stereo vision made it possible to recover the three-dimensional geometry of an object from multiple two-dimensional images and thereby perform three-dimensional reconstruction. Today, with innovations in measurement technology and improvements in industrial precision, computer vision is applied to replace the human eye and even to complete tasks the human eye cannot, and its application in fields such as space exploration, visual navigation, virtual reality, medical education, and cultural relic protection is advancing step by step.
Binocular stereo vision acquires object depth information from a pair of cameras. A binocular stereo vision algorithm is generally understood as a stereo method that, given a known camera geometry, operates on two frames to generate a dense stereo disparity map, i.e., a disparity estimate for each pixel.
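As a concrete illustration of this idea (the background describes only the concept; none of the following is part of the claimed method), a dense disparity map for a rectified stereo pair can be computed with a standard semi-global block matcher. File names and parameters below are placeholders:

```python
import cv2

# Dense disparity estimation for a rectified stereo pair (illustrative only).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
# StereoSGBM returns fixed-point disparities scaled by 16
disparity = stereo.compute(left, right).astype("float32") / 16.0
```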
In the field of live working robots, machine-vision-based measurement is widely applied. Compared with traditional contact measurement, non-contact measurement based on computer vision offers high precision, high speed, and high flexibility, and is developing toward miniaturized, lightweight equipment. Active obstacle avoidance, planning, and navigation based on three-dimensional environment modeling from stereo vision have become the main detection and measurement approach in this field.
However, as the technology develops, the requirements on robot precision and autonomy keep rising: three-dimensional information about specific targets must be obtained more accurately and more quickly, while a panoramic dense information map matters less. For a specific operation scene, most of the panoramic dense information map can be discarded and only the key parts retained, which greatly accelerates the modeling process.
Disclosure of Invention
The embodiment of the invention aims to provide an operation scene modeling method and system for a distribution network live working robot, capable of rapidly modeling specific operation scenes.
In order to achieve the above object, an embodiment of the present invention provides an operation scene modeling method for a distribution network live working robot, where the modeling method includes:
acquiring a field image;
matching based on the image and a pre-established standard element library;
and constructing the operation scene according to the matching result.
Optionally, the acquiring of the field image specifically includes:
acquiring images of the scene with a preset binocular vision system, where the binocular vision system comprises two mechanical arms, a circular target mounted at the end of one arm, and a binocular camera mounted at the end of the other arm.
Optionally, the constructing of the operation scene according to the matching result specifically includes:
taking the installation coordinate system of the arm carrying the circular target as the world coordinate system, and obtaining the absolute coordinates of the circular target in the world coordinate system by reading the angles of that arm's joints;
controlling the other arm to move, and controlling the binocular camera to measure the relative coordinates of the circular target in the camera coordinate system multiple times;
back-computing the pose of the camera coordinate system from the absolute coordinates and the relative coordinates; and
determining the absolute coordinates of each matched component from its relative coordinates and that pose.
Optionally, the matching based on the image and a pre-established standard component library specifically includes:
performing a bilateral filtering operation, a contour recognition operation, and a line region filling operation on the image; and
removing non-target areas in the image.
Optionally, the performing of the bilateral filtering operation on the image specifically includes:
performing an RGB filtering operation, a bilateral filtering operation, and a binarization operation on the image.
Optionally, the target of the image is a line, and the performing of the contour recognition operation on the image specifically includes:
matching the first edge feature of the line with a square pixel block, where the diagonal length of the square pixel block exceeds the diameter of the line by a predetermined number of pixels.
Optionally, the performing of the line region filling operation on the image specifically includes:
marking all points of the line with a flood fill approach to form a marked region;
filling blank spots in the marked region with a hole filling method to update the marked region;
processing each updated marked region with a connected component filtering method; and
screening out the marked regions containing more pixels than a predetermined threshold to update the first edge feature.
Optionally, the removing of non-target regions in the image specifically includes:
processing the original image with a Canny edge detection algorithm to obtain a second edge feature of the image; and
obtaining the accurate edge feature of the target from the first edge feature and the second edge feature.
In another aspect, the invention further provides an operation scene modeling system for a distribution network live working robot, the modeling system comprising a processor configured to cause a machine to perform the modeling method according to any one of the above.
In yet another aspect, the present invention also provides a storage medium storing instructions for reading by a machine to cause the machine to perform a modeling method as described in any one of the above.
Through the above technical scheme, the operation scene modeling method and system for the distribution network live working robot achieve rapid modeling of a specific scene by establishing a standard component library for the specific operation scene, performing component matching based on images captured on site, and finally performing three-dimensional modeling based on the matching result.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a flowchart of an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention;
FIG. 2 is a partial flowchart of an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention;
FIG. 3 is a partial flowchart of an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a matching operator according to an embodiment of the invention;
FIG. 5 is a partial flowchart of an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an image processing process according to an embodiment of the invention; and
FIG. 7 is a partial flowchart of an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
In the embodiments of the present invention, unless otherwise specified, directional terms such as "upper", "lower", "top", and "bottom" are used with respect to the orientations shown in the drawings or the positional relationships of the components in the vertical or gravitational direction.
In addition, where "first", "second", etc. appear in the embodiments of the present invention, they are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Technical solutions of the various embodiments can be combined with each other provided a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, the combination should be considered absent and outside the protection scope of the present invention.
Fig. 1 is a flowchart illustrating an operation scene modeling method for a distribution network live working robot according to an embodiment of the present invention. In fig. 1, the modeling method may include:
in step S10, a standard component library is created. In this embodiment, the content of the standard component library may also be different for different scenarios. Taking the operation scene of the distribution network robot provided by the invention as an example, the standard component library can comprise a plurality of lines, and each line corresponds to information such as a three-dimensional model, a diameter, a model and the like in the standard component library.
In step S11, an image of the scene is acquired. In this embodiment, the image may be acquired in any of various manners known to those skilled in the art. Taking the shooting mode commonly used in the prior art as an example, a camera may shoot the scene directly, and the position of each component in the image is determined directly in the camera's coordinate system. This yields component positions fairly intuitively. However, because of machining and installation errors in the camera's base, positioning purely in the camera coordinate system leaves a certain error between the located position and the actual position, which affects the subsequent image recognition and three-dimensional modeling. Therefore, in a preferred example of the present invention, a binocular vision system may be employed to acquire the image. The binocular vision system may include two mechanical arms, with a circular target mounted at the end of one arm and a binocular camera mounted at the end of the other.
Based on the binocular vision system provided above, when locating component positions, the installation coordinate system of the arm carrying the circular target can be taken in advance as the world coordinate system, and the absolute coordinates of the circular target in the world coordinate system can be obtained by reading the angle of each of that arm's joints. Because the arm's coordinates are determined from its link lengths and joint angles, its positioning precision is higher than the camera's; the absolute coordinates can therefore be understood as precise coordinates. The absolute coordinates of the circular target can be determined from the joint angles by those skilled in the art based on spatial geometry, or by a preset computer positioning program, as in the sketch below.
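The description leaves the formulation open; as one conventional option, the target's world coordinates can be obtained by chaining Denavit-Hartenberg link transforms over the read joint angles. All parameter values here are hypothetical:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def target_world_position(joint_angles, dh_params, target_offset):
    """Absolute position of the circular target in the arm's base (world)
    frame. `dh_params` is a list of (d, a, alpha) per joint and
    `target_offset` the target centre in the end-effector frame; both are
    hypothetical values for a given arm."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return (T @ np.append(np.asarray(target_offset, float), 1.0))[:3]
```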
After the absolute coordinates are obtained, the arm carrying the binocular camera can be controlled to move, and during the motion the binocular camera measures the relative coordinates of the circular target in its camera coordinate system multiple times. The pose of the camera coordinate system is then back-computed from the absolute coordinates and the relative coordinates obtained over the multiple measurements. Finally, the absolute coordinates of each matched component are determined from its relative coordinates and that pose. A sketch of one such back-computation follows.
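One way such a back-computation could be done (the description does not fix the algorithm; this sketch assumes the target is observed at several configurations whose world coordinates are known from the joint readings while the camera pose is held fixed over the measurement sequence) is a least-squares rigid fit in the Kabsch/Umeyama style:

```python
import numpy as np

def estimate_camera_pose(world_pts, cam_pts):
    """Least-squares rigid transform (R, t) with world = R @ cam + t.

    `world_pts`: Nx3 absolute target coordinates from the joint readings;
    `cam_pts`:   Nx3 coordinates of the target measured by the binocular
                 camera at the corresponding instants.
    """
    world_pts = np.asarray(world_pts, float)
    cam_pts = np.asarray(cam_pts, float)
    cw, cc = world_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (world_pts - cw)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cc
    return R, t
```

With R and t in hand, any point p measured in the camera frame maps to world coordinates as R @ p + t, which is how the absolute coordinates of a matched component would be recovered in the final step.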
In step S12, matching is performed based on the image and the pre-established standard component library. In this embodiment, the standard component library differs across operation scenes, and so does the matching mode. Taking the operation scenario provided by the present invention as an example, the standard component library includes various lines. The matching may follow the method illustrated in fig. 2. In fig. 2, the matching process may specifically include:
in step S20, a bilateral filtering operation is performed on the image. The modeling method provided by the invention is an operation scene of an application line. The inventors consider that the conductors of the line have relatively distinct and unique characteristics. Firstly, the wires are all in a regular cylindrical geometric shape, the diameter of the wires cannot change along with the change of a visual angle, and the wires in the same section are uniform; and secondly, the color of the conducting wire is mostly pure black (the dark color can also be regarded as black), and the surface basically has no other color information or texture information (the identification code can be ignored). Thus, the inventors have employed a method as illustrated in FIG. 3 to perform the bilateral filtering operation. Specifically, the bilateral filtering operation may include: :
in step S30, an RGB filtering operation is performed on the image to filter out the black features. Because the colour of wire is mostly pure black, after RGB filtering operation, pure black in the original image can be shown as black to can directly select the black characteristic, and need not be as further operation, thereby it has more advantages to other discernment extraction methods relatively.
In step S31, bilateral filtering is performed on the image to enhance the texture characteristics of object edges. The black features screened in step S30 may still vary slightly in their gray information, so the texture is not entirely uniform. Step S31 therefore applies bilateral filtering to even out the texture, making both the conductor's surroundings and its interior more uniform and facilitating the subsequent operations.
In step S32, the image is subjected to a binarization operation based on its RGB information. The binarization operation further sharpens the boundary between black and non-black regions.
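A minimal OpenCV sketch of steps S30 to S32, assuming a BGR input image; the thresholds and filter parameters are illustrative choices, not values from the description:

```python
import cv2
import numpy as np

def preprocess(img_bgr, black_thresh=60, bin_thresh=40):
    """Sketch of steps S30-S32; parameters would be tuned for the
    actual camera and lighting."""
    # S30: RGB filter - pixels whose R, G and B are all low are
    # candidate (near-black) conductor pixels
    mask = cv2.inRange(img_bgr, (0, 0, 0), (black_thresh,) * 3)
    # S31: bilateral filter evens out texture while preserving edges
    smooth = cv2.bilateralFilter(img_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    gray = cv2.cvtColor(smooth, cv2.COLOR_BGR2GRAY)
    # S32: binarize; THRESH_BINARY_INV maps the dark conductor to white
    _, binary = cv2.threshold(gray, bin_thresh, 255, cv2.THRESH_BINARY_INV)
    # combine the colour and intensity evidence
    return cv2.bitwise_and(binary, mask)
```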
In step S21, a contour recognition operation is performed on the image. In this embodiment, the contour recognition operation may be carried out in various ways known to those skilled in the art. A conventional contour recognition operation, for example, compares the color differences of pixels in different regions. Although this also achieves contour recognition, it requires many comparison operations, so the algorithm runs too long and the load on the device rises accordingly. The inventors therefore exploit the conductor's characteristics and match the line with a square pixel block to obtain the line's first edge feature. The specific matching process may be as shown in fig. 4: when matching any wire (line), a square pixel block is overlaid on the wire; pixels where the square block (the matching operator) coincides with the wire are defined as 0, and pixels where it does not are defined as 1. The edge feature of the wire, i.e., the first edge feature, is then obtained from the 0 and 1 pixels. To ensure the square block can cover the wire, its diagonal length may exceed the wire diameter by a predetermined number of pixels; for the matching operator of fig. 4, that number may be, for example, 2.
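The following sketch encodes one reading of the fig. 4 operator; the function name and the way candidate centers are chosen are assumptions:

```python
import numpy as np

def square_operator(binary, cx, cy, wire_diameter_px):
    """Square matching operator (one reading of fig. 4).

    A square block whose diagonal exceeds the wire diameter by ~2 px is
    centred on a candidate pixel; wire pixels are coded 0 and background
    pixels 1, and the 0/1 boundary traces the first edge feature. The
    caller must keep the block inside the image bounds.
    """
    side = int(np.ceil((wire_diameter_px + 2) / np.sqrt(2)))  # diagonal ~ diameter + 2
    half = side // 2
    patch = binary[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return np.where(patch > 0, 0, 1)  # wire -> 0, background -> 1
```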
In step S22, a line region filling operation is performed on the image. Specifically, step S22 may include the steps shown in fig. 5. In fig. 5, step S22 may further include:
in step S40, all points of the route are marked using the flooding concept to form a marked area. The main idea of the flooding idea is to fill pixel points meeting conditions into new colors until all pixel points in a closed area in the field are filled. The specific process may be as shown in fig. 6.
In step S41, the blank spots in the marked region are filled by hole filling. The inventors note that, because the wire surface is smooth, bright white reflections readily appear in the captured image. The inventors therefore apply hole filling to the blank spots of the marked region, overcoming this problem. Specifically, as can be seen in fig. 6, a blank region connected into a white line appears at the center of the conductor in the operator matching result. After the hole filling of step S41, this white interior area is eliminated, avoiding any influence on subsequent operations. One standard construction is sketched below.
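One standard hole-filling construction (an assumption; the description does not name the exact procedure) flood-fills the background from an image corner and ORs the complement back in:

```python
import cv2
import numpy as np

def fill_holes(marked):
    """Step S41: close reflection gaps inside the marked region by flood-
    filling the background from a corner, inverting, and OR-ing back in.
    Assumes the (0, 0) corner is background."""
    h, w = marked.shape
    flood = marked.copy()
    mask = np.zeros((h + 2, w + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 255)   # fill the outside background
    holes = cv2.bitwise_not(flood)            # pixels not reachable from outside
    return cv2.bitwise_or(marked, holes)
```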
In step S42, each marked region is processed by a connected component filtering method.
In step S43, the marked regions containing more pixels than a predetermined threshold are retained to update the first edge feature. As can be seen from fig. 6, hole filling is prone to mis-filling, i.e., the multiple black dots in fig. 6. To keep those dots from affecting subsequent operations, connected component filtering and screening can exclude them, further updating the first edge feature. A sketch of steps S42 and S43 follows.
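A sketch of steps S42 and S43 using OpenCV's connected-component statistics; the pixel-count threshold is scene dependent and illustrative:

```python
import cv2
import numpy as np

def keep_large_components(mask, min_pixels):
    """Steps S42-S43: label connected components and keep only regions with
    more than `min_pixels` pixels, discarding mis-filled black dots."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    out = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > min_pixels:
            out[labels == i] = 255
    return out
```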
In step S23, non-target regions in the image are removed. Specifically, step S23 may further include the steps shown in fig. 7. In fig. 7, step S23 may include:
in step S50, the original image is processed using the Canny edge detection algorithm to obtain a second edge feature of the image.
In step S51, the accurate edge feature of the target is obtained from the first edge feature and the second edge feature. The Canny edge detection algorithm identifies image edge features, but neither it alone nor the method of fig. 5 alone yields accurate edges. Therefore, the modeling method provided by the invention runs both identifications and finally takes their intersection, obtaining accurate edge features that meet the requirements, as sketched below.
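A minimal sketch of steps S50 and S51; the Canny thresholds are illustrative, and the small dilation before intersecting is an added practical touch so near-coincident edge pixels still overlap:

```python
import cv2
import numpy as np

def accurate_edges(original_gray, first_edge_mask):
    """S50: Canny on the original image gives the second edge feature.
    S51: intersect it with the first edge feature from the fig. 5 pipeline.
    Thresholds and the 3x3 dilation are illustrative choices."""
    second_edge = cv2.Canny(original_gray, 50, 150)
    # dilate slightly so near-coincident edge pixels still intersect
    second_edge = cv2.dilate(second_edge, np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(first_edge_mask, second_edge)
```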
In step S13, the operation scene is constructed based on the matching result.
In another aspect, the present invention also provides an operation scene modeling system for a distribution network live working robot, which may include a processor configured to cause a machine to perform any of the modeling methods described above.
In yet another aspect, the present invention also provides a storage medium which may store instructions which are readable by a machine to cause the machine to perform any of the modeling methods described above.
Through the above technical scheme, the operation scene modeling method and system for the distribution network live working robot achieve rapid modeling of a specific scene by establishing a standard component library for the specific operation scene, performing component matching based on images captured on site, and finally performing three-dimensional modeling based on the matching result. Compared with common prior-art methods, this is fast and efficient, and it matters for the environment perception and precise target task operation of the distribution network live working robot. The main benefits are as follows:
the first non-dense depth information is beneficial to distinguishing the relation between the foreground and the background, the depth information in front of the robot is fully utilized, the interference of background environment information is eliminated, sundries in the background can be effectively removed, and the difficulty of the three-dimensional reconstruction process is reduced;
Second, the modeling method supports accurate position identification of a designated target of interest: automatic matching against the size parameters of common distribution network equipment in the dataset, combined with the measured parameters, finally yields pose information more accurate than direct measurement. The resulting data need only store the equipment model and pose information rather than a directly measured model file, which greatly simplifies the model.
Third, measuring and modeling with a binocular stereo camera avoids a structured-light camera's sensitivity to infrared heat sources, making the method better suited to the outdoor operating environment of the distribution network live working robot.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the embodiments of the present invention are not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the embodiments of the present invention within the technical idea of the embodiments of the present invention, and the simple modifications all belong to the protection scope of the embodiments of the present invention.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, the embodiments of the present invention will not be described separately for the various possible combinations.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In addition, various different embodiments of the present invention may be arbitrarily combined with each other, and the embodiments of the present invention should be considered as disclosed in the disclosure of the embodiments of the present invention as long as the embodiments do not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. An operation scene modeling method for a distribution network live working robot, characterized by comprising the following steps:
acquiring a field image;
matching based on the image and a pre-established standard component library; and
constructing an operation scene according to the matching result.
2. The modeling method of claim 1, wherein the acquiring of the field image specifically comprises:
acquiring images of the scene with a preset binocular vision system, wherein the binocular vision system comprises two mechanical arms, a circular target mounted at the end of one of the arms, and a binocular camera mounted at the end of the other arm.
3. The modeling method of claim 2, wherein the constructing of the operation scene according to the matching result specifically comprises:
taking the installation coordinate system of the arm carrying the circular target as a world coordinate system, and acquiring absolute coordinates of the circular target in the world coordinate system by reading the angles of that arm's joints;
controlling the other arm to move, and controlling the binocular camera to measure relative coordinates of the circular target in the binocular camera's coordinate system multiple times;
back-computing the pose of the camera coordinate system from the absolute coordinates and the relative coordinates; and
determining absolute coordinates of the matching result from the relative coordinates of the matching result and that pose.
4. The modeling method of claim 1, wherein the matching based on the image and a pre-established standard component library specifically comprises:
performing a bilateral filtering operation, a contour recognition operation, and a line region filling operation on the image; and
removing non-target areas in the image.
5. The modeling method of claim 4, wherein the performing of the bilateral filtering operation on the image specifically comprises:
performing an RGB filtering operation, a bilateral filtering operation, and a binarization operation on the image.
6. The modeling method of claim 5, wherein the target of the image is a line, and the performing of the contour recognition operation on the image specifically comprises:
matching the first edge feature of the line with a square pixel block, wherein the diagonal length of the square pixel block exceeds the diameter of the line by a predetermined number of pixels.
7. The modeling method of claim 6, wherein the performing of the line region filling operation on the image specifically comprises:
marking all points of the line with a flood fill approach to form a marked region;
filling blank spots in the marked region with a hole filling method to update the marked region;
processing each updated marked region with a connected component filtering method; and
screening out the marked regions containing more pixels than a predetermined threshold to update the first edge feature.
8. The modeling method of claim 7, wherein the removing of non-target regions in the image specifically comprises:
processing the original image with a Canny edge detection algorithm to obtain a second edge feature of the image; and
obtaining the accurate edge feature of the target from the first edge feature and the second edge feature.
9. An operation scene modeling system for a distribution network live working robot, characterized in that the modeling system comprises a processor configured to cause a machine to perform the modeling method of any one of claims 1 to 8.
10. A storage medium storing instructions for reading by a machine to cause the machine to perform a modeling method according to any of claims 1 to 8.
CN202010902915.7A 2020-09-01 2020-09-01 Operation scene modeling method and system for distribution network live working robot Pending CN112102473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902915.7A CN112102473A (en) 2020-09-01 2020-09-01 Operation scene modeling method and system for distribution network live working robot


Publications (1)

Publication Number Publication Date
CN112102473A 2020-12-18

Family

ID=73757241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902915.7A Pending CN112102473A (en) 2020-09-01 2020-09-01 Operation scene modeling method and system for distribution network live working robot

Country Status (1)

Country Link
CN (1) CN112102473A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306393A (en) * 2011-08-02 2012-01-04 清华大学 Method and device for deep diffusion based on contour matching
CN108510550A (en) * 2018-03-29 2018-09-07 轻客智能科技(江苏)有限公司 A kind of binocular camera automatic calibration method and device
CN108648232A (en) * 2018-05-04 2018-10-12 北京航空航天大学 A kind of binocular stereo visual sensor integral type scaling method based on accurate two-axis platcform
CN108988197A (en) * 2018-06-01 2018-12-11 南京理工大学 A kind of method for fast reconstruction at hot line robot system livewire work scene
CN110514114A (en) * 2019-07-30 2019-11-29 江苏海事职业技术学院 A kind of small objects space position calibration method based on binocular vision
CN110370286A (en) * 2019-08-13 2019-10-25 西北工业大学 Dead axle motion rigid body spatial position recognition methods based on industrial robot and monocular camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
祝晶; 章昊; 唐旭明; 韩先国; 甄武东; 许?; 董二宝: "一种配网带电作业机器人的目标线路识别与定位算法研究" [Research on a target line identification and positioning algorithm for a distribution network live working robot], 工业控制计算机 (Industrial Control Computer), no. 03, pages 26-27 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112947424A (en) * 2021-02-01 2021-06-11 国网安徽省电力有限公司淮南供电公司 Distribution network operation robot autonomous operation path planning method and distribution network operation system
CN114750154A (en) * 2022-04-25 2022-07-15 贵州电网有限责任公司 Dynamic target identification, positioning and grabbing method for distribution network live working robot


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination