CN116079791A - Robot vision recognition system and robot with same - Google Patents

Robot vision recognition system and robot with same

Info

Publication number
CN116079791A
Authority
CN
China
Prior art keywords
unit
axis
executed
recognition system
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211716648.XA
Other languages
Chinese (zh)
Inventor
支峻楠
杨梅
邵萌萌
许国威
付杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Qiushi Industrial Technology Research Institute
Original Assignee
Qingdao Qiushi Industrial Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Qiushi Industrial Technology Research Institute filed Critical Qingdao Qiushi Industrial Technology Research Institute
Priority to CN202211716648.XA priority Critical patent/CN116079791A/en
Publication of CN116079791A publication Critical patent/CN116079791A/en
Pending legal-status Critical Current

Classifications

    • B: Performing operations; transporting
    • B25: Hand tools; portable power-driven tools; manipulators
    • B25J: Manipulators; chambers provided with manipulation devices
    • B25J 19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02: Sensing devices
    • B25J 19/021: Optical sensing devices
    • B25J 19/023: Optical sensing devices including video camera means
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot vision recognition system and a robot having the system. The system comprises: an acquisition module, consisting of at least four cameras on a horizontal plane, for jointly acquiring image information of an executed unit; an identification module, which takes the extension line of the axis point of the execution unit as the front, uses it as a proportional scale for the plurality of acquired images, and carries out frame-by-frame identification of the executed unit located in the identification area of the image information; a processing module, for sequentially performing image-processing analysis on the image information, building a three-dimensional model, and building dynamic three-dimensional simulation data of the model based on the analysis results; and a data processing module. Through optimized processing of the acquired images, the invention makes the calculation of the given mechanical-arm motion parameters more accurate, thereby achieving the technical purpose of reducing the deviation between the grabbing axis and the axis of the object.

Description

Robot vision recognition system and robot with same
Technical Field
The invention relates to the technical field of vision systems, in particular to an identification system for robot vision and a robot with the system.
Background
A robot vision recognition system is mainly used to recognize objects, or to help the robot avoid obstacles while moving. It recognizes on the basis of collected image information, applies an algorithm, and has the robot make corresponding actions to complete industrial production behaviour or to avoid obstacles ahead.
For example, publication number CN109693263A, published 2019-04-30, discloses a recognition system for robot vision comprising a vision sensor, an image acquisition module and an image processing module connected in sequence; the image acquisition module is provided with an illumination module; the system further comprises a cloud storage server, an image integration module and a data comparison module, the image integration module being connected to the cloud storage server and the data comparison module respectively. Its stated beneficial effects are: (1) the database processing system, which comprises the cloud storage server, the image integration module and the data comparison module, can integrate and store received images; and (2) the image scanning module can scan an object, which is then identified through the database processing system.
Or, for example, publication number CN109968319A, published 2019-07-05, discloses a robot vision recognition system and method for plug wires. The system comprises a detection-station carrier for bearing and fixing the power distribution terminal under test; a distribution-terminal detection flow line body comprising a plurality of detection stations; a wire-plugging robot for plugging and unplugging wires on the terminal under test at a detection station; and distribution-terminal detection equipment for detecting the terminal after its wires have been inserted at the detection station. In that invention, a detection station carries and fixes the terminal under test, the wire-plugging robot performs the wiring operation, the detection equipment tests the terminal after wiring, and the detection result is transmitted to the unloading device, which conveniently sorts qualified products from unqualified ones.
As can be seen from the prior art, including the two patents above, existing robots are mainly applied in industrial production to perform operations such as grabbing, carrying and welding. In the step of grabbing objects conveyed on an assembly line in particular, the target is always in motion, so the capture unit must capture pictures of the moving object; the captured pictures must then be processed and fed back to the processor, which has the computer program execute on the basis of those pictures. However, image processing recognizes the profile of a moving object inaccurately: the manipulator can still grab the moving object, but a certain gap remains between the grabbing axis and the axis of the object. How to reduce this gap, so that the deviation between the grabbing axis and the object axis becomes smaller, is a problem awaiting a good solution.
Disclosure of Invention
The invention aims to provide a recognition system for robot vision and a robot having the system, so as to solve the above problems.
In order to achieve the above object, the present invention provides the following technical solution: a recognition system for robot vision, comprising:
the acquisition module consists of at least four cameras on a horizontal plane and is used for jointly acquiring image information of an executed unit;
the identification module, which takes the extension line of the axis point of the execution unit as the front, uses it as a proportional scale for the plurality of acquired images, and carries out frame-by-frame identification of the executed unit located in the identification area of the image information;
the processing module is used for sequentially carrying out image processing analysis on the image information and establishing a three-dimensional model, and establishing dynamic three-dimensional simulation dynamic data of the three-dimensional model based on the result of the image processing analysis;
and the data processing module is used for calculating the dynamic compensation parameter of the executed unit relative to the executing unit according to the dynamic three-dimensional simulation dynamic data.
Preferably, the acquisition module comprises an indication guide unit provided on each camera. The indication guide units emit a plurality of indication light spots lying on the same straight line to indicate that the acquisition positions of the cameras match each other; the angle between the emission direction of the indication light spots and the ground is a preset inclination angle, and after the acquisition module is levelled, each indication guide unit emits its indication light spot onto the belt surface of the conveyor belt conveying the executed unit at that preset inclination angle.
Preferably, the acquisition module comprises an XYZ-axis electric moving mechanism; the camera is mounted on a cradle head of the XYZ-axis electric moving mechanism, and the acquisition position of the camera is moved along the X, Y and Z axes by moving the camera on the cradle head, so that the acquisition position of the camera stays coaxial with the indication light spot.
Preferably, the acquisition module further comprises an image processing unit for segmenting the pixel units captured in the camera's acquisition picture, identifying and extracting N pixel units of the picture (where N ≥ 3) by an image mask method, named $P_i$ respectively, where $i \in [1, N]$;
and cutting, with the $P_i$ coordinate points as centres, N pictures of pixel size 200 × 200;
the pixel unit is the executed unit.
Preferably, the small image surrounding each pixel unit $P_i$ is identified, Gaussian blur is performed on the surrounding area, the blurred area is picked up and filled to identify the colour range, and a contour image of the pixel unit's periphery centred on $n_i + 1$ coordinate points is generated.
Preferably, the coordinate points are divided into core points and boundary points; the coordinate points classified as boundary points in the images are output respectively, a polygonal frame is drawn on the pixel unit according to the boundary points in each image, and the area inside the polygonal frame is the detected pixel unit.
Preferably, the three-dimensional model provides the data processing module with the executed-unit coordinate parameters, wherein: the Z-axis direction vector $[a, b, c]^T$ is recorded as vector $\vec{n}$, and the moving-direction vector in the dynamic three-dimensional simulation data is recorded as $\vec{m}$, giving the coordinate-system transformation formula

$$\vec{m} = R_{ZYX}\,\vec{n}$$

where the transformation matrix $R_{ZYX}$ (the standard ZYX Euler rotation matrix) has the expression

$$R_{ZYX} = \begin{bmatrix} c\alpha\,c\beta & c\alpha\,s\beta\,s\gamma - s\alpha\,c\gamma & c\alpha\,s\beta\,c\gamma + s\alpha\,s\gamma \\ s\alpha\,c\beta & s\alpha\,s\beta\,s\gamma + c\alpha\,c\gamma & s\alpha\,s\beta\,c\gamma - c\alpha\,s\gamma \\ -s\beta & c\beta\,s\gamma & c\beta\,c\gamma \end{bmatrix}$$

where α is the adjustment Euler angle corresponding to the Z axis, β the adjustment Euler angle corresponding to the Y axis and γ the adjustment Euler angle corresponding to the X axis; cos α is abbreviated cα, sin α as sα, and the other trigonometric functions likewise.
Preferably, the executed-unit dynamic compensation parameters are calculated from the transformation-matrix equation: the vector parameters of $\vec{m}$ and $\vec{n}$ are substituted into $\vec{m} = R_{ZYX}\,\vec{n}$, and the X-, Y- and Z-axis adjustment Euler angle parameters of the execution unit are obtained.
A robot comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the executed-unit dynamic compensation parameters output by the data processing module in the steps of the above recognition system for robot vision, so as to drive the execution unit to complete the predetermined station operation on the executed unit.
Preferably, the execution unit is specifically a mechanical arm with a mechanical claw.
In the above technical scheme, the recognition system for robot vision and the robot having the system provided by the invention have the following beneficial effect: based on processing of the acquired pictures, the pixel units in the acquired images are identified and segmented, so that more accurate contour parameters are obtained and the calculation of the given executed unit's three-dimensional coordinate parameters is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them.
FIG. 1 is a schematic illustration of an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an embodiment of the present invention;
FIG. 3 is a schematic diagram of a unit structure under the acquisition module according to an embodiment of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
As shown in FIGS. 1-2, a recognition system for robot vision and a robot having the same comprise:
the acquisition module consists of at least four cameras on a horizontal plane and is used for jointly acquiring image information of an executed unit;
the identification module, which takes the extension line of the axis point of the execution unit as the front, uses it as a proportional scale for the plurality of acquired images, and carries out frame-by-frame identification of the executed unit located in the identification area of the image information;
the processing module is used for sequentially carrying out image processing analysis on the image information and establishing a three-dimensional model, and establishing dynamic three-dimensional simulation dynamic data of the three-dimensional model based on the result of the image processing analysis;
and the data processing module is used for calculating the dynamic compensation parameters of the executed units relative to the executing units according to the dynamic three-dimensional simulation dynamic data.
Further, the robot comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; the computer program, when executed by the processor, implements the executed-unit dynamic compensation parameters output by the data processing module in the steps of the above vision recognition system, so as to drive the execution unit to complete the predetermined station operation on the executed unit. The execution unit is specifically a mechanical arm with a mechanical claw.
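By way of illustration only, the sketch below shows how such compensation parameters might be consumed on the robot side; apply_compensation and the use of SciPy rotations are assumptions of this sketch, not part of the disclosure, and a real robot would act through its vendor's motion API.

```python
# Illustrative sketch (assumed interface): compose the ZYX adjustment
# Euler angles output by the data processing module onto the gripper's
# current orientation before the grab is executed.
import numpy as np
from scipy.spatial.transform import Rotation


def apply_compensation(current_pose: Rotation,
                       alpha: float, beta: float, gamma: float) -> Rotation:
    """Return the corrected gripper orientation, where alpha, beta and
    gamma are the adjustment Euler angles about the Z, Y and X axes."""
    adjustment = Rotation.from_euler("ZYX", [alpha, beta, gamma])
    return adjustment * current_pose


# Usage with illustrative angle values (radians).
pose = Rotation.identity()
corrected = apply_compensation(pose, 0.05, -0.02, 0.01)
print(np.round(corrected.as_matrix(), 4))
```

Composing the adjustment on the left applies it in the base frame; a particular controller may instead expect tool-frame angles, a detail the patent leaves open.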
Further, the three-dimensional model in the above embodiment provides the data processing module with the executed-unit coordinate parameters, wherein: the Z-axis direction vector $[a, b, c]^T$ is recorded as vector $\vec{n}$, and the moving-direction vector in the dynamic three-dimensional simulation data is recorded as $\vec{m}$, giving the coordinate-system transformation formula

$$\vec{m} = R_{ZYX}\,\vec{n}$$

where the transformation matrix $R_{ZYX}$ (the standard ZYX Euler rotation matrix) has the expression

$$R_{ZYX} = \begin{bmatrix} c\alpha\,c\beta & c\alpha\,s\beta\,s\gamma - s\alpha\,c\gamma & c\alpha\,s\beta\,c\gamma + s\alpha\,s\gamma \\ s\alpha\,c\beta & s\alpha\,s\beta\,s\gamma + c\alpha\,c\gamma & s\alpha\,s\beta\,c\gamma - c\alpha\,s\gamma \\ -s\beta & c\beta\,s\gamma & c\beta\,c\gamma \end{bmatrix}$$

where α is the adjustment Euler angle corresponding to the Z axis, β the adjustment Euler angle corresponding to the Y axis and γ the adjustment Euler angle corresponding to the X axis; cos α is abbreviated cα, cos β as cβ, cos γ as cγ, and the other trigonometric functions likewise (sin α as sα, etc.).
Furthermore, the executed-unit dynamic compensation parameters are calculated from the transformation-matrix equation: the vector parameters of $\vec{m}$ and $\vec{n}$ are substituted into $\vec{m} = R_{ZYX}\,\vec{n}$, and the X-, Y- and Z-axis adjustment Euler angle parameters of the execution unit are obtained, as illustrated in the sketch below.
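A worked numerical sketch of this step follows, assuming SciPy's align_vectors as the solver (the patent does not prescribe a numerical method) and illustrative values for the vectors:

```python
# Minimal sketch: find ZYX adjustment Euler angles (alpha, beta, gamma)
# such that R_ZYX applied to the unit's Z-axis vector n points along
# the simulated moving direction m.
import numpy as np
from scipy.spatial.transform import Rotation

n = np.array([0.0, 0.0, 1.0])      # Z-axis direction vector [a, b, c]^T
m = np.array([0.3, -0.2, 0.93])    # moving-direction vector (illustrative)
m /= np.linalg.norm(m)

# align_vectors returns a rotation R with R.apply(n) ~= m.
rot, _ = Rotation.align_vectors(m.reshape(1, 3), n.reshape(1, 3))
alpha, beta, gamma = rot.as_euler("ZYX")   # intrinsic Z, then Y, then X

print(f"alpha (Z axis) = {np.degrees(alpha):6.2f} deg")
print(f"beta  (Y axis) = {np.degrees(beta):6.2f} deg")
print(f"gamma (X axis) = {np.degrees(gamma):6.2f} deg")
```

For nearly antiparallel vectors the aligning rotation is ill-conditioned, so a production implementation would need a guard for that case.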
Example 2
As shown in FIG. 3, the acquisition module in the above vision recognition system carries out recognition, acquisition, pixel processing and contour recognition of the executed unit, as follows.
In this technical scheme, the acquisition module comprises an indication guide unit provided on each camera. The indication guide units emit a plurality of indication light spots lying on the same straight line to indicate that the acquisition positions of the cameras match each other; the angle between the emission direction of the indication light spots and the ground is a preset inclination angle, and after the acquisition module is levelled, each indication guide unit emits its indication light spot onto the belt surface of the conveyor belt conveying the executed unit at that preset inclination angle, the preset inclination angle lying in the range of 0° to 90° inclusive.
The cameras are mounted on cradle heads of an XYZ-axis electric moving mechanism, and the acquisition position of each camera is moved along the X, Y and Z axes by moving the camera on its cradle head, so that the acquisition position stays coaxial with the indication light spot; one plausible realisation of this alignment is sketched below.
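One plausible reading of this coaxial-alignment step is a small visual-servo loop, sketched here under stated assumptions: the brightest-pixel spot detector is a crude stand-in for the indication light spot, and gantry.move_xyz and camera.read are hypothetical interfaces rather than the disclosed mechanism.

```python
# Hedged sketch of camera/light-spot alignment; the hardware interfaces
# (camera.read, gantry.move_xyz) are hypothetical.
import cv2
import numpy as np


def spot_offset(frame_bgr: np.ndarray) -> tuple[float, float]:
    """Locate the brightest point (taken here as the indication light
    spot) and return its pixel offset from the image centre."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)
    _, _, _, max_loc = cv2.minMaxLoc(gray)
    h, w = gray.shape
    return max_loc[0] - w / 2.0, max_loc[1] - h / 2.0


def align_camera(gantry, camera, gain=0.01, tol=2.0, max_iters=50) -> bool:
    """Nudge the XYZ stage until the spot sits on the optical axis,
    i.e. the acquisition position is coaxial with the indication spot."""
    for _ in range(max_iters):
        dx, dy = spot_offset(camera.read())
        if abs(dx) < tol and abs(dy) < tol:
            return True
        gantry.move_xyz(-gain * dx, -gain * dy, 0.0)  # hypothetical API
    return False
```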
Further, the acquisition module comprises an image processing unit for segmenting the pixel units captured in the camera's acquisition picture. The image processing unit identifies and extracts N pixel units of the picture (where N ≥ 3) by an image mask method, names them $P_i$ with $i \in [1, N]$, and cuts, with the $P_i$ coordinate points as centres, N pictures of pixel size 200 × 200;
the pixel unit is the executed unit.
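A minimal sketch of this masking-and-cropping step is given below; the patent names only "an image mask method", so the binary-mask input and the connected-component analysis are assumptions of the sketch.

```python
# Sketch: extract N >= 3 pixel units P_i from the acquisition picture
# via a binary image mask, then cut a 200 x 200 picture centred on
# each unit's coordinate point. Assumes the frame is larger than 200 px.
import cv2
import numpy as np


def extract_pixel_units(frame_bgr: np.ndarray, mask: np.ndarray,
                        n_min: int = 3, crop: int = 200):
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    units = centroids[1:]                    # drop the background label
    assert len(units) >= n_min, "expected at least N >= 3 pixel units"

    h, w = frame_bgr.shape[:2]
    half = crop // 2
    crops = []
    for cx, cy in units:
        x0 = int(np.clip(cx - half, 0, w - crop))   # clamp to the border
        y0 = int(np.clip(cy - half, 0, h - crop))
        crops.append(frame_bgr[y0:y0 + crop, x0:x0 + crop])
    return units, crops    # P_i coordinate points and their 200 x 200 cuts
```

Clamping the window at the image border keeps every cut at exactly 200 × 200 pixels.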
Further, in the above embodiment, the image processing unit is also used to identify the small image surrounding each pixel unit $P_i$: Gaussian blur is performed on the surrounding area, the blurred area is picked up and filled to identify the colour range, and a contour image of the pixel unit's periphery centred on $n_i + 1$ coordinate points is generated; a plausible realisation is sketched below.
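The blur, pick-up, fill and contour sequence could be realised as in the following sketch; the HSV colour range and kernel sizes are placeholder values, and the unit_mask input is assumed to come from the extraction step above.

```python
# Sketch of the periphery-contour step: blur the surroundings of a
# pixel unit, pick up the blurred region by an assumed colour range,
# fill small gaps, and return the contour around the unit's periphery.
import cv2
import numpy as np


def peripheral_contour(crop_bgr: np.ndarray, unit_mask: np.ndarray):
    blurred = cv2.GaussianBlur(crop_bgr, (15, 15), 0)
    # Keep the unit itself sharp; blur only its surroundings.
    surround = np.where(unit_mask[..., None] > 0, crop_bgr, blurred)

    hsv = cv2.cvtColor(surround, cv2.COLOR_BGR2HSV)
    picked = cv2.inRange(hsv, (0, 0, 40), (180, 60, 220))  # placeholder range
    picked = cv2.morphologyEx(picked, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))   # fill small gaps

    contours, _ = cv2.findContours(picked, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```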
Furthermore, in the above technical solution, the coordinate points are divided into core points and boundary points; the coordinate points classified as boundary points in the images are output respectively, a polygonal frame is drawn on the pixel unit according to the boundary points in each image, and the area inside the polygonal frame is the detected pixel unit. One possible realisation of this split is sketched below.
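The split into core points and boundary points reads like a density-based classification; the sketch below uses DBSCAN as a stand-in (an assumption, since the patent does not name a clustering method) and draws the polygonal frame with OpenCV.

```python
# Sketch: classify contour coordinate points into core and boundary
# points (DBSCAN is an assumed stand-in for the unnamed method), then
# draw the polygonal frame whose interior is the detected pixel unit.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN


def detect_unit_polygon(image: np.ndarray, points: np.ndarray):
    db = DBSCAN(eps=5.0, min_samples=4).fit(points)
    core = np.zeros(len(points), dtype=bool)
    core[db.core_sample_indices_] = True
    boundary = points[(db.labels_ != -1) & ~core]   # clustered, non-core

    if len(boundary) >= 3:
        hull = cv2.convexHull(boundary.astype(np.int32))
        cv2.polylines(image, [hull], isClosed=True,
                      color=(0, 255, 0), thickness=2)
        return hull   # interior of this frame = detected pixel unit
    return None
```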
In summary, based on the processing of the acquired images, the pixel units in the image acquisition are identified and segmented to obtain more accurate contour parameters, so that the calculation of the given executed unit's three-dimensional coordinate parameters is more accurate.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present invention have been described in detail with reference to specific examples, which are provided to facilitate understanding of the method and core ideas of the present invention. Meanwhile, those skilled in the art may vary the specific embodiments and the application scope in accordance with the ideas of the present invention; in view of the above, this description should not be construed as limiting the present invention.
The embodiment of the application also provides a specific implementation manner of the electronic device capable of implementing all the steps in the method in the embodiment, and the electronic device specifically comprises the following contents:
a processor (processor), a memory (memory), a communication interface (Communications Interface), and a bus;
the processor, the memory and the communication interface complete communication with each other through the bus;
the processor is configured to invoke the computer program in the memory, and when the processor executes the computer program, the processor implements all the steps in the method in the above embodiment.
The embodiments of the present application also provide a computer-readable storage medium capable of implementing all the steps of the methods in the above embodiments, the computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all the steps of the methods in the above embodiments.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment mainly describes its differences from the others. In particular, for a hardware-plus-program class embodiment, the description is relatively simple since it is substantially similar to the method embodiment; see the partial description of the method embodiment for the relevant points.

Although the present description provides method operational steps as described in the examples or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one way of performing the order of steps and does not represent a unique order of execution. When implemented in an actual device or end product, the steps may be executed sequentially or in parallel (e.g., in a parallel-processor or multi-threaded processing environment, or even in a distributed data processing environment) as illustrated by the embodiments or the figures.

The terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, it is not excluded that additional identical or equivalent elements may be present in a process, method, article or apparatus that comprises a described element.

For convenience of description, the above devices are described as being divided into various modules by function. Of course, when implementing the embodiments of the present disclosure, the functions of the modules may be implemented in the same piece or multiple pieces of software and/or hardware, or a module that implements one function may be implemented by a combination of multiple sub-modules or sub-units.

The above-described apparatus embodiments are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices or units, and may be electrical, mechanical or of other forms.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be appreciated by those skilled in the art that the embodiments of the present description may be provided as a method, a system, or a computer program product. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.

In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment mainly describes its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple; see the relevant parts of the description of the method embodiments.

In the description of the present specification, a description referring to the terms "one embodiment", "some embodiments", "examples", "specific examples", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present specification.
In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the different embodiments or examples described in this specification, and the features of the different embodiments or examples, may be combined by those skilled in the art without contradiction.

The foregoing is merely an example of an embodiment of the present disclosure and is not intended to limit it. Various modifications and variations of the illustrative embodiments will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like within the spirit and principles of the embodiments of the present specification should be included in the scope of the claims of the embodiments of the present specification.

Claims (10)

1. A recognition system for robot vision, comprising:
the acquisition module consists of at least four cameras on a horizontal plane and is used for jointly acquiring image information of an executed unit;
the identification module, which takes the extension line of the axis point of the execution unit as the front, uses it as a proportional scale for the plurality of acquired images, and carries out frame-by-frame identification of the executed unit located in the identification area of the image information;
the processing module is used for sequentially carrying out image processing analysis on the image information and establishing a three-dimensional model, and establishing dynamic three-dimensional simulation dynamic data of the three-dimensional model based on the result of the image processing analysis;
and the data processing module is used for calculating the dynamic compensation parameter of the executed unit relative to the executing unit according to the dynamic three-dimensional simulation dynamic data.
2. The recognition system for robot vision according to claim 1, wherein the acquisition module comprises an indication guide unit provided on each camera; the indication guide units emit a plurality of indication light spots lying on the same straight line to indicate that the acquisition positions of the plurality of cameras match each other; the angle between the emission direction of the indication light spots and the ground is a preset inclination angle; after the acquisition module is levelled, the indication guide unit emits the indication light spot onto the belt surface of the conveyor belt conveying the executed unit at the preset inclination angle; the preset inclination angle lies in the range of 0° to 90° inclusive.
3. The recognition system for robot vision according to claim 2, wherein the acquisition module comprises an XYZ-axis electric moving mechanism; the camera is mounted on a cradle head of the XYZ-axis electric moving mechanism, and the acquisition position of the camera is moved along the X, Y and Z axes by moving the camera on the cradle head, so that the acquisition position of the camera is kept coaxial with the indication light spot.
4. The recognition system for robot vision according to claim 1, wherein the acquisition module further comprises an image processing unit for segmenting the pixel units captured in the camera's acquisition picture, identifying and extracting N pixel units of the picture (where N ≥ 3) by an image mask method, named $P_i$ respectively, where $i \in [1, N]$;
the image processing unit is also used for cutting, with the $P_i$ coordinate points as centres, N pictures of pixel size 200 × 200;
the pixel unit is the executed unit.
5. The recognition system for robot vision according to claim 4, wherein the image processing unit is further configured to identify the small image surrounding each pixel unit $P_i$, perform Gaussian blur on the surrounding area, pick up the blurred area and fill it to identify the colour range, and generate a contour image of the pixel unit's periphery centred on $n_i + 1$ coordinate points.
6. The recognition system for robot vision according to claim 5, wherein the coordinate points are divided into core points and boundary points; the coordinate points classified as boundary points in the images are output respectively, a polygonal frame is drawn on the pixel unit according to the boundary points in each image, and the area inside the polygonal frame is the detected pixel unit.
7. The recognition system for robot vision of claim 1, wherein the three-dimensional model provides the data processing module with the executed-unit coordinate parameters, wherein: the Z-axis direction vector $[a, b, c]^T$ is recorded as vector $\vec{n}$, and the moving-direction vector in the dynamic three-dimensional simulation data is recorded as $\vec{m}$, giving the coordinate-system transformation formula

$$\vec{m} = R_{ZYX}\,\vec{n}$$

where the transformation matrix $R_{ZYX}$ (the standard ZYX Euler rotation matrix) has the expression

$$R_{ZYX} = \begin{bmatrix} c\alpha\,c\beta & c\alpha\,s\beta\,s\gamma - s\alpha\,c\gamma & c\alpha\,s\beta\,c\gamma + s\alpha\,s\gamma \\ s\alpha\,c\beta & s\alpha\,s\beta\,s\gamma + c\alpha\,c\gamma & s\alpha\,s\beta\,c\gamma - c\alpha\,s\gamma \\ -s\beta & c\beta\,s\gamma & c\beta\,c\gamma \end{bmatrix}$$

where α is the adjustment Euler angle corresponding to the Z axis, β the adjustment Euler angle corresponding to the Y axis and γ the adjustment Euler angle corresponding to the X axis; cos α is abbreviated cα, cos β as cβ, cos γ as cγ, and the other trigonometric functions likewise.
8. The robot vision recognition system of claim 7, wherein the executed-unit dynamic compensation parameters are calculated from the transformation-matrix equation: the vector parameters of $\vec{m}$ and $\vec{n}$ are substituted into $\vec{m} = R_{ZYX}\,\vec{n}$, and the X-, Y- and Z-axis adjustment Euler angle parameters of the execution unit are obtained.
9. A robot comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the executed-unit dynamic compensation parameters output by the data processing module in the steps of the recognition system for robot vision of any one of claims 1 to 8, so as to drive the execution unit to complete the predetermined station operation on the executed unit.
10. A robot according to claim 9, characterized in that the execution unit is in particular a robotic arm with a gripper.
CN202211716648.XA 2022-12-29 2022-12-29 Robot vision recognition system and robot with same Pending CN116079791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211716648.XA CN116079791A (en) 2022-12-29 2022-12-29 Robot vision recognition system and robot with same


Publications (1)

Publication Number Publication Date
CN116079791A 2023-05-09

Family

ID=86207606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211716648.XA Pending CN116079791A (en) 2022-12-29 2022-12-29 Robot vision recognition system and robot with same

Country Status (1)

Country Link
CN (1) CN116079791A (en)

Similar Documents

Publication Publication Date Title
KR20180120647A (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN110176078B (en) Method and device for labeling training set data
CN113610921B (en) Hybrid workpiece gripping method, apparatus, and computer readable storage medium
CN110065068B (en) Robot assembly operation demonstration programming method and device based on reverse engineering
CN107957246B (en) binocular vision-based method for measuring geometric dimension of object on conveyor belt
CN114758236A (en) Non-specific shape object identification, positioning and manipulator grabbing system and method
CN114355953B (en) High-precision control method and system of multi-axis servo system based on machine vision
CN112577447B (en) Three-dimensional full-automatic scanning system and method
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
CN109215075B (en) Positioning and identifying system and method for workpiece in material grabbing of industrial robot
CN115358965A (en) Welding deformation adaptive linear weld grinding track generation method and device
CN109143167B (en) Obstacle information acquisition device and method
CN116638521A (en) Mechanical arm positioning and grabbing method, system, equipment and storage medium for target object
CN115629066A (en) Method and device for automatic wiring based on visual guidance
CN112348768A (en) Method and system for aligning wire contact with insertion hole of connector
CN113601501B (en) Flexible operation method and device for robot and robot
US20240029234A1 (en) Method for inspecting a correct execution of a processing step of components, in particular a wiring harness, data structure, and system
CN112529856A (en) Method for determining the position of an operating object, robot and automation system
CN116079791A (en) Robot vision recognition system and robot with same
CN214200141U (en) Robot repeated positioning precision measuring system based on vision
CN115164751A (en) Riveting aperture size detection system and method
CN115346211A (en) Visual recognition method, visual recognition system and storage medium
Fröhlig et al. Three-dimensional pose estimation of deformable linear object tips based on a low-cost, two-dimensional sensor setup and AI-based evaluation
Xiang Industrial automatic assembly technology based on machine vision recognition
CN113808201A (en) Target object detection method and guided grabbing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination