CN117549301A - Visual guidance method, visual guidance device, readable storage medium and visual guidance platform - Google Patents
- Publication number: CN117549301A (application number CN202311570535.8A)
- Authority: CN (China)
- Prior art keywords: information; component; visual; template; camera
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators; characterised by task planning, object-oriented languages
- B25J9/161—Programme controls characterised by the control system, structure, architecture; hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/1697—Programme controls characterised by use of sensors other than normal servo-feedback; vision controlled systems
Abstract
The application provides a visual guidance method, a visual guidance device, a readable storage medium and a visual guidance platform. The visual guidance method is applied to a visual guidance platform that includes a visual component and an execution component, and comprises the following steps: acquiring hardware configuration information and object information from the task information; configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base; generating a target object template according to the object information and a first object template in the template database; and controlling the visual component and the execution component to execute a target guidance task based on the component parameters, wherein the target guidance task is a guidance task generated based on the task information and the target object template.
Description
Technical Field
The application relates to the technical field of robots, in particular to a visual guiding method, a visual guiding device, a readable storage medium and a visual guiding platform.
Background
Factories have a large number of scenarios that require intelligent, automated mechanical processing, including unordered workpiece loading and unloading, unstacking of articles, and high-precision positioning and assembly of workpieces. AI vision paired with a mechanical arm can achieve rapid positioning and guide the mechanical arm to execute operations accurately, reducing both labor and operating costs.
In the related art, a vision-guided mechanical arm project involves many different types of scenes. The service modules and technical architecture corresponding to each scene are similar to a certain degree, so repeated development is common and the efficiency of development and launch is low.
Disclosure of Invention
The present application aims to solve one of the technical problems existing in the prior art or related technologies.
To this end, a first aspect of the present application proposes a visual guidance method.
A second aspect of the present application proposes a visual guide device.
A third aspect of the present application proposes a visual guide device.
A fourth aspect of the present application proposes a readable storage medium.
A fifth aspect of the present application proposes a computer program product.
A sixth aspect of the present application proposes a visual guidance platform.
In view of this, according to a first aspect of the present application, there is provided a visual guidance method applied to a visual guidance platform, the visual guidance platform including a visual component and an execution component, the visual guidance method including: acquiring hardware configuration information and object information from the task information; configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base; generating a target object template according to the object information and a first object template in the template database; and controlling the visual component and the execution component to execute a target guidance task based on the component parameters, wherein the target guidance task is a guidance task generated based on the task information and the target object template.
The visual guidance method is applied to a visual guidance platform in which a visual component and an execution component are arranged. The visual component is used to collect image data, so that the visual guidance platform can control the execution component based on the collected image data; the execution component is used to execute specific operations such as unordered workpiece loading and unloading, unstacking of articles, and high-precision positioning and assembly of workpieces.
In the technical scheme, the task information is an item task required to be executed by the visual guidance platform, and the task information comprises visual components for executing the task and hardware configuration information of the execution components and object information of a workpiece object in the task.
In the technical scheme, after the hardware configuration information is acquired, the component parameters of the visual component and the component parameters of the execution component of the visual guiding platform can be configured based on the hardware configuration information, so that the visual guiding platform can call the proper visual component and the execution component. The visual guiding platform is provided with a component information base, component parameters of different visual components and component information of the mechanical arm component are stored in the component information base, required component information can be found through hardware configuration information, and corresponding component parameters can be generated based on the component information and the hardware configuration information.
It should be noted that, when the visual guidance platform performs different project tasks, different visual components and execution components may be selectively installed.
In the technical scheme, after the object information is extracted, a proper first object template can be found in a template database based on the object information, the first object template comprises a three-dimensional model or a two-dimensional model of the object, and a target object template can be generated based on the object information and the first object template and is matched with the object information in the task information.
According to the technical scheme, the target guiding task can be generated according to the task information and the target object template, and after hardware configuration is completed for the visual component and the execution component, the visual component and the execution component are controlled to execute the generated target guiding task based on component parameters, so that project tasks corresponding to the task information are completed.
In the technical scheme, the component information base and the template database are stored in the visual guiding platform, so that the historically generated component information and object templates can be multiplexed, after hardware configuration information and objects in task information are acquired, component parameters of the visual components and execution components can be configured based on the component information and the hardware configuration information in the component information base, and a target object template is generated based on the object information and a first object template in the template database, so that the visual guiding platform can multiplex the historically configured parameter information under the condition of facing different project scenes, the development resource consumption of the visual guiding platform is reduced, and the development efficiency of the visual guiding platform is improved.
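The four-step flow described above — extract task information, configure component parameters from the component information base, generate a target object template from the template database, then execute the guidance task — can be sketched in a few lines. This is a hedged illustration only: all function names, dictionary keys, and database shapes below are hypothetical, not part of the patent.

```python
# Minimal sketch of the four-step guidance flow (all names hypothetical).
def configure_components(hw_cfg, component_db):
    # Reuse stored component info to build parameters for the given hardware.
    cam = component_db["cameras"][hw_cfg["camera_model"]]
    arm = component_db["arms"][hw_cfg["arm_model"]]
    return {"camera": cam, "arm": arm}

def generate_target_template(obj_info, template_db):
    # Find the first object template that matches the object, then
    # specialize it with this task's object information.
    first = template_db[obj_info["object_id"]]
    return {**first, "name": obj_info["name"]}

def run_guidance(task_info, component_db, template_db):
    # Step 1: extract hardware configuration and object info from the task.
    hw_cfg = task_info["hardware_config"]
    obj_info = task_info["object_info"]
    # Step 2: configure component parameters from the component info base.
    params = configure_components(hw_cfg, component_db)
    # Step 3: generate the target object template from the template database.
    template = generate_target_template(obj_info, template_db)
    # Step 4: the target guidance task is generated from the task info and
    # the target object template, then executed with the component parameters.
    return {"task": task_info["task_id"], "params": params, "template": template}
```

The key point the sketch illustrates is reuse: steps 2 and 3 look up historically configured entries rather than building them from scratch for each new project.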
In some embodiments, optionally, the component parameters include coordinate conversion parameters, and configuring the component parameters of the vision component and the execution component according to the hardware configuration information and the component information in the component information base comprises the following steps:
extracting camera configuration information and mechanical arm configuration information in the hardware configuration information;
searching a first coordinate system and a second coordinate system in the component information base according to the camera configuration information and the mechanical arm configuration information, wherein the first coordinate system comprises a camera coordinate system of the vision component, and the second coordinate system comprises a mechanical arm coordinate system of the execution component;
and generating coordinate conversion parameters according to the first coordinate system and the second coordinate system.
In this technical solution, the component parameters include coordinate conversion parameters of the first coordinate system and the second coordinate system, and the coordinate conversion parameters can convert the first coordinate system and the second coordinate system. The hardware configuration information comprises camera configuration information corresponding to the visual component and mechanical arm configuration information corresponding to the execution component. The camera coordinate system corresponding to the camera, namely the first coordinate system, can be found in the component information base through the camera configuration information. And the mechanical arm coordinate system corresponding to the mechanical arm, namely the second coordinate system, can be found in the component information base through the mechanical arm configuration information.
The first coordinate system is a camera coordinate system of the vision component, and the second coordinate system is a robot arm coordinate system of the execution component.
In the technical scheme, after the first coordinate system and the second coordinate system are found, the vision guidance platform can generate corresponding coordinate conversion parameters. After the coordinate conversion parameters are determined, the hand-eye calibration can be performed on the visual component and the execution component based on the coordinate conversion parameters, and the calibration result is automatically output.
In the technical scheme, the hardware configuration information comprises camera configuration information corresponding to the vision component and mechanical arm configuration information corresponding to the execution component, the corresponding first coordinate system and second coordinate system can be found through the camera configuration information and the mechanical arm configuration information, multiplexing of the camera coordinate system and the mechanical arm coordinate system is achieved, and coordinate conversion parameters can be automatically generated through the first coordinate system and the second coordinate system, so that subsequent automatic hand-eye calibration of the vision component and the execution component is facilitated.
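A coordinate conversion parameter maps points detected in the camera coordinate system into the mechanical arm coordinate system. The patent does not specify the mathematics; as an assumption-laden illustration, the sketch below uses a 2-D homogeneous transform (a real hand-eye calibration would estimate a 3-D rigid transform):

```python
import math

def make_transform(theta_deg, tx, ty):
    """3x3 homogeneous 2-D transform: rotate by theta_deg, then translate (tx, ty).
    Simplified 2-D stand-in for a full 3-D hand-eye calibration result."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def apply(T, p):
    """Apply transform T to a 2-D point p = (x, y)."""
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# Hypothetical coordinate conversion parameter: the camera frame is rotated
# 90 degrees and offset 100 mm along x relative to the arm base frame.
cam_to_arm = make_transform(90.0, 100.0, 0.0)
```

A detection at (10, 0) in the camera frame lands at approximately (100, 10) in the arm frame; once such a transform is stored, it can be reused whenever the same camera and arm pairing appears in a new project.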
In some embodiments, optionally, the component parameters include camera information, and configuring the component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base comprises:
Configuring camera information corresponding to the visual component according to camera configuration information in the hardware configuration information, wherein the camera information is information searched in a component information base based on the camera configuration information;
wherein the camera information includes at least one of: camera identification information, camera communication serial port information and exposure time length information.
In the technical scheme, the component parameters comprise camera information, and the camera information comprises at least one of camera communication serial port information, camera identification information and exposure time length information. The camera communication serial port is related information of a communication serial port connected with the camera in the visual guidance platform, so that the corresponding communication serial port can be configured for the camera. The camera identification information includes, but is not limited to, an identification code of a camera, a camera serial number, and the like, through which the vision guidance platform can find the camera, and when a plurality of vision cameras are configured in the vision guidance platform, the camera to be called can be found based on the camera identification information. The exposure time length information is the exposure time length of the vision camera in the process of collecting image data, and the vision guiding platform can set the exposure time length of the camera in operation by calling the exposure time length information without manual adjustment and setting.
It should be noted that the camera information in the component information base may be camera information set in a historical project task, or information entered into the component information base in advance.
In the technical scheme, after the camera configuration information is extracted from the hardware configuration information, at least one item of corresponding camera communication serial port information, camera identification information and exposure time length information can be searched in the component information base based on the camera configuration information, so that the configuration of the camera information of the visual component is completed.
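The camera-information lookup can be pictured as a keyed search in the component information base. The sketch below is hypothetical (serial numbers, ports, and field names invented for illustration):

```python
# Hypothetical component info base: camera info reused across projects.
CAMERA_INFO_DB = {
    "SN-001": {"serial_port": "/dev/ttyUSB0", "exposure_ms": 15},
    "SN-002": {"serial_port": "/dev/ttyUSB1", "exposure_ms": 8},
}

def configure_camera(camera_config):
    """Look up stored camera info (serial port, exposure time) by the
    camera identification info extracted from the hardware configuration."""
    cam_id = camera_config["camera_id"]
    info = CAMERA_INFO_DB.get(cam_id)
    if info is None:
        raise KeyError(f"camera {cam_id!r} not found in component info base")
    return {"camera_id": cam_id, **info}
```

Because the exposure time comes from the stored entry, the platform can set it at run time without manual adjustment, which is the reuse benefit the passage describes.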
In some embodiments, optionally, the component parameters include mechanical arm information, and configuring the component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base comprises:
according to the mechanical arm configuration information in the hardware configuration information, configuring mechanical arm information corresponding to the execution assembly, wherein the mechanical arm information is information searched in an assembly information base according to the mechanical arm configuration information;
wherein the mechanical arm information comprises at least one of the following: mechanical arm identification information and mechanical arm communication connection information.
In this technical solution, the component parameter includes mechanical arm information, and the mechanical arm information includes at least one of mechanical arm communication connection information and mechanical arm identification information. The mechanical arm identification information comprises, but is not limited to, an identification code of the mechanical arm, a mechanical arm serial number and the like, the mechanical arm can be found by the visual guiding platform through the mechanical arm identification information, and when a plurality of mechanical arms are configured in the visual guiding platform, the mechanical arm required to be called can be found based on the mechanical arm identification information. The communication connection information of the mechanical arm is communication information of communication connection between the visual guiding platform and the mechanical arm, and can be communicated with the mechanical arm according to the communication connection information.
It should be noted that the mechanical arm information in the component information base may be mechanical arm information set in a historical project task, or information entered into the component information base in advance.
According to this technical solution, after the mechanical arm configuration information is extracted from the hardware configuration information, at least one of the corresponding mechanical arm identification information and mechanical arm communication connection information can be searched for in the component information base based on the mechanical arm configuration information, thereby completing the configuration of the mechanical arm information of the execution component.
In some embodiments, optionally, before generating the target object template according to the object information and the first object template in the template database, the method includes:
searching a first object template in a template database according to the object information;
wherein the first object template comprises at least one of: two-dimensional object templates, three-dimensional object templates.
In the technical scheme, a history generated object template is stored in a template database, and the first object template is an object template matched with object information in the history generated object template. The first object template includes a two-dimensional object template generated by a two-dimensional editor and a three-dimensional object template generated by a three-dimensional editor.
The object information includes at least identification information of the object, for example: object name, object identification code, etc., by means of which the corresponding standard object template, i.e. the first object template, can be found in the template database.
According to the technical scheme, after the object information is extracted, a proper first object template can be found in the template database based on the object information, so that a target object template for visual guidance can be conveniently generated based on the first object template.
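The first-object-template search is a match of object identification info (name or ID) against the template database; it may return a 2-D or 3-D template, or nothing if the object has never been modeled. A minimal sketch, with all entries and field names hypothetical:

```python
# Hypothetical template database of historically generated object templates.
TEMPLATE_DB = {
    "bracket-7": {"kind": "2d", "contour_points": 64},
    "casing-3": {"kind": "3d", "point_cloud_size": 4096},
}

def find_first_template(object_info):
    """Match the object's identification info against the template database.
    Returns the stored first object template, or None if there is no
    reusable template and a new one must be modeled from scratch."""
    return TEMPLATE_DB.get(object_info["object_id"])
```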
In some embodiments, generating a target object template according to the object information and a first object template in the template database includes:
acquiring image data, wherein object characteristics in the image data are matched with object information;
a target object template is generated based on the image data and the first object template.
In the technical scheme, the image data is an image acquired through the vision component, and the image comprises object features, wherein the object features are image features of an object corresponding to the object information.
In the technical scheme, after the image data is acquired, the first object template can be edited based on the acquired image data through the three-dimensional editor or the two-dimensional editor to generate the target object template, so that the template data in the template database are partially multiplexed. Specifically, the modeler can edit the first object template in the template database based on the captured image data to obtain the target object template, thereby reducing the workload required for the modeling process.
According to the technical scheme, in the process of modeling the object corresponding to the object information, the target object template can be generated by combining the actually collected image data on the basis of the first object model in the multiplexing template database, so that the accuracy of the target object template is improved, and meanwhile, the operation steps required by modeling can be reduced.
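Conceptually, generating the target object template means overwriting only the fields actually observed in the captured image while reusing the rest of the stored template. The sketch below is a deliberate simplification (a real system would run a 2-D/3-D editor over contours or point clouds; all names here are hypothetical):

```python
def specialize_template(first_template, image_features):
    """Specialize a stored first object template with features measured
    from live image data, reusing all unobserved fields unchanged."""
    target = dict(first_template)  # copy, so the database entry is untouched
    target.update(image_features)  # overwrite only the measured fields
    target["source"] = "reused+edited"
    return target
```

The copy-then-update pattern mirrors the multiplexing described above: most of the modeling workload is carried by the historical template, and only the delta comes from the new image data.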
In some embodiments, optionally, after generating the target object template based on the image data and the first object template, the method includes:
and responding to the model annotation input, and annotating the target object template according to the model annotation information corresponding to the model annotation input.
In this technical solution, after the target object template is generated, a modeler can choose to manually add labeling points to the target object template. After the visual guidance platform receives the model annotation input, the target object template is annotated based on the model annotation information in the model annotation input. When the target object template is a two-dimensional model, the two-dimensional image of the two-dimensional modeling is labeled; when the target object template is a three-dimensional model, the point cloud image of the three-dimensional modeling is labeled.
The first object template includes a labeling point, and after the target object model is generated based on the first object model, if the labeling point needs to be updated, model labeling input is executed on the visual guidance platform to label the target object model.
In the technical scheme, after the target object model is generated, modeling staff can label the target object model, and accuracy of visual guidance based on the target object model is guaranteed.
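The annotation step — pixel labels for 2-D templates, point-cloud labels for 3-D templates — could be sketched as below. The field names and the tuple representation of labeling points are assumptions for illustration:

```python
def annotate_template(template, annotation_points):
    """Attach user-supplied labeling points to a target object template.
    2-D templates label image pixels; 3-D templates label point-cloud points."""
    key = "pixel_labels" if template["kind"] == "2d" else "point_labels"
    labeled = dict(template)
    # Append to any labels inherited from the first object template.
    labeled[key] = list(template.get(key, [])) + list(annotation_points)
    return labeled
```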
In some embodiments, optionally, before the controlling the visual component and the executing component perform the target guidance task based on the component parameters, the method further includes:
acquiring visual guidance project information;
binding the visual guide item information with component parameters of the visual component and the execution component to obtain a target guide item;
and binding and storing the target guide item and the target guide task.
In this technical solution, the visual guidance project information is the identification information of a newly created target guidance project, and the target guidance project matches the target guidance task. The target guidance project can be generated by binding the visual guidance project information with the component parameters of the visual component and the execution component; the target guidance project and the target guidance task are then bound and stored, so that the corresponding target guidance task is automatically executed when the target guidance project is run.
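The bind-then-run relationship can be pictured as storing the task under the project record, so running the project needs no further configuration. A hypothetical sketch (all names invented):

```python
def bind_project(project_info, component_params, guidance_task):
    """Bind project identification info to component parameters, then
    bind and store the guidance task under the resulting project."""
    return {
        "project_id": project_info["project_id"],
        "component_params": component_params,
        "task": guidance_task,
    }

def run_project(project):
    """Running a stored project automatically executes its bound task."""
    return f"executing {project['task']} for {project['project_id']}"
```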
According to a second aspect of the present application, there is provided a visual guiding device for use in a visual guiding platform, the visual guiding platform comprising a visual component and an execution component, the visual guiding device comprising:
The acquisition module is used for acquiring hardware configuration information and object information in the task information;
the configuration module is used for configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base;
the generating module is used for generating a target object template according to the object information and the first object template in the template database;
and the control module is used for controlling the visual component and the execution component to execute a target guide task based on the component parameters, wherein the target guide task is generated based on the task information and the target object template.
The visual guidance device provided by the application is applied to a visual guidance platform in which a visual component and an execution component are arranged. The visual component is used to collect image data, so that the visual guidance platform can control the execution component based on the collected image data; the execution component is used to execute specific operations such as unordered workpiece loading and unloading, unstacking of articles, and high-precision positioning and assembly of workpieces.
In the technical scheme, the task information is an item task required to be executed by the visual guidance platform, and the task information comprises visual components for executing the task and hardware configuration information of the execution components and object information of a workpiece object in the task.
In the technical scheme, after the hardware configuration information is acquired, the component parameters of the visual component and the component parameters of the execution component of the visual guiding platform can be configured based on the hardware configuration information, so that the visual guiding platform can call the proper visual component and the execution component. The visual guiding platform is provided with a component information base, component parameters of different visual components and component information of the mechanical arm component are stored in the component information base, required component information can be found through hardware configuration information, and corresponding component parameters can be generated based on the component information and the hardware configuration information.
It should be noted that, when the visual guidance platform performs different project tasks, different visual components and execution components may be selectively installed.
In the technical scheme, after the object information is extracted, a proper first object template can be found in a template database based on the object information, the first object template comprises a three-dimensional model or a two-dimensional model of the object, and a target object template can be generated based on the object information and the first object template and is matched with the object information in the task information.
According to the technical scheme, the target guiding task can be generated according to the task information and the target object template, and after hardware configuration is completed for the visual component and the execution component, the visual component and the execution component are controlled to execute the generated target guiding task based on component parameters, so that project tasks corresponding to the task information are completed.
In the technical scheme, the component information base and the template database are stored in the visual guiding platform, so that the historically generated component information and object templates can be multiplexed, after hardware configuration information and objects in task information are acquired, component parameters of the visual components and execution components can be configured based on the component information and the hardware configuration information in the component information base, and a target object template is generated based on the object information and a first object template in the template database, so that the visual guiding platform can multiplex the historically configured parameter information under the condition of facing different project scenes, the development resource consumption of the visual guiding platform is reduced, and the development efficiency of the visual guiding platform is improved.
According to a third aspect of the present application there is provided a visual guide device comprising: a memory in which a program or instructions are stored; the processor executes a program or instructions stored in the memory to implement the steps of the visual guidance method as in any one of the first aspects, so that the method has all the beneficial technical effects of the visual guidance method as in any one of the first aspects, and will not be described in detail herein.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the visual guidance method as in any of the above-mentioned first aspects. Therefore, the method has all the beneficial technical effects of the visual guiding method in any one of the above first aspects, and will not be described in detail herein.
According to a fifth aspect of the present application a computer program product is presented, comprising a computer program which, when executed by a processor, implements the steps of the visual guidance method as in any of the above-mentioned first aspects. Therefore, the method has all the beneficial technical effects of the visual guiding method in any one of the above first aspects, and will not be described in detail herein.
According to a sixth aspect of the present application, there is provided a visual guidance platform comprising: the visual guidance device as in the second or third aspect above, and/or the readable storage medium as in the fourth aspect above, and/or the computer program product as in the fifth aspect above. It therefore has all the technical advantages of the visual guidance device as in the second or third aspect above, and/or the readable storage medium as in the fourth aspect above, and/or the computer program product as in the fifth aspect above, which are not repeated here.
Additional aspects and advantages of the present application will become apparent in the following description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates one of the schematic flow diagrams of the visual guidance method provided in some embodiments of the present application;
FIG. 2 illustrates a second schematic flow diagram of a visual guidance method provided in some embodiments of the present application;
FIG. 3 illustrates a schematic diagram of project management by a visual guidance platform provided in some embodiments of the present application;
FIG. 4 illustrates one of the block diagrams of the visual guidance device provided in some embodiments of the present application;
FIG. 5 illustrates a second block diagram of a visual guidance device provided in some embodiments of the present application;
FIG. 6 illustrates a schematic structural diagram of a visual guidance platform provided by some embodiments of the present application;
FIG. 7 is a flow chart illustrating the visual guidance method in a clamping-machine handling and assembly scenario provided in some embodiments of the present application;
FIG. 8 illustrates a third schematic flow chart of a visual guidance method provided in some embodiments of the present application;
FIG. 9 illustrates a fourth schematic flow diagram of the visual guidance method provided in some embodiments of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application is given below with reference to the appended drawings and the detailed description. It should be noted that, without conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and thus the scope of the present application is not limited by the specific embodiments disclosed below.
A visual guidance method, apparatus, readable storage medium, and visual guidance platform according to some embodiments of the present application are described below with reference to fig. 1 through 9.
According to one embodiment of the present application, as shown in fig. 1, a visual guiding method is provided and applied to a visual guiding platform, where the visual guiding platform includes a visual component and an execution component, and the visual guiding method includes:
Step 102, acquiring hardware configuration information and object information in task information;
Step 104, configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base;
Step 106, generating a target object template according to the object information and a first object template in the template database;
Step 108, controlling the vision component and the execution component to execute a target guiding task based on the component parameters, wherein the target guiding task is generated based on the task information and the target object template.
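The four steps above can be sketched as a single dispatch function. This is an illustrative sketch only; every name (the dictionary layout of the task information, component information base, and template database) is an assumption of this example, not the platform's actual API.

```python
# Hypothetical sketch of steps 102-108; all field and function names are
# illustrative assumptions, not the platform's actual interfaces.

def run_guidance(task_info, component_db, template_db):
    # Step 102: extract hardware configuration and object information.
    hw_config = task_info["hardware_config"]
    obj_info = task_info["object_info"]

    # Step 104: look up reusable component info to build component parameters.
    params = {
        "camera": component_db["cameras"][hw_config["camera_id"]],
        "arm": component_db["arms"][hw_config["arm_id"]],
    }

    # Step 106: reuse a stored first object template to build the target template.
    first_template = template_db[obj_info["object_id"]]
    target_template = dict(first_template, object_name=obj_info["name"])

    # Step 108: the target guidance task pairs the task info with the template.
    target_task = {"task": task_info["task_name"], "template": target_template}
    return params, target_task
```

The key property the sketch shows is that steps 104 and 106 only look up and lightly edit stored data, rather than configuring components or modeling objects from scratch.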
The visual guiding method is applied to a visual guiding platform in which a visual component and an execution component are arranged. The visual component is used for collecting image data, so that the visual guiding platform can control the execution component based on the collected image data, and the execution component is used for executing specific operations such as unordered workpiece loading and unloading, object unstacking, and high-precision workpiece positioning and assembly.
Illustratively, the vision component is implemented as a camera, and the execution component is implemented as a robot.
In this embodiment, the task information is a task item to be executed by the visual guidance platform, and the task information includes a visual component for executing the task and hardware configuration information of the execution component, and further includes object information of a workpiece object in the task.
Illustratively, the project task is loading and unloading of an air compressor, the hardware configuration information is camera configuration information and mechanical arm configuration information, and the object information is object information of the air compressor.
In this embodiment, after the hardware configuration information is acquired, the component parameters of the visual component of the visual guidance platform and the component parameters of the execution component can be configured based on the hardware configuration information, so that the visual guidance platform can call the appropriate visual component and the execution component. The visual guiding platform is provided with a component information base, component parameters of different visual components and component information of the mechanical arm component are stored in the component information base, required component information can be found through hardware configuration information, and corresponding component parameters can be generated based on the component information and the hardware configuration information.
It should be noted that, when the visual guidance platform performs different project tasks, different visual components and execution components may be selectively installed.
In this embodiment, after the object information is extracted, a suitable first object template can be found in the template database based on the object information, the first object template comprising a three-dimensional model or a two-dimensional model of the object, and a target object template can be generated based on the object information and the first object template, the target object template matching the object information in the task information.
Illustratively, the project task is loading and unloading of the air compressor, and the target object template includes a three-dimensional template of the compressor. For example: the obtained point cloud template is edited through a three-dimensional editor, and the grabbing center position of the compressor is calculated through an algorithm. The target object template may also include a two-dimensional template of the compressor. For example: the camera collects two-dimensional images of the compressor, and an algorithm is used to calculate the grabbing hole positions of the compressor.
In this embodiment, a target guidance task can be generated according to the task information and the target object template, and after hardware configuration is completed for the vision component and the execution component, the vision component and the execution component are controlled to execute the generated target guidance task based on the component parameters, so that project tasks corresponding to the task information are completed.
In the embodiment of the application, the component information base and the template database are stored in the visual guiding platform, so that historically generated component information and object templates can be multiplexed. After the hardware configuration information and the object information in the task information are acquired, the component parameters of the visual component and the execution component can be configured based on the component information in the component information base and the hardware configuration information, and the target object template is generated based on the object information and the first object template in the template database. Thus, the visual guiding platform can multiplex historically configured parameter information when facing different project scenes, which reduces the development resource consumption of the visual guiding platform and improves its development efficiency.
In some embodiments, optionally, the component parameters include: the coordinate conversion parameter configures the component parameters of the vision component and the execution component according to the hardware configuration information and the component information in the component information base, and comprises the following steps:
extracting camera configuration information and mechanical arm configuration information in the hardware configuration information;
searching a first coordinate system and a second coordinate system in the component information base according to the camera configuration information and the mechanical arm configuration information, wherein the first coordinate system comprises a camera coordinate system of the vision component, and the second coordinate system comprises a mechanical arm coordinate system of the execution component;
and generating coordinate conversion parameters according to the first coordinate system and the second coordinate system.
In this embodiment, the component parameters include coordinate conversion parameters of the first coordinate system and the second coordinate system, and the coordinate conversion parameters are capable of converting the first coordinate system and the second coordinate system. The hardware configuration information comprises camera configuration information corresponding to the visual component and mechanical arm configuration information corresponding to the execution component. The camera coordinate system corresponding to the camera, namely the first coordinate system, can be found in the component information base through the camera configuration information. And the mechanical arm coordinate system corresponding to the mechanical arm, namely the second coordinate system, can be found in the component information base through the mechanical arm configuration information.
The camera configuration information includes at least device identification information of the camera, by means of which the corresponding first coordinate system can be found in the component information base.
The mechanical arm configuration information at least includes device identification information of the mechanical arm, and a corresponding second coordinate system can be found in the component information base through the mechanical arm configuration information.
The first coordinate system is a camera coordinate system of the vision component, and the second coordinate system is a robot arm coordinate system of the execution component.
In this embodiment, the visual guidance platform is able to generate the corresponding coordinate transformation parameters after finding the first and second coordinate systems. After the coordinate conversion parameters are determined, the hand-eye calibration can be performed on the visual component and the execution component based on the coordinate conversion parameters, and the calibration result is automatically output.
The hand-eye calibration process is performed automatically through a preset calibration algorithm, which reduces the steps of manual operation and shortens the time required for hand-eye calibration.
In the embodiment of the application, the hardware configuration information comprises camera configuration information corresponding to the vision component and mechanical arm configuration information corresponding to the execution component, the corresponding first coordinate system and second coordinate system can be searched through the camera configuration information and the mechanical arm configuration information, multiplexing of the camera coordinate system and the mechanical arm coordinate system is achieved, and coordinate conversion parameters can be automatically generated through the first coordinate system and the second coordinate system, so that subsequent automatic hand-eye calibration of the vision component and the execution component is facilitated.
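The generation of a coordinate conversion parameter from the two stored coordinate systems can be sketched with homogeneous transforms. This is a minimal sketch, assuming both the camera frame and the arm frame are stored as 4x4 poses in a shared reference frame; the function names and that assumption are illustrative, not the platform's actual calibration algorithm.

```python
import numpy as np

def pose(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def conversion_parameter(T_world_cam, T_world_arm):
    """Coordinate conversion parameter: maps camera-frame points into the
    arm's frame, given each device's pose in a shared reference frame
    (an assumption of this sketch)."""
    return np.linalg.inv(T_world_arm) @ T_world_cam

def to_arm_frame(T_arm_cam, p_cam):
    """Convert a 3-D point observed by the camera into arm coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous point
    return (T_arm_cam @ p)[:3]
```

In practice the two poses would come from the first and second coordinate systems found in the component information base, and the resulting parameter would then be refined by the automatic hand-eye calibration step.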
In some embodiments, optionally, the component parameters include: camera information, configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base, including:
configuring camera information corresponding to the visual component according to camera configuration information in the hardware configuration information, wherein the camera information is information searched in a component information base based on the camera configuration information;
wherein the camera information includes at least one of: camera identification information, camera communication serial port information and exposure time length information.
In this embodiment, the component parameters include camera information, and the camera information includes at least one of camera communication serial port information, camera identification information, and exposure duration information. The camera communication serial port information describes the communication serial port through which the visual guidance platform is connected to the camera, so that the corresponding communication serial port can be configured for the camera. The camera identification information includes, but is not limited to, an identification code of the camera, a camera serial number, and the like, through which the visual guidance platform can find the camera; when a plurality of vision cameras are configured in the visual guidance platform, the camera to be called can be found based on the camera identification information. The exposure duration information is the exposure duration of the vision camera in the process of collecting image data, and the visual guidance platform can set the exposure duration of the camera in operation by calling the exposure duration information, without manual adjustment and setting.
It should be noted that the camera information in the component information base may be camera information set in a historical project task, or information entered into the component information base in advance.
Illustratively, the camera identification information includes at least one of: camera name, camera type, camera brand, camera serial number.
Illustratively, the camera information may further include at least one of a camera connection timeout period, a photographing timeout period, and a number of photographing reconnection attempts. The connection timeout period is the maximum time allowed for the visual guidance platform to establish a communication connection with the camera; if it is exceeded, prompt information is output. The photographing timeout period is the maximum time allowed when the visual guidance platform calls the camera to photograph; if it is exceeded, prompt information is output. The number of photographing reconnection attempts is the number of retries after the visual guidance platform fails in calling the camera to photograph; if it is exceeded, prompt information is output.
In the embodiment of the application, after the camera configuration information is extracted from the hardware configuration information, at least one item of corresponding camera communication serial port information, camera identification information and exposure duration information can be found in the component information base based on the camera configuration information, so that the configuration of the camera information of the visual component is completed.
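The reuse of camera information described above can be sketched as a lookup keyed by the identifier carried in the hardware configuration information. The field names, defaults, and database layout here are illustrative assumptions.

```python
# Illustrative sketch of reusing camera information from a component
# information base; all field names are assumptions of this example.
from dataclasses import dataclass

@dataclass
class CameraInfo:
    serial_number: str
    com_port: str            # camera communication serial port information
    exposure_ms: float       # exposure duration information
    connect_timeout_s: float = 5.0
    capture_timeout_s: float = 2.0
    capture_retries: int = 3

# Camera information stored from historical project tasks (or entered in advance).
COMPONENT_DB = {
    "CAM-001": CameraInfo("CAM-001", "COM3", exposure_ms=15.0),
}

def configure_camera(camera_config):
    """Find camera information by the identifier in the camera configuration."""
    info = COMPONENT_DB.get(camera_config["serial_number"])
    if info is None:
        raise KeyError("camera not found in component information base")
    return info
```

Because the stored record already carries the serial port, exposure, and timeout settings, configuring the vision component reduces to a single lookup instead of manual parameter entry.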
In some embodiments, optionally, the component parameters include: the mechanical arm information configures component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base, and the mechanical arm information comprises:
according to the mechanical arm configuration information in the hardware configuration information, configuring mechanical arm information corresponding to the execution assembly, wherein the mechanical arm information is information searched in an assembly information base according to the mechanical arm configuration information;
wherein the mechanical arm information comprises at least one of the following: mechanical arm identification information and mechanical arm communication connection information.
In this embodiment, the component parameters include mechanical arm information, and the mechanical arm information includes at least one of mechanical arm communication connection information and mechanical arm identification information. The mechanical arm identification information includes, but is not limited to, an identification code of the mechanical arm, a mechanical arm serial number, and the like, through which the visual guidance platform can find the mechanical arm; when a plurality of mechanical arms are configured in the visual guidance platform, the mechanical arm to be called can be found based on the mechanical arm identification information. The mechanical arm communication connection information is the communication information for the connection between the visual guidance platform and the mechanical arm, according to which the platform can communicate with the mechanical arm.
It should be noted that the mechanical arm information in the component information base may be mechanical arm information set in a historical project task, or information entered into the component information base in advance.
Illustratively, the robotic arm identification information includes at least one of: arm name, arm type, arm brand, arm serial number.
Illustratively, the mechanical arm information may further include a mechanical arm connection timeout period. The connection timeout period is the maximum time allowed for the visual guidance platform to establish a communication connection with the mechanical arm; if it is exceeded, prompt information is output.
In the embodiment of the application, after the mechanical arm configuration information is extracted from the hardware configuration information, at least one of the corresponding mechanical arm identification information and mechanical arm communication connection information can be found in the component information base based on the mechanical arm configuration information, thereby completing the configuration of the mechanical arm information of the execution component.
In some embodiments, optionally, before generating the target object template according to the object information and the first object template in the template database, the method includes:
searching a first object template in a template database according to the object information;
wherein the first object template comprises at least one of: two-dimensional object templates, three-dimensional object templates.
In this embodiment, the object templates generated in the history are stored in the template database, and the first object template is an object template matching the object information in the object templates generated in the history. The first object template includes a two-dimensional object template generated by a two-dimensional editor and a three-dimensional object template generated by a three-dimensional editor.
The object information includes at least identification information of the object, for example: object name, object identification code, etc., by means of which the corresponding standard object template, i.e. the first object template, can be found in the template database.
Illustratively, the project task is an air compressor loading and unloading task in which compressors need to be grabbed and placed in a tray, so two templates need to be established. The first is a compressor standard template, which can be established by calling a first object template in the template database; during actual visual guidance, the scene is compared against this template, and the grabbing pose of each compressor is calculated for the mechanical arm to grab. The second is a tray standard template, which can likewise call a first object template in the template database; during actual visual guidance, the scene is compared against this template, and the placement pose of each compressor is calculated. The first object template of the compressor may be a three-dimensional object template, and the first object template of the tray may be a two-dimensional object template.
In the embodiment of the application, after the object information is extracted, a suitable first object template can be found in the template database based on the object information, so that a target object template for visual guidance can be conveniently generated based on the first object template.
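The template lookup described above can be sketched as a query keyed by the object's identification information. The database layout (keyed by object identifier and template kind) is an assumption of this sketch.

```python
# Hypothetical first-object-template lookup; the key structure and stored
# fields are illustrative assumptions.

TEMPLATE_DB = {
    ("compressor", "3d"): {"kind": "3d", "points": 40960},
    ("tray", "2d"): {"kind": "2d", "image": "tray_ref.png"},
}

def find_first_template(object_info):
    """Find the standard (first) object template matching the object info."""
    key = (object_info["object_id"], object_info.get("kind", "3d"))
    template = TEMPLATE_DB.get(key)
    if template is None:
        raise KeyError(f"no stored template for {key}")
    return template
```

A miss in the lookup would correspond to the case where no historical template can be reused and the object must be modeled from scratch.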
In some embodiments, generating a target object template from the object information and a first object template in a template database includes:
acquiring image data, wherein object characteristics in the image data are matched with object information;
a target object template is generated based on the image data and the first object template.
In this embodiment, the image data is an image acquired by the vision component, and the image includes object features, which are image features of an object corresponding to the object information.
Illustratively, the object information is compressor information, and the object feature is a compressor feature.
In this embodiment, after the image data is acquired, the first object template can be edited based on the acquired image data by a three-dimensional editor or a two-dimensional editor to generate a target object template, thereby partially multiplexing template data in the template database. Specifically, the modeler can edit the first object template in the template database based on the captured image data to obtain the target object template, thereby reducing the workload required for the modeling process.
In the embodiment of the application, in the process of modeling the object corresponding to the object information, the target object template can be generated by combining the actually acquired image data on the basis of the first object model in the multiplexing template database, so that the accuracy of the target object template is improved, and meanwhile, the operation steps required by modeling can be reduced.
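Editing a first object template with freshly captured image data can be sketched as follows. The grasp-center computation here is a simple point-cloud centroid, standing in for whatever algorithm the platform actually uses; all names are assumptions.

```python
import numpy as np

def generate_target_template(first_template, point_cloud):
    """Produce a target object template by editing a stored first template
    with captured data (sketch; centroid stands in for the real algorithm)."""
    cloud = np.asarray(point_cloud, dtype=float)
    target = dict(first_template)              # reuse the stored template
    target["grasp_center"] = cloud.mean(axis=0).tolist()
    return target
```

The point is that only the scene-dependent fields (here, the grasp center) are recomputed from the captured data, while the rest of the stored template is multiplexed as-is.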
In some embodiments, optionally, after generating the target object template based on the image data and the first object template, comprising:
and responding to the model annotation input, and annotating the target object template according to the model annotation information corresponding to the model annotation input.
In this embodiment, after the target object template is generated, the modeler may choose to manually add annotation points to the target object template. After the visual guidance platform receives the model annotation input, the target object template is annotated based on the model annotation information in the model annotation input. When the target object template is a two-dimensional model, the two-dimensional image of the two-dimensional modeling is annotated; when the target object template is a three-dimensional model, the point cloud image of the three-dimensional modeling is annotated.
The first object template includes a labeling point, and after the target object model is generated based on the first object model, if the labeling point needs to be updated, model labeling input is executed on the visual guidance platform to label the target object model.
In the embodiment of the application, after the target object model is generated, a modeling person can label the target object model, so that accuracy of visual guidance needed to be performed based on the target object model is ensured.
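Applying model annotation input to a template can be sketched as below; the annotation format (tuples of pixel or point-cloud coordinates) is an assumption, as is the rule that two-dimensional templates take 2-D points and three-dimensional templates take 3-D points.

```python
# Illustrative annotation of a target object template; the data layout
# is an assumption of this sketch.

def annotate_template(template, points):
    """Append manually supplied annotation points to a template, checking
    that their dimensionality matches the template kind."""
    dim = 2 if template.get("kind") == "2d" else 3
    for p in points:
        if len(p) != dim:
            raise ValueError(f"expected {dim}-D annotation points")
    out = dict(template)
    out["annotations"] = list(template.get("annotations", ())) + [tuple(p) for p in points]
    return out
```

Returning an edited copy (rather than mutating in place) mirrors the idea that the first object template in the database stays intact for reuse by later projects.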
In some embodiments, optionally, the controlling the visual component and the executing component further comprises, prior to executing the target boot task, based on the component parameters:
acquiring visual guidance project information;
binding the visual guide item information with component parameters of the visual component and the execution component to obtain a target guide item;
and binding and storing the target guide item and the target guide task.
In this embodiment, the visual guide item information is identification information of a newly-built target guide item, and the target guide item is matched with a target guide task, so that the target guide item can be generated by binding the target guide item with component parameters of the visual component and the execution component, and then the target guide item and the target guide task are bound and stored, thereby realizing that the corresponding target guide task can be automatically executed when the target guide item is operated.
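The binding of project information, component parameters, and the target guidance task can be sketched as two small functions; the structure of the stored project is an illustrative assumption.

```python
# Sketch of binding a target guide project; all field names are assumptions.

def build_project(project_info, component_params, target_task):
    """Bind project identification info with component parameters and a task."""
    return {
        "project_id": project_info["project_id"],
        "components": component_params,
        "task": target_task,
    }

def run_project(project):
    """Running a saved project automatically dispatches its bound task."""
    return f"running {project['task']['name']} with {project['components']['camera']}"
```

Because the task is stored inside the project, operating the project requires no further selection step, which is the "automatically execute the corresponding target guide task" behavior described above.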
As shown in fig. 2, the method for constructing the visual guidance platform includes: newly-built cameras and newly-built mechanical arms, hand-eye calibration, newly-built templates, newly-built tasks, newly-built projects and project operation.
Creating a new camera and a new mechanical arm configures the hardware corresponding to the new project. In the hand-eye calibration process, intelligent hand-eye calibration is used for establishing the transformation relation between the camera coordinate system and the robot coordinate system. Creating a new template includes intelligent editing of the template and rapid modeling using a two-dimensional/three-dimensional image data editor. Creating a new task includes binding the newly-built target guide task with the corresponding template. Creating a new project includes binding the newly-built target guide task with the corresponding camera and mechanical arm.
In the embodiment, in the process of carrying out camera deployment and mechanical arm deployment on site by the visual guide platform, the camera information and the mechanical arm information in the component information base can be multiplexed, and the camera coordinate system and the mechanical arm coordinate system can be multiplexed in the hand-eye calibration stage carried out on site to obtain coordinate system conversion parameters so as to carry out hand-eye calibration. The modeling stage performed in the field can multiplex the first object templates in the template database and call the editor to edit the object templates in the template data for a second time, reducing manual steps compared to re-modeling. The vision guidance platform can also multiplex the target guidance tasks, and after the mechanical arm, the camera and the task data are bound in the project, the mechanical arm, the camera and the task data can be partially multiplexed.
As shown in fig. 3, a project executed by the vision guidance platform may include a plurality of cameras and a plurality of mechanical arms; the correspondence between the cameras and the mechanical arms is determined, the corresponding first coordinate system and second coordinate system are searched according to that correspondence, and the coordinate system conversion parameters are determined accordingly. Multiple sets of templates may be managed during the project management stage, a set of templates may include multiple sub-templates, and each target guidance task is associated with a set of templates.
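Resolving the camera-to-arm pairings in such a multi-device project can be sketched as a mapping lookup; the data structures are assumptions of this example.

```python
# Sketch of pairing cameras with mechanical arms in a multi-device project;
# the correspondence and frame-store layouts are illustrative assumptions.

def pair_coordinate_systems(correspondence, camera_frames, arm_frames):
    """For each camera->arm pairing, fetch the first (camera) and second
    (arm) coordinate systems used to derive conversion parameters."""
    pairs = {}
    for cam_id, arm_id in correspondence.items():
        pairs[(cam_id, arm_id)] = (camera_frames[cam_id], arm_frames[arm_id])
    return pairs
```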
According to one embodiment of the present application, as shown in fig. 4, a visual guiding device 400 is proposed, and is applied to a visual guiding platform, where the visual guiding platform includes a visual component and an execution component, and the visual guiding device 400 includes:
an obtaining module 402, configured to obtain hardware configuration information and object information in the task information;
a configuration module 404, configured to configure component parameters of the visual component and the execution component according to the hardware configuration information;
a generating module 406, configured to generate a target object template according to the object information and a first object template in the template database;
the control module 408 is configured to control the visual component and the execution component to execute a target guidance task based on the component parameters, where the target guidance task is a guidance task generated based on the task information and the target object template.
The vision guiding device provided by the application is applied to a vision guiding platform in which a vision component and an execution component are arranged. The vision component is used for collecting image data, so that the vision guiding platform can control the execution component based on the collected image data, and the execution component is used for executing specific operations such as unordered workpiece loading and unloading, object unstacking, and high-precision workpiece positioning and assembly.
In the technical scheme, the task information is an item task required to be executed by the visual guidance platform, and the task information comprises visual components for executing the task and hardware configuration information of the execution components and object information of a workpiece object in the task.
In the technical scheme, after the hardware configuration information is acquired, the component parameters of the visual component and the component parameters of the execution component of the visual guiding platform can be configured based on the hardware configuration information, so that the visual guiding platform can call the proper visual component and the execution component. The visual guiding platform is provided with a component information base, component parameters of different visual components and component information of the mechanical arm component are stored in the component information base, required component information can be found through hardware configuration information, and corresponding component parameters can be generated based on the component information and the hardware configuration information.
It should be noted that, when the visual guidance platform performs different project tasks, different visual components and execution components may be selectively installed.
In the technical scheme, after the object information is extracted, a proper first object template can be found in a template database based on the object information, the first object template comprises a three-dimensional model or a two-dimensional model of the object, and a target object template can be generated based on the object information and the first object template and is matched with the object information in the task information.
According to the technical scheme, the target guiding task can be generated according to the task information and the target object template, and after hardware configuration is completed for the visual component and the execution component, the visual component and the execution component are controlled to execute the generated target guiding task based on component parameters, so that project tasks corresponding to the task information are completed.
In the technical scheme, the component information base and the template database are stored in the visual guiding platform, so that historically generated component information and object templates can be multiplexed. After the hardware configuration information and the object information in the task information are acquired, the component parameters of the visual component and the execution component can be configured based on the component information in the component information base and the hardware configuration information, and the target object template is generated based on the object information and the first object template in the template database. Thus, the visual guiding platform can multiplex historically configured parameter information when facing different project scenes, which reduces the development resource consumption of the visual guiding platform and improves its development efficiency.
In some embodiments, optionally, the component parameters include coordinate conversion parameters.
The vision guide apparatus 400 includes:
the extraction module is used for extracting camera configuration information and mechanical arm configuration information in the hardware configuration information;
the searching module is used for searching a first coordinate system and a second coordinate system in the component information base according to the camera configuration information and the mechanical arm configuration information, wherein the first coordinate system comprises a camera coordinate system of the vision component, and the second coordinate system comprises a mechanical arm coordinate system of the execution component;
the generating module 406 is configured to generate the coordinate conversion parameters according to the first coordinate system and the second coordinate system.
In the embodiment of the application, the hardware configuration information comprises camera configuration information corresponding to the visual component and mechanical arm configuration information corresponding to the execution component. The corresponding first coordinate system and second coordinate system can be found through the camera configuration information and the mechanical arm configuration information, realizing reuse of the camera coordinate system and the mechanical arm coordinate system. The coordinate conversion parameters can then be generated automatically from the first coordinate system and the second coordinate system, facilitating subsequent automatic hand-eye calibration of the visual component and the execution component.
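For illustration, a minimal two-dimensional sketch of deriving coordinate conversion parameters from the two stored coordinate systems (each given as a pose in a shared base frame) might look as follows; the pose representation and function names are assumptions, and a real system would typically perform full three-dimensional hand-eye calibration:

```python
import math

def pose_to_world(theta, tx, ty, p):
    """Map a point p=(x, y) from a frame with pose (theta, tx, ty) into world."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

def camera_to_arm(cam_pose, arm_pose):
    """Derive camera->arm coordinate conversion parameters from the two
    stored coordinate systems, each a (theta, tx, ty) pose in a shared base."""
    tc, cx, cy = cam_pose
    ta, ax, ay = arm_pose
    theta = tc - ta                       # relative rotation
    c, s = math.cos(-ta), math.sin(-ta)   # rotate the world offset into the arm frame
    dx, dy = cx - ax, cy - ay
    return (theta, c * dx - s * dy, s * dx + c * dy)

def convert(params, p):
    """Apply the conversion parameters to a point in camera coordinates."""
    return pose_to_world(*params, p)
```

Because both coordinate systems are simply looked up rather than re-measured, the conversion parameters can be generated automatically whenever a stored camera/arm pair is reused.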
In some embodiments, optionally, the component parameters include: camera information.
The vision guide apparatus 400 includes:
the configuration module 404 is configured to configure camera information corresponding to the visual component according to camera configuration information in the hardware configuration information, where the camera information is information found in the component information base based on the camera configuration information;
wherein the camera information includes at least one of: camera identification information, camera communication serial port information and exposure time length information.
In the embodiment of the application, after the camera configuration information is extracted from the hardware configuration information, at least one of the corresponding camera identification information, camera communication serial port information and exposure duration information can be found in the component information base based on the camera configuration information, thereby completing the configuration of the camera information of the visual component.
In some embodiments, optionally, the component parameters include: mechanical arm information.
The vision guide apparatus 400 includes:
the configuration module 404 is configured to configure the mechanical arm information corresponding to the execution component according to the mechanical arm configuration information in the hardware configuration information, where the mechanical arm information is information found in the component information base according to the mechanical arm configuration information;
wherein the mechanical arm information comprises at least one of the following: mechanical arm identification information and mechanical arm communication connection information.
In the embodiment of the application, after the mechanical arm configuration information is extracted from the hardware configuration information, at least one of the corresponding mechanical arm identification information and mechanical arm communication connection information can be found in the component information base based on the mechanical arm configuration information, thereby completing the configuration of the mechanical arm information of the execution component.
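A minimal sketch of this lookup-based configuration, with a hypothetical component information base keyed by component kind and model, might be:

```python
# Hypothetical component information base; the keys and stored fields
# are illustrative only, not the platform's actual schema.
COMPONENT_DB = {
    ("camera", "cam-2d-01"): {"id": "cam-2d-01", "serial_port": "COM3",
                              "exposure_ms": 12},
    ("arm", "arm-6axis-01"): {"id": "arm-6axis-01",
                              "connection": "tcp://10.0.0.2:502"},
}

def configure(hardware_config):
    """Return component parameters found in the information base for each
    (kind, model) entry extracted from the hardware configuration."""
    missing = [key for key in hardware_config if key not in COMPONENT_DB]
    if missing:
        # A real platform would fall back to manual configuration here.
        raise KeyError(f"no stored info for: {missing}")
    return {kind: COMPONENT_DB[(kind, model)]
            for kind, model in hardware_config}
```

The point of the sketch is that camera and mechanical arm parameters configured once (serial ports, identifiers, connection strings, exposure times) are retrieved rather than re-entered for each new project.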
In some embodiments, optionally, a searching module is configured to search the template database for the first object template according to the object information;
wherein the first object template comprises at least one of: two-dimensional object templates, three-dimensional object templates.
In the embodiment of the application, after the object information is extracted, a suitable first object template can be found in the template database based on the object information, so that a target object template for visual guidance can be conveniently generated based on the first object template.
In some embodiments, optionally, the acquiring module 402 is configured to acquire image data, where object features in the image data match the object information;
a generating module 406 is configured to generate a target object template based on the image data and the first object template.
In the embodiment of the application, in the process of modeling the object corresponding to the object information, the target object template can be generated by combining the actually acquired image data with the reused first object template from the template database, which improves the accuracy of the target object template while reducing the operation steps required for modeling.
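As a hypothetical sketch, refining a reused first object template with actually acquired image features could look like this; the simple averaging step stands in for whatever template-fitting procedure the platform actually performs:

```python
def generate_target_template(first_template, image_features):
    """Refine the reused template's nominal feature points with the
    features measured in the acquired image data, reducing the number
    of modeling steps needed for a new scene."""
    refined = []
    for (tx, ty), (mx, my) in zip(first_template["points"], image_features):
        # Blend nominal and measured positions (plain average here; a real
        # system would fit the template to the image, e.g. by registration).
        refined.append(((tx + mx) / 2, (ty + my) / 2))
    return {**first_template, "points": refined}
```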
In some embodiments, optionally, the visual guide device 400 comprises:
and the labeling module is used for responding to the model labeling input and labeling the target object template according to the model labeling information corresponding to the model labeling input.
In the embodiment of the application, after the target object template is generated, a modeling person can label the target object template, thereby ensuring the accuracy of the visual guidance subsequently performed based on the target object template.
In some embodiments, optionally, the acquiring module 402 is configured to acquire visual guidance project information;
the vision guide apparatus 400 includes:
the binding module is used for binding the visual guidance project information with the component parameters of the visual component and the execution component to obtain a target guidance project;
and the storage module is used for binding and storing the target guidance project and the target guidance task.
In this technical scheme, the visual guidance project information is the identification information of a newly created target guidance project, and the target guidance project matches the target guidance task. The target guidance project can be generated by binding the visual guidance project information with the component parameters of the visual component and the execution component; the target guidance project and the target guidance task are then bound and stored, so that the corresponding target guidance task can be executed automatically when the target guidance project is run.
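A minimal sketch of this bind-and-store behavior, with hypothetical names, might be:

```python
def bind_project(project_info, params, task):
    """Bind the visual guidance project identifier to the configured
    component parameters and the generated guidance task, and store the
    binding so that running the project later replays the task."""
    project = {"id": project_info["id"], "params": params, "task": task}
    store = {}
    store[project["id"]] = project  # bind and store for later runs
    return store

def run_project(store, project_id):
    """Running a stored project executes its bound guidance task."""
    project = store[project_id]
    return project["task"]
```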
In one embodiment according to the present application, as shown in fig. 5, a visual guidance device 500 is provided, comprising: a memory 504 storing programs or instructions; and a processor 502 that executes the programs or instructions stored in the memory 504 to implement the steps of the visual guidance method in any of the above embodiments. The device therefore has all the advantages of the visual guidance method in any of the above embodiments, which will not be repeated here.
In one embodiment according to the present application, a readable storage medium is proposed, on which a program or instructions is stored which, when executed by a processor, implement the steps of the visual guidance method as in any of the embodiments described above. Therefore, the method has all the beneficial technical effects of the visual guiding method in any of the above embodiments, and will not be described in detail herein.
In an embodiment according to the present application, a computer program product is presented, comprising a computer program which, when executed by a processor, implements the steps of the visual guidance method as in any of the embodiments described above. Therefore, the method has all the beneficial technical effects of the visual guiding method in any of the above embodiments, and will not be described in detail herein.
In one embodiment according to the present application, a visual guidance platform is presented, comprising: the visual guiding device in any of the above embodiments, and/or the readable storage medium in any of the above embodiments, and/or the computer program product in any of the above embodiments, thus having all the technical advantages of the visual guiding device in any of the above embodiments, and/or the readable storage medium in any of the above embodiments, and/or the computer program product in any of the above embodiments, are not repeated here.
As shown in fig. 6, the visual guidance platform includes a management setting module and an operation display module, and the management setting module includes a project management module, a task management module, a camera management module, a mechanical arm management module, a hand-eye calibration management module and a template management module. The operation display module comprises an operation state display module and a history record display module.
The project management module is used for mechanical arm binding management, camera binding management and project start-stop management. The task management module is used for template binding management. The camera management module is used for managing the component information base of the cameras and for camera settings. The mechanical arm management module is used for managing the component information base of the mechanical arms and for mechanical arm settings. The hand-eye calibration management module is used for two-dimensional/three-dimensional automatic hand-eye calibration and calibration result adjustment. The template management module is used for managing the template database, the two-dimensional editor, the three-dimensional editor and their parameters. The running state display module is used for displaying the current running state result of each point, the hardware communication state, production data statistics and logs. The history record display module is used for displaying the history record of each point, filtered result displays and the history log.
As shown in figs. 7, 8 and 9, the common requirements at the platform level are determined by analyzing the common visual guidance requirements of mechanical arms in various industrial scenarios, and these common requirements are extracted and built into common functions. The shaded portions in figs. 7 to 9 are the portions that differ between scenes, and the white portions are the portions common to different scenes; the common portions can directly reuse the common function modules, while the differing portions are adjusted and customized in a centralized manner.
Specifically, in the assembly scene, the placement points in the templates need to be redetermined, and the placement points of the cameras need to be redetermined.
In the loading and unloading scene of the air compressor, the compressor template and the tray template need to be rebuilt, and after they are built, the labeled points need to be entered again. The grabbing template and the placing template also need to be reselected when a new task is created. When a new project is created, the cameras need to be replaced, and the capture-point camera and the capture-point two-dimensional camera are selected.
In the FCT (functional circuit test) scenario, the template type needs to be redetermined. When a new task is created, the finished-board code is selected and it is confirmed that the associated template needs to be customized again; when a new project is created, the cameras need to be replaced, and the station camera and the secondary positioning camera are selected.
The described methods may be implemented in a variety of different ways depending on the particular features and/or example applications. For example, the methods may be implemented by a combination of hardware, firmware, and/or software. In a hardware implementation, for example, a processor may be implemented within one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, electronic devices, other device units designed to perform the functions described above, and/or a combination thereof.
A computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes: a portable computer floppy disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory card, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, or any suitable combination of the above. A computer readable storage medium, as used herein, should not be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium, or an electrical signal transmitted through a wire.
It should be understood that, in the claims, description and drawings of the present application, the term "plurality" refers to two or more; unless otherwise explicitly defined, orientation or positional terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings, are used merely to facilitate the description of the present application, and do not indicate or imply that the apparatus or element in question must have a particular orientation or be constructed and operated in a particular orientation, so such terms should not be construed as limiting the present application. The terms "connected," "mounted," "secured," and the like are to be construed broadly: for example, a connection may be a fixed connection, a removable connection, or an integral connection, and objects may be connected directly or indirectly through an intermediate medium. The specific meanings of these terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
The descriptions of the terms "one embodiment," "some embodiments," "particular embodiments," and the like in the claims, specification, and drawings of this application mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In the claims, specification and drawings of this application, the schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
Claims (13)
1. A visual guidance method, applied to a visual guidance platform, the visual guidance platform comprising a visual component and an execution component, the visual guidance method comprising:
acquiring hardware configuration information and object information in the task information;
configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base;
generating a target object template according to the object information and a first object template in a template database;
and controlling the visual component and the execution component to execute a target guidance task based on the component parameters, wherein the target guidance task is generated based on the task information and the target object template.
2. The visual guidance method of claim 1, wherein the component parameters include coordinate conversion parameters, and the configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base comprises:
extracting camera configuration information and mechanical arm configuration information in the hardware configuration information;
searching a first coordinate system and a second coordinate system in the component information base according to the camera configuration information and the mechanical arm configuration information, wherein the first coordinate system comprises a camera coordinate system of the vision component, and the second coordinate system comprises a mechanical arm coordinate system of the execution component;
and generating the coordinate conversion parameters according to the first coordinate system and the second coordinate system.
3. The visual guidance method of claim 1, wherein the component parameters include camera information, and the configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base comprises:
configuring the camera information corresponding to the visual component according to camera configuration information in the hardware configuration information, wherein the camera information is information searched in the component information base based on the camera configuration information;
wherein the camera information includes at least one of: camera identification information, camera communication serial port information and exposure time length information.
4. The visual guidance method of claim 1, wherein the component parameters include mechanical arm information, and the configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base comprises:
configuring the mechanical arm information corresponding to the execution component according to mechanical arm configuration information in the hardware configuration information, wherein the mechanical arm information is information found in the component information base according to the mechanical arm configuration information;
wherein the robot arm information includes at least one of: mechanical arm identification information and mechanical arm communication connection information.
5. The visual guidance method according to any one of claims 1 to 4, characterized in that, before the generating a target object template according to the object information and a first object template in a template database, the method comprises:
searching the first object template in the template database according to the object information;
wherein the first object template comprises at least one of: two-dimensional object templates, three-dimensional object templates.
6. The visual guidance method according to any one of claims 1 to 4, wherein the generating a target object template according to the object information and a first object template in a template database comprises:
acquiring image data, wherein object characteristics in the image data are matched with the object information;
the target object template is generated based on the image data and the first object template.
7. The visual guidance method according to claim 6, characterized in that, after the generating the target object template based on the image data and the first object template, the method comprises:
and responding to the model annotation input, and annotating the target object template according to the model annotation information corresponding to the model annotation input.
8. The visual guidance method of any one of claims 1 to 4, wherein before the controlling the visual component and the execution component to execute a target guidance task based on the component parameters, the method further comprises:
acquiring visual guidance project information;
binding the visual guidance project information with the component parameters of the visual component and the execution component to obtain a target guidance project;
and binding and storing the target guidance project and the target guidance task.
9. A visual guidance apparatus for use with a visual guidance platform, the visual guidance platform comprising a visual component and an execution component, the visual guidance apparatus comprising:
The acquisition module is used for acquiring hardware configuration information and object information in the task information;
the configuration module is used for configuring component parameters of the visual component and the execution component according to the hardware configuration information and the component information in the component information base;
the generating module is used for generating a target object template according to the object information and a first object template in the template database;
and the control module is used for controlling the visual component and the execution component to execute a target guidance task based on the component parameters, wherein the target guidance task is generated based on the task information and the target object template.
10. A visual guide device, comprising:
a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method of any one of claims 1 to 8.
11. A readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to any of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the visual guidance method of any one of claims 1 to 8.
13. A vision guidance platform, comprising:
the visual guide device of claim 9 or 10; and/or
The readable storage medium of claim 11; and/or
The computer program product of claim 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311570535.8A CN117549301A (en) | 2023-11-23 | 2023-11-23 | Visual guidance method, visual guidance device, readable storage medium and visual guidance platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117549301A true CN117549301A (en) | 2024-02-13 |
Family
ID=89816437
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||