CN113778414A - Machine vision communication script generation method and device based on graphical programming - Google Patents


Info

Publication number
CN113778414A
CN113778414A
Authority
CN
China
Prior art keywords
graph
building block
communication
robot
data
Prior art date
Legal status
Pending
Application number
CN202111333650.4A
Other languages
Chinese (zh)
Inventor
王建民
李仲效
姜宇
Current Assignee
Shenzhen Yuejiang Technology Co Ltd
Original Assignee
Shenzhen Yuejiang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yuejiang Technology Co Ltd
Priority to CN202111333650.4A
Publication of CN113778414A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/30: Creation or generation of source code
    • G06F 8/34: Graphical or visual programming
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486: Drag-and-drop

Abstract

The application provides a machine vision communication script generation method and device based on graphical programming, relating to the technical field of graphical programming. The method comprises the following steps: in response to a user's drag operations on building block graphs, building a communication graph combination on an editing interface, wherein the communication graph combination describes the data interaction logic by which a robot controls a first vision sensor to acquire position information of a target object; in response to the user's input operations in the communication graph combination, determining the configuration data required to realize the data interaction logic; and parsing the communication graph combination and the configuration data to obtain a script file that realizes the data interaction logic. The method and device can solve the problem that script files for machine vision communication are difficult to program.

Description

Machine vision communication script generation method and device based on graphical programming
Technical Field
The application relates to the technical field of graphical programming, and in particular to a machine vision communication script generation method and device based on graphical programming.
Background
With the continuous development of automation technology, robots are becoming increasingly intelligent. An intelligent vision sensor is a machine vision device with image acquisition, image processing, and information transmission functions. Through an intelligent vision sensor, a robot can acquire the position information of a target object in its surroundings and, based on that position information, realize functions such as target positioning and object grabbing. At present, the script files for data interaction between a robot and an intelligent vision sensor usually have to be coded by professionals in a programming language. The programming difficulty is high, and most users without a professional computing background cannot write such script files quickly and accurately.
Disclosure of Invention
The embodiments of the application provide a machine vision communication script generation method and device based on graphical programming, which can solve the problem that script files for data interaction between a robot and a vision sensor are difficult to program.
In a first aspect, an embodiment of the present application provides a machine vision communication script generation method based on graphical programming, including: in response to a user's drag operations on building block graphs, building a communication graph combination on an editing interface, wherein the communication graph combination describes the data interaction logic by which a robot controls a first vision sensor to acquire position information of a target object; in response to the user's input operations in the communication graph combination, determining the configuration data required to realize the data interaction logic; and parsing the communication graph combination and the configuration data to obtain a script file that realizes the data interaction logic.
In the machine vision script generation method based on graphical programming provided by the application, building block graphs related to machine vision are provided in the programming interface of the graphical programming software. The user drags these building block graphs onto the editing interface, builds a communication graph combination according to the data interaction logic to be realized, and enters the relevant parameters, after which the corresponding script file can be obtained. Throughout the process the user writes no code in a professional programming language: the script file for the data interaction logic between the robot and the vision sensor is obtained simply by assembling the building blocks and configuring the data, which solves the problem that such script files are difficult to program.
Optionally, the communication graph combination includes: a sending building block graph and a receiving building block graph;
the sending building block graph comprises a first edit bar and a second edit bar, and after the identifier of the first vision sensor is input in the first edit bar and the label of the target object is input in the second edit bar, the sending building block graph represents the process in which the robot sends the label to the first vision sensor and instructs the first vision sensor to acquire the position information of the target object according to the label; the receiving building block graph comprises a third edit bar and a type edit bar, and after the identifier is input in the third edit bar and the data type is input in the type edit bar, the receiving building block graph represents the process in which the robot receives the position information and converts it according to the data type.
Optionally, parsing the communication graph combination and the configuration data includes: calling the prestored configuration parameters of the first vision sensor according to the identifier, and parsing the communication graph combination, the configuration parameters, and the configuration data.
Optionally, before the communication graph combination is built on the editing interface in response to the user's drag operations on the building block graphs, the method further includes: displaying a management interface, wherein the management interface comprises a programming control and a process control; displaying the editing interface in response to a first click operation received by the programming control; displaying a process interface in response to a second click operation received by the process control, wherein the process interface comprises a visual control; and displaying a visual configuration interface in response to a third click operation received by the visual control, and determining and saving the configuration parameters in response to the user's setting operations on the visual configuration interface.
Optionally, the configuration parameters include a network connection mode, a trigger mode, a transmission mode of the position information, and/or a transmission format of the position information.
Optionally, the communication graph combination further includes: a group-number building block graph and a data building block graph;
the group-number building block graph is built after the receiving building block graph, and represents that the position information comprises m groups of coordinate information, each group of coordinate information comprising n numerical values corresponding to n directions, with n ≥ 2 and m ≥ 1; the data building block graph comprises a group-number edit bar and an item-number edit bar, and after a numerical value i is input in the group-number edit bar and a numerical value j is input in the item-number edit bar, the data building block graph represents the j-th numerical value in the i-th group of coordinate information in the position information, with 0 < i ≤ m and 0 < j ≤ n.
Optionally, the communication graph combination further comprises a connecting building block graph; the connecting building block graph comprises a fourth edit bar, and after the identifier is input in the fourth edit bar, the connecting building block graph represents the process of establishing a communication connection between the robot and the first vision sensor.
Optionally, the communication graph combination further comprises a closing building block graph; the closing building block graph comprises a fifth edit bar, and after the identifier is input in the fifth edit bar, the closing building block graph represents the process in which the robot closes the communication connection with the first vision sensor.
Optionally, the communication graph combination further comprises a triggering building block graph; the triggering building block graph comprises a sixth edit bar, and after the identifier is input in the sixth edit bar, the triggering building block graph represents the process in which the robot triggers the first vision sensor to acquire the position information of the target object.
In a second aspect, an embodiment of the present application provides a machine vision communication script generation apparatus based on graphical programming, including: a building unit, configured to build a communication graph combination on an editing interface in response to a user's drag operations on building block graphs, wherein the communication graph combination describes the data interaction logic by which the robot controls the first vision sensor to acquire position information of the target object; an input unit, configured to determine, in response to the user's input operations in the communication graph combination, the configuration data required to realize the data interaction logic; and a parsing unit, configured to parse the communication graph combination and the configuration data to obtain a script file of the data interaction logic.
In a third aspect, embodiments of the present application provide a robot, including a robot arm, a first vision sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the robot arm and the first vision sensor are respectively connected to the processor, and the processor executes the computer program to implement the method according to any one of the first aspects.
In a fourth aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method according to any one of the first aspect is implemented.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method according to any one of the above first aspects.
In a sixth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the first aspect.
It is understood that the beneficial effects of the second to sixth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a management interface of graphical programming software provided by an embodiment of the present application;
FIG. 2 is a process interface of graphical programming software provided by an embodiment of the present application;
FIG. 3 is a visual configuration interface of graphical programming software provided by an embodiment of the present application;
FIG. 4 is a management interface of graphical programming software provided by another embodiment of the present application;
FIG. 5 is a programming interface of graphical programming software provided by an embodiment of the present application;
FIG. 6 is a flowchart of a machine vision communication script generation method based on graphical programming provided by an embodiment of the present application;
FIG. 7 is an editing interface on which a communication graph combination has been built, provided by an embodiment of the present application;
FIG. 8 is an editing interface showing a communication graph combination with configuration data, provided by an embodiment of the present application;
FIG. 9 is a script file of the data interaction logic displayed in the graphical programming software, provided by an embodiment of the present application;
FIG. 10 is an operation flowchart in which a user builds a communication graph combination and obtains a script file, provided by an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a machine vision communication script generation apparatus based on graphical programming provided by an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
An intelligent vision sensor is a machine vision device with image acquisition, image processing, and information transmission functions. Through an intelligent vision sensor, a robot can acquire the position information of a target object in its surroundings and, based on that position information, realize functions such as target positioning and object grabbing. At present, the script files for data interaction between a robot and an intelligent vision sensor usually have to be coded by professionals in a programming language. The programming difficulty is high, and most users without a professional computing background cannot write such script files quickly and accurately.
To solve this technical problem, embodiments of the present application provide a machine vision communication script generation method and apparatus based on graphical programming. Building block graphs corresponding to the data interaction logic by which a robot controls a vision sensor to acquire the position information of a target object are provided in the graphical programming software. A user can drag the required building block graphs onto an editing interface, build a communication graph combination according to the preset data interaction logic, and perform data configuration on each building block graph. The graphical programming software then automatically parses the communication graph combination and the configured data to obtain the script file corresponding to the data interaction logic. The user does not program at all: the script file is obtained simply by assembling the building blocks and configuring the data. The operation is convenient, and it solves the problem that script files for data interaction between a robot and a vision sensor are difficult to program.
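As a rough illustration only (this sketch is not part of the patent; the template table and block list are invented for explanation, while the called functions InitCam, TriggerCam, and SendCam do appear in the generated script shown later in this description), the parsing step can be pictured as mapping each building block onto a script-line template and emitting one line per block in build order:

-- Hypothetical sketch of the block-to-script mapping (Lua 5.2+)
local templates = {
    connect = 'resultInit=InitCam("%s")',
    trigger = 'TriggerCam("%s")',
    send    = 'SendCam("%s","%s")',
}
local blocks = {                              -- blocks in the order they were built
    { kind = "connect", args = { "CAM0" } },
    { kind = "trigger", args = { "CAM0" } },
    { kind = "send",    args = { "CAM0", "5;" } },
}
local lines = {}
for _, b in ipairs(blocks) do
    lines[#lines + 1] = string.format(templates[b.kind], table.unpack(b.args))
end
print(table.concat(lines, "\n"))              -- the emitted script fragment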
The technical solution of the present application is described in detail below with reference to the accompanying drawings. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The application provides graphical programming software in which the building block graphs required when a robot controls a vision sensor to acquire the position information of a target object are provided. A user can drag these building block graphs onto an editing interface to build a communication graph combination, which expresses the data interaction logic by which the robot controls the vision sensor to acquire the position information of the target object. The user can input corresponding data in the edit bar of each building block graph, thereby completing the data configuration of each building block graph. In addition, the configuration parameters of the vision sensor can be preset and saved by the user, so that the corresponding configuration parameters can be called directly when the building block graphs in the communication graph combination are configured.
For example, FIGS. 1-5 are interface diagrams of the graphical programming software provided by an embodiment of the present application. The graphical programming software may be Scratch, Makecode, Mixly, MBlock, Mind+, or another tool. After the user launches the graphical programming software, the management interface (i.e., the home page) shown in FIG. 1 can be displayed. A programming control and a process control are displayed on the management interface.
In one embodiment, after the user clicks the process control, a process interface such as that shown in FIG. 2 may be displayed. The process interface includes a visual control and other controls; after the user clicks the visual control, a visual configuration interface as shown in FIG. 3 is displayed. The visual configuration interface comprises several setting modules, which are respectively used to set the identifier of the vision sensor, the trigger mode for triggering the vision sensor to acquire the position information of the target object, the network connection mode between the robot and the vision sensor, the receiving mode of the position information, and the transmission format of the position information. The user can preset the corresponding configuration parameters in each setting module according to actual requirements and save them.
For example, the trigger mode may be I/O triggering or network triggering; the I/O trigger index may be 0 or 1, or other data, and the network trigger index may be "0, 0, 0, 0". The setting module corresponding to the network connection mode includes the network mode, the listening port, and the connection timeout time; the network mode may be TCP. The receiving mode may be blocking (with a blocking duration) or non-blocking. The network receiving format is used to set the transmission format of the position information. The position information includes at least one group of coordinate information, and each group of coordinate information generally contains two- or three-dimensional numerical values. The numerical values within each group of coordinate information are separated by a first preset symbol, and the groups of coordinate information are separated by a second preset symbol, the first preset symbol being different from the second preset symbol. The preset symbols may be semicolons, commas, slashes, or other symbols.
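As an illustration of this transmission format (a minimal sketch, not part of the patent, assuming a comma as the first preset symbol and a semicolon as the second, as in the configuration example later in this description), a received string can be split into coordinate groups as follows:

-- Hypothetical parsing of "value,value,value;value,value,value;"
local raw = "10.5,20.0,3.2;11.0,21.5,3.0;"        -- invented example data
local groups = {}
for group in string.gmatch(raw, "([^;]+);") do     -- second symbol separates groups
    local coords = {}
    for value in string.gmatch(group, "[^,]+") do  -- first symbol separates values
        coords[#coords + 1] = tonumber(value)
    end
    groups[#groups + 1] = coords
end
-- Afterwards groups[1][1] == 10.5 and groups[2][3] == 3.0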
In another embodiment, after the user clicks the programming control in FIG. 4, the programming interface shown in FIG. 5 may be displayed. The programming interface includes a first display interface, a second display interface, and an editing interface. The first display interface contains icons and names for various categories, such as event, control, vision, and motion types.
After the user clicks the icon or name of any category in the first display interface, all the building block graphs of that category are displayed in the second display interface. For example, as shown in FIG. 5, after the user clicks the icon of the event type, the start-running building block graph included in the event type is displayed in the second display interface. After the user clicks the icon of the control type, the building block graphs with loop functions, condition judgment functions, and other functions included in the control type are displayed in the second display interface; FIG. 5 shows only the condition judgment building block graph corresponding to an if statement and the loop building block graph corresponding to a while statement.
After the user clicks the icon of the vision type, the second display interface displays the connecting building block graph, the connection-confirmation building block graph, the triggering building block graph, the sending building block graph, the receiving building block graph, the reception-confirmation building block graph, the group-number building block graph, the data building block graph, and the closing building block graph.
The sending building block graph comprises a first edit bar and a second edit bar. After the user inputs the identifier of the first vision sensor in the first edit bar and the label of the target object in the second edit bar, the sending building block graph represents the process in which the robot sends the label to the first vision sensor and instructs it to acquire the position information of the target object according to the label. For example, the second edit bar may contain the label of a single object or the labels of a plurality of objects.
The receiving building block graph comprises a third edit bar and a type edit bar. After the user inputs the identifier in the third edit bar and the data type in the type edit bar, the receiving building block graph represents the process in which the robot receives the position information and converts it according to the data type. Illustratively, the data type may be string, double, or number.
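For reference, in the script generated later in this description, the receiving building block graph is parsed into a call of the following form (the comments are added here for explanation; status codes other than those handled in the listing are not specified in the source):

resultRecv, visionNum, visionData = RecvCam("CAM0", "string")
-- resultRecv : receive status (the listing treats 1 as timeout, 2 as format
--              configuration error, 3 as network disconnect)
-- visionNum  : number of coordinate groups received
-- visionData : the position data, indexed as visionData[group][item]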
The group-number building block graph is built after the receiving building block graph and represents that the position information comprises m groups of coordinate information, each group comprising n numerical values corresponding to n directions, where n is a positive integer greater than or equal to 2 and m is a positive integer greater than or equal to 1.
The data building block graph comprises a group-number edit bar and an item-number edit bar. After the user inputs a numerical value i in the group-number edit bar and a numerical value j in the item-number edit bar, the data building block graph represents the j-th numerical value in the i-th group of coordinate information in the position information, where 0 < i ≤ m and 0 < j ≤ n. The data building block graph may be built either before or after the group-number building block graph.
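In the generated script this indexing appears as a plain double subscript; a short sketch, using the variable names from the listing below:

-- The j-th numerical value of the i-th coordinate group,
-- with 0 < i <= m (groups) and 0 < j <= n (values per group)
local v = visionData[i][j]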
The connecting building block graph comprises a fourth edit bar. After the user inputs the identifier in the fourth edit bar, the connecting building block graph represents the process of establishing a communication connection between the robot and the first vision sensor according to the preset network connection mode. The closing building block graph comprises a fifth edit bar. After the identifier is input in the fifth edit bar, the closing building block graph represents the process in which the robot closes the communication connection with the first vision sensor.
The triggering building block graph comprises a sixth edit bar. After the user inputs the identifier in the sixth edit bar, the triggering building block graph represents the process in which the robot sends a trigger signal to the first vision sensor according to the preset trigger mode, thereby triggering the first vision sensor to acquire the position information of the target object.
The connection-confirmation building block graph is used to confirm the result of the connection between the robot and the first vision sensor, namely connection success or connection failure. The reception-confirmation building block graph is used to confirm the result of the robot receiving the position information, namely reception success or reception failure.
Based on the above graphical programming software, the application provides a machine vision communication script generation method based on graphical programming. An exemplary method, shown in FIG. 6, includes the following steps:
and S101, responding to the dragging operation of a user on the building block graph, and building a communication graph combination on an editing interface, wherein the communication graph combination is used for describing data interaction logic of the robot for controlling the first vision sensor to acquire the position information of the target object.
For example, suppose the robot is to grab apples in a scene. The robot needs to establish a TCP communication connection with the first vision sensor; then send the label corresponding to apples to the first vision sensor and trigger it to acquire the position information of the apples in the scene; after the first vision sensor transmits the position information to the robot, the robot controls the robot arm to grab the apples according to the coordinate information in the position information; and finally the robot closes the communication connection with the first vision sensor.
To implement this data interaction logic, the user determines the building block graphs required in the communication graph combination, opens the editing interface, drags the required building block graphs onto it, and completes the building of the communication graph combination.
For example, as shown in FIG. 7, the user may drag the start-running building block graph from the second display interface onto the editing interface and then, below it, build the connecting building block graph, the connection-confirmation building block graph, and the condition judgment building block graph, embedding within the condition judgment building block graph a loop building block graph, the triggering building block graph, the sending building block graph, the receiving building block graph, the reception-confirmation building block graph, the group-number building block graph, the motion building block graphs, and the data building block graphs, thereby obtaining the communication graph combination shown in FIG. 7.
S102, in response to the user's input operations in the communication graph combination, determining the configuration data required to realize the data interaction logic.
The required configuration parameters can be determined from the preset data interaction logic between the robot and the vision sensor. Therefore, before building the communication graph combination on the editing interface, the user can open the visual configuration interface and preset and save the configuration parameters of the first vision sensor there. After the communication graph combination has been built, the user performs input operations on each building block graph in the combination, thereby determining the configuration data required for the data interaction between the robot and the first vision sensor.
By way of example and not limitation, the data configuration process of the communication graph combination is described below, taking the data interaction logic between the robot and the first vision sensor as an example.
As shown in FIG. 3, the identifier of the first vision sensor (i.e., the camera name) "CAM0" is input in the identifier setting module of the visual configuration interface. The network mode "TCP_server", the listening port number "6001", and the connection timeout time "0" are input in the network connection mode setting module. In the receiving mode setting module, the transmission mode of the position information is set to blocking and the blocking time is set to 0. In the network receiving format setting module, a comma is input at D1, a comma at D2, and a semicolon at D3, indicating that the coordinate information contains 3-dimensional numerical values: the first and second values are separated by a comma, the second and third values by a comma, and the third value is followed by a semicolon. The user clicks the save button in the visual configuration interface, and the configuration parameters of the first vision sensor are saved.
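Purely as an illustration (this representation is not specified in the patent, and the field names are invented), the saved parameters can be pictured as a record keyed by the sensor identifier, which the parser looks up whenever "CAM0" appears in an edit bar:

-- Hypothetical in-memory form of the saved configuration parameters
local sensorConfigs = {
    CAM0 = {
        networkMode = "TCP_server",       -- network mode
        port        = 6001,               -- listening port number
        timeout     = 0,                  -- connection timeout time
        receiving   = "blocking",         -- transmission mode of the position information
        blockTime   = 0,                  -- blocking time
        separators  = { ",", ",", ";" },  -- symbols entered at D1, D2, D3
    },
}
local cfg = sensorConfigs["CAM0"]         -- lookup by identifier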
Referring to FIG. 8, consider the connecting building block graph in the communication graph combination. The identifier "CAM0" of the first vision sensor is input in its fourth edit bar. The connecting building block graph can then call the prestored configuration parameters of the first vision sensor, so that a TCP communication connection is established between the robot and the first vision sensor.
The data "0" is input in the edit bar of the condition judgment building block graph, where 0 denotes a successful connection. The graph group built from the connecting building block graph, the condition judgment building block graph, and the loop building block graph thus represents that, after the TCP communication connection between the robot and the first vision sensor is established successfully, the functions corresponding to all the building block graphs embedded in the loop building block graph are executed.
For the triggering building block graph, the identifier "CAM0" is input in its sixth edit bar. The triggering building block graph represents that the robot sends the trigger signal "0" to the first vision sensor according to the prestored configuration parameters.
For the sending building block graph, the identifier "CAM0" is input in its first edit bar and the label "5" corresponding to apples is input in its second edit bar. The sending building block graph represents that the robot sends the label 5 to the first vision sensor to instruct it to acquire the position information of the apples corresponding to label 5.
For the receiving building block graph, the identifier "CAM0" is input in its third edit bar and the data type "string" in its type edit bar. Assuming there are two apples in the scene captured by the first vision sensor, the first vision sensor transmits the coordinate information of the two apples to the robot, and the group-number building block graph indicates that the position information acquired by the robot comprises 2 groups of coordinate information, each group containing 3 numerical values.
Consider the graph group built from the loop building block graph, the group-number building block graph, three first motion building block graphs, one second motion building block graph, and three data building block graphs. For the first data building block graph, "i" is input in the group-number edit bar and "1" in the item-number edit bar; for the second, "i" and "2"; for the third, "i" and "3"; and the initial value of i is set to 1. "P1" and "X" are input in the two edit bars of the first of the first motion building block graphs, "P1" and "Y" in those of the second, and "P1" and "Z" in those of the third; the motion mode "MovJ" and "P1" are input in the two edit bars of the second motion building block graph. This graph group indicates that, starting from the first of the 2 groups of coordinate information, the three values of each group are assigned in turn to point P1, so that the robot can determine the coordinate positions of the two apples in the scene and control the robot arm to move, in the set motion mode, to the two successive positions of point P1 and grab the two apples in turn.
It should be noted that steps S101 and S102 may be executed one after the other or alternately. For example, the user may first finish building the communication graph combination and then perform the input operation on each building block graph; or the user may drag one building block graph onto the editing interface, complete its input operation, drag the next building block graph, and so on, until the communication graph combination is built and the input operation of every building block graph is completed.
In addition, for each edit bar in each building block graph and each setting module in the visual configuration interface, the user can type the corresponding data via the keyboard. Embedded plug-ins, such as drop-down combo buttons or sliders, may also be provided in an edit bar or setting module. If a drop-down combo button (for example, an inverted-triangle button) is provided, clicking it displays a list of options, and the graphical programming software responds to the user's click on one of the options by displaying the selected data in the corresponding edit bar or setting module.
S103, parsing the communication graph combination and the configuration data to obtain a script file realizing the data interaction logic.
In the embodiment of the application, after the user has built the communication graph combination on the editing interface and completed the input operation on each building block graph, the communication graph combination with configuration data shown in FIG. 8 is obtained. The graphical programming software calls the prestored configuration parameters of the first vision sensor according to the identifier, and parses the communication graph combination, the configuration parameters, and the configuration data to obtain the code for data interaction between the robot and the first vision sensor. The code is as follows:
-- Establish a TCP connection with the first vision sensor "CAM0"
resultInit = InitCam("CAM0")
if resultInit == 0 then
    print("Connect camera success!")
else
    print("Connect camera failed, code:", resultInit)
end
print(resultInit)
if resultInit == 0 then
    while 1 do
        -- Trigger the sensor and send the label "5" (apples)
        TriggerCam("CAM0")
        SendCam("CAM0", "5;")
        -- Receive the position information as a string
        resultRecv, visionNum, visionData = RecvCam("CAM0", "string")
        if resultRecv == 1 then
            print("Data receive timeout!")
        elseif resultRecv == 2 then
            print("Data format configuration error!")
        elseif resultRecv == 3 then
            print("Network disconnect")
        end
        print(resultRecv)
        -- Walk through the visionNum coordinate groups, assign each group
        -- to point P1, and move the robot arm there
        i = 1
        while not (visionNum < i) do
            P1.coordinate[1] = visionData[i][1]
            P1.coordinate[2] = visionData[i][2]
            P1.coordinate[3] = visionData[i][3]
            GO(P1)
            Sync(1)
            i = i + 1
        end
    end
end
Optionally, after the user triggers a first operation, a program interface may be displayed, showing the script file corresponding to the data interaction logic, i.e., the specific content of the code. For example, referring to FIG. 8, after the user clicks the "debug" button in the editing interface of the graphical programming software, the program interface shown in FIG. 9 may be displayed, in which the code of the script file is shown; clicking the "run" button runs the code in the script file. In addition, the user can click the save button in the interface of the graphical programming software to save the script file and the communication graph combination.
In the machine vision script generation method based on graphical programming provided by the application, building block graphs related to machine vision are provided in the programming interface of the graphical programming software. The user drags the building block graphs onto the editing interface, builds a communication graph combination according to the data interaction logic to be realized, and enters the relevant parameters. The user can also preset configuration parameters, such as the identifier of the vision sensor, in the visual configuration interface. The graphical programming software calls the preset configuration parameters according to the identifier and automatically parses the communication graph combination, the configuration parameters, and the parameters entered in each building block graph to obtain the corresponding script file. Throughout the process the user writes no code in a professional programming language; the script file for the data interaction logic between the robot and the vision sensor is obtained simply by assembling the building blocks and configuring the data, which solves the problem that such script files are difficult to program.
Referring to FIG. 10, an operation flowchart is provided in which a user builds a communication graph combination and obtains a script file based on the graphical programming software. The specific steps are as follows:
S201, the user drags the first building block graph, together with the building block graphs required for the robot to control the first vision sensor to acquire the position information of the target object, from the second display interface onto the editing interface, and builds a communication graph combination on the editing interface.
Illustratively, the first building block graph is the start-running building block graph.
S202, according to the data interaction logic between the robot and the first vision sensor, the user prestores the configuration parameters of the first vision sensor and sets the corresponding configuration data in the edit bars of each building block graph.
S203, the user triggers the first operation, and the script file of the data interaction logic is displayed.
For the details of steps S201 to S203, reference may be made to the description of the machine vision communication script generation method based on graphical programming above, which is not repeated here.
Based on the same inventive concept, an embodiment of the application further provides a machine vision communication script generation apparatus based on graphical programming. As shown in FIG. 11, the script generation apparatus 300 includes a building unit 301, an input unit 302, and a parsing unit 303.
The building unit 301 is configured to build a communication graph combination on the editing interface in response to the user's drag operations on building block graphs; the communication graph combination describes the data interaction logic by which the robot controls the first vision sensor to acquire the position information of the target object.
The input unit 302 is configured to determine, in response to the user's input operations in the communication graph combination, the configuration data required to realize the data interaction logic.
The parsing unit 303 is configured to parse the communication graph combination and the configuration data to obtain the script file of the data interaction logic.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. For the specific implementation of each unit, reference may be made to the descriptions in the other embodiments above, which are not repeated here.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The embodiment of the application also provides a robot. The robot includes a robotic arm, a first vision sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor. The mechanical arm and the first vision sensor are respectively connected with the processor. The steps of the above described method embodiments may be implemented by a processor executing a computer program. Illustratively, the first visual sensor may be a camera.
The embodiment of the application also provides the terminal equipment. As shown in fig. 12, the terminal apparatus 400 includes: at least one processor 403, a memory 401, and a computer program 402 stored in the memory 401 and executable on the at least one processor 403, wherein the processor 403 executes the computer program 402 to implement the graphical programming based machine vision communication script generation method provided by the present application.
The embodiments of the present application also provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments can be implemented.
An embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to implement the steps in the above method embodiments.
Reference throughout this application to "one embodiment" or "some embodiments," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In addition, in the present application, unless otherwise explicitly specified or limited, the terms "connect", "connected", and the like are to be construed broadly: for example, a connection may be mechanical or electrical; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. Unless otherwise specifically defined, the specific meaning of these terms in the present application can be understood by those skilled in the art according to the specific situation.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A machine vision communication script generation method based on graphical programming, characterized by comprising the following steps:
in response to a user's drag operations on building block graphs, building a communication graph combination on an editing interface, wherein the communication graph combination describes the data interaction logic by which a robot controls a first vision sensor to acquire position information of a target object;
in response to the user's input operations in the communication graph combination, determining the configuration data required to realize the data interaction logic; and
parsing the communication graph combination and the configuration data to obtain a script file that realizes the data interaction logic.
2. The method of claim 1, wherein the communication graph combination comprises: a sending building block graph and a receiving building block graph;
the sending building block graph comprises a first edit bar and a second edit bar, and after the identifier of the first vision sensor is input in the first edit bar and the label of the target object is input in the second edit bar, the sending building block graph represents the process in which the robot sends the label to the first vision sensor and instructs the first vision sensor to acquire the position information of the target object according to the label;
the receiving building block graph comprises a third edit bar and a type edit bar, and after the identifier is input in the third edit bar and the data type is input in the type edit bar, the receiving building block graph represents the process in which the robot receives the position information and converts it according to the data type.
3. The method of claim 2, wherein parsing the communication graph combination and the configuration data comprises:
calling the prestored configuration parameters of the first vision sensor according to the identifier, and parsing the communication graph combination, the configuration parameters, and the configuration data.
4. The method of claim 3, wherein before the communication graph combination is built on the editing interface in response to the user's drag operations on the building block graphs, the method further comprises:
displaying a management interface, wherein the management interface comprises a programming control and a process control;
displaying the editing interface in response to a first click operation received by the programming control;
displaying a process interface in response to a second click operation received by the process control, wherein the process interface comprises a visual control; and
displaying a visual configuration interface in response to a third click operation received by the visual control, and determining and saving the configuration parameters in response to the user's setting operations on the visual configuration interface.
5. The method according to claim 3 or 4, wherein the configuration parameters comprise a network connection mode, a trigger mode, a transmission mode of the position information, and/or a transmission format of the position information.
6. The method of claim 5, wherein the communication graph combination further comprises: a group-number building block graph and a data building block graph;
the group-number building block graph is built after the receiving building block graph, and represents that the position information comprises m groups of coordinate information, each group of coordinate information comprising n numerical values corresponding to n directions, with n ≥ 2 and m ≥ 1;
the data building block graph comprises a group-number edit bar and an item-number edit bar, and after a numerical value i is input in the group-number edit bar and a numerical value j is input in the item-number edit bar, the data building block graph represents the j-th numerical value in the i-th group of coordinate information in the position information, with 0 < i ≤ m and 0 < j ≤ n.
7. The method of claim 5, wherein the communication graph combination further comprises a connecting building block graph;
the connecting building block graph comprises a fourth edit bar, and after the identifier is input in the fourth edit bar, the connecting building block graph represents the process of establishing a communication connection between the robot and the first vision sensor.
8. The method of claim 5, wherein the communication graph combination further comprises a closing building block graph;
the closing building block graph comprises a fifth edit bar, and after the identifier is input in the fifth edit bar, the closing building block graph represents the process in which the robot closes the communication connection with the first vision sensor.
9. The method of any of claims 6 to 8, wherein the communication graph combination further comprises: a triggering building block graph;
the triggering building block graph comprises a sixth edit bar, and after the identifier is input in the sixth edit bar, the triggering building block graph represents the process in which the robot triggers the first vision sensor to acquire the position information of the target object.
10. A machine vision communication script generation apparatus based on graphical programming, comprising:
a building unit, configured to build a communication graph combination on an editing interface in response to a user's drag operations on building block graphs, wherein the communication graph combination describes the data interaction logic by which the robot controls the first vision sensor to acquire position information of the target object;
an input unit, configured to determine, in response to the user's input operations in the communication graph combination, the configuration data required to realize the data interaction logic; and
a parsing unit, configured to parse the communication graph combination and the configuration data to obtain a script file of the data interaction logic.
11. A robot comprising a robotic arm, a first vision sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor, the robotic arm and the first vision sensor being respectively connected to the processor, the processor implementing the method of any one of claims 1 to 9 when executing the computer program.
12. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 9 when executing the computer program.
13. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN202111333650.4A (priority and filing date: 2021-11-11): Machine vision communication script generation method and device based on graphical programming. Status: Pending. Publication: CN113778414A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111333650.4A CN113778414A (en) 2021-11-11 2021-11-11 Machine vision communication script generation method and device based on graphical programming


Publications (1)

Publication Number Publication Date
CN113778414A, published 2021-12-10

Family

ID=78956936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111333650.4A Pending CN113778414A (en) 2021-11-11 2021-11-11 Machine vision communication script generation method and device based on graphical programming

Country Status (1)

Country Link
CN (1) CN113778414A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657571A (en) * 2022-12-26 2023-01-31 广东群宇互动科技有限公司 Intelligent toy production method, system, platform and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1727839A (en) * 2004-07-28 2006-02-01 发那科株式会社 Method of and device for re-calibrating three-dimensional visual sensor in robot system
CN101973032A (en) * 2010-08-30 2011-02-16 东南大学 Off-line programming system and method of optical visual sensor with linear structure for welding robot
CN104503754A (en) * 2014-12-16 2015-04-08 江南大学 Programming and compiling design method in robot graphical programming system
CN105843630A (en) * 2016-06-08 2016-08-10 江西洪都航空工业集团有限责任公司 Method for graphical programming development based on robot
CN108406764A (en) * 2018-02-02 2018-08-17 上海大学 Intelligence style of opening service robot operating system and method
CN110573308A (en) * 2017-04-17 2019-12-13 西门子股份公司 mixed reality assisted space programming for robotic systems
CN110900061A (en) * 2019-12-13 2020-03-24 遨博(北京)智能科技有限公司 Welding process block generation method and device, welding robot and storage medium
CN111240661A (en) * 2020-01-06 2020-06-05 腾讯科技(深圳)有限公司 Programming page display method and device, storage medium and computer equipment
CN111421517A (en) * 2020-01-03 2020-07-17 武汉智美科技有限责任公司 Programming education robot that intelligent terminal strengthened
CN111580806A (en) * 2020-04-10 2020-08-25 天津大学 Collaborative robot graphical programming system
CN112130570A (en) * 2020-09-27 2020-12-25 重庆大学 Blind guiding robot of optimal output feedback controller based on reinforcement learning


Similar Documents

Publication Publication Date Title
US9122269B2 (en) Method and system for operating a machine from the field of automation engineering
CN103927253A (en) Multiple browser compatibility testing method and system
CN102855135A (en) Graphical component-based sensing network development platform and method
CN111857470B (en) Unattended control method and device for production equipment and controller
CN112276943A (en) Robot teaching control method, teaching control system, computer device, and medium
CN113778414A (en) Machine vision communication script generation method and device based on graphical programming
CN107370823A (en) Data acquisition and long-range control method, device and computer-readable recording medium
CN111400184A (en) Game testing method, device, system, equipment and cloud platform
CN105138419A (en) Set value restoring system
CN102929159B (en) State control method and device for simulation model
CN113778415A (en) ModBus communication script generation method and device based on graphical programming
CN112799656B (en) Script file configuration method, device, equipment and storage medium for automation operation
CN111694637B (en) Online full-automatic multi-agent control simulation compiling system
CN104951214A (en) Information processing method and electronic equipment
CN111556993A (en) Electronic product testing system and method
CN114625253A (en) Interaction method, interaction device and storage medium
CN114095343A (en) Disaster recovery method, device, equipment and storage medium based on double-active system
CN114139731A (en) Longitudinal federated learning modeling optimization method, apparatus, medium, and program product
CN113778418A (en) Multithreading script generation method and device based on graphical programming
CN113094132A (en) Remote checking robot history backtracking method, device, terminal and storage medium
CN112068756A (en) Steering engine debugging method, device, equipment and storage medium
JP2009020716A (en) Tool device and method for creating message transmission program
CN110554966A (en) Drive debugging method, behavior analysis method and drive debugging system
TWI767590B (en) Device and method for robotic process automation of multiple electronic computing devices
EP4257302A1 (en) Apparatus and method for updating a group of robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination