CN113468770B - Method and system for generating a machine vision recipe - Google Patents

Method and system for generating a machine vision recipe

Info

Publication number
CN113468770B
Authority
CN
China
Prior art keywords
visual
task
formula
machine vision
tool
Prior art date
Legal status
Active
Application number
CN202111023777.6A
Other languages
Chinese (zh)
Other versions
CN113468770A (en)
Inventor
杜冰青
刘中
张勇
Current Assignee
Chengdu Xinxiwang Automation Technology Co., Ltd.
Original Assignee
Chengdu Xinxiwang Automation Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chengdu Xinxiwang Automation Technology Co., Ltd.
Priority to CN202111023777.6A
Publication of CN113468770A
Application granted
Publication of CN113468770B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for generating a machine vision recipe, comprising the following steps: labeling the vision tools used in machine vision; building a directed topology graph; generating a machine vision recipe model; acquiring the execution labels of a newly created machine vision task; traversing the model to retrieve candidate vision recipes; selecting the optimal vision recipe; and, starting from the second task start state, sequentially adjusting the parameters of each vision tool in the optimal recipe to generate the final vision recipe. The invention also discloses a system for generating a machine vision recipe. By combining directed-graph techniques with machine vision, the method and system provide a technical route for selecting machine vision recipes intelligently, reducing manual intervention, effectively improving the efficiency of recipe generation, and accelerating product delivery.

Description

Method and system for generating a machine vision recipe
Technical Field
The invention relates to the technical field of machine vision, and in particular to a method and a system for generating a machine vision recipe.
Background
With the continuous development of machine vision in industry, the number of vision projects designed, built, and implemented keeps growing, as does the number of vision tools released in the field; computing power improves steadily and supporting functionality keeps expanding. Apart from the installation and commissioning work on external equipment, the crucial step in completing a vision project is building a vision recipe that accomplishes the vision computing task. In the prior art, building a vision recipe usually requires dedicated debugging personnel: they must select the vision tools, determine the order in which the tools are called, and set the tool parameters. This demands that the personnel be thoroughly familiar with the usage conditions and functional effects of the vision tools, pick an indeterminate number of suitable tools from a large library, fix the calling order of every tool, and tune the working parameters. Building a machine vision recipe manually is therefore time-consuming, and additional cost must be invested in training the debugging personnel.
Disclosure of Invention
The technical problem to be solved by the invention is that manually building a machine vision recipe in the prior art is time-consuming and labor-intensive; the invention aims to provide a method and a system for generating a machine vision recipe that solve this problem.
The invention is realized by the following technical scheme:
a method of generating a machine vision recipe, comprising:
marking a visual tool of machine vision, wherein the marked content comprises the use condition and the use effect of the visual tool;
establishing a directed topological graph according to the labels of the plurality of visual tools;
defining a first task starting state and a first task ending state according to the requirements of the machine vision task, and inserting the first task starting state and the first task ending state into the directed topological graph as description labels to generate a machine vision formula model;
when a machine vision task is newly built, acquiring a second task starting state and a second task ending state of the newly built machine vision task as execution labels;
traversing and retrieving paths in the machine vision formula model according to the execution labels and acquiring at least one vision formula;
selecting an optimal visual formula from the visual formulas based on a preset judgment condition;
and sequentially adjusting the parameters of each visual tool in the optimal visual formula from the second task starting state to the optimal visual formula to generate a final visual formula.
In the prior art there are techniques that apply directed graphs to machine vision recipe generation, but the modules to be used are still chosen mainly by hand and then connected in sequence to form the recipe. The core idea remains manual module screening; the approach resembles generating low-level code from algorithm blocks in a SCADA system, where different modules expose various interfaces so that they can be connected. Such a scheme cannot do without manual intervention: modules must be selected manually, yet the same machine vision task can often be achieved in several different ways, so the choices are diverse, and the prior art can only sift out the best of these alternatives by hand, which makes recipe generation inefficient.
In this embodiment, a number of different vision tools are collected, and the usage conditions and usage effects of each tool are labeled; through these usage conditions and usage effects, the tools can be connected in a directed manner to form a directed topology graph. To use these vision tools intelligently and automatically, the embodiment further describes the tools in the directed topology graph with the first task start state and first task end state as description labels; once these description labels are inserted into the directed topology graph, a complete machine vision tool library, that is, a machine vision recipe model, is formed.
When a machine vision recipe needs to be generated, the whole machine vision recipe model is traversed using the second task start state and second task end state of the newly created machine vision task. It should be understood that, since first task start states and first task end states already exist in the model, this embodiment only has to find the first task start state corresponding to the second task start state as the starting point, and the first task end state corresponding to the second task end state as the end point; the starting point and end point of the machine vision task within the model are thereby determined. Because the machine vision recipe model is in fact a labeled directed graph, the problem of generating a machine vision recipe is thus transformed directly into a mathematical problem that can be solved by computation.
In the machine vision recipe model there may be multiple paths from the starting point to the end point, each path corresponding to one vision recipe for the current machine vision task; the problem this embodiment must solve is to find the most suitable recipe among those obtained by traversal. To that end, the embodiment selects an optimal vision recipe from the candidates based on a preset judgment condition. So that the optimal recipe can be applied directly to a specific device, the parameters of each vision tool in the optimal recipe are then adjusted, finally yielding a final vision recipe that can be deployed on the device. By combining directed-graph techniques with machine vision, the embodiment provides a technical route for selecting machine vision recipes intelligently, reducing manual intervention, effectively improving the efficiency of recipe generation, and accelerating product delivery.
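By way of illustration only, the following minimal Python sketch shows one way such a recipe model could be represented; the VisionTool class and the build_topology and insert_task_labels functions are hypothetical names chosen for this sketch, not part of the disclosed implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List

@dataclass
class VisionTool:
    name: str
    conditions: FrozenSet[str]  # usage conditions: labels that must hold before the tool runs
    effects: FrozenSet[str]     # usage effects: labels that hold after the tool has run
    params: Dict[str, float] = field(default_factory=dict)

def build_topology(tools: List[VisionTool]) -> Dict[str, List[str]]:
    """Directed topology graph: draw an edge A -> B whenever some usage effect
    of tool A satisfies a usage condition of tool B."""
    graph: Dict[str, List[str]] = {t.name: [] for t in tools}
    for a in tools:
        for b in tools:
            if a.name != b.name and a.effects & b.conditions:
                graph[a.name].append(b.name)
    return graph

def insert_task_labels(graph: Dict[str, List[str]], tools: List[VisionTool],
                       start_label: str, end_label: str) -> Dict[str, List[str]]:
    """Insert the first task start/end states as description-label nodes: the
    start label feeds every tool it satisfies, and every tool whose effects
    cover the end label feeds that label."""
    graph[start_label] = [t.name for t in tools if start_label in t.conditions]
    for t in tools:
        if end_label in t.effects:
            graph[t.name].append(end_label)
    graph.setdefault(end_label, [])
    return graph

# Two-tool toy library: grayscale conversion, then binarization.
tools = [
    VisionTool("to_gray", frozenset({"color image"}), frozenset({"grayscale image"})),
    VisionTool("binarize", frozenset({"grayscale image"}), frozenset({"binary image"})),
]
model = insert_task_labels(build_topology(tools), tools, "color image", "binary image")
print(model)
# {'to_gray': ['binarize'], 'binarize': ['binary image'],
#  'color image': ['to_gray'], 'binary image': []}
```

With the model expressed this way, recipe generation reduces to path search on the graph, which is what the following sections describe.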
Further, traversing and retrieving paths in the machine vision recipe model according to the execution labels and obtaining at least one vision recipe comprises:
taking the first task start state that matches the second task start state as the starting point, and the first task end state that matches the second task end state as the end point;
in the machine vision recipe model, obtaining all paths from the starting point to the end point, each path being a vision recipe.
Further, selecting an optimal vision recipe from the vision recipes based on a preset judgment condition comprises:
starting from the second task start state, performing simulation training on each vision recipe according to the requirements of the newly created machine vision task, and generating execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
Further, selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task comprises:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, selecting the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, selecting the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
Further, starting from the second task start state and sequentially adjusting the parameters of each vision tool in the optimal vision recipe to generate the final vision recipe comprises:
sequentially adjusting the parameters of each vision tool in the optimal vision recipe, starting from the second task start state;
after each tool's parameters are adjusted, running a simulation from the second task start state up to the current vision tool;
and if the running result of the current vision tool does not meet the required execution effect, re-adjusting the parameters of the current vision tool until its running result meets the required execution effect.
Further, sequentially adjusting the parameters of each vision tool in the optimal vision recipe from the second task start state comprises:
obtaining the weight value of each parameter of each vision tool in the optimal vision recipe;
and adjusting the vision tool parameters in order of weight value, from high to low.
A system for generating a machine vision recipe, comprising:
a labeling unit configured to label the vision tools of machine vision, the labeled content comprising the usage conditions and usage effects of each vision tool;
a model unit configured to build a directed topology graph according to the labels of the plurality of vision tools and, according to the requirements of machine vision tasks, to define a first task start state and a first task end state that are inserted into the directed topology graph as description labels to generate a machine vision recipe model;
a generating unit configured to acquire, when a machine vision task is newly created, the second task start state and second task end state of the newly created task as execution labels;
the generating unit further configured to traverse and retrieve paths in the machine vision recipe model according to the execution labels and obtain at least one vision recipe;
the generating unit further configured to select an optimal vision recipe from the vision recipes based on a preset judgment condition;
and an adjusting unit configured to sequentially adjust, starting from the second task start state, the parameters of each vision tool in the optimal vision recipe to generate a final vision recipe.
Further, the generating unit is further configured to take the first task start state that matches the second task start state as the starting point and the first task end state that matches the second task end state as the end point;
the generating unit acquires, in the machine vision recipe model, all paths from the starting point to the end point, each path being a vision recipe.
Further, the generating unit is further configured to perform, starting from the second task start state, simulation training on each vision recipe according to the requirements of the newly created machine vision task and to generate execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and the generating unit selects the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
Further, the generating unit is further configured to:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, select the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, select the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method and system for generating a machine vision recipe combine directed-graph techniques with machine vision, providing a technical route for selecting machine vision recipes intelligently; manual intervention is reduced, the efficiency of machine vision recipe generation is effectively improved, and product delivery is accelerated.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic diagram of the method steps of an embodiment of the present invention;
FIG. 2 is a system architecture diagram according to an embodiment of the present invention;
FIG. 3 is a directed graph of a single vision tool according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a machine vision recipe model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a directed graph traversal according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating another directed graph traversal according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating adjustment of parameters of a vision tool according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
Examples
Referring to fig. 1, a flow chart of a method for generating a machine vision recipe according to an embodiment of the present invention is shown. The method may be applied to the system for generating a machine vision recipe shown in fig. 2, and may specifically include the contents described in the following steps S1-S7.
S1: labeling the vision tools of machine vision, wherein the labeled content comprises the usage conditions and usage effects of each vision tool;
S2: building a directed topology graph according to the labels of the plurality of vision tools;
S3: defining a first task start state and a first task end state according to the requirements of machine vision tasks, and inserting them into the directed topology graph as description labels to generate a machine vision recipe model;
S4: when a machine vision task is newly created, acquiring the second task start state and second task end state of the newly created task as execution labels;
S5: traversing and retrieving paths in the machine vision recipe model according to the execution labels and obtaining at least one vision recipe;
S6: selecting an optimal vision recipe from the vision recipes based on a preset judgment condition;
S7: starting from the second task start state, sequentially adjusting the parameters of each vision tool in the optimal vision recipe to generate a final vision recipe.
In this embodiment, a number of different vision tools are collected, and the usage conditions and usage effects of each tool are labeled; through these usage conditions and usage effects, the tools can be connected in a directed manner to form a directed topology graph. Referring to fig. 3, a directed graph of a single vision tool is shown, while fig. 4 shows a directed graph composed of multiple vision tools.
To use these vision tools intelligently and automatically, this embodiment further describes the tools in the directed topology graph with the first task start state and first task end state as description labels; once these description labels are inserted into the directed topology graph, a complete machine vision tool library, that is, a machine vision recipe model, is formed.
In particular, in the directed topology graph, the usage conditions of any node's vision tool should be contained in the union of the usage effects of all of that node's upstream vision tools.
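Reading "upstream" as the direct predecessors in the graph, this containment rule can be checked mechanically. The sketch below is a hypothetical illustration: the function names and the dictionary encoding of conditions and effects are assumptions for this sketch, not the patent's data structures.

```python
from typing import Dict, FrozenSet, List

def upstream_effect_union(graph: Dict[str, List[str]],
                          effects: Dict[str, FrozenSet[str]],
                          node: str) -> FrozenSet[str]:
    """Union of the usage effects of every tool with an edge into `node`."""
    union: FrozenSet[str] = frozenset()
    for pred, succs in graph.items():
        if node in succs:
            union = union | effects.get(pred, frozenset())
    return union

def containment_violations(graph: Dict[str, List[str]],
                           conditions: Dict[str, FrozenSet[str]],
                           effects: Dict[str, FrozenSet[str]]) -> List[str]:
    """Nodes whose usage conditions are not covered by the combined usage
    effects of their upstream tools, i.e. violations of the rule above.
    Nodes with no predecessors (entry points that the task start state will
    feed) are skipped."""
    bad = []
    for node, cond in conditions.items():
        upstream = upstream_effect_union(graph, effects, node)
        if upstream and not cond <= upstream:
            bad.append(node)
    return bad
```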
When a machine vision recipe needs to be generated, the whole machine vision recipe model is traversed using the second task start state and second task end state of the newly created machine vision task. It should be understood that, since first task start states and first task end states already exist in the model, this embodiment only has to find the first task start state corresponding to the second task start state as the starting point, and the first task end state corresponding to the second task end state as the end point; the starting point and end point of the machine vision task within the model are thereby determined. Because the machine vision recipe model is in fact a labeled directed graph, the problem of generating a machine vision recipe is thus transformed directly into a mathematical problem that can be solved by computation.
It should be understood that a first task start state corresponds to the usage conditions of some or all of the vision tools, and a first task end state corresponds to the usage effects of some or all of the vision tools.
In this embodiment, the traversal retrieval comprises:
in the machine vision recipe model, finding the first task start state corresponding to the second task start state as the starting point, and the first task end state corresponding to the second task end state as the end point; then finding all paths from the starting point to the end point within the model.
Specifically, after a vision recipe has been obtained by traversal retrieval, any circular path in it is extracted and merged into an integrated vision tool; a circular path is a path whose start and end are the same vision tool.
The integrated vision tool is then treated as a single complete vision tool within the vision recipe, and parameter debugging is performed on it as such.
After the integrated vision tool has been labeled, it is added to the directed topology graph to form a new directed topology graph, and when a new machine vision task arrives, traversal retrieval is performed on this new graph to generate the vision recipe.
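The traversal itself amounts to enumerating all simple paths in a directed graph. The following sketch is one hypothetical depth-first implementation; it abandons any branch that would visit a tool twice, matching the rule, described later for FIG. 5, that an order such as "C -> F -> B -> E -> C" may not appear in a recipe. The toy graph is shaped like the two example recipes of FIG. 6.

```python
from typing import Dict, List

def all_paths(graph: Dict[str, List[str]], start: str, end: str) -> List[List[str]]:
    """Depth-first enumeration of every path from start to end; a branch that
    would visit the same vision tool twice is discarded."""
    found: List[List[str]] = []

    def dfs(node: str, trail: List[str]) -> None:
        if node == end:
            found.append(trail[:])
            return
        for nxt in graph.get(node, []):
            if nxt in trail:   # tool reached a second time: drop this search path
                continue
            trail.append(nxt)
            dfs(nxt, trail)
            trail.pop()

    dfs(start, [start])
    return found

# Toy graph shaped like the example recipes "A-C-F-I" and "A-C-D-G-H-I".
g = {"start": ["A"], "A": ["C"], "C": ["F", "D"], "F": ["I"],
     "D": ["G"], "G": ["H"], "H": ["I"], "I": ["end"], "end": []}
for path in all_paths(g, "start", "end"):
    print(path)
# ['start', 'A', 'C', 'F', 'I', 'end']
# ['start', 'A', 'C', 'D', 'G', 'H', 'I', 'end']
```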
In the machine vision recipe model there may be multiple paths from the starting point to the end point, each path corresponding to one vision recipe for the current machine vision task; the problem this embodiment must solve is to find the most suitable recipe among those obtained by traversal. To that end, the embodiment selects an optimal vision recipe from the candidates based on a preset judgment condition. So that the optimal recipe can be applied directly to a specific device, the parameters of each vision tool in the optimal recipe are then adjusted, finally yielding a final vision recipe that can be deployed on the device. By combining directed-graph techniques with machine vision, the embodiment provides a technical route for selecting machine vision recipes intelligently, reducing manual intervention, effectively improving the efficiency of recipe generation, and accelerating product delivery.
In one embodiment, step S5 includes:
taking the first task start state that matches the second task start state as the starting point, and the first task end state that matches the second task end state as the end point;
in the machine vision recipe model, obtaining all paths from the starting point to the end point, each path being a vision recipe.
In this embodiment, the first task start state that matches the second task start state may simply be the first task start state identical to it, and likewise the matching first task end state may be the first task end state identical to the second task end state. After the starting point and end point are selected, all corresponding paths can be obtained from the machine vision recipe model to serve as vision recipes.
In one embodiment, step S6 includes:
starting from the second task start state, performing simulation training on each vision recipe according to the requirements of the newly created machine vision task, and generating execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
In this embodiment, to further support the selection of the optimal vision recipe, the execution parameters of each vision recipe are obtained by running simulation training on the recipe against the requirements of the newly created machine vision task, and the recipes are then screened further by these execution parameters.
In one embodiment, selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task comprises:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, selecting the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, selecting the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
In this embodiment, the inventors found that, in a vision recipe whose simulation training has completed, the output of the vision tool closest to the second task end state determines the result of the recipe as a whole. The recipes are therefore screened along two dimensions, namely the overall execution time and the execution effect of that last vision tool, where the execution effect can be quantified as the offset from the preset effect.
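A minimal sketch of this two-dimensional screening follows, assuming the simulation results are already available as (overall execution time, last-tool execution effect) pairs; pick_best, the threshold arguments, and the numbers are illustrative assumptions, not data from the patent.

```python
from typing import Dict, List, Tuple

Recipe = Tuple[str, ...]

def pick_best(recipes: List[Recipe],
              sim: Dict[Recipe, Tuple[float, float]],
              time_limit: float, effect_floor: float) -> Recipe:
    """sim maps each recipe to (overall execution time, execution effect of
    the tool closest to the task end state); a higher effect means a smaller
    offset from the preset effect."""
    meets_effect = [r for r in recipes if sim[r][1] >= effect_floor]
    if meets_effect:
        # every candidate meets the effect requirement:
        # prefer the shortest overall execution time
        return min(meets_effect, key=lambda r: sim[r][0])
    meets_time = [r for r in recipes if sim[r][0] <= time_limit] or recipes
    # otherwise prefer the best execution effect of the last vision tool
    return max(meets_time, key=lambda r: sim[r][1])

# Toy numbers for two candidate recipes.
sim = {("A", "C", "F", "I"): (120.0, 0.92),
       ("A", "C", "D", "G", "H", "I"): (200.0, 0.97)}
print(pick_best(list(sim), sim, time_limit=250.0, effect_floor=0.9))
# ('A', 'C', 'F', 'I'): both effects are acceptable, so the faster recipe wins
```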
In one embodiment, starting from the second task start state and sequentially adjusting the parameters of each vision tool in the optimal vision recipe to generate the final vision recipe comprises:
sequentially adjusting the parameters of each vision tool in the optimal vision recipe, starting from the second task start state;
after each tool's parameters are adjusted, running a simulation from the second task start state up to the current vision tool;
and if the running result of the current vision tool does not meet the required execution effect, re-adjusting its parameters until the running result meets the required execution effect.
In this embodiment, the screened optimal vision recipe cannot yet be applied directly to the relevant equipment, because its parameters still need debugging. To raise the degree of automation further, this embodiment adjusts the parameters of the vision tools sequentially from the second task start state. For example, if the tools from the second task start state are A, B, and C in order, then A's parameters are adjusted until A's running result meets the requirement, then B's parameters until B's result meets it, and finally C's parameters until C's result meets it. This reduces the number of iterations during parameter debugging and improves the efficiency of vision tool parameter adjustment.
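A hypothetical sketch of this sequential debugging loop follows; run_prefix (the simulator that runs from the task start state up to the current tool) and propose (the rule that supplies the next parameter candidate) are stand-ins for machinery the patent does not spell out.

```python
from typing import Callable, Dict, List

Params = Dict[str, float]

def tune_recipe(recipe: List[str],
                params: Dict[str, Params],
                run_prefix: Callable[[List[str]], float],
                target: float,
                propose: Callable[[str, Params], Params],
                max_iter: int = 100) -> Dict[str, Params]:
    """Debug tools one at a time, in recipe order (A, then B, then C).
    After every adjustment the recipe is simulated from the second task start
    state up to the current tool only; if the result misses the required
    execution effect, the same tool is adjusted again."""
    for i, tool in enumerate(recipe):
        for _ in range(max_iter):
            if run_prefix(recipe[: i + 1]) >= target:   # current tool meets the effect
                break
            params[tool] = propose(tool, params[tool])  # re-adjust only this tool
    return params
```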
In one embodiment, sequentially adjusting the parameters of each vision tool in the optimal vision recipe from the second task start state comprises:
obtaining the weight value of each parameter of each vision tool in the optimal vision recipe;
and adjusting the vision tool parameters in order of weight value, from high to low.
In this embodiment, different weight values are set for different parameters; a weight describes how strongly changing that parameter of the vision tool affects the execution effect. Each parameter can be adjusted by first taking its default value and then probing values to the left and right of the default, which also improves the effect of the adjustment.
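Sketched in Python, the two ideas look as follows: parameters are visited in descending weight order, and each parameter's candidate values start at the default and fan out alternately to its left and right. The function names and the step size are assumptions for illustration; the printed example echoes the ContrastRatio annotation (Default 50, MinValue 0, MaxValue 100) given later in the description.

```python
from typing import Dict, List

def parameter_schedule(weights: Dict[str, float]) -> List[str]:
    """Visit parameters in descending weight order; a heavier weight marks a
    parameter whose changes move the execution effect more."""
    return sorted(weights, key=weights.get, reverse=True)

def probe_values(default: float, lo: float, hi: float, step: float) -> List[float]:
    """Candidate values in the order described above: the default first, then
    values alternately to the left and right of it, widening each round."""
    values, k = [default], 1
    while default - k * step >= lo or default + k * step <= hi:
        if default - k * step >= lo:
            values.append(default - k * step)
        if default + k * step <= hi:
            values.append(default + k * step)
        k += 1
    return values

print(parameter_schedule({"ContrastRatio": 0.8, "GammaValue": 0.3}))
# ['ContrastRatio', 'GammaValue']
print(probe_values(50, 0, 100, 25))
# [50, 25, 75, 0, 100]
```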
Referring to fig. 2, based on the same inventive concept, a system for generating a machine vision recipe is also provided, the system comprising:
a labeling unit configured to label the vision tools of machine vision, the labeled content comprising the usage conditions and usage effects of each vision tool;
a model unit configured to build a directed topology graph according to the labels of the plurality of vision tools and, according to the requirements of machine vision tasks, to define a first task start state and a first task end state that are inserted into the directed topology graph as description labels to generate a machine vision recipe model;
a generating unit configured to acquire, when a machine vision task is newly created, the second task start state and second task end state of the newly created task as execution labels;
the generating unit further configured to traverse and retrieve paths in the machine vision recipe model according to the execution labels and obtain at least one vision recipe;
the generating unit further configured to select an optimal vision recipe from the vision recipes based on a preset judgment condition;
and an adjusting unit configured to sequentially adjust, starting from the second task start state, the parameters of each vision tool in the optimal vision recipe to generate a final vision recipe.
In one embodiment, the generating unit is further configured to take the first task start state that matches the second task start state as the starting point and the first task end state that matches the second task end state as the end point;
the generating unit acquires, in the machine vision recipe model, all paths from the starting point to the end point, each path being a vision recipe.
In one embodiment, the generating unit is further configured to perform, starting from the second task start state, simulation training on each vision recipe according to the requirements of the newly created machine vision task and to generate execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and the generating unit selects the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
In one embodiment, the generating unit is further configured to:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, select the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, select the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
In a more specific embodiment, when labeling a machine vision tool, the usage conditions include "image format requires a grayscale image", "image format requires a color image", "no requirement on image format", "image size requires 1280 × 1000", "no requirement on image size", "image binarization effect judgment: 50%", "image filtering effect judgment: 50%", "image spotting effect judgment: 50%", and the like. For a brightness-adjustment vision tool, for example, the relevant performance parameters are also labeled, such as "Param: ContrastRatio; Default: 50; MinValue: 0; MaxValue: 100" and "Param: gamma value; Default: 0.5; MinValue: 0; MaxValue: 1". In this embodiment the annotations are collated and the results stored in a table.
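Collated into a table, such annotations might look like the sketch below; the storage format, the tool names, and the binarize entry are hypothetical, with only the label wording and the ContrastRatio and gamma ranges taken from the examples above.

```python
# One table row per vision tool; the label wording follows the examples above.
# Tool names and the binarize row are hypothetical.
TOOL_TABLE = {
    "brightness_adjust": {
        "conditions": ["no requirement on image format"],
        "effects": ["brightness adjusted"],          # assumed effect label
        "params": {
            "ContrastRatio": {"Default": 50,  "MinValue": 0, "MaxValue": 100},
            "GammaValue":    {"Default": 0.5, "MinValue": 0, "MaxValue": 1},
        },
    },
    "binarize": {
        "conditions": ["image format requires a grayscale image"],
        "effects": ["image binarization effect judgment: 50%"],
        "params": {"Threshold": {"Default": 128, "MinValue": 0, "MaxValue": 255}},
    },
}
```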
Referring to fig. 3 and 4, first the two types of description labels of each vision tool, "usage condition" and "achievable usage effect", are traversed, and a directed graph of "usage condition" -> "vision tool" -> "achievable usage effect" is built; next, every vision tool is traversed and identical labels of the same description type are linked; finally, the directed topology graph of the vision tool library is obtained.
In a more specific embodiment, a vision task is split into a "task start state" and a "task end state", which must stay consistent with the description labels of the vision tool library. For example, the task start state may be "image format requires a grayscale image" and the task end state "match image center".
In the directed topology graph of the vision tool library, taking the task start state as the start and the task end state as the end, all paths from the start to the end can be found by traversal. When the same vision tool is reached a second time on a search path, the path is discarded: as shown in FIG. 5, vision tool C appears a second time, so the execution order "C -> F -> B -> E -> C" may not appear in a vision recipe. Finally the two vision recipes shown in FIG. 6 are obtained, namely "A -> C -> F -> I" and "A -> C -> D -> G -> H -> I".
In a more specific embodiment, referring to fig. 7, the generated set of machine vision recipes, when simulated with the default configuration parameters of each vision tool, can complete the basic vision task, but the "execution judgment threshold" obtained when training each tool toward the "task end state" may not be optimal, so the parameters of each vision tool must be adjusted one by one, iteratively. Each parameter is adjusted starting from its default value and then probing values to the left and right of the default, judged against the execution judgment threshold. Using test data that satisfies the task requirements, the path from the task start state to each vision tool is iterated multiple times, training out the optimal parameter configuration with which each tool completes the vision task.
Finally, a machine vision recipe is generated that contains the types of machine vision tools used, the order in which they are called, the parameter settings of each tool, and other information, and that fulfils the requirements of the vision task so that it can serve as a vision project.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; to explain the interchangeability of hardware and software clearly, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A method of generating a machine vision recipe, comprising:
labeling the vision tools of machine vision, wherein the labeled content comprises the usage conditions and usage effects of each vision tool;
building a directed topology graph according to the labels of the plurality of vision tools;
defining a first task start state and a first task end state according to the requirements of machine vision tasks, and inserting them into the directed topology graph as description labels to generate a machine vision recipe model;
when a machine vision task is newly created, acquiring the second task start state and second task end state of the newly created task as execution labels;
traversing and retrieving paths in the machine vision recipe model according to the execution labels and obtaining at least one vision recipe;
selecting an optimal vision recipe from the vision recipes based on a preset judgment condition;
starting from the second task start state, sequentially adjusting the parameters of each vision tool in the optimal vision recipe to generate a final vision recipe;
wherein traversing and retrieving paths in the machine vision recipe model according to the execution labels and obtaining at least one vision recipe comprises:
taking the first task start state that matches the second task start state as the starting point, and the first task end state that matches the second task end state as the end point;
in the machine vision recipe model, obtaining all paths from the starting point to the end point, each path being a vision recipe;
and wherein selecting an optimal vision recipe from the vision recipes based on a preset judgment condition comprises:
starting from the second task start state, performing simulation training on each vision recipe according to the requirements of the newly created machine vision task, and generating execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
2. The method of claim 1, wherein selecting the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task comprises:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, selecting the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, selecting the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
3. The method of claim 1, wherein starting from the second task start state and sequentially adjusting the parameters of each vision tool in the optimal vision recipe to generate the final vision recipe comprises:
sequentially adjusting the parameters of each vision tool in the optimal vision recipe, starting from the second task start state;
after each tool's parameters are adjusted, running a simulation from the second task start state up to the current vision tool;
and if the running result of the current vision tool does not meet the required execution effect, re-adjusting the parameters of the current vision tool until its running result meets the required execution effect.
4. The method of claim 3, wherein sequentially adjusting the parameters of each vision tool in the optimal vision recipe from the second task start state comprises:
obtaining the weight value of each parameter of each vision tool in the optimal vision recipe;
and adjusting the vision tool parameters in order of weight value, from high to low.
5. A system for generating a machine vision recipe, comprising:
a labeling unit configured to label the vision tools of machine vision, the labeled content comprising the usage conditions and usage effects of each vision tool;
a model unit configured to build a directed topology graph according to the labels of the plurality of vision tools and, according to the requirements of machine vision tasks, to define a first task start state and a first task end state that are inserted into the directed topology graph as description labels to generate a machine vision recipe model;
a generating unit configured to acquire, when a machine vision task is newly created, the second task start state and second task end state of the newly created task as execution labels;
the generating unit further configured to traverse and retrieve paths in the machine vision recipe model according to the execution labels and obtain at least one vision recipe;
the generating unit further configured to select an optimal vision recipe from the vision recipes based on a preset judgment condition;
and an adjusting unit configured to sequentially adjust, starting from the second task start state, the parameters of each vision tool in the optimal vision recipe to generate a final vision recipe;
wherein the generating unit is further configured to take the first task start state that matches the second task start state as the starting point and the first task end state that matches the second task end state as the end point;
the generating unit acquires, in the machine vision recipe model, all paths from the starting point to the end point, each path being a vision recipe;
the generating unit is further configured to perform, taking the second task start state as the initial state, simulation training on each vision recipe according to the requirements of the newly created machine vision task and to generate execution parameters of the vision recipe; the execution parameters comprise the overall execution time and the execution effect of each vision tool in the recipe;
and the generating unit selects the optimal vision recipe from the vision recipes according to the execution parameters and the requirements of the newly created machine vision task.
6. The system of claim 5, wherein the generating unit is further configured to:
when the execution effect of the vision tool closest to the second task end state meets the requirement of the newly created machine vision task in every vision recipe, select the vision recipe with the shortest overall execution time as the optimal vision recipe;
and when the overall execution time of every vision recipe meets the requirement of the newly created machine vision task, select the vision recipe whose vision tool closest to the second task end state has the best execution effect as the optimal vision recipe.
CN202111023777.6A 2021-09-02 2021-09-02 Method and system for generating a machine vision recipe Active CN113468770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111023777.6A CN113468770B (en) Method and system for generating a machine vision recipe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111023777.6A CN113468770B (en) Method and system for generating a machine vision recipe

Publications (2)

Publication Number Publication Date
CN113468770A (en) 2021-10-01
CN113468770B (en) 2021-11-12

Family

ID=77867338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111023777.6A Active CN113468770B (en) Method and system for generating a machine vision recipe

Country Status (1)

Country Link
CN (1) CN113468770B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10235477B2 (en) * 2014-07-31 2019-03-19 National Instruments Corporation Prototyping an image processing algorithm and emulating or simulating execution on a hardware accelerator to estimate resource usage or performance
CN104899042B (en) * 2015-06-15 2018-07-24 江南大学 A kind of embedded machine vision detection program developing method and system
CN112699953B (en) * 2021-01-07 2024-03-19 北京大学 Feature pyramid neural network architecture searching method based on multi-information path aggregation

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105425791A (en) * 2015-11-06 2016-03-23 武汉理工大学 Swarm robot control system and method based on visual positioning
CN110263881A (en) * 2019-05-20 2019-09-20 厦门大学 A kind of multi-model approximating method of the asymmetric geometry in combination part
CN110766684A (en) * 2019-10-30 2020-02-07 江南大学 Stator surface defect detection system and detection method based on machine vision
CN111814658A (en) * 2020-07-07 2020-10-23 西安电子科技大学 Scene semantic structure chart retrieval method based on semantics
CN112069927A (en) * 2020-08-19 2020-12-11 南京埃斯顿机器人工程有限公司 Element set processing method and device applied to modular visual software
CN112529762A (en) * 2020-12-04 2021-03-19 成都新西旺自动化科技有限公司 Machine vision system configuration screening method and device and readable storage medium
CN112559074A (en) * 2020-12-18 2021-03-26 昂纳工业技术(深圳)有限公司 Dynamic configuration method of machine vision software and computer

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Nasser Shahsavari-Pour et al., "A new approach for building a strategy map based on digraph theory", International Journal of Applied Management Science, vol. 9, no. 1, 28 February 2017, pp. 1-18. *
刘博 et al., "Design and implementation of an intelligent vehicle based on optimal paths", Computer Measurement & Control, vol. 20, no. 8, 25 August 2012, pp. 2264-2269. *
张娟, "Machine-vision-based identification and grouping calibration of wind turbine units", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 07, 15 July 2019, p. C042-318. *
堡盟电子(上海)有限公司, "Effortless inspection: VeriSens vision sensors make label inspection simpler", Sensor World, vol. 23, no. 1, 25 January 2017, pp. 1-3. *
胡兴军 et al., "Machine vision technology and its application in quality inspection for automobile manufacturing", Modern Components, no. 11, 1 November 2005, pp. 96-100. *

Also Published As

Publication number Publication date
CN113468770A (en) 2021-10-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant