CN114067189A - Workpiece identification method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114067189A
Authority
CN
China
Prior art keywords
workpiece
assembly
base assembly
base
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111450902.1A
Other languages
Chinese (zh)
Inventor
林陈鸿
叶煌城
潘定国
袁子良
秘家志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Aerospace Siert Robot System Co Ltd
Original Assignee
Xiamen Aerospace Siert Robot System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Aerospace Siert Robot System Co Ltd filed Critical Xiamen Aerospace Siert Robot System Co Ltd
Priority to CN202111450902.1A
Publication of CN114067189A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a workpiece identification method, apparatus, device and storage medium, relating to the technical field of automated auxiliary equipment. The workpiece identification method comprises the following steps. S1: acquire depth images of the workpiece from multiple viewing angles, the viewing angles including a side view in which the width information of the workpiece is captured. S2: obtain the width of the workpiece from the side-view depth image. S5: fuse the multi-view depth images to obtain a three-dimensional image of the workpiece. S6: obtain the length and height of the workpiece from the three-dimensional image. S7: obtain the model of the workpiece from a database according to its width, length and height. A three-dimensional model of the workpiece is obtained by fusing depth images taken from multiple angles, and the model of the workpiece is then retrieved from the database using the parameters of the three-dimensional model, thereby identifying the workpiece. The workpiece can be identified without placing marks on it, which is of good practical significance.

Description

Workpiece identification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automated auxiliary equipment, and in particular to a workpiece identification method, apparatus, device and storage medium.
Background
In industrial production, different types of workpieces require different processing. In the prior art, workpieces are usually identified by human visual inspection or by a mark placed on the product.
In some automated production lines, products are identified by machine vision, which requires a mark to be placed on the workpiece before the product can be recognized. Some products, however, cannot carry a mark and therefore cannot be identified with existing machine-vision schemes. This greatly reduces the automation level of the production line and limits production efficiency.
In view of the above, the applicant proposes the present application after studying the existing technologies.
Disclosure of Invention
The present invention provides a workpiece identification method, apparatus, device and storage medium to address the above technical problems.
In a first aspect,
The embodiment of the invention provides a workpiece identification method, which comprises the following steps:
and S1, acquiring depth images of the workpiece from multiple visual angles. Wherein the plurality of viewing angles include a side viewing angle at which width information of the workpiece is photographed.
And S2, acquiring the width of the workpiece according to the depth-of-field image of the side view.
And S5, fusing the depth images of the multiple visual angles to obtain a three-dimensional image of the workpiece.
And S6, acquiring the length and the height of the workpiece according to the three-dimensional image.
And S7, acquiring the model of the workpiece from the database according to the width, the length and the height of the workpiece.
In an alternative embodiment, the workpiece is a rounded-rectangle or racetrack-shaped coil, and the plurality of viewing angles are four viewing angles.
In an optional embodiment, step S2 specifically includes:
and acquiring the outline of the width edge of the workpiece according to the depth-of-field image of the side view.
And acquiring the width of the workpiece according to the outline of the width edge.
In an alternative embodiment, step S5 is preceded by steps S3 and S4:
and S3, selecting a local area of the top surface of the workpiece based on the flatness. Wherein, the planeness is calculated according to the root mean square.
And S4, calculating the vertical distance between the local area and the first support plane below the workpiece.
In an optional embodiment, step S5 specifically includes:
and according to the vertical distance, fusing the depth-of-field images of multiple visual angles to obtain a three-dimensional image of the workpiece.
In an optional embodiment, step S6 specifically includes:
and S61, acquiring a first top view according to the three-dimensional model. Wherein, the first top view comprises a first supporting plane.
And S62, filtering the first supporting plane in the first top view according to the vertical distance to obtain a second top view of the top surface of the workpiece.
And S63, binarizing the second top view, and acquiring the length of the workpiece according to the binarized second top view. The length is obtained by multiplying the number of pixels in the length direction of the workpiece by the distance value corresponding to the unit pixel in the second top view.
In an optional embodiment, step S6 further includes:
and S64, acquiring the top surface of the workpiece and a second supporting plane below the workpiece according to the three-dimensional image. The second supporting plane is located above the first supporting plane, and the area of the second supporting plane is smaller than that of the first supporting plane.
And S65, calculating the average distance between the top surface and the second supporting plane to obtain the height of the workpiece.
In a second aspect,
The embodiment of the invention provides a workpiece identification device comprising a control assembly. The control assembly includes a processor, a memory, and a computer program stored in the memory, the computer program being executable by the processor to carry out the workpiece identification method of any one of the paragraphs above.
In an optional embodiment, the workpiece identification device further comprises a first base assembly; a second base assembly and a shooting assembly disposed on the first base assembly; a third base assembly and a first limiting assembly disposed on the second base assembly; and a fourth base assembly and a second limiting assembly disposed on the third base assembly. The control assembly is disposed on the first base assembly.
The first base assembly drives the second base assembly up and down. The second base assembly drives the third base assembly toward or away from the first base assembly. The first limiting assembly restrains the third base assembly from moving. The third base assembly drives the fourth base assembly to rotate. The second limiting assembly restrains the fourth base assembly from rotating. The fourth base assembly carries the workpiece. The shooting assembly captures depth images above the first base assembly.
The control assembly is electrically connected to the first base assembly, the second base assembly, the third base assembly, the fourth base assembly, the first limiting assembly, the second limiting assembly and the shooting assembly.
In an alternative embodiment, acquiring depth images of the workpiece from multiple viewing angles specifically includes:
determining whether the second base assembly is at its lowest point;
if it is, executing the subsequent steps; otherwise, controlling the first base assembly to lower the second base assembly to the lowest point before executing the subsequent steps;
controlling the second base assembly to move the third base assembly away from the first base assembly;
controlling the first base assembly to raise the second base assembly so that the fourth base assembly supports the workpiece;
controlling the second base assembly to move the third base assembly toward the first base assembly;
controlling the first base assembly to lower the second base assembly so that the workpiece on the fourth base assembly is located at the shooting station;
controlling the first limiting assembly to restrain the third base assembly from moving;
controlling the second limiting assembly to restrain the fourth base assembly from rotating;
controlling the shooting assembly to capture the depth image of the first viewing angle;
repeating the current step until depth images of a predetermined number of viewing angles have been captured: releasing the first and second limiting assemblies, controlling the third base assembly to rotate the fourth base assembly, re-engaging the first and second limiting assemblies, and controlling the shooting assembly to capture the depth image of the next viewing angle;
after the depth images of the predetermined number of viewing angles have been captured:
controlling the first base assembly to raise the second base assembly;
controlling the second limiting assembly and the first limiting assembly to release their restraint;
controlling the second base assembly to move the third base assembly away from the first base assembly;
controlling the first base assembly to lower the second base assembly so that the workpiece separates from the fourth base assembly; and
controlling the second base assembly to move the third base assembly back toward the first base assembly to reset the device.
In a third aspect,
An embodiment of the present invention provides a workpiece identification apparatus, including:
an image acquisition module for acquiring depth images of the workpiece from multiple viewing angles, the viewing angles including a side view in which the width information of the workpiece is captured;
a width acquisition module for obtaining the width of the workpiece from the side-view depth image;
an image fusion module for fusing the multi-view depth images into a three-dimensional image of the workpiece;
a length-and-height acquisition module for obtaining the length and height of the workpiece from the three-dimensional image; and
a model acquisition module for obtaining the model of the workpiece from a database according to its width, length and height.
In a fourth aspect,
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when run, controls the device on which the computer-readable storage medium is located to execute the workpiece identification method of any one of the first aspect.
By adopting the above technical solution, the invention achieves the following technical effects:
A three-dimensional model of the workpiece is obtained by fusing depth images taken from multiple angles, and the model of the workpiece is then retrieved from a database using the parameters of the three-dimensional model, thereby identifying the workpiece. The workpiece can be identified without placing marks on it, which is of good practical significance.
In order to make the aforementioned and other objects, features and advantages of the present invention more comprehensible, preferred embodiments are described in detail below with reference to the accompanying figures.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is an isometric view of a workpiece recognition apparatus provided in accordance with a first embodiment of the present invention.
Fig. 2 is an exploded view of a workpiece recognition apparatus according to a first embodiment of the present invention.
Fig. 3 is a flow chart of a workpiece recognition method according to a second embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a workpiece recognition apparatus according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the invention, not all of them; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the invention.
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the related objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
In the embodiments, references to "first" and "second" merely distinguish similar objects and do not imply a specific ordering; it is to be understood that "first" and "second" may be interchanged in a specific order or sequence where permitted, so that the embodiments described herein can be practiced in sequences other than those illustrated or described.
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the first embodiment is as follows:
referring to fig. 1 to 3, a workpiece recognition apparatus according to a first embodiment of the present invention is provided. The workpiece recognition equipment comprises a first base assembly 1, a second base assembly 2 and a shooting assembly 6 which are arranged on the first base assembly 1, a third base assembly 4 and a first limiting assembly 3 which are arranged on the second base assembly 2, a fourth base assembly 5 and a second limiting assembly 7 which are arranged on the third base assembly 4, and a control assembly which is arranged on the first base assembly 1. Wherein:
the first base is used for driving the second base to lift. The second base is used for driving the third base to be close to or far away from the first base. The first limiting component 3 is used for limiting the movement of the third base component 4. The third base is used for driving the fourth base to rotate. The second limiting component 7 is used for limiting the rotation of the fourth base component 5. The fourth base is used for placing a workpiece. And the shooting component 6 is used for shooting the depth-of-field image above the first base.
The control assembly is electrically connected to the first base assembly 1, the second base assembly 2, the third base assembly 4, the fourth base assembly 5, the first limiting assembly 3, the second limiting assembly 7 and the shooting assembly 6.
Specifically, the first base assembly 1 drives the second base assembly 2 to ascend and descend, the second base assembly 2 drives the third base assembly 4 to horizontally move, the third base assembly 4 drives the fourth base assembly 5 to rotate, the first limiting assembly 3 limits the third base assembly 4 to move, and the second limiting assembly 7 limits the fourth base assembly 5 to rotate. Those skilled in the art can adopt the structures shown in fig. 1 and fig. 2, or adopt other existing structures, and the present invention does not limit the specific structures of these components, so long as they can achieve the above functions, and thus, the present invention is within the protection scope of the present invention.
As shown in fig. 3, on the basis of the above embodiment, in an alternative embodiment of the present invention, the control assembly includes a processor, a memory, and a computer program stored in the memory, which is executable by the processor to implement steps S1, S2, S5, S6 and S7.
S1: acquire depth images of the workpiece from multiple viewing angles, the viewing angles including a side view in which the width information of the workpiece is captured.
It will be appreciated that the control assembly may be an electronic device with computing capability, such as a portable computer, desktop computer, server, smartphone or tablet. The shooting assembly 6 is a binocular depth camera.
Specifically, a binocular depth camera can directly measure the size of the object it captures. However, the object measured by the present invention is a rounded-rectangle or racetrack-shaped coil, whose edges cannot be accurately obtained directly from a single captured image; it therefore cannot be identified with existing machine-vision techniques alone.
Based on the above embodiments, in an optional embodiment of the present invention, the multi-view depth images are depth images of four viewing angles. In other embodiments, other numbers of depth images are possible; the invention does not specifically limit the angles or the number of the depth images.
As shown in fig. 1 and fig. 2, based on the above embodiments, in an alternative embodiment of the present invention, acquiring depth images of the workpiece from multiple viewing angles specifically includes steps S101 to S117.
S101: determine whether the second base assembly 2 is at its lowest point.
S102: if it is, execute the subsequent steps; otherwise, control the first base assembly 1 to lower the second base assembly 2 to the lowest point and then execute the subsequent steps.
S103: control the second base assembly 2 to move the third base assembly 4 away from the first base assembly 1.
S104: control the first base assembly 1 to raise the second base assembly 2 so that the fourth base assembly 5 supports the workpiece.
S105: control the second base assembly 2 to move the third base assembly 4 toward the first base assembly 1.
S106: control the first base assembly 1 to lower the second base assembly 2 so that the workpiece on the fourth base assembly 5 is located at the shooting station.
As shown in fig. 1 and 2, it should be noted that the coil is generally hoisted to the front of the workpiece recognition device by a worker.
To facilitate the hoisting and recognition processes, the workpiece recognition device moves the fourth base assembly 5 from its lowest height to a position below the coil, then raises the fourth base assembly 5 so that the workpiece is supported on it and the hoisting equipment goes slack; the hoisting equipment is then released, and the workpiece is moved into the shooting range of the shooting assembly 6.
It can be understood that hoisting equipment moves quickly but with low precision. Supporting the coil by moving the fourth base assembly 5, rather than placing the workpiece directly onto it with the hoisting equipment, effectively reduces the impact on contact.
S107: control the first limiting assembly 3 to restrain the third base assembly 4 from moving.
S108: control the second limiting assembly 7 to restrain the fourth base assembly 5 from rotating.
S109: control the shooting assembly 6 to capture the depth image of the first viewing angle.
Specifically, the first limiting assembly 3 and the second limiting assembly 7 prevent the coil from moving during shooting, ensuring the clarity of the captured depth images.
S111: repeat the current step until depth images of the predetermined number of viewing angles have been captured: release the first limiting assembly 3 and the second limiting assembly 7, control the third base assembly 4 to rotate the fourth base assembly 5, re-engage the first limiting assembly 3 and the second limiting assembly 7, and then control the shooting assembly 6 to capture the depth image of the next viewing angle.
Specifically, in the embodiment of the invention, depth images at different angles are captured by rotating the coil, so that the height and angle of the camera remain fixed and the subsequently fused image is more accurate. In other embodiments, several sets of binocular depth cameras may be arranged to capture depth images at different angles; the invention does not specifically limit this, as long as depth images of multiple viewing angles can be obtained.
S112: after the depth images of the predetermined number of viewing angles have been captured:
S113: control the first base assembly 1 to raise the second base assembly 2.
S114: control the second limiting assembly 7 and the first limiting assembly 3 to release their restraint.
S115: control the second base assembly 2 to move the third base assembly 4 away from the first base assembly 1.
S116: control the first base assembly 1 to lower the second base assembly 2 so that the workpiece separates from the fourth base assembly 5.
S117: control the second base assembly 2 to move the third base assembly 4 back toward the first base assembly 1 to reset the device.
Specifically, after the depth images at different angles have been captured, the coil is moved back to the initial hoisting position, the hoisting equipment is attached to the coil, and the fourth base assembly 5 is then lowered away from the coil. Lowering the fourth base assembly 5 away from the coil, rather than raising the coil off the fourth base assembly 5 with the hoisting equipment, keeps the coil at a low height instead of lifting it to a high position; this reduces the risk of impact on the hoisting equipment when the coil separates from the fourth base assembly 5.
In this embodiment, the coil is transported by hoisting. In other embodiments, the coil may be transported by an RGV, with rollers disposed below the coil (or on the top surfaces of the RGV and the fourth base assembly) to facilitate moving the coil between the RGV and the fourth base assembly. The invention does not limit the specific way the coil is moved and conveyed, as long as it can be moved to the workpiece recognition device for recognition and moved away afterwards.
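As a purely illustrative sketch, the capture sequence S101 to S117 can be modeled as an ordered list of actuator actions. The `Device` class, its method names and action strings below are hypothetical assumptions for illustration only, not the patent's actual control interface.

```python
# Illustrative sketch only: the Device class, its methods and action strings
# are hypothetical, not the patent's control API.
class Device:
    def __init__(self, n_views=4):
        self.n_views = n_views
        self.log = []  # ordered record of actuator actions

    def _do(self, action):
        self.log.append(action)

    def capture_all_views(self):
        # S101/S102: make sure the second base assembly starts at its lowest point
        self._do("lower second base to lowest point")
        # S103-S106: take the workpiece from the hoist and move it to the station
        self._do("move third base away from first base")
        self._do("raise second base until fourth base supports workpiece")
        self._do("move third base toward first base")
        self._do("lower second base to shooting station")
        images = []
        for view in range(self.n_views):
            # S107/S108: lock translation and rotation before every shot
            self._do("lock limit assemblies")
            images.append(f"depth image, view {view}")  # S109: capture one view
            self._do("unlock limit assemblies")
            if view < self.n_views - 1:
                self._do("rotate fourth base to next view")  # S111: next angle
        # S112-S117: hand the workpiece back and reset the device
        self._do("raise second base")
        self._do("move third base away from first base")
        self._do("lower second base to release workpiece")
        self._do("move third base toward first base (reset)")
        return images

print(len(Device(n_views=4).capture_all_views()))  # 4
```

Note that the locking steps bracket every shot, mirroring the patent's point that the coil must be restrained while each depth image is captured.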
S2: obtain the width of the workpiece from the side-view depth image.
As shown in fig. 1 and 2, the length direction of the coil has a distinct line profile, so the distance between the contour lines can be measured from the depth image.
Based on the above embodiments, in an alternative embodiment of the present invention, step S2 specifically includes step S21 and step S22.
S21: obtain the contour of the width edges of the workpiece from the side-view depth image.
S22: obtain the width of the workpiece from the contour of the width edges.
It should be noted that obtaining the distance between two points or two lines from a depth image is prior art and is not described further here.
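Steps S21 and S22 can be sketched numerically as follows. This is a minimal sketch, assuming the side-view depth image is a 2-D array in which workpiece pixels differ from a known background depth; the thresholding scheme, tolerance and millimetres-per-pixel scale are illustrative assumptions, not the patent's method.

```python
import numpy as np

def width_from_side_view(depth, background_depth, mm_per_px, tol=5.0):
    """Sketch of S21/S22: estimate the workpiece width from a side-view depth image.

    Pixels whose depth differs from the background by more than `tol`
    are treated as belonging to the workpiece (S21: width-edge silhouette);
    the width is the span between the leftmost and rightmost such columns
    converted to millimetres (S22).
    """
    mask = np.abs(depth - background_depth) > tol
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    return (cols[-1] - cols[0] + 1) * mm_per_px

# Toy example: a 20-column-wide object on a flat background, 2 mm per pixel.
depth = np.full((50, 100), 1000.0)
depth[10:40, 30:50] = 800.0
print(width_from_side_view(depth, background_depth=1000.0, mm_per_px=2.0))  # 40.0
```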
S5: fuse the multi-view depth images to obtain a three-dimensional image of the workpiece.
Specifically, combining multi-view depth images into a three-dimensional image is prior art and is not described further here.
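One common way to implement such fusion (a sketch under stated assumptions, not necessarily the patent's method) is to back-project each depth image with a pinhole camera model and rotate each resulting point set about the vertical axis by the workpiece's known rotation angle before concatenating:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image to camera-frame 3-D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def fuse_views(depth_images, angles_deg, fx, fy, cx, cy):
    """S5 sketch: rotate each back-projected view about the vertical axis
    by the workpiece's known rotation angle, then concatenate the points."""
    clouds = []
    for depth, a in zip(depth_images, angles_deg):
        t = np.deg2rad(a)
        rot_y = np.array([[np.cos(t), 0.0, np.sin(t)],
                          [0.0,       1.0, 0.0      ],
                          [-np.sin(t), 0.0, np.cos(t)]])
        clouds.append(depth_to_points(depth, fx, fy, cx, cy) @ rot_y.T)
    return np.vstack(clouds)

views = [np.full((4, 4), 500.0) for _ in range(4)]  # four toy depth views
cloud = fuse_views(views, [0, 90, 180, 270], fx=300.0, fy=300.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (64, 3)
```

The intrinsics (`fx`, `fy`, `cx`, `cy`) and the toy depth maps are assumptions; a real pipeline would also register and deduplicate overlapping points.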
It will be appreciated that the coil has rounded corners, so some of its contours cannot be captured directly. In the present invention, the three-dimensional size of the coil is therefore obtained by constructing a three-dimensional image of the workpiece.
Specifically, on the basis of the above embodiment, in an alternative embodiment of the present invention, step S5 is preceded by steps S3 and S4:
and S3, selecting a local area of the top surface of the workpiece based on the flatness. Wherein, the planeness is calculated according to the root mean square.
And S4, calculating the vertical distance between the local area and the first support plane below the workpiece.
Specifically, the edges of the coil are arc-shaped while its top surface is planar, so the planar area at the top of the workpiece must be identified by flatness. The vertical distance of the three-dimensional model is then determined from this planar area and the first support plane.
It should be noted that the first support plane is the upper surface of the third base assembly 4 and the second support plane is the upper surface of the fourth base assembly 5; the area of the first support plane is much larger than that of the second support plane. The height of the coil is the average distance between the planar area of the coil top and the second support plane; however, because the area of the second support plane is small, computing the coil height directly from it could introduce a large error. In this embodiment, the vertical distance of the three-dimensional model is therefore determined from the planar area of the top surface and the first support plane, so that the size of the three-dimensional model is fixed rather than merely its shape.
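Steps S3 and S4 can be sketched as a sliding-window search for the top-surface window with the smallest root-mean-square height deviation, followed by a distance computation. The window size, toy height map and single-valued support plane are illustrative assumptions:

```python
import numpy as np

def flattest_patch(height_map, patch=5):
    """S3 sketch: choose the local top-surface window with the smallest
    root-mean-square deviation (the patent's flatness measure)."""
    best, best_rms = None, np.inf
    h, w = height_map.shape
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            win = height_map[i:i + patch, j:j + patch]
            rms = np.sqrt(np.mean((win - win.mean()) ** 2))
            if rms < best_rms:
                best, best_rms = (i, j), rms
    return best, best_rms

def vertical_distance(height_map, region, support_height, patch=5):
    """S4 sketch: vertical distance from the chosen patch to the first
    support plane (here given as a single height value)."""
    i, j = region
    return float(height_map[i:i + patch, j:j + patch].mean() - support_height)

rng = np.random.default_rng(0)
top = 120.0 + rng.normal(0.0, 0.5, (20, 20))  # noisy top surface around 120 mm
top[5:10, 5:10] = 120.0                       # one perfectly flat 5x5 patch
region, rms = flattest_patch(top)
print(region, rms)  # (5, 5) 0.0
```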
Specifically, on the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S5 specifically includes: S51, fusing the multi-view depth images according to the vertical distance to obtain a three-dimensional image of the workpiece.
Specifically, a three-dimensional image created from multiple images alone only establishes a shape. Incorporating the vertical distance fixes every distance in the three-dimensional model, which is of good practical significance.
Specifically, on the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S5 further includes:
S52: obtain the width of the workpiece from the three-dimensional image.
S53: determine whether the error between the width obtained from the side-view depth image and the width obtained from the three-dimensional image is within a preset range. The preset range may be a proportional error, for example within 2%, or an absolute error, for example within 5 cm.
S54: if the error is within the preset range, continue with the subsequent steps; if it exceeds the preset range, re-capture the depth images of the workpiece and return to step S1, capturing more depth images than in the previous recognition attempt.
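The consistency check of S53/S54 can be sketched as follows. The threshold values mirror the examples in the text (2% relative or 5 cm absolute), while the function name and millimetre units are assumptions:

```python
def widths_consistent(width_side, width_3d, rel_tol=0.02, abs_tol_mm=50.0):
    """S53/S54 sketch: the side-view width and the 3-D-image width agree
    when their difference is within a relative (e.g. 2 %) or absolute
    (e.g. 5 cm) preset range."""
    diff = abs(width_side - width_3d)
    ref = max(abs(width_side), abs(width_3d), 1e-9)
    return diff / ref <= rel_tol or diff <= abs_tol_mm

print(widths_consistent(1000.0, 1015.0))  # True  (1.5 % / 15 mm)
print(widths_consistent(1000.0, 1100.0))  # False (10 % / 100 mm)
```

When the check fails, the caller would re-run the capture sequence with more viewing angles, as S54 describes.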
And S6, acquiring the length and the height of the workpiece according to the three-dimensional image.
Specifically, on the basis of the above embodiment, in an alternative embodiment of the present invention, the step S6 specifically includes the steps S61 to S65.
And S61, acquiring a first top view according to the three-dimensional model. Wherein, the first top view comprises a first supporting plane.
And S62, filtering the first supporting plane in the first top view according to the vertical distance to obtain a second top view of the top surface of the workpiece.
And S63, binarizing the second top view, and acquiring the length of the workpiece according to the binarized second top view. The length is obtained by multiplying the number of pixels in the length direction of the workpiece by the distance value corresponding to the unit pixel in the second top view.
In particular, because the edges of the coil are heavily rounded, the edge contour can only be obtained by projection. It will be appreciated that the three-dimensional model also contains the fourth base assembly 5 and the third base assembly 4, and these base assemblies would corrupt the projection. The third and fourth base assemblies are therefore cropped out according to the height information of the three-dimensional model, so that only the partial three-dimensional model of the coil is retained and the projection in the second top view is the projection of the coil itself. The contour of the workpiece is thus obtained accurately, and the length of the workpiece is acquired from it.
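Assuming the second top view is available as a depth map with the first support plane at a known height, steps S62 and S63 can be sketched as follows (the height margin and the axis handling are assumptions for illustration):

```python
import numpy as np

def workpiece_length(top_view_depth, first_plane_z, mm_per_pixel, margin=5.0):
    """Filter out the first support plane by height, binarize what
    remains, and read the length as the number of pixels along the long
    axis times the metric size of one pixel (step S63)."""
    mask = top_view_depth > first_plane_z + margin   # binarized second top view
    rows = np.any(mask, axis=1)                      # rows touched by the coil
    cols = np.any(mask, axis=0)                      # columns touched by the coil
    length_px = max(rows.sum(), cols.sum())          # longer extent = length
    return length_px * mm_per_pixel
```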
And S64, acquiring the top surface of the workpiece and a second supporting plane below the workpiece according to the three-dimensional image. The second supporting plane is located above the first supporting plane, and the area of the second supporting plane is smaller than that of the first supporting plane.
And S65, calculating the average distance between the top surface and the second supporting plane to obtain the height of the workpiece.
It will be appreciated that the second support plane of the fourth base assembly 5 is reproduced relatively accurately in the three-dimensional model. Therefore, obtaining the distance between the coil top plane and the second support plane, that is, the height of the workpiece, from the three-dimensional model yields more accurate height data and has good practical significance.
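Step S65 then reduces to a simple average over the two point sets extracted from the three-dimensional image; a sketch assuming both are given as arrays of z coordinates:

```python
import numpy as np

def workpiece_height(top_surface_z, second_plane_z):
    """Height of the workpiece as the average distance between the coil
    top plane and the second support plane (the upper surface of the
    fourth base assembly). Averaging suppresses per-point depth noise."""
    return float(np.mean(top_surface_z) - np.mean(second_plane_z))
```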
And S7, acquiring the model of the workpiece from the database according to the width, the length and the height of the workpiece.
Specifically, a three-dimensional model of the workpiece is obtained through fusion of depth-of-field images of the workpiece at multiple angles, and then the model of the workpiece is obtained from a database through parameters of the three-dimensional model, so that the workpiece is identified. The workpiece can be identified without setting marks on the workpiece, and the method has good practical significance.
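Step S7, retrieving the model from the database by the three measured dimensions, might look like the following. The dictionary schema and the 5% tolerance are assumptions; the patent does not define the database format:

```python
def match_model(width, length, height, database, tol=0.05):
    """Return the first model whose nominal (w, l, h) dimensions agree
    with the measured width, length, and height within a relative
    tolerance, or None when no model matches."""
    for model, (w, l, h) in database.items():
        if all(abs(measured - nominal) <= tol * nominal
               for measured, nominal in ((width, w), (length, l), (height, h))):
            return model
    return None
```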
Embodiment 2
As shown in fig. 3, an embodiment of the present invention provides a workpiece identification method, which includes the following steps:
S1, acquiring depth-of-field images of the workpiece from multiple viewing angles. Wherein the plurality of viewing angles include a side viewing angle at which the width information of the workpiece is photographed.
And S2, acquiring the width of the workpiece according to the depth-of-field image of the side view.
And S5, fusing the depth images of the multiple visual angles to obtain a three-dimensional image of the workpiece.
And S6, acquiring the length and the height of the workpiece according to the three-dimensional image.
And S7, acquiring the model of the workpiece from the database according to the width, the length and the height of the workpiece.
And obtaining a three-dimensional model of the workpiece through fusion of the depth-of-field images of the workpiece at multiple angles, and then obtaining the model of the workpiece from a database through the parameters of the three-dimensional model so as to identify the workpiece. The workpiece can be identified without setting marks on the workpiece, and the method has good practical significance.
On the basis of the above embodiment, in an alternative embodiment of the present invention, the workpiece is a rounded-rectangular or racetrack-shaped coil.
The plurality of viewing angles are four viewing angles.
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S2 specifically includes:
and acquiring the outline of the width edge of the workpiece according to the depth-of-field image of the side view.
And acquiring the width of the workpiece according to the outline of the width edge.
On the basis of the above embodiment, in an alternative embodiment of the present invention, step S5 is preceded by steps S3 and S4:
S3, selecting a local area of the top surface of the workpiece based on flatness. Wherein the flatness is calculated from the root mean square.
And S4, calculating the vertical distance between the local area and the first support plane below the workpiece.
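Steps S3 and S4 select the flattest patch of the top surface before measuring its distance to the first support plane. A sketch using root-mean-square deviation inside a sliding window (the window size is an assumption):

```python
import numpy as np

def flattest_region(depth, win=8):
    """Slide a win x win window over the top-surface depth map and return
    the (row, col) of the patch whose flatness, measured as the RMS
    deviation from the patch mean, is smallest, plus that RMS value."""
    best, best_rms = None, np.inf
    h, w = depth.shape
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = depth[r:r + win, c:c + win]
            rms = np.sqrt(np.mean((patch - patch.mean()) ** 2))
            if rms < best_rms:
                best, best_rms = (r, c), rms
    return best, best_rms
```

The vertical distance of step S4 is then the mean depth of the selected patch minus the height of the first support plane.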
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S5 specifically includes:
and according to the vertical distance, fusing the depth-of-field images of multiple visual angles to obtain a three-dimensional image of the workpiece.
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S6 specifically includes:
and S61, acquiring a first top view according to the three-dimensional model. Wherein, the first top view comprises a first supporting plane.
And S62, filtering the first supporting plane in the first top view according to the vertical distance to obtain a second top view of the top surface of the workpiece.
And S63, binarizing the second top view, and acquiring the length of the workpiece according to the binarized second top view. The length is obtained by multiplying the number of pixels in the length direction of the workpiece by the distance value corresponding to the unit pixel in the second top view.
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, step S6 further includes:
and S64, acquiring the top surface of the workpiece and a second supporting plane below the workpiece according to the three-dimensional image. The second supporting plane is located above the first supporting plane, and the area of the second supporting plane is smaller than that of the first supporting plane.
And S65, calculating the average distance between the top surface and the second supporting plane to obtain the height of the workpiece.
Embodiment 3
An embodiment of the present invention provides a workpiece recognition apparatus, including:
the image acquisition module is used for acquiring depth-of-field images of a plurality of visual angles of the workpiece. Wherein the plurality of viewing angles include a side viewing angle at which width information of the workpiece is photographed.
And the width acquisition module is used for acquiring the width of the workpiece according to the depth-of-field image of the side view angle.
And the image fusion module is used for fusing the depth-of-field images of a plurality of visual angles to obtain a three-dimensional image of the workpiece.
And the length and height acquisition module is used for acquiring the length and height of the workpiece according to the three-dimensional image.
And the model acquisition module is used for acquiring the model of the workpiece from the database according to the width, the length and the height of the workpiece.
On the basis of the above embodiment, in an alternative embodiment of the present invention, the workpiece is a rounded-rectangular or racetrack-shaped coil, and the plurality of viewing angles are four viewing angles.
On the basis of the foregoing embodiment, in an optional embodiment of the present invention, the width obtaining module specifically includes:
and the contour unit is used for acquiring the contour of the width edge of the workpiece according to the depth-of-field image of the side view angle.
And the width unit is used for acquiring the width of the workpiece according to the outline of the width edge.
On the basis of the above embodiment, in an optional embodiment of the present invention, the workpiece recognition apparatus further includes an area acquisition module and a vertical distance acquisition module.
And the area acquisition module is used for selecting a local area of the top surface of the workpiece based on flatness; the flatness is calculated from the root mean square.
And the vertical distance acquisition module is used for calculating the vertical distance between the local area and the first support plane below the workpiece.
On the basis of the above embodiment, in an optional embodiment of the present invention, the image fusion module is specifically configured to: and according to the vertical distance, fusing the depth-of-field images of multiple visual angles to obtain a three-dimensional image of the workpiece.
On the basis of the above embodiment, in an optional embodiment of the present invention, the length and height obtaining module specifically includes:
and the first top view unit is used for acquiring a first top view according to the three-dimensional model. Wherein, the first top view comprises a first supporting plane.
And the second top view unit is used for filtering the first supporting plane in the first top view according to the vertical distance to obtain a second top view of the top surface of the workpiece.
And the length acquisition unit is used for binarizing the second top view and acquiring the length of the workpiece according to the binarized second top view. The length is obtained by multiplying the number of pixels in the length direction of the workpiece by the distance value corresponding to the unit pixel in the second top view.
On the basis of the above embodiment, in an optional embodiment of the present invention, the length and height obtaining module further specifically includes:
and the plane acquisition unit is used for acquiring the top surface of the workpiece and a second supporting plane below the workpiece according to the three-dimensional image. The second supporting plane is located above the first supporting plane, and the area of the second supporting plane is smaller than that of the first supporting plane.
And the height acquisition unit is used for calculating the average distance between the top surface and the second supporting plane to obtain the height of the workpiece.
Embodiment 4
The embodiment of the invention provides a computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the workpiece identification method according to the second embodiment.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus and method embodiments described above are merely illustrative. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of identifying a workpiece, comprising:
acquiring depth-of-field images of a plurality of visual angles of a workpiece; wherein the plurality of viewing angles include a side viewing angle at which width information of the workpiece is photographed;
acquiring the width of the workpiece according to the depth-of-field image of the side view;
fusing the depth-of-field images of the plurality of viewing angles to obtain a three-dimensional image of the workpiece;
acquiring the length and the height of the workpiece according to the three-dimensional image;
and acquiring the model of the workpiece from a database according to the width, the length and the height of the workpiece.
2. The workpiece recognition method according to claim 1, wherein the workpiece is a rounded-rectangular or racetrack-shaped coil;
the plurality of viewing angles are four viewing angles.
3. The method according to claim 1, wherein acquiring the width of the workpiece from the depth-of-field image of the side view comprises:
acquiring the outline of the width edge of the workpiece according to the depth-of-field image of the side view;
and acquiring the width of the workpiece according to the profile of the width edge.
4. The workpiece recognition method of claim 1, wherein fusing the depth-of-field images from the plurality of perspectives to obtain a three-dimensional image of the workpiece further comprises:
selecting a local area of the top surface of the workpiece based on the flatness; wherein the flatness is calculated according to the root mean square;
calculating the vertical distance between the local area and a first support plane below the workpiece;
fusing the depth-of-field images of the multiple viewing angles to obtain a three-dimensional image of the workpiece, specifically comprising:
and fusing the depth-of-field images of the plurality of visual angles according to the vertical distance to obtain a three-dimensional image of the workpiece.
5. The workpiece recognition method of claim 4, wherein obtaining the length and height of the workpiece from the three-dimensional image comprises:
acquiring a first top view according to the three-dimensional model; wherein the first top view comprises the first support plane;
filtering the first support plane in the first top view according to the vertical distance to obtain a second top view of the top surface of the workpiece;
binarizing the second top view, and acquiring the length of the workpiece according to the binarized second top view; and the length is obtained by multiplying the number of pixels in the length direction of the workpiece by a distance value corresponding to a unit pixel in the second top view.
6. The workpiece recognition method of claim 5, wherein the obtaining of the length and height of the workpiece from the three-dimensional image further comprises:
acquiring the top surface of the workpiece and a second supporting plane below the workpiece according to the three-dimensional image; the second supporting plane is positioned above the first supporting plane, and the area of the second supporting plane is smaller than that of the first supporting plane;
and calculating the average distance between the top surface and the second supporting plane to obtain the height of the workpiece.
7. A workpiece recognition apparatus comprising a control assembly; the control assembly includes a processor, a memory, and a computer program stored in the memory; the computer program is executable by the processor to implement a method of workpiece recognition as claimed in any one of claims 1 to 6.
8. The workpiece recognition apparatus of claim 7, further comprising a first base assembly, a second base assembly and a camera assembly disposed on the first base assembly, a third base assembly and a first position limiting assembly disposed on the second base assembly, and a fourth base assembly and a second position limiting assembly disposed on the third base assembly; the control assembly is configured on the first base assembly;
the first base assembly is used for driving the second base assembly to ascend and descend; the second base assembly is used for driving the third base assembly to approach or move away from the first base assembly; the first limiting assembly is used for limiting the movement of the third base assembly; the third base assembly is used for driving the fourth base assembly to rotate; the second limiting assembly is used for limiting the rotation of the fourth base assembly; the fourth base assembly is used for placing the workpiece; and the shooting assembly is used for shooting depth-of-field images above the first base assembly;
the control assembly is electrically connected to the first base assembly, the second base assembly, the third base assembly, the fourth base assembly, the first limiting assembly, the second limiting assembly and the shooting assembly;
the acquiring of the depth-of-field images of the workpiece at multiple viewing angles specifically includes:
determining whether the second base assembly is at a lowest point;
when the second base assembly is judged to be at the lowest point, executing the subsequent steps; otherwise, controlling the first base assembly to drive the second base assembly to descend to the lowest point and then executing the subsequent steps;
controlling the second mount assembly to drive the third mount assembly away from the first mount assembly;
controlling the first pedestal assembly to drive the second pedestal assembly to ascend so that the fourth pedestal assembly is supported on a workpiece;
controlling the second base assembly to drive the third base assembly to approach the first base assembly;
controlling the first base assembly to drive the second base assembly to descend, so that the workpiece on the fourth base assembly is positioned at the shooting station;
controlling the first limiting assembly to limit the third base assembly so as to prevent the third base assembly from moving;
controlling the second limiting assembly to limit the fourth base assembly so as to prevent the fourth base assembly from rotating;
controlling the shooting assembly to shoot a depth-of-field image of a first visual angle;
repeating the current step until depth-of-field images of a predetermined number of viewing angles have been captured: controlling the first limiting assembly and the second limiting assembly to release their limits, controlling the third base assembly to drive the fourth base assembly to rotate, controlling the first limiting assembly and the second limiting assembly to apply their limits again, and then controlling the shooting assembly to shoot the depth-of-field image of the next viewing angle;
after a predetermined number of views of depth images are captured:
controlling the first base assembly to drive the second base assembly to ascend;
controlling the second limiting assembly and the first limiting assembly to release limiting;
controlling the second mount assembly to drive the third mount assembly away from the first mount assembly;
controlling the first pedestal assembly to drive the second pedestal assembly to descend so that the workpiece is separated from the fourth pedestal assembly;
and controlling the second base assembly to drive the third base assembly to be close to the first base assembly so as to reset the equipment.
9. A workpiece recognition apparatus, comprising:
the image acquisition module is used for acquiring depth-of-field images of a plurality of visual angles of the workpiece; wherein the plurality of viewing angles include a side viewing angle at which width information of the workpiece is photographed;
the width acquisition module is used for acquiring the width of the workpiece according to the depth-of-field image of the side view;
the image fusion module is used for fusing the depth-of-field images of the plurality of visual angles to obtain a three-dimensional image of the workpiece;
the length and height acquisition module is used for acquiring the length and the height of the workpiece according to the three-dimensional image;
and the model acquisition module is used for acquiring the model of the workpiece from a database according to the width, the length and the height of the workpiece.
10. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of workpiece identification as claimed in any one of claims 1 to 6.
CN202111450902.1A 2021-12-01 2021-12-01 Workpiece identification method, device, equipment and storage medium Pending CN114067189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111450902.1A CN114067189A (en) 2021-12-01 2021-12-01 Workpiece identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114067189A true CN114067189A (en) 2022-02-18

Family

ID=80228397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111450902.1A Pending CN114067189A (en) 2021-12-01 2021-12-01 Workpiece identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114067189A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018228467A1 (en) * 2017-06-16 2018-12-20 Oppo广东移动通信有限公司 Image exposure method and device, photographing device, and storage medium
CN109377551A (en) * 2018-10-16 2019-02-22 北京旷视科技有限公司 A kind of three-dimensional facial reconstruction method, device and its storage medium
CN110232326A (en) * 2019-05-20 2019-09-13 平安科技(深圳)有限公司 A kind of D object recognition method, device and storage medium
CN112070782A (en) * 2020-08-31 2020-12-11 腾讯科技(深圳)有限公司 Method and device for identifying scene contour, computer readable medium and electronic equipment
CN112560613A (en) * 2020-12-02 2021-03-26 北京理工大学 Part identification method and device and computer equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820575A (en) * 2022-05-24 2022-07-29 四川中绳矩阵技术发展有限公司 Image verification method and device, computer equipment and storage medium
CN114820575B (en) * 2022-05-24 2023-01-20 四川中绳矩阵技术发展有限公司 Image verification method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination