CN114049355B - Method, system and device for identifying and labeling scattered workpieces


Info

Publication number
CN114049355B
CN114049355B
Authority
CN
China
Prior art keywords
template
point cloud
workpiece
camera
value
Prior art date
Legal status
Active
Application number
CN202210043985.0A
Other languages
Chinese (zh)
Other versions
CN114049355A (en)
Inventor
王灿
丁丁
Current Assignee
Hangzhou Lingxi Robot Intelligent Technology Co., Ltd.
Original Assignee
Hangzhou Lingxi Robot Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hangzhou Lingxi Robot Intelligent Technology Co., Ltd.
Priority to CN202210043985.0A
Publication of CN114049355A
Application granted granted Critical
Publication of CN114049355B

Classifications

    (CPC, all under G: Physics / G06: Computing / G06T: Image data processing or generation)
    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/75: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
    • G06T 2207/30108: Subject of image; industrial image inspection
    • G06T 2207/30164: Subject of image; workpiece; machine component
    • G06T 2207/30244: Subject of image; camera pose

Abstract

The application relates to a method, a system and a device for identifying and labeling scattered workpieces. The method comprises: acquiring, through a camera, an actual point cloud image and a workpiece pose matrix of the current workpiece in a batch to be collected; constructing a virtual camera model from the camera's intrinsic and extrinsic parameter matrices; calculating the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece; calculating a template depth map from the template point cloud projection; comparing the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, and labeling the current workpiece if it does; and repeating these steps until all workpieces in the batch to be collected are labeled. The method solves the problems of low efficiency and low accuracy in workpiece identification and labeling: matching depth-map information based on the workpiece template shortens the time required for identification and labeling and improves their accuracy.

Description

Method, system and device for identifying and labeling scattered workpieces
Technical Field
The present application relates to the field of computer vision, and in particular to a method, a system, and an apparatus for identifying and labeling scattered workpieces.
Background
At present, in automatic robot assembly, identifying and labeling workpieces and determining their three-dimensional point cloud poses are prerequisites for robot operations such as grasping. When a robot-mounted camera captures a frame of workpieces, all workpieces in the image are generally either labeled manually one by one, or identified with a pixel-recognition algorithm that finds the complete workpieces contained in the image.
Both labeling approaches have problems. Manual labeling relies on visual inspection, so it is slow. Pixel recognition can quickly identify the complete workpieces appearing in an image, but when workpieces occlude or overlap one another its accuracy drops sharply, and occluded or incomplete workpieces cannot be identified reliably. Furthermore, in large-scale data collection every captured image requires all workpieces appearing in it to be identified and labeled, so workpieces that appear in multiple images are repeatedly labeled in large numbers, reducing overall working efficiency.
At present, no effective solution has been proposed in the related art for the problems of low efficiency and low accuracy in workpiece identification and labeling.
Disclosure of Invention
The embodiments of the application provide a method, a system and a device for identifying and labeling scattered workpieces, so as to at least solve the problems of low efficiency and low accuracy in identifying and labeling workpieces in the related art.
In a first aspect, an embodiment of the present application provides a method for identifying and labeling scattered workpieces, where the method includes:
sequentially placing the workpieces of the batch to be collected in the camera's field of view, and repeating the preset steps until all workpieces in the batch to be collected are identified and labeled;
the preset steps comprise:
acquiring, through the camera, an actual point cloud image and a workpiece pose matrix of the current workpiece in the batch to be collected;
constructing a virtual camera model according to the internal parameter matrix and the external parameter matrix of the camera;
calculating template point cloud projection under the virtual camera model according to the workpiece pose matrix of the current workpiece and template point cloud data of the template workpiece;
and calculating a template depth map from the template point cloud projection, comparing the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, and if so, labeling the current workpiece.
In some of these embodiments, comparing the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece comprises:
creating a mask image of the same size as the template depth map, wherein the pixel values in the mask image default to a first value;
traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image;
if the difference between the pixel value and the depth value is smaller than a preset threshold, setting the value of the corresponding pixel in the mask image to a second value;
and determining whether the current workpiece matches the template workpiece according to the pixels in the mask image whose value is the second value.
In some of these embodiments, obtaining the template depth map from the template point cloud projection comprises:
reading the depth value at the coordinates corresponding to each pixel in the template point cloud projection through the glReadPixels function in OpenGL, and storing the depth values of all pixels in a depth map format to obtain the template depth map of the template workpiece.
In some embodiments, acquiring, through the camera, the actual point cloud image of the current workpiece in the batch to be collected comprises:
acquiring an initial point cloud image of the current workpiece in the batch to be collected through the camera;
and removing distortion from the initial point cloud image and storing it in a three-channel image format to obtain the actual point cloud image of the current workpiece; that is, the three channels of each pixel in the actual point cloud image store the x, y and z coordinate values respectively.
In some of these embodiments, labeling the current workpiece comprises:
mapping the pixels in the mask image whose value is the second value onto the actual point cloud image, and segmenting out the accurate point cloud data of the current workpiece;
and labeling the accurate point cloud data of the workpieces in the batch to be collected with serial numbers starting from 1, in the order of successful matching.
In some embodiments, traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image comprises:
traversing each pixel value in the template depth map and judging whether the pixel value is zero;
if so, the pixel is a background point and is skipped;
and if not, comparing the pixel value with the depth value of the pixel at the corresponding position in the actual point cloud image.
In some embodiments, after labeling the accurate point cloud data of the workpieces in the batch to be collected, the method further comprises:
moving the camera, and calculating the camera's motion matrix from a first camera pose matrix before the move and a second camera pose matrix after the move;
converting the accurate point cloud data of a workpiece under the first camera pose matrix into accurate point cloud data under the second camera pose matrix according to the motion matrix;
and identifying all labeled workpieces visible in the camera's field of view after the move by comparing the incomplete point cloud data collected in that field of view with the accurate point cloud data under the second camera pose matrix.
In some of these embodiments, before calculating the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data, the method further comprises:
acquiring the template point cloud data of the template workpiece in a preset service, wherein the template point cloud data is an obj-format model file.
In a second aspect, an embodiment of the application provides a system for identifying and labeling scattered workpieces, comprising a data acquisition module, a model operation module, and a matching and labeling module;
the data acquisition module sequentially places the workpieces of the batch to be collected in the camera's field of view, and the camera acquires an actual point cloud image and a workpiece pose matrix of the current workpiece in the batch to be collected;
the model operation module constructs a virtual camera model from the camera's intrinsic and extrinsic parameter matrices, and calculates the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece;
and the matching and labeling module calculates a template depth map from the template point cloud projection, compares the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, and labels the current workpiece if it does.
In a third aspect, an embodiment of the application provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method for identifying and labeling scattered workpieces according to the first aspect.
Compared with the prior art, the method, system and device for identifying and labeling scattered workpieces provided by the embodiments of the application place the workpieces of a batch to be collected in the camera's field of view one at a time; acquire, through the camera, the actual point cloud image and workpiece pose matrix of the current workpiece; construct a virtual camera model from the camera's intrinsic and extrinsic parameter matrices; calculate the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece; calculate a template depth map from the template point cloud projection; compare the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, labeling it if so; and repeat these steps until all workpieces in the batch are labeled. This solves the problems of low efficiency and low accuracy in workpiece identification and labeling: accurate three-dimensional point cloud pose data of the workpieces are obtained without manual labeling, greatly shortening the time required for labeling and collecting workpiece data, and matching depth-map information against the workpiece template makes it possible to identify workpieces that overlap in the captured image, improving labeling accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flowchart illustrating steps of a method for identifying and labeling scattered workpieces according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of a system for identifying and labeling scattered workpieces according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Reference numerals: 21, data acquisition module; 22, model operation module; 23, matching and labeling module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a", "an", "the" and similar words throughout this application do not limit quantity and may refer to the singular or the plural. The terms "including", "comprising", "having" and any variations thereof used in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, product or apparatus comprising a list of steps or modules (units) is not limited to the listed steps or units, but may include other steps or units not expressly listed or inherent to such process, method, product or apparatus. References to "connected", "coupled" and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following associated objects. The terms "first", "second", "third" and the like herein merely distinguish similar objects and do not denote a particular ordering.
In the related art, automatic identification and labeling of workpieces is mainly achieved through pixel recognition (avoiding manual labeling). Although pixel recognition can quickly identify the complete workpieces appearing in an image, when workpieces occlude or overlap one another the template cannot be fully matched, recognition accuracy drops sharply, and occluded or incomplete workpieces (i.e., those only partially present in the image) cannot be identified reliably. For example, when a manipulator places workpieces one after another into the camera's field of view (in a disordered, scattered manner) and they are identified and labeled one by one through pixel recognition, previously placed workpieces easily overlap the workpiece currently being identified; in this situation pixel recognition is prone to errors (because it relies mainly on planar data such as color and shape), so accuracy is low when identifying workpieces placed in large, scattered batches.
To solve these problems, the present invention provides a method, a system and a device for identifying and labeling scattered workpieces.
An embodiment of the present application provides a method for identifying and labeling scattered workpieces, fig. 1 is a flowchart illustrating steps of the method for identifying and labeling scattered workpieces according to the embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
s102, sequentially placing workpieces in batches to be collected in a camera view, and collecting an actual point cloud picture and a workpiece pose matrix of a current workpiece through the camera;
preferably, before acquiring data of the workpiece, a hand-eye calibration of a camera, specifically a 3D structured light camera, is required.
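The disclosure does not detail this calibration step. Purely as an illustrative sketch of how an eye-in-hand calibration could be computed (not part of the patent; the function name, input conventions, and the choice of Tsai's method are assumptions), OpenCV's calibrateHandEye can be used:

```python
# Illustrative sketch only; names and the method choice are assumptions.
import cv2

def hand_eye_calibrate(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Estimate the camera-to-flange transform from N robot stations.
    Inputs are lists of 3x3 rotations and 3x1 translations: the flange pose
    in the robot base frame, and the calibration-board pose in the camera
    frame, recorded at the same stations."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,  # Tsai's method; an assumed choice
    )
    return R_cam2gripper, t_cam2gripper
```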
Step S104, constructing a virtual camera model according to the internal parameter matrix and the external parameter matrix of the camera;
Step S106, calculating the template point cloud projection under the virtual camera model according to the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece;
Preferably, before the template point cloud projection is calculated, the template point cloud data of the template workpiece in the preset service must be acquired; for example, if workpiece A is to be identified and labeled in this batch, the template point cloud data of workpiece A must be acquired first. The template point cloud data is an obj-format model file. A sketch of the projection step follows below.
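As a minimal sketch of steps S104 and S106 combined, assuming a standard pinhole model, 4x4 homogeneous transforms, and a world-to-camera extrinsic matrix (the names and frame conventions below are assumptions, not taken from the disclosure):

```python
import numpy as np

def project_template(template_pts, T_pose, T_extrinsic, K):
    """Project template points (N x 3, in the workpiece frame) through the
    virtual camera built from the intrinsic matrix K (3x3) and the extrinsic
    matrix T_extrinsic (4x4, world to camera); T_pose is the 4x4 workpiece
    pose in the world frame. Returns pixel coordinates (N x 2) and depths (N,)."""
    pts_h = np.hstack([template_pts, np.ones((len(template_pts), 1))])
    pts_cam = (T_extrinsic @ T_pose @ pts_h.T)[:3]  # 3 x N points in the camera frame
    depth = pts_cam[2]
    uv = (K @ (pts_cam / depth))[:2].T              # perspective division
    return uv, depth
```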
Step S108, calculating a template depth map from the template point cloud projection, comparing the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, and if so, labeling the current workpiece;
Step S110, repeating steps S102 to S108 until all workpieces in the batch to be collected are identified and labeled.
Through steps S102 to S110, the problems of low efficiency and low accuracy in workpiece identification and labeling are solved: accurate three-dimensional point cloud pose data of the workpieces are obtained without manual labeling, greatly shortening the time required for labeling and collecting workpiece data; and matching depth-map information against the workpiece template makes it possible to identify workpieces that overlap in the captured image, improving labeling accuracy.
In some of these embodiments, comparing the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece comprises:
creating a mask image of the same size as the template depth map, wherein the pixel values in the mask image default to a first value;
traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image;
if the difference between the pixel value and the depth value is smaller than the preset threshold, setting the value of the corresponding pixel in the mask image to a second value;
and determining whether the current workpiece matches the template workpiece according to the pixels in the mask image whose value is the second value. A vectorized sketch of this comparison follows below.
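A minimal NumPy sketch of this comparison, using False and True for the first and second values; the disclosure does not state how the second-value pixels decide a match, so the min_ratio criterion is a placeholder assumption:

```python
import numpy as np

def compare_with_template(template_depth, actual_xyz, threshold, min_ratio=0.9):
    """template_depth: H x W template depth map, with 0 marking background.
    actual_xyz: H x W x 3 point cloud image whose third channel stores z.
    Pixels whose depth difference is below the threshold get the second
    value (True); min_ratio is an assumed matching criterion."""
    mask = np.zeros(template_depth.shape, dtype=bool)  # first value by default
    valid = template_depth > 0                         # background pixels are skipped
    close = np.abs(template_depth - actual_xyz[..., 2]) < threshold
    mask[valid & close] = True                         # second value
    matched = mask.sum() / max(int(valid.sum()), 1) >= min_ratio
    return mask, matched
```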
In some of these embodiments, obtaining the template depth map from the template point cloud projection comprises:
reading the depth value at the coordinates corresponding to each pixel in the template point cloud projection through the glReadPixels function in OpenGL, and storing the depth values of all pixels in a depth map format to obtain the template depth map of the template workpiece.
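Using PyOpenGL, the read-back could look like the sketch below. The disclosure names only glReadPixels; the conversion of the nonlinear depth buffer into metric depth (a standard inversion of the perspective projection) and the near/far parameters are assumptions, and a current OpenGL context in which the template projection has just been rendered is presupposed:

```python
import numpy as np
from OpenGL.GL import glReadPixels, GL_DEPTH_COMPONENT, GL_FLOAT

def read_template_depth(width, height, z_near, z_far):
    """Read the depth buffer after rendering the template projection and
    linearize it into metric depth per pixel (0 marks background)."""
    buf = glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT)
    d = np.asarray(buf, dtype=np.float32).reshape(height, width)[::-1]  # GL origin is bottom-left
    ndc = 2.0 * d - 1.0                                 # window depth -> NDC z
    depth = (2.0 * z_near * z_far) / (z_far + z_near - ndc * (z_far - z_near))
    depth[d >= 1.0] = 0.0                               # untouched (far-plane) pixels -> background
    return depth
```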
In some embodiments, acquiring, through the camera, the actual point cloud image of the current workpiece in the batch to be collected comprises:
acquiring an initial point cloud image of the current workpiece in the batch to be collected through the camera;
and removing distortion from the initial point cloud image and storing it in a three-channel image format to obtain the actual point cloud image of the current workpiece; that is, the three channels of each pixel in the actual point cloud image store the x, y and z coordinate values respectively.
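For an organized point cloud, the storage step can be as simple as the sketch below; the exact layout beyond "three channels storing x, y and z" is not specified in the disclosure, so the row-major packing is an assumption:

```python
import numpy as np

def to_pointcloud_image(organized_pts, height, width):
    """Pack an undistorted, organized point cloud (height*width x 3, row-major)
    into an H x W x 3 float32 image: channels 0, 1 and 2 hold x, y and z."""
    return np.asarray(organized_pts, dtype=np.float32).reshape(height, width, 3)
```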
In some of these embodiments, labeling the current workpiece comprises:
mapping the pixels in the mask image whose value is the second value onto the actual point cloud image, and segmenting out the accurate point cloud data of the current workpiece;
and labeling the accurate point cloud data of the workpieces in the batch to be collected with serial numbers starting from 1, in the order of successful matching. A sketch of this segmentation and numbering follows below.
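Continuing the comparison sketch above, segmentation and sequential numbering could look as follows (the dictionary keyed by serial number is an assumed bookkeeping structure):

```python
import numpy as np

def segment_and_label(actual_xyz, mask, labeled):
    """Cut the matched workpiece's accurate point cloud (N x 3) out of the
    H x W x 3 point cloud image using the second-value (True) mask pixels,
    and store it under the next serial number, starting from 1 in match order."""
    labeled[len(labeled) + 1] = actual_xyz[mask]
    return labeled
```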
In some embodiments, traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image comprises:
traversing each pixel value in the template depth map and judging whether the pixel value is zero;
if so, the pixel is a background point (the depth value at its corresponding coordinate is the farthest position) and is skipped;
and if not, comparing the pixel value with the depth value of the pixel at the corresponding position in the actual point cloud image.
In some embodiments, after labeling the accurate point cloud data of the workpieces in the batch to be collected, the method further comprises:
moving the camera, and calculating the camera's motion matrix from a first camera pose matrix before the move and a second camera pose matrix after the move;
converting the accurate point cloud data of a workpiece under the first camera pose matrix into accurate point cloud data under the second camera pose matrix according to the motion matrix;
and identifying all labeled workpieces visible in the camera's field of view after the move by comparing the incomplete point cloud data collected in that field of view with the accurate point cloud data under the second camera pose matrix.
It should be noted that, because of the relative motion between the camera and the workpieces, moving the camera also changes the projection of the workpieces in the camera's field of view; the changed projection can be obtained by applying to the original workpieces the inverse of the camera's motion. The camera's motion matrix (comprising translation and rotation) is obtained from the camera pose matrices before and after the move, and the accurate point cloud data of the previously labeled workpieces under the current camera pose can be calculated through the inverse of this motion matrix.
By comparing the point cloud coordinates of the labeled workpieces in the current camera view with the point cloud coordinates of the incomplete parts in the image, the previously assigned serial numbers of all incomplete workpiece point clouds in the image can be identified accurately, as sketched below.
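As a sketch of this transfer, assuming the camera pose matrices are 4x4 camera-to-world transforms (if the disclosure's convention is world-to-camera, the composition below must be inverted):

```python
import numpy as np

def transfer_labels(T_cam1, T_cam2, labeled):
    """Re-express labeled workpiece point clouds in the moved camera's frame:
    points_cam2 = inv(T_cam2) @ T_cam1 @ points_cam1."""
    motion = np.linalg.inv(T_cam2) @ T_cam1            # camera-1 frame -> camera-2 frame
    out = {}
    for serial, pts in labeled.items():
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        out[serial] = (motion @ pts_h.T).T[:, :3]
    return out
```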
Through this accurate matching and labeling, occluded or incomplete workpieces (i.e., those only partially present in the image) in the images captured by the camera are identified effectively. When processing a large batch of captured images, the number of repeated labeling and identification operations is greatly reduced, improving working efficiency.
It should be noted that the steps illustrated in the above flowcharts may be executed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
An embodiment of the application provides a system for identifying and labeling scattered workpieces. Fig. 2 is a structural block diagram of the system according to the embodiment of the application; as shown in Fig. 2, the system comprises a data acquisition module 21, a model operation module 22, and a matching and labeling module 23;
the data acquisition module 21 sequentially places the workpieces of the batch to be collected in the camera's field of view, and acquires the actual point cloud image and workpiece pose matrix of the current workpiece in the batch through the camera;
the model operation module 22 constructs a virtual camera model from the camera's intrinsic and extrinsic parameter matrices, and calculates the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece;
the matching and labeling module 23 calculates a template depth map from the template point cloud projection, compares the actual point cloud image with the template depth map to determine whether the current workpiece matches the template workpiece, and labels the current workpiece if it does.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring the acquired workpiece data;
s2, identifying and matching the current workpiece and the template workpiece;
and S3, if the matching is successful, marking.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method for identifying and labeling scattered workpieces in the above embodiments, an embodiment of the application may provide a storage medium for implementation. A computer program is stored on the storage medium; when executed by a processor, the computer program implements any of the above methods for identifying and labeling scattered workpieces.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program is executed by a processor to implement the method for identifying and labeling scattered workpieces. The display screen of the computer device may be a liquid crystal display or an electronic-ink display; the input device may be a touch layer covering the display screen, a key, trackball or touchpad on the housing of the computer device, or an external keyboard, touchpad or mouse.
In one embodiment, fig. 3 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the application. As shown in fig. 3, an electronic device is provided, which may be a server. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus; the non-volatile memory stores an operating system, a computer program and a database. The processor provides computing and control capabilities; the network interface communicates with external terminals through a network connection; the internal memory provides an environment for running the operating system and the computer program; the computer program is executed by the processor to implement the method for identifying and labeling scattered workpieces; and the database stores data.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices to which the subject application may be applied, and that a particular electronic device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM) and direct Rambus DRAM (DRDRAM).
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above embodiments express only several implementations of the application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, and these fall within the scope of protection of the application. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (9)

1. A method for identifying and labeling scattered workpieces, characterized by comprising:
sequentially placing the workpieces of the batch to be collected in the camera's field of view, and repeating the preset steps until all workpieces in the batch to be collected are identified and labeled;
the preset steps comprise:
acquiring, through the camera, an actual point cloud image and a workpiece pose matrix of the current workpiece in the batch to be collected;
constructing a virtual camera model according to the internal parameter matrix and the external parameter matrix of the camera;
calculating template point cloud projection under the virtual camera model according to the workpiece pose matrix of the current workpiece and template point cloud data of the template workpiece;
calculating a template depth map from the template point cloud projection, and creating a mask image of the same size as the template depth map, wherein the pixel values in the mask image default to a first value; traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image; if the difference between the pixel value and the depth value is smaller than a preset threshold, setting the value of the corresponding pixel in the mask image to a second value; and determining whether the current workpiece matches the template workpiece according to the pixels in the mask image whose value is the second value, and if so, labeling the current workpiece.
2. The method of claim 1, wherein obtaining the template depth map from the template point cloud projection comprises:
reading the depth value at the coordinates corresponding to each pixel in the template point cloud projection through the glReadPixels function in OpenGL, and storing the depth values of all pixels in a depth map format to obtain the template depth map of the template workpiece.
3. The method of claim 1, wherein acquiring, through the camera, the actual point cloud image of the current workpiece in the batch to be collected comprises:
acquiring an initial point cloud image of the current workpiece in the batch to be collected through the camera;
and removing distortion from the initial point cloud image and storing it in a three-channel image format to obtain the actual point cloud image of the current workpiece; that is, the three channels of each pixel in the actual point cloud image store the x, y and z coordinate values respectively.
4. The method of claim 1, wherein labeling the current workpiece comprises:
mapping the pixels in the mask image whose value is the second value onto the actual point cloud image, and segmenting out the accurate point cloud data of the current workpiece;
and labeling the accurate point cloud data of the workpieces in the batch to be collected with serial numbers starting from 1, in the order of successful matching.
5. The method of claim 1, wherein traversing each pixel value in the template depth map and comparing it with the depth value of the pixel at the corresponding position in the actual point cloud image comprises:
traversing each pixel value in the template depth map and judging whether the pixel value is zero;
if so, the pixel is a background point and is skipped;
and if not, comparing the pixel value with the depth value of the pixel at the corresponding position in the actual point cloud image.
6. The method of claim 4, wherein after labeling the accurate point cloud data of the workpieces in the batch to be collected, the method further comprises:
moving the camera, and calculating the camera's motion matrix from a first camera pose matrix before the move and a second camera pose matrix after the move;
converting the accurate point cloud data of a workpiece under the first camera pose matrix into accurate point cloud data under the second camera pose matrix according to the motion matrix;
and identifying all labeled workpieces visible in the camera's field of view after the move by comparing the incomplete point cloud data collected in that field of view with the accurate point cloud data under the second camera pose matrix.
7. The method of claim 1, wherein before calculating the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data, the method further comprises:
acquiring the template point cloud data of the template workpiece in a preset service, wherein the template point cloud data is an obj-format model file.
8. A system for identifying and labeling scattered workpieces, characterized by comprising a data acquisition module, a model operation module and a matching and labeling module;
the data acquisition module sequentially places the workpieces of the batch to be collected in the camera's field of view, and the camera acquires an actual point cloud image and a workpiece pose matrix of the current workpiece in the batch to be collected;
the model operation module constructs a virtual camera model from the camera's intrinsic and extrinsic parameter matrices, and calculates the template point cloud projection under the virtual camera model from the workpiece pose matrix of the current workpiece and the template point cloud data of the template workpiece;
the matching and labeling module calculates a template depth map from the template point cloud projection and creates a mask image of the same size as the template depth map, wherein the pixel values in the mask image default to a first value; traverses each pixel value in the template depth map and compares it with the depth value of the pixel at the corresponding position in the actual point cloud image; if the difference between the pixel value and the depth value is smaller than a preset threshold, sets the value of the corresponding pixel in the mask image to a second value; and determines whether the current workpiece matches the template workpiece according to the pixels in the mask image whose value is the second value, labeling the current workpiece if it matches.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for identifying and labeling scattered workpieces according to any one of claims 1 to 7.
CN202210043985.0A (filed 2022-01-14, priority 2022-01-14) Method, system and device for identifying and labeling scattered workpieces; granted as CN114049355B (Active)

Priority Applications (1)

CN202210043985.0A; priority and filing date: 2022-01-14; title: Method, system and device for identifying and labeling scattered workpieces

Publications (2)

CN114049355A, published 2022-02-15
CN114049355B, published 2022-04-19

Family

ID: 80196561

Family Applications (1)

CN202210043985.0A (Active); priority and filing date: 2022-01-14; title: Method, system and device for identifying and labeling scattered workpieces

Country Status (1)

CN: CN114049355B


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182689B * 2016-12-08 2021-06-22 Shenyang Institute of Automation, Chinese Academy of Sciences Three-dimensional identification and positioning method for plate-shaped workpieces applied to robot carrying and polishing
US10671835B2 * 2018-03-05 2020-06-02 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Object recognition
CN108830902A * 2018-04-19 2018-11-16 Jiangnan University Random workpiece identification and localization method based on point cloud processing
CN111784834A * 2020-06-24 2020-10-16 Beijing Baidu Netcom Science and Technology Co., Ltd. Point cloud map generation method and device and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110892725A (en) * 2017-07-13 2020-03-17 交互数字Vc控股公司 Method and apparatus for encoding/decoding a point cloud representing a 3D object
CN109940604A (en) * 2019-01-29 2019-06-28 中国工程物理研究院激光聚变研究中心 Workpiece 3 D positioning system and method based on point cloud data
CN110246127A (en) * 2019-06-17 2019-09-17 南京工程学院 Workpiece identification and localization method and system, sorting system based on depth camera
CN111179321A (en) * 2019-12-30 2020-05-19 南京埃斯顿机器人工程有限公司 Point cloud registration method based on template matching
CN111815706A (en) * 2020-06-23 2020-10-23 熵智科技(深圳)有限公司 Visual identification method, device, equipment and medium for single-article unstacking
CN112950562A (en) * 2021-02-22 2021-06-11 杭州申昊科技股份有限公司 Fastener detection algorithm based on line structured light
CN112837371A (en) * 2021-02-26 2021-05-25 梅卡曼德(北京)机器人科技有限公司 Object grabbing method and device based on 3D matching and computing equipment
CN113870430A (en) * 2021-12-06 2021-12-31 杭州灵西机器人智能科技有限公司 Workpiece data processing method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Fast template matching and pose estimation in 3D point clouds; Richard Vock et al.; Computers & Graphics; April 2019; vol. 79; pp. 36-45 *
The Detection of Non-Cooperative Targets in Space by Using 3D Point Cloud; Qiaoying Ding et al.; 2019 5th International Conference on Control, Automation and Robotics; 2019; pp. 545-549 *
Workpiece recognition and pose estimation based on 3D point cloud features; Zhang Tie et al.; Machinery Design & Manufacture; 2021-11-16; pp. 1-7 *
A six-dimensional pose estimation algorithm based on point pair features and local reference frames; Wang Huaming et al.; Journal of Jiangsu University; November 2019; vol. 40, no. 6; pp. 695-700 *

Also Published As

CN114049355A, published 2022-02-15

Similar Documents

Publication Publication Date Title
CN109388093B (en) Robot attitude control method and system based on line feature recognition and robot
CN110070564B (en) Feature point matching method, device, equipment and storage medium
CN109658454B (en) Pose information determination method, related device and storage medium
CN107845113B (en) Target element positioning method and device and user interface testing method and device
CN112581546A (en) Camera calibration method and device, computer equipment and storage medium
CN111242240B (en) Material detection method and device and terminal equipment
CN115176274A (en) Heterogeneous image registration method and system
CN110926330A (en) Image processing apparatus, image processing method, and program
CN112348958A (en) Method, device and system for acquiring key frame image and three-dimensional reconstruction method
CN112435223B (en) Target detection method, device and storage medium
CN112991456A (en) Shooting positioning method and device, computer equipment and storage medium
CN111401266A (en) Method, device, computer device and readable storage medium for positioning corner points of drawing book
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
US11544839B2 (en) System, apparatus and method for facilitating inspection of a target object
CN115131741A (en) Method and device for detecting code carving quality, computer equipment and storage medium
CN108805799B (en) Panoramic image synthesis apparatus, panoramic image synthesis method, and computer-readable storage medium
CN114049355B (en) Method, system and device for identifying and labeling scattered workpieces
CN113454684A (en) Key point calibration method and device
US9305235B1 (en) System and method for identifying and locating instances of a shape under large variations in linear degrees of freedom and/or stroke widths
CN110163864B (en) Image segmentation method and device, computer equipment and storage medium
CN107527011B (en) Non-contact skin resistance change trend detection method, device and equipment
CN116129177A (en) Image labeling method and device and electronic equipment
CN111401365B (en) OCR image automatic generation method and device
CN115187769A (en) Positioning method and device
US20210042576A1 (en) Image processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant