CN116309442A - Method for determining picking information and method for picking target object - Google Patents

Method for determining picking information and method for picking target object

Info

Publication number
CN116309442A
Authority
CN
China
Prior art keywords: picking, information, target area, target, determining
Prior art date
Legal status: Granted
Application number
CN202310259604.7A
Other languages
Chinese (zh)
Other versions
CN116309442B (en)
Inventor
戴至修
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310259604.7A
Publication of CN116309442A
Application granted
Publication of CN116309442B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/04 Sorting according to size
    • B07C5/10 Sorting according to size measured by light-responsive means
    • B07C5/34 Sorting according to other particular properties
    • B07C5/342 Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3422 Sorting according to other particular properties according to optical properties, e.g. colour, using video scanning devices, e.g. TV-cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/60 Analysis of geometric attributes
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Manipulator (AREA)

Abstract

The disclosure provides a method for determining picking information and a method for picking a target object, and relates to the technical field of image processing, in particular to the fields of computer vision, artificial intelligence, deep learning, and the like. A specific implementation scheme is as follows: determining a picking target area in a target acquisition image; in a case where the centroid coordinate point of the picking target area is located outside the picking target area, determining a picking coordinate point of the picking target area according to contour information of the picking target area, the picking coordinate point being located within the picking target area; and determining picking information of the picking target area according to image attribute information of the picking target area and the picking coordinate point, the picking information being used to instruct a picking mechanism to pick the target object corresponding to the picking target area. According to the technology of the present disclosure, the picking information of the target area can be accurately determined based on the acquired target acquisition image, so that the picking mechanism can accurately pick the target object by using the picking information.

Description

Method for determining picking information and method for picking target object
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to the fields of computer vision, artificial intelligence, deep learning, and the like.
Background
Target object picking technology in the related art cannot adequately meet the requirement that target objects be picked accurately, nor the requirement on the picking rate of target objects.
Disclosure of Invention
The disclosure provides a method for determining picking information and a method for picking a target object.
According to an aspect of the present disclosure, there is provided a method of determining picking information, including:
determining a picking target area in a target acquisition image;
determining a picking coordinate point of the picking target area according to the outline information of the picking target area under the condition that the centroid coordinate point of the picking target area is located outside the picking target area; wherein the picking coordinate point is positioned in the picking target area; and
determining the picking information of the picking target area according to the image attribute information of the picking target area and the picking coordinate point, wherein the picking information is used for instructing a picking mechanism to pick the target object corresponding to the picking target area.
According to another aspect of the present disclosure, there is provided a method of picking a target object, including:
acquiring, based on a target acquisition image, image attribute information of a picking target area in the target acquisition image by using the method for determining picking information according to any embodiment of the disclosure;
determining a picking mode and picking coordinate information of the picking target area according to the image attribute information; and
controlling a picking mechanism to pick a target object in a region to be picked corresponding to the target acquisition image based on the picking mode and the picking coordinate information.
According to another aspect of the present disclosure, there is provided a determination apparatus of picking information, including:
the first determining module is used for determining a picking target area in the target acquisition image;
the second determining module is used for determining the picking coordinate points of the picking target area according to the outline information of the picking target area under the condition that the centroid coordinate points of the picking target area are located outside the picking target area; wherein the picking coordinate point is positioned in the picking target area; and
the third determining module is used for determining the picking information of the picking target area according to the image attribute information and the picking coordinate point of the picking target area, wherein the picking information is used for instructing the picking mechanism to pick the target object corresponding to the picking target area.
According to another aspect of the present disclosure, there is provided a target object picking apparatus including:
the acquisition module is used for acquiring, based on a target acquisition image, image attribute information of a picking target area in the target acquisition image by using the method for determining picking information according to any embodiment of the disclosure;
the fourth determining module is used for determining a picking mode and picking coordinate information of the picking target area according to the image attribute information; and
the control module is used for controlling the picking mechanism to pick the target object in the region to be picked corresponding to the target acquisition image based on the picking mode and the picking coordinate information.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a picking machine, including:
a machine body;
the carrier tray is arranged on the table top of the machine body and is used for carrying the material to be picked;
the image acquisition device is arranged on the machine body and is used for acquiring images of the materials to be picked;
the picking mechanism is arranged on the machine body and used for executing the picking action of the target object;
and a controller electrically connected with the image acquisition device and the picking mechanism for executing the method of any one of the embodiments of the disclosure.
According to the technology of the present disclosure, the picking information of the target area can be accurately determined based on the acquired target acquisition image, so that the picking mechanism can accurately pick the target object by using the picking information.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a method of determining picking information according to an embodiment of the present disclosure;
FIG. 2 is an application scenario diagram of a method of determining picking information according to an embodiment of the present disclosure;
FIG. 3 is an application scenario diagram of a method of determining picking information according to an embodiment of the present disclosure;
FIG. 4 is a flow diagram of a method of picking a target object according to an embodiment of the present disclosure;
FIG. 5 is an application scenario diagram of a method of picking a target object according to an embodiment of the present disclosure;
FIG. 6 is an application scenario diagram of a method of picking a target object according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a device for determining picking information according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a target object picking apparatus according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a picking machine according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of an electronic device for implementing a method of determining picking information and/or a method of picking a target object according to embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, an embodiment of the present disclosure provides a method for determining picking information, including:
step S101: a pick target region in the target acquisition image is determined.
Step S102: in a case where the centroid coordinate point of the picking target area is located outside the picking target area, the picking coordinate point of the picking target area is determined according to the contour information of the picking target area. Wherein the picking coordinate point is located in the picking target area. And
Step S103: and determining the picking information of the picking target area according to the image attribute information and the picking coordinate points of the picking target area, wherein the picking information is used for indicating a picking mechanism to pick the target object corresponding to the picking target area.
According to the embodiment of the disclosure, it is to be noted that:
the target acquisition image is an image obtained by the image acquisition device after shooting the material to be picked from the upper side of the material to be picked. The target acquisition image can be an original image which is acquired by the image acquisition device and is consistent with the acquisition visual field range, and the target acquisition image can also be a part of image which is cut out from the original image. The target acquisition image comprises an image of the material to be picked and an image of the target object doped in the material to be picked.
The target area is picked up and can be understood as an image area presented by a target object in the target acquisition image. The picking target area may be an area corresponding to an independent target object, or may be a complete area corresponding to a plurality of overlapped target objects. The specific method for determining the picking target area from the target acquisition image is not particularly limited herein, and may be implemented by any image recognition algorithm or image recognition model in the prior art.
When the target object takes on a ring shape, an arc shape or an irregular shape, the centroid of the target object is located outside the target object, and if the centroid coordinate point is taken as a picking coordinate point of the target object, the picking mechanism cannot pick the target object based on the picking coordinate point with a high probability. Therefore, a point located in the target object can be newly determined as a picking coordinate point based on the contour information of the target object. The specific manner of determining the picking coordinate point based on the contour information of the target object is not limited herein, and it is only necessary to ensure that the picking coordinate point is located in the target object. The determination method of the centroid coordinate point is not particularly limited herein, and any centroid determination algorithm in the prior art may be adopted.
The picking information is used for indicating the picking mechanism to pick the target object corresponding to the target area, and can be understood as image attribute information in the picking information, and is used for representing size information, contrast information, brightness information and the like of the picked target area. Based on the image attribute information, the adapted picking mechanism can be accurately selected. The picking mechanism can accurately position the specific position of the target object in the material to be picked based on the picking coordinate points in the picking information, so that the target object can be accurately picked from the material to be picked. Centroid coordinate points and pickoff coordinate points can be understood as pixel coordinates in the image coordinate system.
A target object is understood to be any object that needs to be picked from a material. The specific target object may be adjusted according to the picking scene, and is not particularly limited herein. For example, when the material is a food material, the target object is an impurity in the food material. When the material is a parcel on a logistics conveying line, the target object is a gunny bag parcel to be sorted. When the material is garbage, the target object is a recyclable item in the garbage.
According to the technology of the embodiment of the disclosure, the picking information of the target area can be accurately determined based on the acquired target acquisition image, so that the picking mechanism can accurately pick the target object by using the picking information. By using the picking coordinate point determined from the contour information of the picking target area in place of the centroid coordinate point, the problem that the target object cannot be accurately picked when its shape is irregular can be effectively solved. Meanwhile, since the picking coordinates are determined within the picking target area, the target object can be picked accurately, the possibility that surrounding material is picked along with it is reduced as far as possible, and the loss of material is reduced.
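As an illustration of the centroid check described above, the following sketch computes the centroid of a region contour and tests whether it falls outside the region, which is the condition under which a new picking coordinate point must be determined. The use of Python with OpenCV and these particular helper names is an assumption for illustration; the disclosure does not prescribe any specific library or algorithm.

```python
import cv2
import numpy as np

def centroid_outside_region(contour: np.ndarray) -> bool:
    """Return True if the contour's centroid lies outside the contour itself.

    This models the ring/arc case described above, where the centroid of an
    annular or irregular target object does not lie on the object and therefore
    cannot be used as a picking coordinate point.
    """
    moments = cv2.moments(contour)
    if moments["m00"] == 0:  # degenerate contour; treat as "centroid outside"
        return True
    cx = moments["m10"] / moments["m00"]
    cy = moments["m01"] / moments["m00"]
    # pointPolygonTest returns a negative value when the point is outside.
    return cv2.pointPolygonTest(contour, (cx, cy), measureDist=False) < 0
```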
In one implementation, the method for determining the picking information in the embodiment of the disclosure includes steps S101 to S103, where step S101: determining a picking target area in a target acquisition image comprises:
step S1011: and identifying the target acquisition image by using the image identification model, and determining a candidate target area.
Step S1012: and determining the image attribute information of the candidate target area according to the image information of the plurality of pixel points in the candidate target area.
Step S1013: in the case where the image attribute information satisfies the screening threshold, the candidate target area is determined as the picking target area.
According to the embodiment of the disclosure, it is to be noted that:
the image recognition model may be any model in the prior art, and is not particularly limited herein. Model training can be realized through the target object sample, so that the model can identify the target object in the image.
The image information of the plurality of pixels in the target area may be understood as image information of all pixels forming the target area, or may be understood as image information of a part of pixels included in the target area.
The image information of the pixel may be understood as luminance information, contrast information, and the like of the pixel.
The candidate target area can be understood as an image area presented by a suspected target object in the target acquisition image. The candidate target area may be an area corresponding to an independent target object, or may be a complete area corresponding to a plurality of overlapped target objects. The image attribute information of the candidate target region can be understood as size information, contrast information, luminance information, and the like of the candidate target region.
The screening threshold can be selected and adjusted according to the image attribute information. When the image attribute information contains information of a plurality of dimensions, the screening threshold may correspond to a plurality of thresholds of different dimensions.
According to the techniques of the embodiments of the present disclosure, the image recognition model of the embodiments of the present disclosure may be used regardless of the type of target object. That is, the method of the embodiments of the disclosure does not need to distinguish between types of target objects, so when the model is used to identify target objects in the target acquisition image, there is no need to select samples of a specific type of target object for training the model. This alleviates the model training problem caused by a small number of target object samples, reduces the difficulty of sample labeling, improves the precision of model training, and reduces the model's missed-detection rate and false-detection rate.
In one implementation, the method for determining picking information in the embodiment of the disclosure includes steps S101 to S103 and steps S1011 to S1013, where step S1011, identifying the target acquisition image by using the image recognition model and determining a candidate target area, includes:
and identifying the target acquisition image by using the image identification model, and determining a plurality of initial target areas.
And under the condition that the contour overlapping of the plurality of initial target areas is determined according to the contour information of the plurality of initial target areas, fusing the contours of the plurality of initial target areas to obtain candidate target areas.
According to the embodiment of the disclosure, it is to be noted that:
fusing the contours of the plurality of initial target areas can be understood as fusing the initial target areas of the plurality of target objects into one target area, i.e. recognizing the plurality of target objects as one integral target object. As shown in the left diagram of fig. 2, there is a case where the outlines of the plurality of initial target areas overlap, in which case the outlines of the plurality of initial target areas are fused to obtain the result shown in the right diagram of fig. 2, that is, the outlines of the plurality of initial target areas are fused into one candidate target area.
The specific manner in which the contours of the plurality of initial target regions are fused may be selected and adjusted as desired, and is not particularly limited herein. For example, all overlapping initial target regions may be merged using the spatial merge tools of GEOS (Geometry Engine - Open Source).
According to the technology of the embodiment of the disclosure, when the density of target objects in the material is high, the target objects easily overlap one another. If each overlapped target object were picked separately, not only would picking efficiency be low, but picking one overlapped target object could also disturb the positions of the other overlapped target objects; since the image was acquired based on the earlier positions, the target objects could then not be picked cleanly and the material would need to be picked repeatedly. The method of the embodiment of the disclosure can effectively solve these problems: by fusing the contours of the overlapped initial target areas, a plurality of target objects can be confirmed as one complete target object, the corresponding picking coordinates can be optimized based on the complete target object, and the target objects can be picked in one pass instead of several, thereby improving the picking efficiency of the target objects. Meanwhile, a better-adapted picking mechanism and picking mode can be matched based on the fused target area.
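For illustration only, the following sketch merges overlapping initial target areas using Shapely, a Python binding of the GEOS engine mentioned above; the polygon representation and the use of unary_union are assumptions about one workable implementation, not the implementation of the disclosure.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def fuse_overlapping_regions(initial_regions):
    """Fuse initial target areas whose contours overlap into candidate target areas.

    initial_regions: a list of contours, each a list of (x, y) points in image
    coordinates. unary_union dissolves all overlapping polygons, so a cluster
    of overlapped target objects becomes a single candidate target area.
    """
    polygons = [Polygon(points) for points in initial_regions]
    merged = unary_union(polygons)
    # The union is a single Polygon or a MultiPolygon of disjoint clusters.
    if merged.geom_type == "MultiPolygon":
        return list(merged.geoms)
    return [merged]
```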
In one implementation, the method for determining the picking information in the embodiment of the disclosure includes steps S101 to S103 and steps S1011 to S1013, wherein the image attribute information includes at least one of the following sub information: the length of the candidate target region, the width of the candidate target region, the area of the candidate target region, the perimeter of the candidate target region, the image contrast of the candidate target region, the brightness of the candidate target region.
According to the technology of the embodiment of the disclosure, by comparing image attribute information of different dimensions against the screening thresholds, it can be determined more accurately whether the candidate target area is a picking target area, that is, whether it corresponds to a target object in the material.
In one implementation, the method for determining picking information in the embodiment of the disclosure includes steps S101 to S103 and steps S1011 to S1013, where the image attribute information includes a plurality of pieces of sub information, and step S1013, in the case where the image attribute information satisfies the screening threshold, determining the candidate target area as the picking target area, includes:
in the case where any one of the sub information in the image attribute information does not satisfy the corresponding screening threshold, the candidate target area is determined as a non-picking target area.
For example, when the length of the candidate target area is smaller than the screening threshold of the length, the candidate target area is determined to be a non-picking target area, and target object picking is not performed on the area. For another example, when the area of the candidate target area is smaller than the screening threshold of the area, determining the candidate target area as a non-picking target area, and not picking the target object in the area. For another example, when the contrast of the candidate target area is smaller than the screening threshold of the contrast, the candidate target area is determined to be a non-picking target area, and target object picking is not performed on the area.
In one implementation, the method for determining the picking information in the embodiment of the disclosure includes steps S101 to S103 and steps S1011 to S1013, wherein the image attribute information includes a plurality of pieces of sub information, step S1013: in the case where the image attribute information satisfies the screening threshold, determining the candidate target area as the picking target area includes:
in the case where the plurality of pieces of sub information in the image attribute information each satisfy the corresponding screening threshold value, and/or in the case where the logical operation relationship between the plurality of pieces of sub information in the image attribute information satisfies the corresponding screening threshold value, the candidate target area is determined as the picking target area.
According to the embodiment of the disclosure, it is to be noted that:
in the case where the plurality of pieces of sub information in the image attribute information satisfy the corresponding screening threshold, determining the candidate target area as the picking target area may be understood as: and if each piece of sub information in the image attribute information meets the screening threshold corresponding to the sub information, determining the candidate target area as the picking target area. For example, the sub information included in the image attribute information of the candidate target area is the length of the candidate target area, the area of the candidate target area, and the contrast of the candidate target area, and when the length of the candidate target area satisfies the screening threshold of the length, the area of the candidate target area satisfies the screening threshold of the area, and the contrast of the candidate target area satisfies the screening threshold of the contrast, the candidate target area is determined to be the picking target area, and the object needs to be picked for the area.
In the case where the logical operation relationship between the plurality of pieces of sub-information in the image attribute information satisfies the corresponding screening threshold, determining the candidate target area as the picking target area may be understood as: when a certain logic operation relation of the plurality of sub-information directly meets a screening threshold value of the corresponding logic operation relation, the candidate target area is determined as the picking target area. For example, the ratio of the length and width of the candidate target region satisfies the corresponding logical operation screening threshold, and the candidate target region may be determined as the picking target region. For another example, when the brightness of the candidate target area does not satisfy the screening threshold of brightness, but the contrast of the candidate target area satisfies the screening threshold of contrast, the candidate target area is determined as the picking target area. For another example, when the ratio of the brightness of the candidate target area to the brightness of the target acquisition image satisfies the screening threshold of brightness, the candidate target area is determined as the picking target area. For example, the sub information included in the image attribute information of the candidate target area is the length of the candidate target area, the area of the candidate target area, and the contrast of the candidate target area, and when the length of the candidate target area satisfies the screening threshold of the length, the area of the candidate target area satisfies the screening threshold of the area, the contrast of the candidate target area satisfies the screening threshold of the contrast, and the logical operation relationship of the length of the candidate target area, the area of the candidate target area, and the contrast of the candidate target area satisfies the corresponding screening threshold, it is determined that the candidate target area is the picking target area, and the object needs to be picked for the area.
In the case where the plurality of pieces of sub-information in the image attribute information all satisfy the corresponding screening threshold and the logical operation relationship between the plurality of pieces of sub-information satisfies the corresponding screening threshold, determining the candidate target area as the picking target area may be understood as: and when each piece of sub information in the image attribute information meets the screening threshold corresponding to the sub information and a certain logic operation relation among multiple pieces of sub information meets the screening threshold of the corresponding logic operation relation, determining the candidate target area as the picking target area.
According to the technology of the embodiment of the disclosure, after the image recognition model preliminarily recognizes the candidate target area, in order to compensate for false detections and missed detections by the image recognition model, whether the candidate target area is a picking target area containing a target object to be picked can be further determined by comparing the image attribute information with the screening thresholds.
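A hedged sketch of this screening step is shown below; the sub information keys, the aspect-ratio rule used as the logical operation relationship, and the threshold structure are illustrative assumptions rather than values fixed by the disclosure.

```python
def is_picking_target(attrs: dict, thresholds: dict) -> bool:
    """Decide whether a candidate target area is a picking target area.

    attrs holds sub information of the candidate region (length, width, area,
    contrast, brightness); thresholds holds the corresponding screening
    thresholds plus a limit for one logical operation relationship between
    sub information (here, the length/width ratio).
    """
    # every individual piece of sub information must satisfy its own threshold
    for key in ("length", "width", "area", "contrast", "brightness"):
        if attrs[key] < thresholds[key]:
            return False  # non-picking target area
    # a logical operation relationship between sub information, e.g. aspect ratio
    aspect_ratio = attrs["length"] / max(attrs["width"], 1e-6)
    return aspect_ratio <= thresholds["max_aspect_ratio"]
```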
In one implementation, the method for determining the picking information in the embodiment of the disclosure includes steps S101 to S103, where step S102: in a case where a centroid coordinate point of the picking target area is located outside the picking target area, determining the picking coordinate point of the picking target area according to profile information of the picking target area, includes:
In a case where the centroid coordinate point of the picking target area is located outside the picking target area, a plurality of contour feature points of the picking target area are determined according to contour information of the picking target area.
A picking coordinate point located in the picking target area is determined based on the plurality of contour feature points.
According to the embodiment of the disclosure, it is to be noted that:
the positions of the contour feature points can be adjusted according to the contour shapes of different picking target areas.
Based on a topology analysis method, a representative point of the picking target area, i.e. a point guaranteed to lie within the region, may be determined as the picking coordinate point from the overall contour shape of the picking target area and the plurality of contour feature points.
According to the technology of the embodiment of the disclosure, based on the contour information and the contour feature points of the picking target area, the picking coordinate point can be found accurately by means of simplified spatial analysis, so that the picking mechanism can accurately pick the target object.
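A minimal sketch of determining a picking coordinate point guaranteed to lie inside the picking target area is given below, using Shapely's representative_point(); treating that function as the representative point mentioned above is an assumption about one possible implementation.

```python
from shapely.geometry import Polygon

def picking_point_from_contour(contour_points):
    """Return an (x, y) picking coordinate point inside the picking target area.

    contour_points: a list of (x, y) pixel coordinates describing the region
    outline. representative_point() yields a cheaply computed point that is
    always within the polygon, which is what the picking coordinate point
    requires when the centroid falls outside a ring- or arc-shaped region.
    """
    region = Polygon(contour_points)
    point = region.representative_point()
    return point.x, point.y
```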
In one implementation, the method for determining the picking information in the embodiment of the disclosure includes steps S101 to S103, and further includes:
and visually displaying the picking information of the picking target area on the target acquisition image, wherein the picking information comprises contour information and/or picking coordinate points of the picking target area. And/or
The picking information is associated with the image attribute information, and the image attribute information is visually displayed in response to a query instruction for the picking information.
According to the embodiment of the disclosure, it is to be noted that:
as shown in fig. 2 and 3, contour information of the target area may be visually displayed on the target acquisition image so that a user intuitively observes whether the determined picking target area is accurate.
The image attribute information of the picking target area may be previously associated with the visually displayed picking information and stored in the database. In response to a query instruction for picking information, the image attribute information can be called up in the database based on the association relation, and the image attribute information can be visually displayed for a user to view. The image attribute information may be displayed to the user through an interactive interface of the terminal.
According to the technology of the embodiment of the disclosure, by visually displaying the picking information and the image attribute information, a user can intuitively see whether the current picking process of the target object is reasonable, that is, whether the contour recognition of the picking target area is accurate, whether the target object has been recognized, whether the determined picking coordinate point is suitable, and whether the picking mechanism and picking mode selected based on the image attribute information are reasonable. Based on the visualized information, the user can make targeted parameter adjustments to optimize the picking process of the target object.
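A possible visualization step, assuming OpenCV drawing primitives (the colors and overlay style are illustrative only), could look like the following sketch.

```python
import cv2

def visualize_picking_info(image, contours, picking_points):
    """Overlay contour information and picking coordinate points on the image.

    contours: a list of OpenCV contours (arrays of points);
    picking_points: a list of (x, y) pixel coordinates, numbered in picking order.
    """
    vis = image.copy()
    cv2.drawContours(vis, contours, -1, (0, 255, 0), 2)  # region outlines
    for idx, (x, y) in enumerate(picking_points):
        cv2.circle(vis, (int(x), int(y)), 4, (0, 0, 255), -1)  # picking point
        cv2.putText(vis, str(idx + 1), (int(x) + 6, int(y) - 6),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return vis
```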
As shown in fig. 4, an embodiment of the present disclosure provides a method for picking a target object, including:
step S401: based on the target acquisition image, image attribute information of a picking target area in the target acquisition image is acquired by using the method for determining the picking information according to any embodiment of the disclosure.
Step S402: and determining a picking mode and picking coordinate information of the picking target area according to the image attribute information. And
Step S403: and controlling the picking mechanism to pick the target object in the region to be picked corresponding to the target acquisition image based on the picking mode and the picking coordinate information.
According to the embodiment of the disclosure, it is to be noted that:
the image attribute information is used to characterize size information, contrast information, brightness information, and the like of the picking target area.
The picking mode may include the type of picking mechanism selected and the specific picking speed, residence time, etc. of the picking mechanism. The picking pattern adapted thereto may be selected according to the image attribute information of the picking target area.
Picking coordinate information can be understood as coordinates of a target object in a world coordinate system, and can be obtained by converting the picking coordinate point from the coordinate system of the image acquisition device to the world coordinate system of the carrier tray carrying the material. The picking coordinate information is used to instruct the picking mechanism to accurately pick the target object out of the material carried by the carrier tray.
According to the technology of the embodiment of the disclosure, the picking mode can be matched in a refined, categorized manner through the image attribute information; the optimal picking mode is matched based on visual characteristics (namely, the image attribute information), which improves the picking efficiency of the target object and reduces the loss rate of the material. All target objects within the acquisition field of view of the image acquisition device are located quickly and more robustly, and by introducing post-processing computation for categorizing the picking mode, various scenes such as target object picking, material recovery, and material sorting can be handled flexibly, achieving fine picking more efficiently from a global perspective.
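As a sketch of the coordinate conversion mentioned above, assuming the image plane and the carrier tray surface are related by a pre-calibrated homography (the calibration procedure itself is not specified by the disclosure), a picking coordinate point can be mapped to tray world coordinates as follows.

```python
import numpy as np

def pixel_to_world(picking_point_px, homography):
    """Convert a picking coordinate point from image pixels to tray coordinates.

    homography: a 3x3 matrix mapping the image plane to the carrier tray plane,
    obtained in advance by calibrating the image acquisition device against
    known points on the tray (an assumed calibration step).
    """
    u, v = picking_point_px
    p = homography @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```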
In one implementation, the method for picking a target object according to the embodiment of the present disclosure includes steps S401 to S403, where step S402: determining a picking mode and picking coordinate information of the picking target area according to the image attribute information, including:
step S4021: and determining a picking mode for picking the target area according to the image attribute information.
Step S4022: according to the picking mode, a matching picking mechanism is determined.
Step S4023: and determining picking coordinate information according to the outline information of the picking target area and the working range of the picking mechanism.
According to the embodiment of the disclosure, it is to be noted that:
the picking mode may include a manner of picking (e.g., sucking out the target object by suction or pinching out the target object by grabbing), a speed of picking (e.g., pinching speed or suction power), a residence time of picking one target object, and the like.
The picking mechanism can comprise a suction nozzle with a suction function, a mechanical claw with a clamping function, a plurality of suction nozzles with different calibers, a plurality of mechanical claws of different sizes, a plurality of linked suction nozzles, a plurality of linked mechanical claws, or a linked suction nozzle and mechanical claw. Linkage can be understood to mean that while one suction nozzle moves toward the material and picks a target object, another moves in the direction away from the material.
According to the technology of the embodiment of the disclosure, the picking mode can be matched in a refined, categorized manner through the image attribute information; the optimal picking mode is matched based on visual characteristics (namely, the image attribute information), which improves the picking efficiency of the target object and reduces the loss rate of the material.
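The following sketch illustrates one way such categorized matching of picking modes could be expressed; the area thresholds, mechanism names, and dwell-time formula are purely illustrative assumptions and not values given by the disclosure.

```python
def select_picking_mode(attrs: dict) -> dict:
    """Match a picking mode to a picking target area from its image attributes.

    Small regions are assigned a small suction nozzle, medium regions a larger
    nozzle, and large regions a mechanical claw; the residence time grows with
    the region area. All numbers below are placeholders.
    """
    area = attrs["area"]
    if area < 500:
        mechanism, speed = "small_nozzle", "high"
    elif area < 5000:
        mechanism, speed = "large_nozzle", "medium"
    else:
        mechanism, speed = "mechanical_claw", "low"
    dwell_ms = 100 + 0.02 * area  # residence time for one pick
    return {"mechanism": mechanism, "speed": speed, "dwell_ms": dwell_ms}
```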
In one implementation, the method for picking a target object in the embodiment of the disclosure includes steps S401 to S403, and steps S4021 to S4023, where step S4023: determining picking coordinate information according to the outline information of the picking target area and the operation range of the picking mechanism, including:
In the case where the outline information of the picking target area satisfies the operation range of the picking mechanism, the picking coordinate information is determined from the picking coordinate points of the picking target area. Wherein the picking coordinate points are obtained based on the method for determining the picking information according to any of the embodiments of the present disclosure.
According to the embodiment of the disclosure, it is to be noted that:
the working range of the picking mechanism can be understood as the maximum area range in which the picking mechanism can pick the target object at one time.
As shown in fig. 5, the outline information of the picking target area satisfies the operation range of the picking mechanism, which may be understood that the outline of the picking target area is smaller than or slightly larger than the operation range of the picking mechanism, that is, the picking mechanism may completely pick out the target object corresponding to the picking target area at one time.
According to the technology of the embodiment of the disclosure, by checking whether the outline information of the picking target area satisfies the operation range of the picking mechanism, it can be accurately judged whether the target object corresponding to the picking target area can be picked by the picking mechanism in a single operation.
In one implementation, the method for picking a target object in the embodiment of the disclosure includes steps S401 to S403, and steps S4021 to S4023, where step S4023: determining picking coordinate information according to the outline information of the picking target area and the operation range of the picking mechanism, including:
In the case where the contour information of the picking target area does not satisfy the operation range of the picking mechanism, the length information of the picking target area is determined from the contour information.
A plurality of pieces of picking coordinate information are determined in the picking target area according to the length information and the working range of the picking mechanism.
According to the embodiment of the disclosure, it is to be noted that:
the working range of the picking mechanism can be understood as the maximum area range in which the picking mechanism can pick the target object at one time.
As shown in fig. 6, the outline information of the picking target area not satisfying the operation range of the picking mechanism may be understood to mean that the outline of the picking target area is larger than the operation range of the picking mechanism, that is, the picking mechanism cannot completely pick out the target object corresponding to the picking target area in a single operation, and multiple pieces of picking coordinate information need to be determined so that the target object is picked several times until it is completely cleared.
Determining a plurality of pieces of picking coordinate information in the picking target area based on the length information and the working range of the picking mechanism may be understood as dividing the length information by the working range to determine how many operation ranges are required to completely pick and clear the target object. As shown in fig. 6, three pieces of picking coordinate information need to be determined according to the length of the target object and the working range of the picking mechanism so that the target object can be completely cleared.
According to the technique of the embodiment of the present disclosure, complete picking and clearing of a large-sized target object can also be achieved in the case where the picking target area (the outline of the target object) is larger than the working range of the picking mechanism.
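A sketch covering both cases (a single pick when the region fits the working range, several picks spaced along the region's long axis otherwise) is given below; estimating the region length from a principal-axis projection is an assumption about one workable implementation.

```python
import numpy as np

def picking_points_for_region(contour_points, working_range, single_point):
    """Return one or several picking coordinate points for a picking target area.

    working_range: length of the area the picking mechanism can clear at once;
    single_point: the picking coordinate point already determined for the region.
    If the region's length fits within the working range, one pick suffices;
    otherwise picking points are spaced along the region's principal axis so
    that consecutive operation ranges cover the whole length.
    """
    pts = np.asarray(contour_points, dtype=float)
    center = pts.mean(axis=0)
    # principal (long) axis of the contour points
    _, _, vt = np.linalg.svd(pts - center)
    axis = vt[0]
    proj = (pts - center) @ axis
    length = proj.max() - proj.min()
    if length <= working_range:
        return [single_point]
    n = int(np.ceil(length / working_range))
    offsets = np.linspace(proj.min() + working_range / 2,
                          proj.max() - working_range / 2, n)
    return [tuple(center + t * axis) for t in offsets]
```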
In one implementation, the method for picking a target object according to the embodiment of the present disclosure includes steps S401 to S403, and further includes:
and identifying the original acquired image, and determining the number of target objects in the original acquired image.
In the case where the number of target objects reaches a threshold number, the original acquired image is divided into a plurality of sub-images.
Each of the plurality of sub-images is determined to be a target acquisition image.
According to the embodiment of the disclosure, it is to be noted that:
the plurality of sub-images are each determined as a target acquisition image, which can be understood as each of the plurality of sub-images being determined as a target acquisition image. Each sub-image requires the determination method of the picking information and the picking method of the target object of any of the embodiments of the present disclosure to be processed.
The original acquired image can be understood as an image which is acquired by the image acquisition device and is consistent with the acquired field of view.
According to the technology of the embodiment of the disclosure, when the density of target objects is high, identifying the entire original acquired image at once increases the computational burden; dividing the original acquired image into a plurality of sub-images can improve both recognition efficiency and picking efficiency.
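A simple sketch of the sub-image division is shown below, assuming a fixed grid; the 2x2 grid size is an illustrative choice, since the disclosure only requires division when the number of target objects reaches a threshold number.

```python
def split_into_subimages(image, rows=2, cols=2):
    """Divide the original acquired image into a grid of sub-images.

    Each sub-image is then processed as an independent target acquisition
    image. The pixel offset of every sub-image is returned alongside it so
    that picking coordinates can be mapped back to the original image.
    """
    h, w = image.shape[:2]
    sub_images = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            sub_images.append(((x0, y0), image[y0:y1, x0:x1]))
    return sub_images
```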
In one implementation, the method for picking a target object according to the embodiment of the present disclosure includes steps S401 to S403, and further includes:
Carrying out image acquisition on the region to be picked for which target object picking has been performed, so as to obtain a re-shot target acquisition image.
Comparing the number of target objects in the re-shot target acquisition image with the number of target objects in the target acquisition image to obtain a picking comparison result.
Determining, according to the picking comparison result, whether target object picking has been completed for the region to be picked on which target object picking was performed.
According to the embodiment of the disclosure, it is to be noted that:
if it is determined that there are still target objects in the region to be picked for which the target object has been picked, the method for determining the picking information and the method for picking the target object according to any of the embodiments of the present disclosure may be repeatedly performed, and the identification and the picking of the target object may be performed again.
According to the technology of the embodiment of the disclosure, it can be ensured that the target objects are picked and cleared completely.
In one example, when there are many picking target areas in the target acquisition image, the plurality of picking target areas may be numbered in a picking order; that is, a picking path over the plurality of picking target areas may be planned. This enables the picking mechanism to pick the target objects corresponding to the picking target areas in the target acquisition image along the ordered picking path, reducing back-and-forth movement of the picking mechanism and improving picking efficiency.
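One possible way to plan such a picking path is a greedy nearest-neighbour ordering, sketched below; the disclosure does not fix a planning algorithm, so this heuristic is an assumption.

```python
import numpy as np

def order_picking_points(points):
    """Return indices of picking coordinate points in a planned picking order.

    A greedy nearest-neighbour ordering reduces back-and-forth travel of the
    picking mechanism compared with picking in detection order.
    """
    if not points:
        return []
    pts = np.asarray(points, dtype=float)
    remaining = list(range(len(points)))
    order = [remaining.pop(0)]  # start from the first detected target
    while remaining:
        last = pts[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(pts[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```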
In one example, the picking order labels and the picking path may also be visually displayed on the target acquisition image so that the user can see whether the planning of the picking path is reasonable. As shown in fig. 3, the sequence numbers in the figure represent the order in which each target object is picked.
In one example, the process of picking a target object includes:
acquiring, by an image acquisition device of a picking machine, images of the material on a carrier tray of the picking machine, to obtain a target acquisition image;
determining a picking target area in a target acquisition image;
determining a picking coordinate point of the picking target area according to the outline information of the picking target area under the condition that the centroid coordinate point of the picking target area is located outside the picking target area; wherein the picking coordinate point is positioned in the picking target area;
determining picking information of the picking target area according to the image attribute information of the picking target area and the picking coordinate points;
according to the image attribute information, determining a picking mode and picking coordinate information of the picking target area;
and controlling a picking mechanism of the picking machine table to pick the target object in the region to be picked corresponding to the target acquired image based on the picking mode and the picking coordinate information.
As shown in fig. 7, an embodiment of the present disclosure provides a device for determining picking information, including:
a first determining module 710 is configured to determine a picking target area in the target acquisition image.
The second determining module 720 is configured to determine, when the centroid coordinate point of the picking target area is located outside the picking target area, the picking coordinate point of the picking target area according to the contour information of the picking target area, wherein the picking coordinate point is located in the picking target area.
The third determining module 730 is configured to determine, according to the image attribute information and the picking coordinate point of the picking target area, the picking information of the picking target area, where the picking information is used to instruct the picking mechanism to pick the target object corresponding to the picking target area.
In one embodiment, the first determination module 710 includes:
and the first determining submodule is used for identifying the target acquisition image by utilizing the image identification model and determining a candidate target area.
And the second determining submodule is used for determining image attribute information of the candidate target area according to the image information of the plurality of pixel points in the candidate target area.
And a third determining sub-module for determining the candidate target area as a picking target area in the case that the image attribute information satisfies the screening threshold.
In one embodiment, the first determination submodule is configured to:
and identifying the target acquisition image by using the image identification model, and determining a plurality of initial target areas.
And under the condition that the contour overlapping of the plurality of initial target areas is determined according to the contour information of the plurality of initial target areas, fusing the contours of the plurality of initial target areas to obtain candidate target areas.
In one embodiment, the image attribute information includes at least one of the following sub-information: the length of the candidate target region, the width of the candidate target region, the area of the candidate target region, the perimeter of the candidate target region, the image contrast of the candidate target region, the brightness of the candidate target region.
In one embodiment, the image attribute information includes a plurality of sub-information, and the third determining module 730 is configured to:
in the case where the plurality of pieces of sub information in the image attribute information each satisfy the corresponding screening threshold value, and/or in the case where the logical operation relationship between the plurality of pieces of sub information in the image attribute information satisfies the corresponding screening threshold value, the candidate target area is determined as the picking target area.
In one embodiment, the second determining module 720 is configured to:
In a case where the centroid coordinate point of the picking target area is located outside the picking target area, a plurality of contour feature points of the picking target area are determined according to contour information of the picking target area.
A picking coordinate point located in the picking target area is determined based on the plurality of contour feature points.
In one implementation manner, the device for determining the picking information in the embodiment of the disclosure further includes:
and a visualization module for visually displaying the picking information of the picking target area on the target acquisition image, wherein the picking information comprises the outline information and/or the picking coordinate points of the picking target area. And/or
And an association module for associating the picking information with the image attribute information, and visually displaying the image attribute information in response to a query instruction of the picking information.
As shown in fig. 8, an embodiment of the present disclosure provides a target object picking apparatus, including:
the acquiring module 810 is configured to acquire, based on the target acquired image, image attribute information of a picking target area in the target acquired image by using the method for determining the picking information according to any embodiment of the disclosure.
And a fourth determining module 820 for determining a picking mode and picking coordinate information of the picking target area according to the image attribute information. And
And the control module 830 is configured to control the picking mechanism to pick the target object in the region to be picked corresponding to the target acquired image based on the picking mode and the picking coordinate information.
In one embodiment, the fourth determination module 820 includes:
and a fourth determination sub-module for determining a picking mode of picking the target area according to the image attribute information.
And a fifth determining sub-module for determining a matching picking mechanism according to the picking mode.
And a sixth determination submodule for determining picking coordinate information according to the outline information of the picking target area and the working range of the picking mechanism.
In one embodiment, the sixth determination submodule is to:
in the case where the outline information of the picking target area satisfies the operation range of the picking mechanism, the picking coordinate information is determined from the picking coordinate points of the picking target area. Wherein the picking coordinate points are based on the method of determining the picking information of any of the embodiments of the present disclosure.
In one embodiment, the sixth determination submodule is to:
in the case where the contour information of the picking target area does not satisfy the operation range of the picking mechanism, the length information of the picking target area is determined from the contour information.
A plurality of pieces of picking coordinate information are determined in the picking target area according to the length information and the working range of the picking mechanism.
In one implementation, the device for picking a target object according to an embodiment of the disclosure further includes:
a fifth determining module for identifying an original acquisition image and determining the number of target objects in the original acquisition image;
a dividing module for dividing the original acquisition image into a plurality of sub-images in the case where the number of target objects reaches a threshold number; and
a sixth determining module for determining each of the plurality of sub-images as a target acquisition image.
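A minimal sketch of this splitting step, assuming the image is an array with a .shape attribute (for example a NumPy ndarray) and using an arbitrary 2 x 2 grid; the grid shape is an illustrative choice:

```python
# Illustrative sketch only: split a dense image into a rows x cols grid of
# sub-images, each of which is then treated as a target acquisition image.
def split_if_dense(image, target_count, count_threshold, rows=2, cols=2):
    if target_count < count_threshold:
        return [image]                                   # keep the original image as-is
    h, w = image.shape[:2]
    return [
        image[r * h // rows:(r + 1) * h // rows,
              c * w // cols:(c + 1) * w // cols]
        for r in range(rows)
        for c in range(cols)
    ]
```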
In one implementation, the device for picking a target object according to an embodiment of the disclosure further includes:
an acquisition module for performing image acquisition on the region to be picked on which target object picking has been performed, to obtain a repeated shooting target acquisition image;
a comparison module for comparing the number of target objects in the repeated shooting target acquisition image with the number of target objects in the target acquisition image, to obtain a picking comparison result; and
a seventh determining module for determining, according to the picking comparison result, whether target object picking is completed in the region to be picked on which target object picking has been performed.
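For illustration, the comparison can reduce to a simple count check; the residual tolerance parameter below is an assumption, not part of the described apparatus:

```python
# Illustrative sketch only: decide whether picking of a region is finished by
# comparing the target-object counts before picking and in the repeated shot.
def picking_finished(count_before, count_after_reshoot, residual_tolerance=0):
    picked = count_before - count_after_reshoot
    finished = count_after_reshoot <= residual_tolerance
    return {'picked': picked, 'remaining': count_after_reshoot, 'finished': finished}
```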
For descriptions of specific functions and examples of each module and sub-module of the apparatus in the embodiments of the present disclosure, reference may be made to the related descriptions of corresponding steps in the foregoing method embodiments, which are not repeated herein.
As shown in fig. 9, an embodiment of the present disclosure provides a picking machine including:
the machine body 910.
The carrier 920 is disposed on the table of the machine body 910, and is used for carrying the material to be picked.
The image acquisition device 930 is disposed on the machine body 910, and is configured to perform image acquisition on a material to be picked.
The picking mechanism 940 is disposed on the machine body 910, and is configured to perform a picking action of the target object.
A controller is electrically connected to the image acquisition device 930 and the picking mechanism 940, and is configured to execute the method of determining picking information and/or the method of picking a target object according to any embodiment of the present disclosure.
According to the technology of the embodiments of the present disclosure, the controller can accurately determine the picking information of the target area based on the acquired target acquisition image, so that the picking mechanism can accurately pick the target object using that picking information. By replacing the centroid coordinate point with a picking coordinate point determined from the outline information of the picking target area, the problem that an irregularly shaped target object cannot be picked accurately is effectively addressed. Moreover, since the picking coordinate point is determined to lie inside the picking target area, the target object can be picked precisely, the chance of picking up surrounding material together with it is minimized, and material loss is reduced. In addition, the picking mode can be matched in a fine-grained, type-specific way through the image attribute information, so that an optimal picking mode is selected based on visual characteristics (that is, the image attribute information), improving the picking efficiency of the target object and reducing the material loss rate.
In one example, the picking machine provides several switchable operating modes: an identification mode, in which the picking drive is off and target objects are only identified but not picked; a picking mode, which supports multiple rounds of picking (the number of automatic successive picks per field of view can be configured); a repeat-shot mode, which supports a before-and-after comparison of picking (i.e., whether picking is performed again in place after a pick is completed); and a test mode, in which the machine works automatically in a designated picking mode (the picking mechanism is driven after the target object is manually aligned).
The picking machine supports sample collection and image recognition model iteration in the development stage, image recognition model invocation and picking mechanism configuration in the operation stage, and picking parameter matching and effect evaluation in the debugging stage. By switching the working mode, saving the scene, locating the target object, test-picking, tracking and analyzing the effect, tuning parameters, and iterating several times, the requirements of fine picking system tuning can be met.
In the development stage, the working mode of the picking machine is set to the identification mode; material is fed, photographs are taken, and the images are checked and labeled against the real objects, which can be used to train the image recognition model. The distribution of the target objects is then counted from the labeling results, and the form and size of the picking mechanism are designed accordingly. The picking mechanism is provided with a plurality of clamps, each corresponding to several different picking modes; a large clamp picks a large target object or a cluster of target objects, and a small clamp picks a small target object. The working mode is then set to the test mode, a clamp is aligned with a single target object whose diameter is within the design range, the automatic picking process is simulated, and picking parameters are tuned based on the picking effect on the real object, such as the descending speed of the clamp toward the disk surface, the picking height above the disk surface, and the dwell time of the clamp at the disk surface.
In the operation stage, the working mode of the picking machine is set to the picking mode. Each time the materials to be picked are fed, the image acquisition device captures an image within the field of view, the image recognition model is called to recognize target objects, and the recognized target objects are post-processed: typical-point pixel coordinates are determined, merged, calibrated, screened and matched, and then converted into picking coordinate information and a picking signal composed of the picking modes, which drives a clamp to pick. Each picking coordinate corresponds to one pick, and picks are carried out in sequence within the field of view with the clamp matched to the picking mode. During picking, the target outlines, target object attributes and picking marks are retained in images. For a scene with a high density of target objects, the movable range of the clamp is limited and one round of picking may not finish the job; multiple rounds of picking can therefore be configured, the image is captured again before each round, and picking signals are returned for all target objects in the field of view.
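Purely as an illustrative outline of this operation-stage loop (capture, recognize, post-process, convert, drive), and not the picking machine's actual software interface, the flow could be sketched as follows, with every dependency passed in as a hypothetical callable and the region fields 'attrs' and 'pick_point' assumed for illustration:

```python
# Illustrative sketch only; every callable and the region fields are hypothetical.
def pick_one_view(capture, recognize, screen, match_picking, pixel_to_world, drive_clamp):
    image = capture()                                   # acquire an image within the field of view
    regions = recognize(image)                          # image recognition model output
    signals = []
    for region in regions:                              # post-process each recognized target object
        if not screen(region['attrs']):                 # attribute-threshold screening
            continue
        mode, mechanism = match_picking(region['attrs'])
        coord = pixel_to_world(region['pick_point'])    # calibration: pixel -> machine coordinates
        signals.append({'mode': mode, 'mechanism': mechanism, 'coord': coord})
    for signal in signals:                              # one pick per picking coordinate
        drive_clamp(signal)
    return signals
```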
In the debugging stage, the repeat-shot mode of the picking machine is enabled, the picked materials are imaged again and the images are stored, and the images before and after picking are compared visually to check whether any target objects were missed. For a target object that is left over, first confirm whether a picking action appears in the image and whether a preceding pick may have disturbed the positioning of the target object; in that case the picking path can be optimized by adjusting the ordering rule of the picking coordinates, for example by setting a zigzag route, a buffer radius, and the like. Second, confirm whether the picking environment is consistent with the assumed ideal picking environment; in that case the picking environment can be maintained by adding auxiliary devices, such as a water-adding device or a tray sucker. If the target object has no corresponding picking mark on the image, the attribute thresholds need to be adjusted and the screening conditions relaxed. If the target object on the image does have a corresponding picking mark but the actual picking position deviates from the target object position, the calibration parameters need to be corrected again. If the picking environment is ideal and the clamp is aligned with the target object but the pick still fails, check the size of the target object and its relative relationship to the surrounding material, and optimize the picking mode or even replace the clamp. If a picking mark corresponds to a position where there is actually no target object, or to one that does not need to be picked, adjust the screening conditions and the attribute thresholds so as to improve picking efficiency.
In one example, the specific structure of the picking mechanism may be chosen according to the picking requirements and is not particularly limited here. For example, the picking mechanism provides clamps of different sizes connected by a connecting rod; when one clamp descends toward the disk surface to pick, the other clamps rise away from the disk surface. Each clamp has a safe range of motion, so that damage caused by out-of-range picking is avoided. All clamps are attached to the connecting rod through universal bayonets, so each clamp position can be swapped flexibly to suit different materials. A two-clamp scheme can handle target objects over a wider range of sizes than a single clamp, while remaining structurally simpler than a scheme with many clamps.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 10 shows a schematic block diagram of an example electronic device 1000 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Various components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the respective methods and processes described above, for example, the method of determining picking information and/or the method of picking a target object. For example, in some embodiments, the method of determining picking information and/or the method of picking a target object may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the above-described method of determining picking information and/or method of picking a target object may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the method of determining picking information and/or the method of picking a target object in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used, with steps reordered, added, or deleted, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions, improvements, etc. that are within the principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (30)

1. A method of determining picking information, comprising:
determining a picking target area in a target acquisition image;
determining a picking coordinate point of the picking target area according to the outline information of the picking target area under the condition that the centroid coordinate point of the picking target area is located outside the picking target area; wherein the picking coordinate point is located in the picking target area; and
and determining the picking information of the picking target area according to the image attribute information of the picking target area and the picking coordinate point, wherein the picking information is used for indicating a picking mechanism to pick the target object corresponding to the picking target area.
2. The method of claim 1, wherein determining a pick target area in the target acquisition image comprises:
identifying the target acquisition image by using an image identification model, and determining a candidate target area;
determining image attribute information of the candidate target area according to the image information of a plurality of pixel points in the candidate target area;
and determining the candidate target area as a picking target area under the condition that the image attribute information meets a screening threshold value.
3. The method of claim 2, wherein identifying the target acquisition image using the image recognition model, determining the candidate target region, comprises:
identifying the target acquisition image by utilizing an image identification model, and determining a plurality of initial target areas;
and under the condition that the contour overlapping of the plurality of initial target areas is determined according to the contour information of the plurality of initial target areas, fusing the contours of the plurality of initial target areas to obtain candidate target areas.
4. The method of claim 2, wherein the image attribute information includes at least one of the following sub-information: the length of the candidate target region, the width of the candidate target region, the area of the candidate target region, the perimeter of the candidate target region, the image contrast of the candidate target region, the brightness of the candidate target region.
5. The method of claim 4, wherein the image attribute information includes a plurality of the sub-information, and determining the candidate target area as a picking target area in a case where the image attribute information satisfies a screening threshold value includes:
and determining the candidate target area as a picking target area under the condition that the sub-information in the image attribute information meets the corresponding screening threshold value and/or the condition that the logic operation relation among the sub-information in the image attribute information meets the corresponding screening threshold value.
6. The method of claim 1, wherein, in a case where the centroid coordinate point of the picking target area is located outside the picking target area, determining the picking coordinate point of the picking target area according to profile information of the picking target area includes:
determining a plurality of contour feature points of the picking target area according to the contour information of the picking target area under the condition that the centroid coordinate point of the picking target area is located outside the picking target area;
and determining picking coordinate points positioned in the picking target area according to the contour feature points.
7. The method of any one of claims 1 to 6, further comprising:
visually displaying the picking information of the picking target area on the target acquisition image, wherein the picking information comprises the outline information and/or the picking coordinate points of the picking target area; and/or
And associating the picking information with the image attribute information, and visually displaying the image attribute information in response to a query instruction of the picking information.
8. A method of picking a target object, comprising:
acquiring image attribute information of a picking target area in a target acquisition image based on the target acquisition image by using the method of any one of claims 1 to 7;
determining a picking mode and picking coordinate information of the picking target area according to the image attribute information; and
and controlling a picking mechanism to pick the target object in the region to be picked corresponding to the target acquisition image based on the picking mode and the picking coordinate information.
9. The method of claim 8, wherein determining the picking mode and the picking coordinate information of the picking target area according to the image attribute information comprises:
Determining a picking mode of the picking target area according to the image attribute information;
determining a matched picking mechanism according to the picking mode;
and determining picking coordinate information according to the outline information of the picking target area and the working range of the picking mechanism.
10. The method of claim 9, wherein determining picking coordinate information based on the outline information of the picking target area and the job scope of the picking mechanism comprises:
determining picking coordinate information according to the picking coordinate points of the picking target area under the condition that the outline information of the picking target area meets the operation range of the picking mechanism; wherein the picking coordinate points are obtained based on the method of any one of claims 1 to 7.
11. The method of claim 9, wherein determining picking coordinate information based on the outline information of the picking target area and the job scope of the picking mechanism comprises:
determining length information of the picking target area according to the contour information when the contour information of the picking target area does not meet the operation range of the picking mechanism;
And determining a plurality of pieces of picking coordinate information in the picking target area according to the length information and the operation range of the picking mechanism.
12. The method of any of claims 8 to 11, further comprising:
identifying an original acquisition image and determining the number of target objects in the original acquisition image;
dividing the original acquired image into a plurality of sub-images when the number of target objects reaches a threshold number;
and determining each of the plurality of sub-images as the target acquisition image.
13. The method of any of claims 8 to 11, further comprising:
image acquisition is carried out on the region to be picked, which is subjected to target object picking, so as to obtain a repeated shooting target acquisition image;
comparing the number of the target objects in the repeated shooting target acquisition image with the number of the target objects in the target acquisition image to obtain a picking comparison result;
and determining whether the target object picking is finished in the region to be picked which is subjected to target object picking according to the picking comparison result.
14. A device for determining picking information, comprising:
the first determining module is used for determining a picking target area in the target acquisition image;
A second determining module, configured to determine a picking coordinate point of the picking target area according to profile information of the picking target area when a centroid coordinate point of the picking target area is located outside the picking target area; wherein the picking coordinate point is located in the picking target area; and
and the third determining module is used for determining the picking information of the picking target area according to the image attribute information of the picking target area and the picking coordinate point, and the picking information is used for indicating a picking mechanism to pick the target object corresponding to the picking target area.
15. The apparatus of claim 14, wherein the first determination module comprises:
the first determining submodule is used for identifying the target acquisition image by utilizing the image identification model and determining a candidate target area;
a second determining sub-module, configured to determine image attribute information of the candidate target area according to image information of a plurality of pixel points in the candidate target area;
and a third determining sub-module, configured to determine the candidate target area as a picking target area if the image attribute information satisfies a screening threshold.
16. The apparatus of claim 15, wherein the first determination submodule is to:
identifying the target acquisition image by utilizing an image identification model, and determining a plurality of initial target areas;
and under the condition that the contour overlapping of the plurality of initial target areas is determined according to the contour information of the plurality of initial target areas, fusing the contours of the plurality of initial target areas to obtain candidate target areas.
17. The apparatus of claim 15, wherein the image attribute information comprises at least one of the following sub-information: the length of the candidate target region, the width of the candidate target region, the area of the candidate target region, the perimeter of the candidate target region, the image contrast of the candidate target region, the brightness of the candidate target region.
18. The apparatus of claim 17, wherein the image attribute information includes a plurality of the sub-information, the third determining module to:
and determining the candidate target area as a picking target area under the condition that the sub-information in the image attribute information meets the corresponding screening threshold value and/or the condition that the logic operation relation among the sub-information in the image attribute information meets the corresponding screening threshold value.
19. The apparatus of claim 14, wherein the second determination module is configured to:
determining a plurality of contour feature points of the picking target area according to the contour information of the picking target area under the condition that the centroid coordinate point of the picking target area is located outside the picking target area;
and determining picking coordinate points positioned in the picking target area according to the contour feature points.
20. The apparatus of any of claims 14 to 19, further comprising:
a visualization module for visually displaying the picking information of the picking target area on the target acquisition image, wherein the picking information comprises the outline information and/or the picking coordinate points of the picking target area; and/or
And the association module is used for associating the picking information with the image attribute information and responding to a query instruction of the picking information to visually display the image attribute information.
21. A target object picking apparatus comprising:
an acquisition module for acquiring image attribute information of a picking target area in a target acquisition image based on the target acquisition image by using the method of any one of claims 1 to 7;
A fourth determining module, configured to determine a picking mode and picking coordinate information of the picking target area according to the image attribute information; and
and the control module is used for controlling the picking mechanism to pick the target object in the region to be picked corresponding to the target acquisition image based on the picking mode and the picking coordinate information.
22. The apparatus of claim 21, the fourth determination module comprising:
a fourth determining sub-module for determining a picking mode of the picking target area according to the image attribute information;
a fifth determining sub-module for determining a matching picking mechanism according to the picking mode;
and a sixth determination submodule, configured to determine picking coordinate information according to the outline information of the picking target area and the working range of the picking mechanism.
23. The apparatus of claim 22, wherein the sixth determination submodule is to:
determining picking coordinate information according to the picking coordinate points of the picking target area under the condition that the outline information of the picking target area meets the operation range of the picking mechanism; wherein the picking coordinate points are obtained based on the method of any one of claims 1 to 7.
24. The apparatus of claim 22, wherein the sixth determination submodule is to:
determining length information of the picking target area according to the contour information when the contour information of the picking target area does not meet the operation range of the picking mechanism;
and determining a plurality of pieces of picking coordinate information in the picking target area according to the length information and the operation range of the picking mechanism.
25. The apparatus of any of claims 21 to 24, further comprising:
a fifth determining module, configured to identify an original acquired image, and determine the number of target objects in the original acquired image;
the dividing module is used for dividing the original acquired image into a plurality of sub-images under the condition that the number of the target objects reaches a threshold number;
and a sixth determining module, configured to determine each of the plurality of sub-images as the target acquisition image.
26. The apparatus of any of claims 21 to 24, further comprising:
the acquisition module is used for acquiring images of the to-be-picked areas subjected to target object picking to obtain repeated shooting target acquisition images;
the comparison module is used for comparing the number of the target objects in the repeated shooting target acquisition image with the number of the target objects in the target acquisition image to obtain a picking comparison result;
And a seventh determining module, configured to determine, according to the picking comparison result, whether the target object picking is completed in the region to be picked after the target object picking.
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 13.
28. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 13.
29. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 13.
30. A picking machine, comprising:
a machine body;
the carrying disc is arranged on the table top of the machine body and used for carrying materials to be picked;
the image acquisition device is arranged on the machine body and is used for acquiring images of the materials to be picked;
The picking mechanism is arranged on the machine body and used for executing the picking action of the target object;
a controller electrically connected to the image acquisition device and the picking mechanism for performing the method of any one of claims 1 to 13.
CN202310259604.7A 2023-03-13 2023-03-13 Method for determining picking information and method for picking target object Active CN116309442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310259604.7A CN116309442B (en) 2023-03-13 2023-03-13 Method for determining picking information and method for picking target object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310259604.7A CN116309442B (en) 2023-03-13 2023-03-13 Method for determining picking information and method for picking target object

Publications (2)

Publication Number Publication Date
CN116309442A true CN116309442A (en) 2023-06-23
CN116309442B CN116309442B (en) 2023-10-24

Family

ID=86825265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310259604.7A Active CN116309442B (en) 2023-03-13 2023-03-13 Method for determining picking information and method for picking target object

Country Status (1)

Country Link
CN (1) CN116309442B (en)


Patent Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050057653A1 (en) * 2002-05-07 2005-03-17 Matsushita Electric Industrial Co., Ltd. Surveillance system and a surveillance camera
US20050152604A1 (en) * 2004-01-09 2005-07-14 Nucore Technology Inc. Template matching method and target image area extraction apparatus
KR20130032990A (en) * 2011-09-26 2013-04-03 한국과학기술연구원 Method for detecting grasping points using category recognition and computer readable record medium thereof
CN108182705A (en) * 2016-12-08 2018-06-19 广州映博智能科技有限公司 A kind of three-dimensional coordinate localization method based on machine vision
US20180246563A1 (en) * 2017-02-28 2018-08-30 Seiko Epson Corporation Head-mounted display device, program, and method for controlling head-mounted display device
CN109543665A (en) * 2017-09-22 2019-03-29 凌云光技术集团有限责任公司 Image position method and device
US20190197196A1 (en) * 2017-12-26 2019-06-27 Seiko Epson Corporation Object detection and tracking
CN109967359A (en) * 2017-12-28 2019-07-05 北京京东尚科信息技术有限公司 Method and apparatus for sorting article
CN110210276A (en) * 2018-05-15 2019-09-06 腾讯科技(深圳)有限公司 A kind of motion track acquisition methods and its equipment, storage medium, terminal
CN109858333A (en) * 2018-12-20 2019-06-07 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and computer-readable medium
US20200388078A1 (en) * 2019-06-06 2020-12-10 Canon Kabushiki Kaisha Apparatus for positioning processing between image in real world and image in virtual world, information processing method, and storage medium
CN110796702A (en) * 2019-10-23 2020-02-14 中冶赛迪工程技术股份有限公司 Industrial equipment identification and positioning method, system and equipment based on machine vision
CN114730177A (en) * 2019-11-19 2022-07-08 通快机床两合公司 Method and flat machine tool for associating information with a workpiece data set
CN111178250A (en) * 2019-12-27 2020-05-19 深圳市越疆科技有限公司 Object identification positioning method and device and terminal equipment
CN111144426A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium
CN111144322A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium
CN111243005A (en) * 2020-01-07 2020-06-05 洛阳语音云创新研究院 Livestock weight estimation method, device, equipment and computer readable storage medium
CN111354077A (en) * 2020-03-02 2020-06-30 东南大学 Three-dimensional face reconstruction method based on binocular vision
CN112720487A (en) * 2020-12-23 2021-04-30 东北大学 Mechanical arm grabbing method and system based on self-adaptive dynamic force balance
CN113657551A (en) * 2021-09-01 2021-11-16 陕西工业职业技术学院 Robot grabbing posture task planning method for sorting and stacking multiple targets
CN113487523A (en) * 2021-09-08 2021-10-08 腾讯科技(深圳)有限公司 Method and device for optimizing graph contour, computer equipment and storage medium
CN113927601A (en) * 2021-11-11 2022-01-14 盐城工学院 Method and system for realizing precise picking of mechanical arm based on visual recognition
CN114029243A (en) * 2021-11-11 2022-02-11 江苏昱博自动化设备有限公司 Soft object grabbing and identifying method for sorting robot hand
CN113920142A (en) * 2021-11-11 2022-01-11 江苏昱博自动化设备有限公司 Sorting manipulator multi-object sorting method based on deep learning
CN113989167A (en) * 2021-12-27 2022-01-28 杭州爱科科技股份有限公司 Contour extraction method, device, equipment and medium based on seed point self-growth
CN114405866A (en) * 2022-01-20 2022-04-29 湖南视比特机器人有限公司 Vision-guided steel plate sorting method, vision-guided steel plate sorting device and system
CN114494463A (en) * 2022-02-11 2022-05-13 黎明职业大学 Robot sorting method and device based on binocular stereoscopic vision technology
CN114596355A (en) * 2022-03-16 2022-06-07 哈尔滨工业大学 High-precision pose measurement method and system based on cooperative target
CN114758250A (en) * 2022-06-15 2022-07-15 山东青岛烟草有限公司 Full-specification flexible automatic sorting control method and device based on artificial intelligence
CN114820679A (en) * 2022-07-01 2022-07-29 小米汽车科技有限公司 Image annotation method and device, electronic equipment and storage medium
CN115447924A (en) * 2022-09-05 2022-12-09 广东交通职业技术学院 Machine vision-based garbage classification and sorting method, system, device and medium
CN115619804A (en) * 2022-10-19 2023-01-17 哈尔滨理工大学 CT lung tumor automatic segmentation method combined with lung tumor prior information
CN115761196A (en) * 2022-11-15 2023-03-07 软通动力信息技术(集团)股份有限公司 Method, device, equipment and medium for generating expression of object
CN115719444A (en) * 2022-11-24 2023-02-28 百度(中国)有限公司 Image quality determination method, device, electronic equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHOTTON, JAMIE et al.: "Contour-based learning for object detection", Tenth IEEE International Conference on Computer Vision, vol. 1, no. 1, pages 1 - 8 *
敬启超: "Research on a human-machine cooperative navel orange picking manipulator based on somatosensory interaction", China Master's Theses Full-text Database, Agricultural Science and Technology, no. 2, pages 044 - 321 *

Also Published As

Publication number Publication date
CN116309442B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN112528850B (en) Human body identification method, device, equipment and storage medium
US9892504B2 (en) Image inspection method and inspection region setting method
CN111768381A (en) Part defect detection method and device and electronic equipment
CN112150551B (en) Object pose acquisition method and device and electronic equipment
US20200286268A1 (en) Automatic obstacle avoidance optimization method for connecting line of graphical programming software
CN110796640A (en) Small target defect detection method and device, electronic equipment and storage medium
CN111833303A (en) Product detection method and device, electronic equipment and storage medium
CN111428731A (en) Multi-class target identification and positioning method, device and equipment based on machine vision
CN115321090B (en) Method, device, equipment, system and medium for automatically receiving and taking luggage in airport
CN112845143A (en) Household garbage classification intelligent sorting system and method
CN111242240B (en) Material detection method and device and terminal equipment
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN114758236A (en) Non-specific shape object identification, positioning and manipulator grabbing system and method
CN113378969B (en) Fusion method, device, equipment and medium of target detection results
CN110910401A (en) Semi-automatic image segmentation data annotation method, electronic device and storage medium
CN115781673A (en) Part grabbing method, device, equipment and medium
CN111715559A (en) Garbage sorting system based on machine vision
CN116258682A (en) PCB solder paste defect detection method based on PSPNet and improved YOLOv7
CN116309442B (en) Method for determining picking information and method for picking target object
CN111191619A (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN110910414A (en) Image contour generation method, image labeling method, electronic device and storage medium
CN116051558B (en) Defect image labeling method, device, equipment and medium
CN115909253A (en) Target detection and model training method, device, equipment and storage medium
CN115070757A (en) Object grabbing method and device, robot and storage medium
CN114202526A (en) Quality detection method, system, apparatus, electronic device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant