CN117649450B - Tray grid positioning detection method, system, device and medium - Google Patents


Info

Publication number
CN117649450B
CN117649450B (application CN202410109262.5A)
Authority
CN
China
Prior art keywords
grid
target
tray
determining
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410109262.5A
Other languages
Chinese (zh)
Other versions
CN117649450A (en)
Inventor
付伟男
王磊
卢锐佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Lingxi Robot Intelligent Technology Co ltd
Original Assignee
Hangzhou Lingxi Robot Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Lingxi Robot Intelligent Technology Co ltd filed Critical Hangzhou Lingxi Robot Intelligent Technology Co ltd
Priority to CN202410109262.5A
Publication of CN117649450A
Application granted
Publication of CN117649450B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a tray grid positioning detection method, system, device and medium, comprising the following steps: obtaining a target plan view containing tray grids; performing corner detection based on the target plan view, and determining a target candidate corner point set; screening based on the target candidate corner point set, and determining a grid position rectangle set; and performing straight line fitting based on the grid position rectangle set, and determining the target grid position corresponding to each grid according to the grid line set obtained by fitting. The application obtains the target plan view containing the tray grids using a structured-light method based on a 3D imaging camera, then performs corner detection and screening to determine the tray grids contained in the target plan view. Finally, straight line fitting is performed using the obtained grid position rectangle set, and the target grid position corresponding to each grid is determined according to the fitted grid line set. The method can stably determine the three-dimensional information of the measured tray grids and has good stability.

Description

Tray grid positioning detection method, system, device and medium
Technical Field
The present application relates to the field of image positioning detection technologies, and in particular, to a method, a system, a device, and a medium for positioning and detecting a tray grid.
Background
In recent years, new energy automobiles have the advantages of low cost and environmental protection, and the market share of the new energy automobiles is increasing. As one of the important components of new energy automobiles, a battery has been attracting attention. With the popularization of new energy automobiles, the transportation and protection links of batteries become particularly important, and automatic production has become a necessary trend.
At present, the transportation and protection of batteries mainly rely on high-density foam trays, in which a number of grid openings are punched for holding the batteries. The batteries are transported from the production line to a designated location, grasped by a nearby robot and placed into a tray. The grasping step is performed at a taught position of the mechanical arm, with a stable battery position provided by a limiting mechanism at the end of the conveying line; the placement step, by contrast, depends mainly on visual guidance. However, current vision guidance schemes are based mainly on 2D vision and have the following fundamental problems:
Imaging stability is poor. White foam is difficult to illuminate evenly, and the light source is sensitive to changes in distance and position; in practice, the position of the tray cannot be guaranteed to remain stable at all times, so imaging accuracy cannot be guaranteed either.
Algorithm stability is poor. Following from the first point, the vision algorithm is sensitive to changes in illumination; when the illumination changes, the imaging also changes, so the positioning algorithm may need to be redesigned.
There is an urgent need to address how to accurately position the tray grid to ensure imaging stability and algorithm stability. The solution to this problem would improve the efficiency and reliability of the transportation and protection links of the battery.
Disclosure of Invention
The application aims to provide a tray grid positioning detection method, a system, a device and a medium, which at least solve the problems of how to accurately position the tray grid in the related technology so as to ensure imaging stability and algorithm stability.
The first aspect of the application provides a tray grid positioning detection method, which is applied to a 3D imaging camera, and comprises the following steps:
obtaining a target plan view containing tray grids;
performing corner detection based on the target plane graph, and determining a target candidate corner point set;
screening based on the target candidate corner point set, and determining a grid position rectangle set;
and performing straight line fitting based on the grid position rectangle set, and determining the target grid position corresponding to each grid according to the grid line set obtained by fitting.
In one embodiment, the obtaining a target plan including a tray grid includes:
processing according to the acquired depth map containing the tray grid openings to obtain three-dimensional point cloud data;
preprocessing based on the three-dimensional point cloud data to obtain a target tray image;
and projecting based on the target tray image to obtain the target plan.
In one embodiment, the preprocessing based on the three-dimensional point cloud data to obtain a target tray image includes:
performing plane fitting according to the three-dimensional point cloud data to obtain a first image set;
Area screening is carried out according to the first image set, so that a second image set is obtained;
performing height screening according to the second image set to determine an initial tray image;
And carrying out complement processing based on the initial tray image to obtain the target tray image.
In one embodiment, the projecting based on the target tray image, to obtain the target plan view, includes:
Determining a boundary range according to the target tray image;
determining a projection plane according to the boundary range;
And obtaining the target plan according to the projection of the target tray image on the projection plane.
In one embodiment, the detecting corner points based on the target plan, determining a target candidate corner point set includes:
performing corner detection according to the target plane graph to determine a first candidate corner point set containing grid openings;
performing angle screening according to the first candidate corner point set, and determining a second candidate corner point set;
and screening with a preset rectangle size according to the second candidate corner point set, and determining the target candidate corner point set.
In one embodiment, the filtering based on the target candidate corner point set, determining the rectangular set of grid positions includes:
Determining corresponding neighbor corner points according to each target candidate corner point in the target candidate corner point set;
and determining the grid position rectangle set according to the target candidate corner points and the corresponding neighboring corner points.
In one embodiment, the performing straight line fitting based on the grid position rectangle set, and determining the target grid position corresponding to each grid according to the grid line set obtained by fitting, includes:
performing straight line fitting according to the grid position rectangle set to obtain an initial grid line set;
screening according to angles between adjacent grid lines in the initial grid line set to determine a target grid line set;
and determining the target grid position corresponding to each grid by Cramer's rule according to the target grid line set.
A second aspect of the present application provides a tray grid positioning detection system applied to a 3D imaging camera, the system comprising:
the target plan drawing acquisition module is used for acquiring a target plan drawing containing tray grids;
The target candidate corner set determining module is used for performing corner detection based on the target plane graph and determining a target candidate corner set;
the grid position rectangle set determining module is used for screening based on the target candidate corner point set and determining a grid position rectangle set;
the target position determining module is used for performing straight line fitting based on the grid position rectangle set, and determining the target grid position corresponding to each grid according to the grid line set obtained by fitting.
A third aspect of the present application provides a tray grid positioning detection device, including:
a 3D imaging camera;
a memory and one or more processors, wherein the memory stores executable code, and the one or more processors, when executing the executable code, implement the tray grid positioning detection method described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the above-described tray grid positioning detection method.
The tray grid positioning detection method, system, device and medium provided by the embodiment of the application have at least the following technical effects.
The target plan view containing the tray grids is acquired using a structured-light method based on a 3D imaging camera, and corner detection and screening are then performed to determine the tray grids contained in the target plan view. Finally, straight line fitting is performed using the obtained grid position rectangle set, and the target grid position corresponding to each grid is determined according to the fitted grid line set. The method can stably determine the three-dimensional information of the measured tray grids and has good stability.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects and advantages of the application will become apparent from the description, the drawings and the claims.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a flow chart of a tray grid positioning detection method according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of step S101 according to an embodiment of the present application;
fig. 3 is a schematic flow chart of step S202 provided in the embodiment of the present application;
fig. 4 is a schematic flow chart of step S203 provided in the embodiment of the present application;
Fig. 5 is a schematic flow chart of step S102 provided in the embodiment of the present application;
Fig. 6 is a schematic flow chart of step S103 according to an embodiment of the present application;
fig. 7 is a schematic flow chart of step S104 according to an embodiment of the present application;
FIG. 8 is a block diagram of a tray grid positioning detection system provided by an embodiment of the present application;
fig. 9 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that although such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking of design, fabrication and manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, meaning that there may be three relationships; e.g., "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," "third," and the like, as used herein, merely distinguish similar objects and do not represent a particular ordering of objects.
The embodiment of the application provides a tray grid positioning detection method, a system, a device and a medium.
In a first aspect, an embodiment of the present application provides a tray grid positioning detection method. Fig. 1 is a schematic flow chart of the tray grid positioning detection method provided by the embodiment of the present application, applied to a 3D imaging camera. As shown in fig. 1, the method includes the following steps:
Step S101, obtaining a target plan view containing tray grids.
Fig. 2 is a schematic flow chart of step S101 provided in the embodiment of the present application, as shown in fig. 2, on the basis of the flow chart shown in fig. 1, step S101 includes the following steps:
step S201, processing is carried out according to the obtained depth map containing the tray grid openings, and three-dimensional point cloud data are obtained.
Specifically, the pixel position and depth of each point in the depth image are converted into world coordinates (x, y, z) through the camera's intrinsic and extrinsic parameters, yielding a three-dimensional point cloud data set called a structured point cloud.
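The conversion described above can be sketched in numpy as follows, assuming a pinhole model with intrinsics fx, fy, cx, cy (parameter names illustrative); the extrinsic transform from camera to world coordinates is omitted for brevity:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into camera-frame (x, y, z) points
    using the pinhole model; an extrinsic transform would then map
    these points into world coordinates."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The result has one 3D point per valid depth pixel, preserving the image's row-major order so pixel and point indices stay aligned.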
And step S202, preprocessing based on the three-dimensional point cloud data to obtain a target tray image.
Fig. 3 is a schematic flow chart of step S202 provided in the embodiment of the present application, as shown in fig. 3, on the basis of the flow chart shown in fig. 2, step S202 includes the following steps:
And step S301, performing plane fitting according to the three-dimensional point cloud data to obtain a first image set.
Specifically, a hierarchical clustering method is used to perform plane fitting on the three-dimensional point cloud data set. Hierarchical clustering is an unsupervised learning method which groups points in the three-dimensional point cloud data according to distance or other similarity metrics to form a hierarchical structure. By continuously merging the most similar point sets, a first image set S = {α, β, γ, …} is finally obtained, wherein each first image consists of one point set: α = {p(x, y, z) | p ∈ α}, β = {p(x, y, z) | p ∈ β}, γ = {p(x, y, z) | p ∈ γ}.
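The merging step can be illustrated with a naive single-linkage grouping (a minimal O(n²) sketch, not the patent's actual implementation — production code would use a spatial index or a library clustering routine):

```python
import numpy as np

def single_linkage_clusters(points, merge_dist):
    """Group points whose pairwise distance is within merge_dist into the
    same cluster, using union-find; mimics bottom-up hierarchical merging."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= merge_dist:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [points[idx] for idx in groups.values()]
```

Each returned group corresponds to one "first image" point set in the notation above.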
Step S302, area screening is carried out according to the first image set, and a second image set is obtained.
Specifically, first images whose area exceeds a preset threshold th_s are retained from the first image set S to obtain the second image set S1; this step eliminates small noise images. S1 = {s | area(s) > th_s}.
Step S303, performing height screening according to the second image set to determine an initial tray image.
Specifically, for each second image in the second image set S1, its height is calculated. The height set H corresponds one-to-one with the second image set S1, where the height h of each second image is the average height of its point set, i.e. the average of the z coordinates of all its points: H = {h1, h2, h3, …}, h = avg({p.z | p ∈ s}).
The height set H is traversed to find the index k with the largest value, i.e. the position of the highest second image, which is taken as the initial tray image β = S1[k]. The highest second image corresponds to the plane closest to the camera, i.e. the highest object in the entire scene; since the tray is typically positioned above the other objects, the highest second image is taken as the tray plane.
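Steps S302 and S303 together can be sketched as below — a minimal illustration assuming each segment is an (n, 3) array of points in a z-up world frame where the tray sits above other objects; thresholds and names are illustrative:

```python
import numpy as np

def pick_tray_plane(segments, min_points):
    """Area screening then height screening: drop small (noise) segments,
    then return the segment with the largest mean z, assumed to be the tray."""
    candidates = [s for s in segments if len(s) >= min_points]
    heights = [s[:, 2].mean() for s in candidates]  # h = avg of z coordinates
    return candidates[int(np.argmax(heights))]
```

Here segment size stands in for area; a real implementation would compute the fitted plane's surface area instead.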
And step S304, performing completion processing based on the initial tray image to obtain a target tray image.
Specifically, owing to the fragility of the tray and insufficient calibration accuracy, the actual tray may contain recesses of various sizes, which causes the plane fitting in step S301 to exclude damaged areas from the corresponding plane. Without completion processing, the final positioning accuracy would be affected and the tray positioning would not meet the placement requirement. In addition, the same plane may contain height differences, so global completion of damaged regions cannot be performed by a simple method.
To address these problems, local regions of the point cloud in the initial tray image can be processed. A region segmentation algorithm (e.g. a clustering-based method) may be used to divide the point cloud into several regions, one of which is selected as the local region to be processed. In the selected local region, the average or median of the height values of all points is computed as the reference height. Points near the reference height are then retained according to a set height range, preserving their original image information to achieve the completion effect.
For example, first a region of interest (ROI) is constructed whose length and width are each set to half the width of a tray grid, i.e. (w/2) × (w/2). This ROI can be seen as a small area on the tray plane being processed. Then, for each point in the initial tray image β, the average point cloud height h_roi within the ROI containing the point is calculated. Next, points in the three-dimensional point cloud data that lie within the ROI and whose height falls in the range [h_roi − 50, h_roi + 50] are retained; that is, points close to the average height are selected for completion. Traversing the entire point set of the initial tray image β and repeating these steps yields the completed target tray image β1, making the completed tray plane smoother.
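The per-ROI recovery step can be sketched as follows — a simplified single-ROI version, assuming the full cloud, an axis-aligned xy ROI, and a ±50 height band as in the example (units and names illustrative):

```python
import numpy as np

def complete_roi(cloud, roi_min, roi_max, band=50.0):
    """Within an xy ROI, keep cloud points whose z lies within ±band of
    the ROI's mean height — recovering points the plane fit dropped."""
    xy = cloud[:, :2]
    in_roi = np.all((xy >= roi_min) & (xy <= roi_max), axis=1)
    roi_pts = cloud[in_roi]
    h_roi = roi_pts[:, 2].mean()  # reference height of this local region
    return roi_pts[np.abs(roi_pts[:, 2] - h_roi) <= band]
```

The full procedure would slide such ROIs over the whole tray image β and union the recovered points into β1.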
With continued reference to fig. 2, step S203 is performed after step S202, specifically as follows.
And step S203, performing projection based on the target tray image to obtain a target plan.
Fig. 4 is a schematic flow chart of step S203 provided in the embodiment of the present application, as shown in fig. 4, on the basis of the flow chart shown in fig. 2, step S203 includes the following steps:
step S401, determining a boundary range according to the target tray image.
Step S402, determining a projection plane according to the boundary range.
Step S403, obtaining a target plan according to the projection of the target tray image on the projection plane.
In steps S401 to S403, all pixels in the target tray image β1 are first examined one by one. The position of each pixel in the x and y directions is recorded, and the maximum and minimum values are noted, thereby determining the boundary range of the whole image. A projection plane I of size (x_max − x_min, y_max − y_min) is then created based on this boundary range, so that its size matches the boundary range of the target tray image. Finally, the target tray image β1 is mapped onto the projection plane I by orthographic projection. Orthographic projection preserves parallelism, so the projected image retains the shape and proportions of the original object. The result is a target plan view of the tray, in which the pixel values of the projection plane I represent the feature information of each position on the tray and can be used for subsequent processing and analysis.
Through the steps, the target tray image beta 1 can be converted into a 2D image so as to perform further image processing and analysis, and the shape, the boundary and other characteristic information of the tray can be conveniently extracted.
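The projection of steps S401–S403 can be sketched as a rasterisation of the cloud's xy footprint — a minimal binary-occupancy version; a real pipeline might store height or intensity per pixel instead (resolution parameter is an assumption):

```python
import numpy as np

def orthographic_project(points, res=1.0):
    """Rasterise the xy footprint of a point cloud into a top-down binary
    image whose size matches the cloud's bounding box (the boundary range)."""
    xy = points[:, :2]
    mn, mx = xy.min(axis=0), xy.max(axis=0)        # boundary range
    w, h = (np.ceil((mx - mn) / res).astype(int) + 1)
    img = np.zeros((h, w), dtype=np.uint8)
    cols = np.minimum(((xy[:, 0] - mn[0]) / res).astype(int), w - 1)
    rows = np.minimum(((xy[:, 1] - mn[1]) / res).astype(int), h - 1)
    img[rows, cols] = 255
    return img
```

Because the projection is orthographic (z is simply dropped), distances in the plan view stay proportional to distances on the tray.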
With continued reference to fig. 1, step S102 is performed after step S101, as follows.
And step S102, performing corner detection based on the target plan, and determining a target candidate corner point set.
Specifically, 2D rectangle recognition is performed on the target plan view in order to find the grid areas on the tray. Since the grid openings are rectangles of uniform length and width, their positions and sizes can be found by identifying rectangular shapes.
Fig. 5 is a schematic flow chart of step S102 provided in the embodiment of the present application, as shown in fig. 5, on the basis of the flow chart shown in fig. 1, step S102 includes the following steps:
Step S501, performing corner detection according to the target plan to determine a first candidate corner point set containing the lattice openings.
Step S502, angle screening is carried out according to the first candidate angular point set, and a second candidate angular point set is determined.
Step S503, screening by adopting a preset rectangular size according to the second candidate angle point set, and determining a target candidate angle point set.
In steps S501 to S503, a corner detection algorithm (such as the Harris corner detector) is first used to process the target plan view, obtaining a first candidate corner point set S_cor. Each first candidate corner point carries its coordinates, angle and confidence: S_cor = {p(coordinates, angle, confidence)}. The first candidate corner point set is traversed, and for each first candidate corner point p_k, a second candidate corner point set S_cor_pk is sought whose angle difference to p_k lies within [80, 100]. This range is chosen because the angle difference between two adjacent corner points of a standard rectangle is 90°: S_cor_pk = {p | p ∈ S_cor && angle(p, p_k) ∈ [80, 100]}.
Assuming that there is a set of corner points p to be matched, the following operation needs to be performed for each corner point.
Calculating Euclidean distance from the first candidate corner point p_k: for each corner p to be matched, the euclidean distance between it and the known first candidate corner p_k is calculated. The euclidean distance refers to a straight line distance between two points, and can be obtained by calculating a difference between coordinates of the two points and using the pythagorean theorem.
Screening according to rectangle size: the Euclidean distances to the first candidate corner point p_k are screened against a preset rectangle size, such as length l and width w. Only corner points meeting the preset rectangle size requirement are retained in the first candidate corner point set S_cor_pk.
Selecting the best matching corner point: when several corner points to be matched lie at a similar distance, the corner point with the smallest angle difference and the highest score is selected from the first candidate corner point set S_cor_pk as the best match. This can be determined by computing the angle differences between the corner points and using an existing scoring or evaluation method; for example, feature matching algorithms (such as SIFT or SURF) compute matching scores.
By traversing all corner points with the above steps, the target candidate corner point set is finally obtained.
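The angle and distance screening above can be sketched as a simple pairwise filter — a minimal illustration assuming each candidate is an (x, y, angle_deg) tuple and one expected side length; the confidence-based best-match selection is omitted:

```python
import numpy as np

def match_corner_pairs(corners, side, dist_tol=5.0):
    """corners: (x, y, angle_deg) tuples. Keep index pairs whose angle
    difference falls in [80, 100] degrees and whose Euclidean distance
    matches the expected rectangle side length within dist_tol."""
    pairs = []
    for i, (xi, yi, ai) in enumerate(corners):
        for j in range(i + 1, len(corners)):
            xj, yj, aj = corners[j]
            if not 80.0 <= abs(ai - aj) <= 100.0:
                continue  # adjacent rectangle corners differ by ~90 degrees
            if abs(np.hypot(xi - xj, yi - yj) - side) <= dist_tol:
                pairs.append((i, j))
    return pairs
```

In practice both the grid length l and width w would be tried, and surviving pairs scored to keep the best match per corner.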
With continued reference to fig. 1, step S103 is performed after step S102, as follows.
And step S103, screening based on the target candidate angle point set, and determining a grid position rectangular set.
Fig. 6 is a schematic flow chart of step S103 provided in the embodiment of the present application, as shown in fig. 6, on the basis of the flow chart shown in fig. 1, step S103 includes the following steps:
Step S601, determining corresponding neighboring corner points according to each target candidate corner point in the target candidate corner point set.
Step S602, determining a rectangular set of grid positions according to the target candidate corner points and the corresponding neighbor corner points.
In steps S601 to S602, each target candidate corner point in the target candidate corner point set is taken in turn as an initial corner point, its right, lower and lower-right neighboring corner points are traversed, a grid position rectangle is formed from the initial corner point and its corresponding neighboring corner points, and all grid position rectangles form the grid position rectangle set RECT.
It should be noted here that, according to an identified area RECT_blank where battery cells are already placed, grid position rectangles overlapping RECT_blank can be screened out, giving a filtered rectangle set RECT1. This eliminates areas already occupied by battery cells and ensures that the identified grids do not conflict with existing cell positions: RECT1 = {rect(center, length, width) | rect ∈ β1 && rect ∩ RECT_blank = ∅}.
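The rectangle-building and occupancy screening can be sketched as below — a simplified version assuming 2D corner coordinates, known grid length/width, and occupied regions given as axis-aligned (x0, y0, x1, y1) boxes (all names and the tolerance are illustrative):

```python
import numpy as np

def build_grid_rects(corners, length, width, tol=3.0, occupied=()):
    """For each corner, look for right / below / diagonal neighbours at the
    expected spacing and emit (x, y, length, width) rectangles, skipping any
    whose centre falls inside an occupied (battery-cell) region."""
    pts = np.asarray(corners, float)

    def has_neighbor(p, offset):
        return any(np.linalg.norm(q - (p + offset)) <= tol for q in pts)

    rects = []
    for p in pts:
        if (has_neighbor(p, np.array([length, 0.0]))
                and has_neighbor(p, np.array([0.0, width]))
                and has_neighbor(p, np.array([length, width]))):
            cx, cy = p[0] + length / 2, p[1] + width / 2
            if any(x0 <= cx <= x1 and y0 <= cy <= y1 for x0, y0, x1, y1 in occupied):
                continue  # overlaps an already-placed cell: screen it out
            rects.append((p[0], p[1], length, width))
    return rects
```

A full overlap test (rather than a centre-point test) would be used in practice; the centre check keeps the sketch short.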
With continued reference to fig. 1, step S104 is performed after step S103, as follows.
And step S104, performing straight line fitting based on the rectangular set of the grid positions, and determining the target grid position corresponding to each grid according to the grid straight line set obtained by fitting.
Fig. 7 is a schematic flow chart of step S104 provided in the embodiment of the present application, as shown in fig. 7, on the basis of the flow chart shown in fig. 1, step S104 includes the following steps:
Step S701, performing straight line fitting according to the grid position rectangle set to obtain an initial grid line set.
Step S702, screening according to the angles between adjacent grid lines in the initial grid line set to determine a target grid line set.
Step S703, determining the target grid position corresponding to each grid by Cramer's rule according to the target grid line set.
In steps S701 to S703, a least-squares line fit is performed on the edges of each grid position rectangle, yielding an initial grid line set Line composed of 4 lines per rectangle. Each initial grid line can be represented in general form l_n = (a, b, c), where a, b, c are the coefficients of the line equation a·x + b·y + c = 0.
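One way to obtain such an (a, b, c) line from a rectangle edge's points is a total least-squares fit via SVD (a sketch; the patent does not specify the exact fitting routine):

```python
import numpy as np

def fit_line(pts):
    """Total least-squares line fit: returns (a, b, c) for the line
    a*x + b*y + c = 0, with unit normal (a, b)."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    dx, dy = vt[0]              # principal direction of the edge points
    a, b = -dy, dx              # normal is perpendicular to that direction
    c = -(a * centroid[0] + b * centroid[1])
    return a, b, c
```

Unlike ordinary least squares on y = kx + m, this form handles vertical edges without a special case, which matters since the grid has both horizontal and vertical borders.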
The initial line set Line is traversed pairwise, so that every two initial lines are tested for adjacency. Two initial lines are judged adjacent if their included angle φ lies in the range [80, 100]; φ is obtained from the angle between the direction vectors of the two lines.
The intersection point of two initial lines satisfying the adjacency condition, i.e. whose included angle φ lies in [80, 100], is then calculated. The intersection can be solved by Cramer's rule.
For example, assume that the general equations of the two initial grid lines are:
a1*x+b1*y+c1=0
a2*x+b2*y+c2=0
The direction vectors of the two initial grid lines are respectively:
v1=(-b1,a1)
v2=(-b2,a2)
From the dot product, cos φ = |v1·v2| / (|v1||v2|), hence φ = arccos(|v1·v2| / (|v1||v2|)). If φ ∈ [80, 100], the intersection of the two lines is computed by Cramer's rule: x = (b1·c2 − b2·c1) / (a1·b2 − a2·b1), y = (a2·c1 − a1·c2) / (a1·b2 − a2·b1).
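The adjacency test and Cramer's-rule intersection can be sketched directly from the general-form coefficients (a, b, c):

```python
import numpy as np

def line_angle_deg(l1, l2):
    """Angle between two lines given as (a, b, c), via direction vectors (-b, a)."""
    v1 = np.array([-l1[1], l1[0]])
    v2 = np.array([-l2[1], l2[0]])
    cos_phi = abs(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_phi, 0.0, 1.0))))

def intersect(l1, l2):
    """Cramer's rule on the system a1*x + b1*y = -c1, a2*x + b2*y = -c2."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1  # determinant; zero would mean parallel lines
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)
```

Applying `intersect` to each of the 4 adjacent line pairs of a rectangle yields the 4 corner points p1–p4 described below.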
Four intersection points p1, p2, p3 and p4 are calculated for each grid position rectangle, i.e. the accurate target grid positions of the 4 corner points corresponding to the grid are obtained, with a positioning accuracy of about ±1.5 mm.
In summary, the tray grid positioning detection method provided by the embodiment of the application obtains a target plan view containing the tray grids using a structured-light method based on a 3D imaging camera, and then performs corner detection and screening to determine the tray grids contained in the target plan view. Finally, straight line fitting is performed using the obtained grid position rectangle set, and the target grid position corresponding to each grid is determined according to the fitted grid line set. The method can stably determine the three-dimensional information of the measured tray grids and has good stability.
It should be noted that the steps illustrated in the above flows or flowcharts may be performed in a computer system, for example as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, the steps shown or described may in some cases be performed in an order different from the one illustrated herein.
In a second aspect, an embodiment of the present application provides a tray grid positioning detection system, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module," "unit," "sub-unit," and the like may be software and/or hardware that implements a predetermined function. Although the system described in the following embodiment is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
Fig. 8 is a block diagram of a tray grid positioning detection system according to an embodiment of the present application, which is applied to a 3D imaging camera, as shown in fig. 8, and includes:
an acquire target plan module 801 is configured to acquire a target plan view containing tray grids.
A target candidate corner set determining module 802, configured to perform corner detection based on the target plan, and determine a target candidate corner set.
The grid position rectangle set determining module 803 is configured to perform screening based on the target candidate corner point set and determine a grid position rectangle set.
The target position determining module 804 is configured to perform straight line fitting based on the grid position rectangle set, and determine the target grid position corresponding to each grid according to the grid straight line set obtained by fitting.
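As an illustration, the four modules above chain into a simple pipeline. The sketch below is a hypothetical skeleton only; all class, method, and parameter names are illustrative and not taken from the patent:

```python
class TrayGridDetector:
    """Hypothetical skeleton of the four-stage detection pipeline (names are illustrative)."""

    def acquire_target_plan(self, depth_map):
        # Module 801: obtain a target plan view containing the tray grids from the depth map.
        raise NotImplementedError

    def detect_candidate_corners(self, plan):
        # Module 802: corner detection on the target plan view.
        raise NotImplementedError

    def filter_to_rectangles(self, corners):
        # Module 803: screen candidate corners into a grid position rectangle set.
        raise NotImplementedError

    def fit_grid_positions(self, rectangles):
        # Module 804: straight line fitting plus intersection to get each target grid position.
        raise NotImplementedError

    def run(self, depth_map):
        # Chain the four modules in the order given in the description.
        plan = self.acquire_target_plan(depth_map)
        corners = self.detect_candidate_corners(plan)
        rectangles = self.filter_to_rectangles(corners)
        return self.fit_grid_positions(rectangles)
```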
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In a third aspect, an embodiment of the present application provides a tray grid positioning detection device, including: a 3D imaging camera; a memory storing executable code; and one or more processors which, when executing the executable code, implement the steps of any of the method embodiments described above.
Optionally, the tray grid positioning detection device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In a fourth aspect, in combination with the tray grid positioning detection method in the foregoing embodiment, an embodiment of the present application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the tray bin positioning detection methods of the embodiments described above.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device communicates with an external terminal through a network connection. The computer program, when executed by the processor, implements the tray grid positioning detection method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
In one embodiment, an electronic device is provided, which may be a server; fig. 9 is a schematic diagram of its internal structure according to an embodiment of the present application. As shown in fig. 9, the electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, a computer program, and a database. The processor provides computing and control capabilities, the network interface communicates with external terminals through a network connection, and the internal memory provides an environment for running the operating system and the computer program. The computer program, when executed by the processor, implements the tray grid positioning detection method, and the database stores data.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the flows of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art should understand that the technical features of the above embodiments may be combined in any manner. For brevity, not all possible combinations are described, but any combination of these technical features should be considered within the scope of this description as long as the combination contains no contradiction.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (9)

1. A tray grid positioning detection method, which is applied to a 3D imaging camera, the method comprising:
obtaining a target plan view containing tray grids;
performing corner detection based on the target plane graph, and determining a target candidate corner point set;
screening based on the target candidate corner point set, and determining a grid position rectangle set;
performing straight line fitting based on the grid position rectangle set, and determining the target grid position corresponding to each grid according to the grid straight line set obtained by fitting;
wherein the determining the target grid position corresponding to each grid according to the grid straight line set obtained by fitting comprises the following steps:
Performing straight line fitting according to the grid position rectangular set to obtain an initial grid straight line set;
the step of performing straight line fitting according to the rectangular set of the grid positions to obtain an initial grid straight line set comprises the following steps: performing least square straight line fitting on the edge of each grid position rectangle to obtain an initial grid straight line set consisting of 4 straight lines;
screening according to angles between adjacent grid straight lines in the initial grid straight line set to determine a target grid straight line set;
and determining the target grid position corresponding to each grid by Cramer's rule according to the target grid straight line set.
2. The tray grid positioning detection method according to claim 1, wherein the obtaining a target plan view containing tray grids comprises:
processing according to the acquired depth map containing the tray grid openings to obtain three-dimensional point cloud data;
preprocessing based on the three-dimensional point cloud data to obtain a target tray image;
and projecting based on the target tray image to obtain the target plan.
3. The tray grid positioning detection method according to claim 2, wherein the preprocessing based on the three-dimensional point cloud data to obtain a target tray image comprises:
performing plane fitting according to the three-dimensional point cloud data to obtain a first image set;
Area screening is carried out according to the first image set, so that a second image set is obtained;
performing height screening according to the second image set to determine an initial tray image;
And carrying out complement processing based on the initial tray image to obtain the target tray image.
4. The tray grid positioning detection method according to claim 2, wherein the projecting based on the target tray image to obtain the target plan view includes:
Determining a boundary range according to the target tray image;
determining a projection plane according to the boundary range;
And obtaining the target plan according to the projection of the target tray image on the projection plane.
5. The tray grid positioning detection method according to claim 1, wherein the corner detection based on the target plan view, and the determination of the target candidate corner point set, comprises:
performing corner detection according to the target plane graph to determine a first candidate corner point set containing grid openings;
performing angle screening according to the first candidate corner point set, and determining a second candidate corner point set;
and screening with a preset rectangle size according to the second candidate corner point set, and determining the target candidate corner point set.
6. The tray grid positioning detection method according to claim 1, wherein the screening based on the target candidate corner point set to determine a grid position rectangle set comprises:
Determining corresponding neighbor corner points according to each target candidate corner point in the target candidate corner point set;
And determining the rectangular set of the grid positions according to the target candidate corner points and the corresponding neighbor corner points.
7. A system for implementing the tray grid positioning detection method of any one of claims 1 to 6, applied to a 3D imaging camera, the system comprising:
the target plan drawing acquisition module is used for acquiring a target plan drawing containing tray grids;
The target candidate corner set determining module is used for performing corner detection based on the target plane graph and determining a target candidate corner set;
the grid position rectangle set determining module is used for screening based on the target candidate corner point set and determining a grid position rectangle set;
the target position determining module is used for carrying out straight line fitting based on the grid position rectangular set, and determining the target grid position corresponding to each grid according to the grid straight line set obtained by fitting.
8. A tray grid positioning detection device, characterized by comprising:
a 3D imaging camera;
a memory and one or more processors, the memory having executable code stored therein, which when executed by the one or more processors, is operable to implement the tray grid positioning detection method of any of claims 1-6.
9. A computer-readable storage medium, having stored thereon a program which, when executed by a processor, implements the tray grid positioning detection method of any one of claims 1-6.
CN202410109262.5A 2024-01-26 2024-01-26 Tray grid positioning detection method, system, device and medium Active CN117649450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410109262.5A CN117649450B (en) 2024-01-26 2024-01-26 Tray grid positioning detection method, system, device and medium


Publications (2)

Publication Number Publication Date
CN117649450A CN117649450A (en) 2024-03-05
CN117649450B true CN117649450B (en) 2024-04-19

Family

ID=90049780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410109262.5A Active CN117649450B (en) 2024-01-26 2024-01-26 Tray grid positioning detection method, system, device and medium

Country Status (1)

Country Link
CN (1) CN117649450B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640157A (en) * 2020-05-28 2020-09-08 华中科技大学 Checkerboard corner detection method based on neural network and application thereof
CN111986185A (en) * 2020-08-25 2020-11-24 浙江工业大学 Tray detection and positioning method based on depth camera
CN113191174A (en) * 2020-01-14 2021-07-30 北京京东乾石科技有限公司 Article positioning method and device, robot and computer readable storage medium
CN115330824A (en) * 2022-08-05 2022-11-11 梅卡曼德(上海)机器人科技有限公司 Box body grabbing method and device and electronic equipment
CN116071547A (en) * 2023-02-08 2023-05-05 未来机器人(深圳)有限公司 Tray pose detection method and device, equipment and storage medium
CN116128841A (en) * 2023-01-11 2023-05-16 未来机器人(深圳)有限公司 Tray pose detection method and device, unmanned forklift and storage medium
CN116309880A (en) * 2023-03-27 2023-06-23 西安电子科技大学广州研究院 Object pose determining method, device, equipment and medium based on three-dimensional reconstruction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7086148B2 (en) * 2020-08-31 2022-06-17 三菱ロジスネクスト株式会社 Pallet detectors, forklifts, pallet detection methods, and programs


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chessboard Corner Detection Based on EDLines Algorithm; Dan, Xizuo et al.; Sensors; 2022-05-22; vol. 22; full text *
An automatic and fast Kinect calibration method (in Chinese); Meng Bo, Liu Xuejun; Computer Engineering and Science; 2016-06-15 (No. 06); full text *
Research on sub-pixel corner detection in light field camera calibration (in Chinese); Cai Ye et al.; Modern Electronic Technology; 2019-07-15 (No. 7); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant