CN111598946B - Object pose measuring method and device and storage medium - Google Patents


Info

Publication number
CN111598946B
CN111598946B
Authority
CN
China
Prior art keywords
point cloud
point
model
scene
coordinates
Prior art date
Legal status
Active
Application number
CN202010182093.XA
Other languages
Chinese (zh)
Other versions
CN111598946A (en)
Inventor
沈跃佳
贾奎
Current Assignee
Cross Dimension Shenzhen Intelligent Digital Technology Co ltd
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN202010182093.XA
Publication of CN111598946A
Application granted
Publication of CN111598946B

Classifications

    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06F 18/23 - Clustering techniques
    • G06T 7/13 - Edge detection
    • G06T 7/181 - Segmentation; edge detection involving edge growing or edge linking
    • G06T 7/33 - Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/10021 - Stereoscopic video; stereoscopic image sequence
    • G06T 2207/10024 - Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an object pose measurement method, device and storage medium. The method comprises an offline modeling stage and an online matching stage. The offline modeling stage performs feature modeling on a three-dimensional object model and stores the result for measuring the pose of the object in subsequent scenes; the online matching stage measures the object pose from a given scene RGB-D image. The invention provides an efficient model sampling strategy that reduces the subsequent amount of computation while preserving sufficient information about object surface variation; it limits the distance range of the point pair features, reducing matching interference from excessive background point cloud; it provides a quantization expansion method that reduces the influence of noise-induced point deviations on feature computation; and it extracts edge information from the color image to screen candidate object poses and perform ICP registration, improving measurement accuracy, so that recognition in scenes with occlusion, clutter, stacking and the like is more accurate. The invention is widely applicable in the field of three-dimensional computer vision.

Description

Object pose measuring method and device and storage medium
Technical Field
The invention relates to the field of three-dimensional computer vision, in particular to a method and a device for measuring the pose of an object and a storage medium.
Background
In recent years, with industrial upgrading, automation has become an important driver of economic development in manufacturing, and the automatic sorting of objects by robots is an important expression of that automation. The pose of an object in three-dimensional space is an important reference for a robot to identify, locate, grasp and manipulate the object. The process of acquiring the pose of an object is called 6-dimensional pose measurement, and it is an important problem in the field of three-dimensional computer vision. When an object is transformed from state A to state B in a given reference coordinate system by a rotation and a translation, this rigid transformation is denoted T_AB. T_AB consists of 3 translation components x, y, z and 3 rotation angles about the coordinate axes, for 6 degrees of freedom in total; T_AB is therefore called the 6-dimensional pose of the object, i.e., the object pose.
The point pair feature (PPF) based method (Drost et al., "Model Globally, Match Locally: Efficient and Robust 3D Object Recognition", CVPR 2010) is one of the classic methods for object pose measurement. The method constructs a global description of the whole three-dimensional model and then extracts features in the scene for matching. All point clouds of the model are used in the modeling stage, representing the surface information of the whole three-dimensional model. The method uses a 4-dimensional feature to represent the information between two points on the model surface; the feature consists of the distance between the two points, the angle between the normal vectors of the two points, and the angles between each normal vector and the distance vector between the two points, and is referred to as the point pair feature (PPF). The PPFs are quantized and stored in a hash table for fast subsequent lookup. During matching, features are constructed from the scene data and paired against the model, and a number of candidate 6-dimensional poses are obtained by voting. These candidate poses are then clustered: similar poses are grouped together and averaged to obtain a more accurate pose. Finally, the pose is refined with the iterative closest point (ICP) algorithm to further improve its accuracy.
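To make the 4-dimensional feature concrete, the following minimal sketch computes the PPF of two oriented points (position plus unit normal) with NumPy; the function and variable names are illustrative and are not taken from the patent.

```python
import numpy as np

def angle_between(a, b):
    """Angle in [0, pi] between two 3-D vectors."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def point_pair_feature(p_r, n_r, p_s, n_s):
    """F = (||d||, angle(n_r, d), angle(n_s, d), angle(n_r, n_s))."""
    d = p_s - p_r                                # distance vector from reference to target point
    return np.array([np.linalg.norm(d),          # distance between the two points
                     angle_between(n_r, d),      # angle between the reference normal and d
                     angle_between(n_s, d),      # angle between the target normal and d
                     angle_between(n_r, n_s)])   # angle between the two normals
```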
The existing PPF-based method has the following drawbacks. (1) The sampling method is over-simplified: when the model is sampled on a grid of fixed size, the point clouds within the same grid cell are simply averaged; when the normal vectors inside a cell differ strongly, this kind of sampling loses key information about surface variation, reducing the ability to express differences across the model surface. (2) The amount of computation is large: after the model point cloud is sampled, features have to be computed for all point pairs, but the visible part of an object in an actual scene under any viewpoint is usually smaller than the model diameter (the model diameter is the length of the diagonal of the box enclosing the model), so there is redundant computation. (3) Lack of robustness to point cloud noise: the point cloud captured by a camera is noisy; the noise shifts point positions and normal vectors, so the computed scene features are also shifted, and the method cannot compensate for this deviation. (4) Inability to fully exploit color-depth (RGB-D) images: the method operates only on the depth image and uses only the three-dimensional spatial information of the scene, without drawing any information from the color image to assist the pose measurement.
Disclosure of Invention
In view of at least one of the above technical problems, an object of the present invention is to provide an object pose measurement method, an object pose measurement device, and a storage medium.
The technical scheme adopted by the invention is as follows: in one aspect, the embodiment of the invention comprises an object pose measuring method, which comprises an off-line modeling stage and an on-line matching stage;
the offline modeling phase comprises:
inputting a three-dimensional model of an object, wherein the three-dimensional model comprises model point cloud coordinates and a model point cloud normal vector;
sampling the model point cloud coordinates and the model point cloud normal vector;
constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained by sampling, and calculating the model point pair features;
storing the extracted model point pair characteristics into a hash table;
the online matching stage comprises:
inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
sampling the scene point cloud coordinates and the scene point cloud normal vector;
constructing a feature set in the scene point cloud coordinates and the scene point cloud normal vectors obtained by sampling, and calculating scene point pair features;
quantizing the extracted scene point pair characteristics and matching the scene point pair characteristics with the model point pair characteristics stored in a hash table to obtain a plurality of candidate object poses;
inputting a color image and extracting a scene edge point cloud;
screening the candidate object poses according to the scene edge point cloud;
clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
and registering the preliminary object poses by using an iterative closest point algorithm to obtain final object poses.
Further, the step of sampling the model point cloud coordinates and the model point cloud normal vector specifically includes:
calculating a boundary frame surrounding the model point cloud according to the model point cloud coordinates to obtain a model point cloud space;
rasterizing the model point cloud space to obtain a plurality of grids with equal sizes, wherein each grid comprises a plurality of point clouds, and each point cloud comprises a corresponding model point cloud coordinate and a model point cloud normal vector;
clustering point clouds contained in each grid according to the size of an included angle between normal vectors of the model point clouds;
and averaging the model point cloud coordinates and the model point cloud normal vectors in each cluster to obtain the model point cloud coordinates and the model point cloud normal vectors after each grid sampling.
Further, the step of constructing a feature set in the sampled model point cloud coordinates and model point cloud normal vectors and calculating the model point pair features specifically includes:
constructing a K-D tree for the sampled model point cloud coordinates;
selecting a reference point, wherein the reference point is any point in a model point cloud coordinate obtained by sampling;
searching a target point in the K-D tree, wherein the distance between the target point and the reference point is less than a first threshold value;
and sequentially calculating the model point pair characteristics formed by the reference points and the target points.
Further, the step of storing the extracted model point pair characteristics in a hash table specifically includes:
quantizing the extracted model point pair characteristics;
solving a key value of the quantization result through a hash function, and taking the key value as an index of the point pair characteristics in a hash table;
point pair characteristics with the same index are stored in the same bucket of the hash table, and point pair characteristics with different indexes are stored in different buckets of the hash table.
Further, the step of quantizing the extracted scene point pair features and matching the scene point pair features with the model point pair features stored in the hash table to obtain a plurality of candidate object poses specifically includes:
quantizing the extracted scene point pair characteristics;
expanding the quantization result to compensate for the characteristic offset caused by the noise;
using the expanded result values as key values, and searching model point pair characteristics with the same key values in a hash table;
and acquiring a plurality of candidate object poses according to the model point pair characteristics.
Further, the step of extracting the scene edge point cloud from the input color image specifically includes:
graying the color image;
performing edge detection on the grayed image by using an edge detector;
corresponding pixels at the edge of the image to the depth image one by one, and calculating the spatial coordinates of pixel points according to camera internal parameters;
and extracting the space coordinates to be used as a scene edge point cloud.
Further, the step of screening the candidate object poses according to the scene edge point cloud specifically includes:
projecting the object three-dimensional model corresponding to the candidate object pose to an imaging plane according to camera internal parameters to obtain an edge point cloud of the three-dimensional model;
selecting any point from the edge point cloud of the three-dimensional model as a reference point, and finding out a corresponding matching point from the scene edge point cloud, wherein the matching point is the point closest to the reference point;
calculating a first distance, wherein the first distance is the distance from a matching point to a reference point, if the first distance is smaller than a second threshold value, the matching point is reserved, otherwise, the matching point is discarded;
and calculating the proportion of the number of the reserved matching points to the total number of the points in the edge point cloud of the three-dimensional model, if the proportion is greater than a third threshold value, reserving the candidate object pose corresponding to the three-dimensional model, and otherwise, discarding the candidate object pose.
Further, the step of clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses specifically includes:
selecting any one of the candidate object poses obtained by screening as a first candidate object pose;
respectively calculating the distances between the first candidate object pose and the other candidate object poses obtained by screening;
initializing each candidate object pose obtained by screening as its own cluster;
clustering candidate object poses obtained by screening according to a hierarchical clustering method;
and extracting candidate object poses with the highest voting scores in each cluster to obtain a plurality of preliminary object poses.
In another aspect, the embodiment of the present invention further includes an object pose measurement apparatus, including a memory for storing at least one program and a processor for loading the at least one program to execute the object pose measurement method.
In another aspect, the embodiment of the present invention further includes a storage medium, in which processor-executable instructions are stored, and when executed by a processor, the processor-executable instructions are used for executing the object pose measurement method.
The invention has the following beneficial effects: (1) the invention provides a more efficient model sampling strategy that reduces the number of points and thus the subsequent amount of computation, while still preserving enough information about object surface variation; (2) the distance range used when computing point pair features is limited, which reduces the point pair feature computation for the model and the scene data and also reduces matching interference from excessive background point cloud; (3) a quantization expansion method is provided, which reduces the influence of noise-induced point deviations on the feature computation; (4) color image information is introduced as additional input: edge information is extracted from the color image to screen the candidate object poses and to perform ICP registration, which improves the measurement accuracy and makes recognition more accurate in scenes with occlusion, clutter, stacking and the like.
Drawings
FIG. 1 is a schematic flow chart of an object pose measurement method according to an embodiment;
FIG. 2 is a flowchart illustrating the steps of the offline modeling phase according to an embodiment;
FIG. 3 is a flowchart illustrating the steps of the online matching phase according to an embodiment.
Detailed Description
As shown in fig. 1, the present embodiment includes an object pose measurement method, which includes an offline modeling phase and an online matching phase;
the offline modeling phase comprises:
inputting a three-dimensional model of an object, wherein the three-dimensional model comprises model point cloud coordinates and a model point cloud normal vector;
sampling the model point cloud coordinates and the model point cloud normal vector;
constructing a feature set in model point cloud coordinates and model point cloud normal vectors obtained by sampling, and calculating model point pair features;
storing the extracted model point pair characteristics into a hash table;
the online matching stage comprises:
inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
sampling the scene point cloud coordinates and the scene point cloud normal vector;
constructing a feature set in the scene point cloud coordinates and the scene point cloud normal vectors obtained by sampling, and calculating scene point pair features;
quantizing the extracted scene point pair characteristics and matching the scene point pair characteristics with the model point pair characteristics stored in a hash table to obtain a plurality of candidate object poses;
inputting a color image and extracting a scene edge point cloud;
screening the candidate object poses according to the scene edge point cloud;
clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
and registering the preliminary object poses by using an iterative closest point algorithm to obtain final object poses.
In this embodiment, the offline modeling stage mainly performs feature modeling on the three-dimensional object model and stores the result for pose measurement in subsequent scenes; the online matching stage mainly measures the object pose from the RGB-D image of a given scene.
Further, referring to fig. 2, the offline modeling phase includes the steps of:
s1, inputting a three-dimensional model of an object, wherein the three-dimensional model comprises a model point cloud coordinate and a model point cloud normal vector;
s2, sampling the model point cloud coordinates and the model point cloud normal vector;
s3, constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained through sampling, and calculating the model point pair features;
and S4, storing the extracted model point pair characteristics into a hash table.
In this embodiment, this stage is called the offline modeling stage because no scene image needs to be input. Step S1 of this stage inputs a three-dimensional model of the object; the three-dimensional model does not need texture, color or similar information and only needs to retain point cloud coordinates and point cloud normal vectors, such as a CAD model obtained by computer modeling or a 3D model obtained by a three-dimensional reconstruction technique. This reduces the complexity of building the three-dimensional model.
Step S2, namely, the step of sampling the model point cloud coordinates and the model point cloud normal vectors, specifically includes the following steps:
s201, calculating a boundary frame surrounding the model point cloud according to the model point cloud coordinates to obtain a model point cloud space;
s202, rasterizing the model point cloud space to obtain a plurality of grids with the same size, wherein each grid comprises a plurality of point clouds, and each point cloud comprises a corresponding model point cloud coordinate and a model point cloud normal vector;
s203, clustering the point clouds in each grid according to the size of an included angle between normal vectors of the model point clouds;
and S204, averaging the model point cloud coordinates and the model point cloud normal vectors in each cluster to obtain the model point cloud coordinates and the model point cloud normal vectors after each grid sampling.
In this embodiment, a bounding box enclosing the point cloud is computed from the maximum and minimum model point cloud coordinates along the X, Y and Z axes, giving the model point cloud space, and the diagonal of the bounding box is recorded as the model diameter d_m. The model point cloud space is then rasterized; each grid cell is a small cube, and all cells are of equal size. The cell size is set to τ × d_m, where τ is the sampling coefficient, set to 0.05. Each cell contains several points, and each point carries its model point cloud coordinate and model point cloud normal vector. For each cell, the points inside it are clustered according to the angle between their normal vectors: points whose normals differ by no more than a threshold Δθ belong to the same cluster. The model point cloud coordinates and normal vectors within each cluster are then averaged, giving the sampled model point cloud coordinates and normal vectors for that cell. This sampling strategy is applied to every cell of the model space, with Δθ generally set to a fixed angular threshold.
With this sampling strategy, the amount of subsequent computation is reduced, the loss of surface variation information caused by sampling is also reduced, and the distinguishing information of the point pairs is preserved.
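A minimal sketch of this sampling strategy follows, assuming unit normals; τ = 0.05 matches the value above, while delta_theta is only an illustrative value because the patent states Δθ as a formula that is not reproduced here.

```python
import numpy as np
from collections import defaultdict

def sample_point_cloud(points, normals, tau=0.05, delta_theta=np.deg2rad(30)):
    """Grid sampling with per-cell clustering of normals; returns sampled points and normals."""
    bb_min, bb_max = points.min(axis=0), points.max(axis=0)
    d_m = np.linalg.norm(bb_max - bb_min)             # model diameter: bounding-box diagonal
    cell = tau * d_m
    grid = defaultdict(list)                          # cell index -> indices of points in the cell
    for i, p in enumerate(points):
        grid[tuple(((p - bb_min) // cell).astype(int))].append(i)

    out_pts, out_nrm = [], []
    for idx in grid.values():
        clusters = []                                 # each cluster: list of point indices
        for i in idx:
            for c in clusters:                        # join the first cluster whose mean normal is close enough
                mean_n = np.mean(normals[c], axis=0)
                mean_n /= np.linalg.norm(mean_n) + 1e-12
                if np.arccos(np.clip(np.dot(mean_n, normals[i]), -1.0, 1.0)) <= delta_theta:
                    c.append(i)
                    break
            else:
                clusters.append([i])
        for c in clusters:                            # average coordinates and normals per cluster
            out_pts.append(points[c].mean(axis=0))
            n = normals[c].mean(axis=0)
            out_nrm.append(n / (np.linalg.norm(n) + 1e-12))
    return np.array(out_pts), np.array(out_nrm)
```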
Step S3, namely the step of constructing a feature set from the sampled model point cloud coordinates and model point cloud normal vectors and calculating the model point pair features, specifically comprises the following steps:
s301, constructing a K-D tree for model point cloud coordinates obtained through sampling;
s302, selecting a reference point, wherein the reference point is any point in a model point cloud coordinate obtained through sampling;
s303, searching a target point in the K-D tree, wherein the distance between the target point and the reference point is smaller than a first threshold value;
and S304, sequentially calculating the model point pair characteristics formed by the reference points and the target points.
In this embodiment, a K-D tree is constructed for the sampled model point cloud coordinates so that subsequent distance searches are fast. Each sampled point is taken in turn as a reference point, and the target points searched in the K-D tree are those whose distance to the reference point is less than d_range, the first threshold; the model point pair features formed by the reference point and the target points are then computed in turn. The specific computation is as follows. Let the reference point be m_r with normal vector n_r, and the target point be m_s with normal vector n_s. The model point pair feature formed by the reference point and the target point is F_{r,s} = (||d_{r,s}||, ∠(n_r, d_{r,s}), ∠(n_s, d_{r,s}), ∠(n_r, n_s)), where d_{r,s} is the distance vector from point m_r to point m_s, ||d_{r,s}|| is its length, ∠(n_r, d_{r,s}) is the angle between the normal vector n_r and the distance vector d_{r,s}, ∠(n_s, d_{r,s}) is the angle between the normal vector n_s and the distance vector d_{r,s}, and ∠(n_r, n_s) is the angle between the normal vectors n_r and n_s. The first threshold d_range is determined from d_min and d_med, the two shorter edges of the model bounding box. Under most viewpoints of a real scene, the length of the visible part of the model is usually smaller than d_m but larger than d_min; constructing features for more distant point pairs in the scene would involve more background points, increasing the false recognition rate of the algorithm and the amount of feature computation. Therefore, this embodiment uses d_range as the upper distance limit to reduce this influence: limiting the distance range when computing point pair features reduces the point pair feature computation for both the model and the scene data and also reduces matching interference from excessive background point cloud.
Step S4, namely, the step of storing the extracted model point pair characteristics in a hash table, specifically includes the following steps:
s401, quantizing the extracted model point pair features;
s402, solving a key value of the quantization result through a hash function, and taking the key value as an index of the point pair characteristics in a hash table;
s403, point pair characteristics with the same index are stored in the same bucket of the hash table, and point pair characteristics with different indexes are stored in different buckets of the hash table.
In this embodiment, each extracted model point pair feature F_{r,s} = (||d_{r,s}||, ∠(n_r, d_{r,s}), ∠(n_s, d_{r,s}), ∠(n_r, n_s)) is quantized component-wise to obtain a quantized feature Q_{r,s}; the distance component is quantized with a step Δd, generally set to 0.05·d_m, and the three angle components are quantized with a fixed angular step. The quantized value Q_{r,s} is then mapped to a key value by a hash function; the key value is a non-negative integer used as the index of the point pair feature in the hash table. Point pair features with the same index are stored in the same bucket of the hash table, and point pair features with different indexes are stored in different buckets of the hash table.
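The quantization and bucketing can be sketched as below; a Python dict keyed by the quantized tuple stands in for the hash function and its non-negative integer key. The angular step (via n_angle) is an assumed value, since the patent gives the angular quantization step only as a formula that is not reproduced above.

```python
import numpy as np
from collections import defaultdict

def quantize(F, d_m, n_angle=30):
    """Component-wise quantization of a 4-D point pair feature."""
    dd = 0.05 * d_m                    # distance step, as set above
    da = np.pi / n_angle               # assumed angular step
    return (int(F[0] // dd), int(F[1] // da), int(F[2] // da), int(F[3] // da))

def build_hash_table(model_feats, d_m):
    """model_feats: list of (r, s, F) tuples produced in the offline modeling stage."""
    table = defaultdict(list)          # bucket: quantized key -> point pairs sharing that key
    for r, s, F in model_feats:
        table[quantize(F, d_m)].append((r, s))
    return table
```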
The online matching stage requires a color image and a depth image of the scene as input. With reference to fig. 3, this stage comprises the following steps:
D1. inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
D2. sampling the scene point cloud coordinates and the scene point cloud normal vector;
D3. constructing a feature set in scene point cloud coordinates and scene point cloud normal vectors obtained through sampling, and calculating scene point pair features;
D4. quantizing the extracted scene point pair characteristics and matching the scene point pair characteristics with the model point pair characteristics stored in a hash table to obtain a plurality of candidate object poses;
D5. inputting a color image and extracting a scene edge point cloud;
D6. screening the candidate object poses according to the scene edge point cloud;
D7. clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
D8. and registering the preliminary object poses by using an iterative closest point algorithm to obtain final object poses.
In step D1, the camera imaging formula is used:
u = f_x · X / Z + c_x,  v = f_y · Y / Z + c_y,
where u and v are the coordinates of the point on the imaging plane, (X, Y, Z) are the 3-dimensional coordinates of the point in the camera coordinate system, and f_x, f_y, c_x and c_y are the camera intrinsic parameters. The 3-dimensional space coordinates corresponding to each pixel of the input depth image, i.e. the scene point cloud coordinates, can therefore be computed from the camera intrinsics, and the corresponding scene point cloud normal vectors can be estimated from the scene point cloud coordinates.
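A minimal back-projection sketch following the pinhole relation above; variable names are illustrative, and pixels with zero depth are discarded.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image to an N x 3 scene point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]                  # pixel coordinates
    Z = depth.astype(np.float64)
    X = (u - cx) * Z / fx                      # invert u = fx*X/Z + cx
    Y = (v - cy) * Z / fy                      # invert v = fy*Y/Z + cy
    pts = np.stack([X, Y, Z], axis=-1)
    return pts[Z > 0]                          # keep only pixels with valid depth
```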
Regarding step D2, this embodiment samples the scene point cloud coordinates and scene point cloud normal vectors with the same strategy as step S2 of the offline modeling stage. This likewise reduces the number of points, and thus the subsequent amount of computation, while preserving enough information about the surface variation of the object.
Similarly, in step D3 this embodiment computes the scene point pair features by the same method as step S3 of the offline modeling stage. Specifically: (1) a K-D tree is constructed for the scene point cloud coordinates sampled in step D2; (2) assuming N scene points were obtained by the sampling of step D2, one point out of every n points is taken as a reference point, giving roughly N/n reference points in total; (3) for each reference point, the points whose distance to it is less than d_range are found in the K-D tree and used to construct scene point pair features; the setting of d_range and the feature computation are the same as in step S3 of the offline modeling stage and are not repeated here.
Step D4, namely, the step of quantizing the extracted scene point pair features and matching the scene point pair features with the model point pair features stored in the hash table to obtain a plurality of candidate object poses, specifically includes the following steps:
D401. quantizing the extracted scene point pair features;
D402. expanding the quantization result to compensate for the characteristic offset caused by the noise;
D403. using the expanded result values as key values, and searching model point pair characteristics with the same key values in a hash table;
D404. and acquiring a plurality of candidate object poses according to the model point pair characteristics.
In this embodiment, (1) the scene point pair feature F = (F_1, F_2, F_3, F_4) computed in step D3 is quantized in the same way as in the offline modeling step S4, obtaining a quantized 4-dimensional feature Q = (Q_1, Q_2, Q_3, Q_4). (2) To reduce the effect of noise on the quantized matching, this embodiment expands the quantized value Q as follows. For i = 1, 2, 3, 4, let e_i be the quantization error of the i-th dimension, i.e. the offset of F_i within its quantization interval, where F_i is the value of the i-th dimension of F and Δ is the quantization interval. Taking i = 1 as an example: when e_1 falls below the lower expansion threshold, Q_new = (Q_1 - 1, Q_2, Q_3, Q_4); when e_1 exceeds the upper expansion threshold, Q_new = (Q_1 + 1, Q_2, Q_3, Q_4); otherwise Q is not expanded in that dimension. In this way one scene point pair feature F can be quantized into at most 16 quantized features, compensating for the feature shift caused by noise. (3) Let the i-th reference point be s_i and let s_i form n_i point pair features. A voting matrix is created for s_i: each row of the matrix represents a point of the scene point cloud, each column represents a quantized rotation angle, and the angle interval is taken as a fixed quantization step. Each coordinate (m, α) of the matrix means that, after the normal vector of point m in the scene is rotated to be parallel to the X axis and the point is translated to the origin, the line connecting m to the other point of the pair has to be rotated about the X axis by the angle α; the matrix is initialized to all zeros. (4) The multiple expanded Q values are used as key values to search the hash table for model point pair features with the same key. For each matched point pair feature found, the point m′ in the scene and the angle α′ to be rotated are computed, and the entry of the voting matrix at (m′, α′) is incremented by 1. (5) After the voting of each reference point s_i is finished, the row-column coordinates (m, α) of the maximum value in the voting matrix are extracted and used to compute an object pose for the scene; the value at position (m, α) is recorded as the score of that pose. (6) The object poses computed from all reference points are kept as the candidate object poses.
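The quantization expansion of item (2) can be sketched as follows: when a feature value falls near the lower or upper edge of its quantization bin, the neighbouring bin index is generated as well, so one scene feature yields up to 2^4 = 16 lookup keys. The edge fraction eps is an assumed value, because the patent states the exact expansion conditions only as formulas that are not reproduced above.

```python
import itertools

def expanded_keys(F, steps, eps=0.2):
    """F: 4-D feature, steps: the 4 quantization intervals, eps: assumed edge fraction."""
    per_dim = []
    for f, delta in zip(F, steps):
        q = int(f // delta)
        e = f / delta - q                # fractional position of f inside its bin
        opts = [q]
        if e < eps:
            opts.append(q - 1)           # near the lower bin edge: also try the bin below
        elif e > 1.0 - eps:
            opts.append(q + 1)           # near the upper bin edge: also try the bin above
        per_dim.append(opts)
    return list(itertools.product(*per_dim))   # at most 2**4 = 16 keys
```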
Step D5, namely the step of inputting the color image to extract the scene edge point cloud, specifically includes the following steps:
D501. graying the color image;
D502. performing edge detection on the grayed image by using an edge detector;
D503. corresponding pixels at the edge of the image to the depth image one by one, and calculating the spatial coordinates of pixel points according to camera internal parameters;
D504. and extracting the space coordinates to be used as a scene edge point cloud.
In this embodiment, the input color image is first converted to grayscale and edge detection is performed with a Canny edge detector to extract the edge pixels of the image. The edge pixels are put into one-to-one correspondence with the depth image, their spatial coordinates are computed from the camera intrinsic parameters, and the resulting points are referred to as the scene edge point cloud.
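A minimal sketch of this edge point cloud extraction with OpenCV's Canny detector and the same back-projection as in step D1; the Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def scene_edge_points(color_bgr, depth, fx, fy, cx, cy):
    """Return the scene edge point cloud (N x 3) from a registered color/depth pair."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # edge pixels are set to 255
    v, u = np.nonzero(edges)
    Z = depth[v, u].astype(np.float64)
    keep = Z > 0                                   # only edge pixels with valid depth
    u, v, Z = u[keep], v[keep], Z[keep]
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=-1)
```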
Step D6, namely the step of screening the candidate object poses according to the scene edge point cloud, specifically comprises the following steps:
D601. projecting the object three-dimensional model corresponding to the candidate object pose to an imaging plane according to camera internal parameters to obtain an edge point cloud of the three-dimensional model;
D602. selecting any point from the edge point cloud of the three-dimensional model as a reference point, and finding out a corresponding matching point from the scene edge point cloud, wherein the matching point is the point closest to the reference point;
D603. calculating a first distance, wherein the first distance is the distance from a matching point to a reference point, if the first distance is smaller than a second threshold value, the matching point is reserved, otherwise, the matching point is discarded;
D604. and calculating the proportion of the number of the reserved matching points to the total number of the points in the edge point cloud of the three-dimensional model, if the proportion is greater than a third threshold value, reserving the candidate object pose corresponding to the three-dimensional model, and otherwise, discarding the candidate object pose.
In this embodiment, for each candidate object pose obtained in step D4, the three-dimensional model is projected onto the imaging plane under that pose according to the camera intrinsic parameters, giving the edge point cloud of the three-dimensional model for that candidate pose. For each point of the model edge point cloud, the closest point in the scene edge point cloud extracted in step D5 is searched for; if the shortest distance is less than d_ε, i.e. the second threshold, which is generally set to 0.1·d_m, the correspondence is considered correct and the point is kept as a matching point. The ratio of the number of kept matching points to the total number of points in the edge point cloud of the three-dimensional model is then computed; if the ratio is greater than the third threshold, the candidate object pose corresponding to the three-dimensional model is kept, otherwise it is discarded.
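The screening test can be sketched as below: the fraction of model edge points whose nearest scene edge point lies within d_ε = 0.1·d_m is compared against the third threshold. The value of ratio_min is an assumption, since the third threshold is not stated numerically above.

```python
import numpy as np
from scipy.spatial import cKDTree

def keep_pose(model_edge_pts, scene_edge_pts, d_m, ratio_min=0.5):
    """True if enough model edge points find a close scene edge point."""
    tree = cKDTree(scene_edge_pts)
    dist, _ = tree.query(model_edge_pts)       # nearest scene edge point for each model edge point
    ratio = float(np.mean(dist < 0.1 * d_m))   # second threshold d_eps = 0.1 * d_m
    return ratio > ratio_min                   # third threshold (assumed value)
```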
Step D7, namely clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses, and specifically comprises the following steps:
D701. selecting any one of the candidate object poses obtained by screening as a first candidate object pose;
D702. respectively calculating the distances between the first candidate object pose and the other candidate object poses obtained by screening;
D703. initializing each candidate object pose obtained by screening as its own cluster;
D704. clustering candidate object poses obtained by screening according to a hierarchical clustering method;
D705. and extracting candidate object poses with the highest voting scores in each cluster to obtain a plurality of preliminary object poses.
In this embodiment, for the candidate object poses kept after the screening of step D6, the distance between every two candidate poses is computed. The distance D between two poses consists of two parts: a translation difference Δdist and a rotation difference Δrot. Two poses are considered close when Δdist < d_cluster and Δrot < rot_cluster, i.e. when the translation difference is smaller than the inter-cluster spatial distance threshold d_cluster, generally set to one tenth of the object diameter, and the rotation difference is smaller than the inter-cluster rotation angle threshold rot_cluster, typically set to 30 degrees. If N candidate object poses remain after the screening of step D6, they are first initialized as N separate clusters, and neighbouring clusters are then merged by hierarchical clustering until the distance between every pair of clusters exceeds the thresholds. The candidate object pose with the highest number of votes in each cluster is extracted, giving the plurality of preliminary object poses.
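The grouping criterion can be sketched as below with a greedy single-pass merge that uses the two thresholds above (d_cluster = 0.1·d_m, rot_cluster = 30°) and keeps the highest-scoring pose per cluster; a strict agglomerative implementation would merge clusters iteratively, but the distance test is the same.

```python
import numpy as np

def rotation_angle(R1, R2):
    """Angle of the relative rotation between two 3 x 3 rotation matrices, in radians."""
    c = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def cluster_poses(poses, d_m, rot_cluster=np.deg2rad(30)):
    """poses: list of (R, t, score); returns one highest-scoring pose per cluster."""
    d_cluster = 0.1 * d_m                        # one tenth of the object diameter
    clusters = []                                # each cluster: list of pose indices
    for i, (R, t, _) in enumerate(poses):
        for c in clusters:                       # join the first cluster containing a close pose
            if any(np.linalg.norm(t - poses[j][1]) < d_cluster and
                   rotation_angle(R, poses[j][0]) < rot_cluster for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return [poses[max(c, key=lambda j: poses[j][2])] for c in clusters]
```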
Finally, for each of the preliminary object poses obtained in step D7, the model contour point cloud under that pose is obtained, and each contour point cloud is registered against the scene edge point cloud extracted in step D5 by ICP, yielding a final object pose of high accuracy.
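For completeness, a compact point-to-point ICP refinement sketch (nearest-neighbour correspondences plus a Kabsch/SVD update) is shown below; in practice a library implementation, for example Open3D's registration module, could be used instead.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(src, dst, R, t, iters=30):
    """Refine (R, t) so that src @ R.T + t approaches dst (both N x 3 point clouds)."""
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # closest scene point for each model point
        matched = dst[idx]
        mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_d)  # cross-covariance for the Kabsch solution
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                      # incremental rotation aligning moved to matched
        R = dR @ R
        t = dR @ (t - mu_s) + mu_d               # compose the incremental transform
    return R, t
```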
In summary, the object pose measurement method in the embodiment of the present invention has the following advantages:
(1) The embodiment of the invention provides a more efficient model sampling strategy that reduces the number of points and thus the subsequent amount of computation, while still preserving enough information about object surface variation; (2) the distance range used when computing point pair features is limited, which reduces the point pair feature computation for the model and the scene data and also reduces matching interference from excessive background point cloud; (3) a quantization expansion method is provided, which reduces the influence of noise-induced point deviations on the feature computation; (4) color image information is introduced as additional input: edge information is extracted from the color image to screen the candidate object poses and to perform ICP registration, which improves the measurement accuracy and makes recognition more accurate in scenes with occlusion, clutter, stacking and the like.
In this embodiment, the apparatus for object pose measurement includes a memory for storing at least one program and a processor for loading the at least one program to perform the method for object pose measurement.
The memory may also be produced separately and used to store a computer program corresponding to the object pose measurement method. When the memory is connected to the processor, the stored computer program is read and executed by the processor, implementing the object pose measurement method and achieving the technical effects described in the embodiment.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it can be directly fixed or connected to the other feature or indirectly fixed or connected to the other feature. Furthermore, the descriptions of upper, lower, left, right, etc. used in the present disclosure are only relative to the mutual positional relationship of the constituent parts of the present disclosure in the drawings. As used in this disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used in this example have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this embodiment, the term "and/or" includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language ("e.g.," such as "or the like") provided with this embodiment is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, operations of processes described in this embodiment can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described in this embodiment (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described in this embodiment includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described in the present embodiment to convert the input data to generate output data that is stored to a non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The invention is capable of other modifications and variations in its technical solution and/or its implementation, within the scope of protection of the invention.

Claims (9)

1. An object pose measurement method is characterized by comprising an off-line modeling stage and an on-line matching stage;
the offline modeling phase comprises:
inputting a three-dimensional model of an object, wherein the three-dimensional model comprises model point cloud coordinates and a model point cloud normal vector;
sampling the model point cloud coordinates and the model point cloud normal vector;
constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained by sampling, and calculating the model point pair features;
storing the extracted model point pair characteristics into a hash table;
the online matching stage comprises:
inputting a depth image, calculating scene point cloud coordinates corresponding to each pixel point of the depth image according to camera internal parameters, and calculating a scene point cloud normal vector according to the point cloud coordinates;
sampling the scene point cloud coordinates and the scene point cloud normal vector;
constructing a feature set in the scene point cloud coordinates and the scene point cloud normal vectors obtained by sampling, and calculating scene point pair features;
quantizing the extracted scene point pair features and matching the scene point pair features with the model point pair features stored in a hash table to obtain a plurality of candidate object poses;
inputting a color image and extracting a scene edge point cloud;
screening the candidate object poses according to the scene edge point cloud;
clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses;
registering the preliminary object poses by using an iterative closest point algorithm to obtain final object poses;
the step of quantifying the extracted scene point pair features and matching the quantified scene point pair features with the model point pair features stored in the hash table to obtain a plurality of candidate object poses specifically includes:
quantizing the extracted scene points to obtain features;
expanding the quantization result to compensate for the characteristic offset caused by the noise;
using the expanded result values as key values, and searching model point pair characteristics with the same key values in a hash table;
acquiring a plurality of candidate object poses according to the model point pair characteristics;
the quantization result Q is expanded in the following way:
for i = 1, 2, 3, 4, let e_i be the quantization error of the i-th dimension, i.e. the offset of F_i within its quantization interval, where F_i is the value of the i-th dimension of F and Δ is the quantization interval; taking i = 1 as an example, when e_1 falls below the lower expansion threshold, Q_new = (Q_1 - 1, Q_2, Q_3, Q_4); when e_1 exceeds the upper expansion threshold, Q_new = (Q_1 + 1, Q_2, Q_3, Q_4); otherwise Q is not expanded; wherein Q_new is the expanded quantization result.
2. The method for measuring the pose of an object according to claim 1, wherein the step of sampling the coordinates of the model point cloud and the normal vector of the model point cloud specifically comprises:
calculating a boundary frame surrounding the model point cloud according to the model point cloud coordinates to obtain a model point cloud space;
rasterizing the model point cloud space to obtain a plurality of grids with equal sizes, wherein each grid comprises a plurality of point clouds, and each point cloud comprises a corresponding model point cloud coordinate and a model point cloud normal vector;
clustering point clouds contained in each grid according to the size of an included angle between normal vectors of the model point clouds;
and averaging the model point cloud coordinates and the model point cloud normal vectors in each cluster to obtain the model point cloud coordinates and the model point cloud normal vectors after each grid sampling.
3. The method for measuring the pose of an object according to claim 1, wherein the step of constructing a feature set in the model point cloud coordinates and the model point cloud normal vectors obtained by sampling and calculating the model point pair features specifically comprises:
constructing a K-D tree for the sampled model point cloud coordinates;
selecting a reference point, wherein the reference point is any point in a model point cloud coordinate obtained by sampling;
searching a target point in the K-D tree, wherein the distance between the target point and the reference point is less than a first threshold value;
and sequentially calculating the model point pair characteristics formed by the reference points and the target points.
4. The object pose measurement method according to claim 1, wherein the step of storing the extracted model point pair features in a hash table specifically comprises:
quantizing the extracted model point pair features;
solving a key value of the quantization result through a hash function, and taking the key value as an index of the point pair characteristics in a hash table;
point pair characteristics with the same index are stored in the same bucket of the hash table, and point pair characteristics with different indexes are stored in different buckets of the hash table.
5. The object pose measurement method according to claim 1, wherein the step of extracting a scene edge point cloud from the input color image specifically comprises:
graying the color image;
performing edge detection on the grayed image by using an edge detector;
corresponding pixels at the edge of the image to the depth image one by one, and calculating the spatial coordinates of pixel points according to camera internal parameters;
and extracting the space coordinates to be used as a scene edge point cloud.
6. The method for measuring the pose of an object according to claim 1, wherein the step of screening the candidate object poses according to the scene edge point cloud specifically comprises:
projecting the object three-dimensional model corresponding to the candidate object pose to an imaging plane according to camera internal parameters to obtain an edge point cloud of the three-dimensional model;
selecting any point from the edge point cloud of the three-dimensional model as a reference point, and finding out a corresponding matching point from the scene edge point cloud, wherein the matching point is the point closest to the reference point;
calculating a first distance, wherein the first distance is the distance from a matching point to a reference point, if the first distance is smaller than a second threshold value, the matching point is reserved, otherwise, the matching point is discarded;
and calculating the proportion of the number of the reserved matching points to the total number of the points in the edge point cloud of the three-dimensional model, if the proportion is greater than a third threshold value, reserving the candidate object pose corresponding to the three-dimensional model, and otherwise, discarding the candidate object pose.
7. The object pose measurement method according to claim 1, wherein the step of clustering candidate object poses obtained by screening to obtain a plurality of preliminary object poses specifically comprises:
selecting any one of the candidate object poses obtained by screening as a first candidate object pose;
respectively calculating the distances between the first candidate object pose and the remaining candidate object poses obtained by screening;
respectively initializing the candidate object poses obtained by screening into a corresponding number of clusters;
clustering the candidate object poses obtained by screening according to a hierarchical clustering method;
and extracting the candidate object pose with the highest voting score in each cluster to obtain a plurality of preliminary object poses.
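For illustration only: a greedy approximation of the pose clustering of claim 7, assuming poses are 4x4 homogeneous matrices with per-pose voting scores; the translation and rotation thresholds are illustrative, and a full hierarchical (agglomerative) clustering could replace the greedy assignment shown here.

import numpy as np

def pose_distance(a, b):
    # Translation gap and rotation angle (radians) between two 4x4 poses.
    t = np.linalg.norm(a[:3, 3] - b[:3, 3])
    R = a[:3, :3] @ b[:3, :3].T
    ang = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return t, ang

def cluster_poses(poses, scores, t_thresh=0.01, ang_thresh=np.deg2rad(10.0)):
    clusters = []                                    # each cluster is a list of pose indices
    for i, pose in enumerate(poses):
        for cluster in clusters:
            t, ang = pose_distance(pose, poses[cluster[0]])
            if t < t_thresh and ang < ang_thresh:
                cluster.append(i)
                break
        else:
            clusters.append([i])                     # start a new cluster for this pose
    # The highest-voted pose of each cluster becomes a preliminary object pose.
    return [poses[max(c, key=lambda k: scores[k])] for c in clusters]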
8. An apparatus for object pose measurement, comprising a memory for storing at least one program and a processor for loading the at least one program to perform the method of any one of claims 1-7.
9. A storage medium having stored therein processor-executable instructions, which when executed by a processor, are for performing the method of any one of claims 1-7.
CN202010182093.XA 2020-03-16 2020-03-16 Object pose measuring method and device and storage medium Active CN111598946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010182093.XA CN111598946B (en) 2020-03-16 2020-03-16 Object pose measuring method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010182093.XA CN111598946B (en) 2020-03-16 2020-03-16 Object pose measuring method and device and storage medium

Publications (2)

Publication Number Publication Date
CN111598946A CN111598946A (en) 2020-08-28
CN111598946B true CN111598946B (en) 2023-03-21

Family

ID=72183322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010182093.XA Active CN111598946B (en) 2020-03-16 2020-03-16 Object pose measuring method and device and storage medium

Country Status (1)

Country Link
CN (1) CN111598946B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932628A (en) * 2020-09-22 2020-11-13 深圳市商汤科技有限公司 Pose determination method and device, electronic equipment and storage medium
CN112734932A (en) * 2021-01-04 2021-04-30 深圳辰视智能科技有限公司 Strip-shaped object unstacking method, unstacking device and computer-readable storage medium
CN113288122B (en) * 2021-05-21 2023-12-19 河南理工大学 Wearable sitting posture monitoring device and sitting posture monitoring method
CN113723468B (en) * 2021-08-06 2023-08-04 西南科技大学 Object detection method of three-dimensional point cloud
CN113627548A (en) * 2021-08-17 2021-11-09 熵智科技(深圳)有限公司 Planar workpiece template matching method, device, medium and computer equipment
CN114004899B (en) * 2021-11-12 2024-05-14 广东嘉腾机器人自动化有限公司 Pallet pose recognition method, storage medium and equipment
CN114170312A (en) * 2021-12-07 2022-03-11 南方电网电力科技股份有限公司 Target object pose estimation method and device based on feature fusion
CN114821125B (en) * 2022-04-08 2024-05-14 跨维(深圳)智能数字科技有限公司 Object six-degree-of-freedom attitude estimation method, system, device and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292965A (en) * 2017-08-03 2017-10-24 北京航空航天大学青岛研究院 A kind of mutual occlusion processing method based on depth image data stream
CN109029322A (en) * 2018-07-16 2018-12-18 北京芯合科技有限公司 A kind of completely new numerical control robot multi-coordinate measuring system and measurement method
CN110706332A (en) * 2019-09-25 2020-01-17 北京计算机技术及应用研究所 Scene reconstruction method based on noise point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Initial registration of point clouds using camera pose estimation; Guo Qingda et al.; Optics and Precision Engineering; 2017-06-30; Vol. 25, No. 6; pp. 1-2 *

Also Published As

Publication number Publication date
CN111598946A (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN111598946B (en) Object pose measuring method and device and storage medium
JP6681729B2 (en) Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object
CN107810522B (en) Real-time, model-based object detection and pose estimation
JP5863440B2 (en) Information processing apparatus and method
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
EP3502958B1 (en) Object recognition processing apparatus, object recognition processing method, and program
CN108122256B A method for measuring the pose of a rotating object in an approaching state
Azad et al. 6-DoF model-based tracking of arbitrarily shaped 3D objects
CN110176075B (en) System and method for simultaneous consideration of edges and normals in image features through a vision system
CN110942515A (en) Point cloud-based target object three-dimensional computer modeling method and target identification method
WO2011115143A1 (en) Geometric feature extracting device, geometric feature extracting method, storage medium, three-dimensional measurement apparatus, and object recognition apparatus
JP2011133273A (en) Estimation apparatus and control method thereof, and program
CN111815706A (en) Visual identification method, device, equipment and medium for single-article unstacking
CN112651944A (en) 3C component high-precision six-dimensional pose estimation method and system based on CAD model
Yogeswaran et al. 3d surface analysis for automated detection of deformations on automotive body panels
CN116921932A (en) Welding track recognition method, device, equipment and storage medium
US11189053B2 (en) Information processing apparatus, method of controlling information processing apparatus, and non-transitory computer-readable storage medium
Tazir et al. Cluster ICP: Towards sparse to dense registration
Rink et al. Feature based particle filter registration of 3D surface models and its application in robotics
Figueroa et al. A combined approach toward consistent reconstructions of indoor spaces based on 6D RGB-D odometry and KinectFusion
CN111915632A (en) Poor texture target object truth value database construction method based on machine learning
WO2017042852A1 Object recognition apparatus, object recognition method and storage medium
Tellaeche et al. 6DOF pose estimation of objects for robotic manipulation. A review of different options
Yano et al. Parameterized B-rep-Based Surface Correspondence Estimation for Category-Level 3D Object Matching Applicable to Multi-Part Items
Flores et al. Evaluating the Influence of Feature Matching on the Performance of Visual Localization with Fisheye Images.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240218

Address after: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region after: China

Address before: 510641 No. 381, Wushan Road, Tianhe District, Guangzhou, Guangdong

Patentee before: SOUTH CHINA University OF TECHNOLOGY

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240329

Address after: 518057, Building 4, 512, Software Industry Base, No. 19, 17, and 18 Haitian Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Cross dimension (Shenzhen) Intelligent Digital Technology Co.,Ltd.

Country or region after: China

Address before: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region before: China