CN116412756A - Size acquisition method, device, robot and readable storage medium - Google Patents


Info

Publication number
CN116412756A
CN116412756A
Authority
CN
China
Prior art keywords
image
coordinate system
information
coordinate
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211555133.6A
Other languages
Chinese (zh)
Inventor
张智胜
梅江元
刘三军
李育胜
区志财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Group Co Ltd
Midea Group Shanghai Co Ltd
Original Assignee
Midea Group Co Ltd
Midea Group Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Group Co Ltd, Midea Group Shanghai Co Ltd
Priority to CN202211555133.6A
Publication of CN116412756A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/22 Measuring arrangements characterised by the use of optical techniques for measuring depth
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/50 Depth or shape recovery
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a size acquisition method, a size acquisition device, a robot and a readable storage medium. The size acquisition method comprises the following steps: acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system, wherein the first coordinate system is a camera coordinate system, and the target object is positioned on the target plane; constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane; and determining the size information of the target object according to the first point cloud information and the second coordinate system.

Description

Size acquisition method, device, robot and readable storage medium
Technical Field
The invention belongs to the technical field of visual inspection, and particularly relates to a size acquisition method, a size acquisition device, a robot and a readable storage medium.
Background
Existing methods for detecting the length, width and height of an object by visual inspection first obtain a point cloud of the object and then measure the object's size by manually selecting points in the point cloud. The steps of this method are cumbersome: the length, width and height of the object cannot be obtained automatically, and a large amount of manual operation is needed.
A depth camera can acquire a large range of three-dimensional data in a single batch. To reduce manual operation, methods that measure the length, width and height of an object with a depth camera are gradually replacing traditional measurement methods; they save time and labor and are applied on a large scale.
In the related art, images need to be acquired from a plurality of specific angles by a depth camera, and background images without the target object also need to be acquired, so the size detection process is complex and detection efficiency is low.
Disclosure of Invention
The present invention aims to solve one of the technical problems existing in the prior art or related technologies.
To this end, a first aspect of the invention proposes a size acquisition method.
A second aspect of the present invention proposes a size acquisition device.
A third aspect of the present invention proposes a size acquisition device.
A fourth aspect of the invention proposes a readable storage medium.
A fifth aspect of the present invention proposes a robot.
In view of this, according to a first aspect of the present invention, there is provided a size acquisition method including: acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system, wherein the first coordinate system is a camera coordinate system, and the target object is positioned on the target plane; constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane; and determining the size information of the target object according to the first point cloud information and the second coordinate system.
The size acquisition method provided by the invention can be applied to a robot, and the robot can acquire the first point cloud information of the target object and the second point cloud information of the target plane in the running process.
In the technical scheme, the first point cloud information is a point set of the target object under the camera coordinate system, the second point cloud information is a point set of the target plane under the camera coordinate system, and the plane coordinate system corresponding to the target plane can be constructed through the second point cloud information. In a planar coordinate system, the object plane is a reference plane in the coordinate system. Specifically, after the second point cloud information is determined, a second coordinate system with the target plane as a reference plane is constructed based on the second point cloud information. Because the first point cloud information and the second point cloud information are point sets under the camera coordinate system, the first point cloud information in the first coordinate system can be converted into the second coordinate system, and accordingly corresponding coordinates of the first point cloud information under the second coordinate system, namely the coordinates of the target object under the second coordinate system, are obtained. Since the target plane is a reference plane in the second coordinate system, the size information of the target object can be determined from the coordinates of the target object in the second coordinate system.
The method and the device acquire the first point cloud information of the target object and the second point cloud information of the target plane. And constructing a second coordinate system with the target plane as a reference plane, converting the first point cloud information into the second coordinate system through the coordinate system, and determining the size information of the target object in the second coordinate system. Compared with the prior art, the method has the advantages that images are not required to be acquired from multiple angles, and the images are not required to be acquired at the designated angles, so that the step of acquiring the object size through the robot is simplified.
In addition, the size acquisition method in the technical scheme provided by the invention can also have the following additional technical characteristics:
in the above technical solution, obtaining first point cloud information of the target object in the first coordinate system and second point cloud information of the target plane in the first coordinate system includes: acquiring a first image and a second image of a target scene, wherein the first image is a color image, the second image is a depth image, and the target scene comprises a target object and a target plane;
determining a first mask image and a second mask image through the first image, wherein the first mask image is matched with a target object, and the second mask image is matched with a target plane;
and determining first point cloud information and second point cloud information according to the first mask map, the second mask map and the second image.
In the technical scheme, an image acquisition device is arranged in the robot, and a first image and a second image corresponding to a target scene can be acquired through the image acquisition device in the running process of the robot. The first image is a color image, and the second image is a depth image.
The robot collects the image data through a single image acquisition device, which is mounted on the robot and moves together with it. When acquiring image data, the top surface of the target object needs to be photographed completely, and the image data needs to contain the target plane.
In the running process of the robot, a color image corresponding to the target scene, namely a first image, can be acquired through the image acquisition device, and a depth image of the target scene, namely a second image, can be acquired through the image acquisition device. The first image and the second image are matched in image content, and the first image and the second image comprise a target object and a target plane. The image acquisition device may be a depth camera.
It should be noted that, in the process of acquiring the first image and the second image, the top surface of the target object needs to be photographed completely, and both the first image and the second image need to include the target plane. The first image and the second image have the same size. The second image is a single-channel depth image whose stored depth values are greater than or equal to 0; if a region with a depth value of 0 exists in the second image, it is determined that no depth value could be acquired for that region.
In this technical scheme, after the first image is acquired, a first mask map corresponding to the target object and a second mask map corresponding to the target plane are determined from the first image. The first mask map is obtained by masking the target object in the first image, and the second mask map is obtained by masking the target plane in the first image. Once the first mask map and the second mask map are obtained, the first point cloud information of the target object in the depth camera coordinate system and the second point cloud information of the target plane in the depth camera coordinate system can be determined from the first mask map, the second mask map and the second image.
Both the first mask map and the second mask map are single-channel mask maps.
In this technical scheme, a first image and a second image of the target scene are collected, and mask processing is performed on the first image to obtain a first mask map corresponding to the target object and a second mask map corresponding to the target plane. From the first mask map, the second mask map and the second image containing the depth information, together with the parameters of the depth camera corresponding to the second image, the first point cloud information of the target object and the second point cloud information of the target plane can be determined without manual operation by a user.
In any of the above technical solutions, determining, by the first image, the first mask map and the second mask map includes: identifying a first image feature and a second image feature in the first image, the first image feature being matched with the target object, the second image feature being matched with the target plane; generating a first mask map through the first image features; and generating a second mask map from the second image features.
In this technical scheme, first image features matched with the target object and second image features matched with the target plane are identified in the first image. A first mask map matched with the target object is generated according to the first image features, and a second mask map matched with the target plane is generated according to the second image features.
Specifically, the first image is segmented by an instance segmentation algorithm to obtain the first mask map and the second mask map. Exemplary instance segmentation algorithms are Mask R-CNN and YOLACT. The first mask map and the second mask map have the same size, and both are single-channel images.
According to the method, the first image (a color image) and the second image (a depth image) are obtained through the depth camera, and the first mask map and the second mask map corresponding to the target object and the target plane are obtained through an instance segmentation algorithm, so that the masks are identified automatically and the user does not need to select them manually, simplifying the operations required of the user.
In any of the above technical solutions, determining the first point cloud information and the second point cloud information according to the first mask map, the second mask map, and the second image includes: screening a first coordinate set in the first mask graph and a second coordinate set in the second mask graph according to a preset rule according to the depth value in the second image; determining first point cloud information according to a first coordinate set and a depth camera parameter, wherein the depth camera parameter is a parameter of a depth camera for acquiring a second image; and determining second point cloud information according to the second coordinate set and the depth camera parameters.
In this embodiment, a first coordinate set is screened from the first mask map and a second coordinate set is screened from the second mask map; the corresponding first point cloud information and second point cloud information are then calculated from the first coordinate set, the second coordinate set and the depth camera parameters.
Specifically, the first mask map and the second mask map are screened according to the same screening rule, the obtained first coordinate set is the coordinate set of the target object in the first mask map, and the second coordinate set is the coordinate set of the target plane in the second mask map.
In this technical scheme, the first coordinate information is the coordinates of the target object in the first mask map, and the second coordinate information is the coordinates of the target plane in the second mask map; the first coordinate information and the second coordinate information then need to be converted into the depth camera coordinate system to obtain the corresponding first point cloud information and second point cloud information.
The depth camera parameters are the parameters of the depth camera used to acquire the image data; when the image data includes the first image and the second image, the two images can be acquired synchronously by the depth camera. The depth camera parameters are intrinsic to the depth camera, including but not limited to the scale factors of the depth camera in the u-axis and v-axis directions, and the coordinates of the principal point of the depth camera in the image coordinate system.
According to this technical scheme, the coordinate points of the target object and the target plane in the depth camera coordinate system can be calculated by formula, and the set of these coordinate points is used as the corresponding point cloud information.
The first coordinate information and the second coordinate information are coordinate points in the first mask map and the second mask map, i.e., two-dimensional coordinates. The first point cloud information and the second point cloud information are point clouds in a depth camera coordinate system, namely three-dimensional coordinates.
According to the method and the device, the first point cloud information and the second point cloud information of the target object and the target plane in the depth camera coordinate system can be obtained through calculation through the first coordinate information, the second coordinate information and the depth camera parameters, so that a user does not need to control the depth camera to collect image data for many times, and the data collection process is further simplified.
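To make this step concrete, the following sketch (not taken from the patent; it assumes the standard pinhole camera model, and the names `backproject`, `fx`, `fy`, `cx`, `cy` are illustrative) back-projects the mask pixels that have a valid depth value into the camera coordinate system:

```python
import numpy as np

def backproject(mask, depth, fx, fy, cx, cy):
    """Back-project masked pixels with valid depth into the camera frame.

    mask  : (H, W) uint8, non-zero where the pixel belongs to the region
    depth : (H, W) float, depth values (0 means "no depth available")
    fx, fy, cx, cy : depth-camera intrinsics (u/v scale factors, principal point)
    Returns an (N, 3) array of camera-frame points.
    """
    v, u = np.nonzero((mask > 0) & (depth > 0))  # keep mask pixels with depth > 0
    z = depth[v, u]
    x = (u - cx) * z / fx  # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Tiny synthetic example: a 4x4 depth map, object mask in the top-left 2x2 block.
depth = np.full((4, 4), 2.0)
depth[0, 0] = 0.0                      # simulate a missing depth reading
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 255
pts = backproject(mask, depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Applied once with the first mask map and once with the second mask map, this yields the first and second point cloud information from a single RGB-D frame.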
In any of the foregoing solutions, the depth camera parameters include at least one of: the scale factor of the depth camera, the coordinates of the image coordinate system of the principal point of the depth camera.
In this technical scheme, the parameters of the depth camera are its intrinsic parameters. The scale factors of the depth camera are its scale factors in the u-axis and v-axis directions, i.e., in the u-axis and v-axis directions of the image coordinate system of the acquired image. The principal point coordinates are the coordinates of the depth camera's principal point in the image coordinate system of the image data it acquires; the principal point is where the camera's optical axis intersects the image plane.
According to the method and the device, the first point cloud information and the second point cloud information of the corresponding target object and the target plane under the image coordinate system of the image data acquired by the depth camera can be determined according to the two-dimensional coordinate points in the first mask image and the second mask image through the depth camera parameters.
In any of the above technical solutions, the preset rule includes that the depth value is greater than the preset depth value, and the color value is a preset color value.
In the technical scheme, the color values of all coordinate points in the first mask image and the depth information in the second image are combined to screen the first coordinate information in the first mask image, and the color values of all coordinate points in the second mask image and the depth information of the second image are combined to screen the second coordinate information in the second mask image.
In the technical scheme, the first coordinate information in the first mask map and the second coordinate information in the second mask map are screened through the same preset rule.
Note that a coordinate point whose depth value is 0 in the first mask map or the second mask map indicates that no depth value could be acquired at that position; therefore, only coordinate points with depth values greater than 0 are retained as the first coordinate information and the second coordinate information.
According to the invention, coordinates whose color value in the first mask map or the second mask map is the preset color value and whose depth value is greater than the preset depth value are used as the first coordinate information of the target object and the second coordinate information of the target plane, respectively, which improves the accuracy of the first coordinate information and the second coordinate information.
In any of the above technical solutions, constructing a second coordinate system according to the second point cloud information includes: determining a first plane equation according to the second point cloud information, wherein the first plane equation is matched with the target plane; and constructing the second coordinate system according to the first plane equation.
In this technical scheme, the second point cloud information is the point cloud information of the target plane. A second coordinate system taking the target plane as a reference plane is constructed based on the second point cloud information: a plane equation of the target plane is obtained by fitting the second point cloud information, and a coordinate system rotation matrix is generated from the fitted first plane equation, the rotation matrix rotating the depth camera coordinate system into the plane coordinate system of the target plane.
According to the method, a plane equation of the target plane is obtained through fitting, the target plane pointed by the plane equation is used as a reference plane of a second coordinate system, the second coordinate system is constructed, the second coordinate system is the coordinate system taking the target plane as the reference plane, and the size information of the target object can be conveniently determined according to the coordinates of the first point cloud information in the second coordinate system.
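The patent does not prescribe a particular fitting method; a common choice is a least-squares plane fit via SVD. The sketch below (function names are illustrative) fits the plane and builds a rotation matrix whose third row is the plane normal, so that in the rotated frame the target plane becomes a plane of constant z:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud via SVD.
    Returns (normal, d) such that normal . p + d = 0, with unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal @ centroid

def plane_rotation(normal):
    """Rotation matrix whose rows map camera coordinates into a frame
    whose z-axis is the plane normal (the target plane becomes z = const)."""
    z = normal / np.linalg.norm(normal)
    # seed the x-axis with any vector not parallel to z
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

# Synthetic second point cloud: 100 points on the plane z = 1 in the camera frame.
rng = np.random.default_rng(0)
plane_pts = np.column_stack([rng.uniform(-1, 1, size=(100, 2)), np.ones(100)])
normal, d = fit_plane(plane_pts)
R = plane_rotation(normal)
```

The object's camera-frame points can then be expressed in this plane frame with a single matrix product, `points @ R.T`.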
In any of the above technical solutions, determining size information of the target object according to the first point cloud information and the second coordinate system includes: acquiring a coordinate system rotation matrix of a first coordinate system and a second coordinate system; determining third coordinate information of the first point cloud information in a second coordinate system through a coordinate system rotation matrix; and determining the size information of the target object according to the third coordinate information.
In the technical scheme, after a second coordinate system taking a target plane as a reference plane is constructed, a coordinate system rotation matrix between the first coordinate system and the second coordinate system is determined, first point cloud information in the first coordinate system can be mapped into the second coordinate system through the coordinate system rotation matrix, so that corresponding third coordinate information is obtained, the third coordinate information is a coordinate point set of a target object in the second coordinate system, and size information of the target object can be determined through calculation of the third coordinate information. Since the size information of the target object is calculated in the second coordinate system taking the target plane as the reference plane, the accuracy of determining the size information of the target object is improved.
Specifically, by calculating the distance value between the highest coordinate point in the third coordinate information and the target plane, the height value of the target object can be determined. The length value and the width value of the target object can be determined by calculating coordinate points of the outer contour of the target object in the third coordinate information.
In the technical scheme, the first point cloud information of the target object is converted and projected into the second coordinate system of the target plane, so that third coordinate information is obtained. And the size information of the target object is determined according to the third coordinate information in the second coordinate system, so that the accuracy of determining the size information of the target object is improved.
In any of the above solutions, the size information includes a height value of the target object; determining size information of the target object according to the third coordinate information, including: obtaining a distance value between each coordinate point in the third coordinate information and the target plane; and screening the maximum distance value in the plurality of distance values as the height value of the target object.
In this technical scheme, since the target object is located on the target plane, the distance between the highest coordinate point of the target object and the target plane is the height value of the target object. By calculating the distance of each coordinate point in the third coordinate information from the target plane, the maximum of these distance values is determined as the height value of the target object.
According to the method and the device, the distance value of each coordinate point of the target object from the target plane is calculated, and the maximum distance value in the distance values is determined to be the height of the target object, so that the accuracy of determining the height value of the target object is improved.
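A minimal sketch of this height computation, assuming the plane is given as `normal . p + d = 0` (as produced by a plane fit; all names are illustrative):

```python
import numpy as np

def object_height(points, normal, d):
    """Height of an object resting on the plane normal . p + d = 0:
    the largest point-to-plane distance over the object's points."""
    dists = np.abs(points @ normal + d) / np.linalg.norm(normal)
    return dists.max()

# Illustrative box points above the plane z = 1 (normal (0, 0, 1), d = -1).
box = np.array([[0.0, 0.0, 1.0],
                [0.1, 0.0, 1.3],    # highest point: 0.3 above the plane
                [0.0, 0.1, 1.15]])
height = object_height(box, np.array([0.0, 0.0, 1.0]), -1.0)
```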
In any of the above solutions, the size information includes a width value and a length value of the target object; determining size information of the target object according to the third coordinate information, including: determining fourth coordinate information corresponding to the target object according to the third coordinate information, wherein the fourth coordinate information is the coordinate information of the circumscribed rectangle of the target object; and determining the width value and the length value of the target object according to the fourth coordinate information.
In this technical scheme, after the third coordinate information of the first point cloud information in the second coordinate system is determined, fourth coordinate information of the circumscribed rectangle of the target object is obtained. The width value and length value of the circumscribed rectangle can be obtained from its fourth coordinate information, and these are determined as the width value and length value of the target object.
The fourth coordinate information is the coordinate information of the minimum circumscribed rectangle of the target object in the second coordinate system.
According to the method and the device, the minimum circumscribed rectangle of the target object is determined, the fourth coordinate information of the minimum circumscribed rectangle in the second coordinate system is determined, and the width value and the length value of the target object can be accurately calculated according to the fourth coordinate information. According to the method and the device, the width value and the length value of the minimum circumscribed rectangle of the target object are used as the width value and the length value of the target object, so that the accuracy of the determined width value and length value of the target object is further improved.
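The minimum circumscribed rectangle can be found by computing the convex hull of the object's points projected onto the reference plane and testing each hull-edge direction (the rotating-calipers idea). The patent does not prescribe an algorithm; the sketch below is one standard, illustrative implementation for non-degenerate point sets:

```python
import numpy as np

def cross2(o, a, b):
    """2-D cross product (a - o) x (b - o); > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone-chain convex hull of an (N, 2) array, CCW order."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross2(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    return np.array(half(pts)[:-1] + half(pts[::-1])[:-1])

def min_rect_size(pts):
    """Side lengths (width, length) of the minimum-area bounding rectangle,
    found by aligning the point set with each convex-hull edge in turn."""
    hull = convex_hull(pts)
    best = None
    for i in range(len(hull)):
        e = hull[(i + 1) % len(hull)] - hull[i]
        e = e / np.linalg.norm(e)
        rot = np.array([[e[0], e[1]], [-e[1], e[0]]])  # rotate edge onto x-axis
        proj = hull @ rot.T
        w, l = proj.max(axis=0) - proj.min(axis=0)
        if best is None or w * l < best[0]:
            best = (w * l, min(w, l), max(w, l))
    return best[1], best[2]

# A 2 x 1 rectangle rotated by 45 degrees, plus one interior point.
theta = np.pi / 4
rot45 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts2d = np.array([[0, 0], [2, 0], [2, 1], [0, 1], [1.0, 0.5]], float) @ rot45.T
width, length = min_rect_size(pts2d)
```

The recovered side lengths are independent of the object's orientation in the plane, which is exactly why the minimum circumscribed rectangle is used rather than an axis-aligned bounding box.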
According to a second aspect of the present invention, there is provided a size acquisition apparatus comprising:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system, the first coordinate system is a camera coordinate system, and the target object is positioned on the target plane;
The construction module is used for constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane;
and the determining module is used for determining the size information of the target object according to the first point cloud information and the second coordinate system.
The method and the device acquire the first point cloud information of the target object and the second point cloud information of the target plane. And constructing a second coordinate system with the target plane as a reference plane, converting the first point cloud information into the second coordinate system through the coordinate system, and determining the size information of the target object in the second coordinate system. Compared with the prior art, the method has the advantages that images are not required to be acquired from multiple angles, and the images are not required to be acquired at the designated angles, so that the step of acquiring the object size through the robot is simplified.
According to a third aspect of the present invention, there is provided a size acquisition apparatus comprising: a memory in which a program or instructions are stored; and a processor that executes the program or instructions stored in the memory to implement the steps of the size acquisition method according to any one of the first aspects, and therefore has all the advantageous technical effects of that method, which will not be described in detail herein.
According to a fourth aspect of the present invention there is provided a readable storage medium having stored thereon a program or instructions which when executed by a processor performs the steps of the size acquisition method as in any of the above-mentioned first aspects. Therefore, the method has all the advantages of the method for obtaining the dimension in any of the above-mentioned first aspects, and will not be described in detail herein.
According to a fifth aspect of the present invention there is provided a robot comprising: the size obtaining device as defined in the second or third aspect and/or the readable storage medium as defined in the fourth aspect thus has all the advantageous technical effects of the size obtaining device as defined in the second or third aspect and/or the readable storage medium as defined in the fourth aspect, and will not be described in detail herein.
In the above technical solution, the robot further includes: a depth camera for acquiring image data, the image data comprising a color image and/or a depth image.
In this technical scheme, the depth camera collects image data, the depth camera is mounted on the robot and moves together with the robot, and the image data may include a depth image and a color image.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates one of the schematic flow diagrams of a dimension acquisition method provided in some embodiments of the invention;
FIG. 2 illustrates a second schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 3 illustrates a schematic diagram of a mask map provided in some embodiments of the invention;
FIG. 4 illustrates a third schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 5 illustrates a fourth schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 6 illustrates a fifth schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 7 illustrates a sixth schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 8 illustrates a seventh schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 9 illustrates an eighth schematic flow diagram of a dimension acquisition method provided in some embodiments of the invention;
FIG. 10 illustrates a block diagram of a size acquisition device provided in some embodiments of the invention;
FIG. 11 illustrates a block diagram of a size acquisition device provided in some embodiments of the invention;
fig. 12 illustrates a block diagram of a robot provided by some embodiments of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Size acquisition methods, apparatuses, readable storage media, and robots according to some embodiments of the present invention are described below with reference to fig. 1 to 12.
In one embodiment of the present invention, as shown in fig. 1, a size acquisition method is provided, including:
102, acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system;
the first coordinate system is a camera coordinate system, and the target object is located on the target plane.
104, constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane;
and 106, determining the size information of the target object according to the first point cloud information and the second coordinate system.
The size acquisition method provided by the embodiment can be applied to a robot, and the robot can acquire first point cloud information of a target object and second point cloud information of a target plane in the running process.
Illustratively, the target plane may be a floor and the target object may be furniture.
For example, the image data may include a depth image and a color image, and the first point cloud information and the second point cloud information are determined in the depth image by performing image feature recognition on the color image, and based on the recognized image features.
For example, the image data may be image data of a target scene acquired by a three-dimensional scanner, first point cloud information of a target object manually framed by a user, and second point cloud information of a target plane.
In this embodiment, the first point cloud information is a point set of the target object under the camera coordinate system, the second point cloud information is a point set of the target plane under the camera coordinate system, and the plane coordinate system corresponding to the target plane can be constructed through the second point cloud information. In a planar coordinate system, the object plane is a reference plane in the coordinate system.
Specifically, after the second point cloud information is determined, a second coordinate system with the target plane as a reference plane is constructed based on the second point cloud information. Because the first point cloud information and the second point cloud information are point sets under the camera coordinate system, the first point cloud information in the first coordinate system can be converted into the second coordinate system, and accordingly corresponding coordinates of the first point cloud information under the second coordinate system, namely the coordinates of the target object under the second coordinate system, are obtained. Since the target plane is a reference plane in the second coordinate system, the size information of the target object can be determined from the coordinates of the target object in the second coordinate system.
For example: the second coordinate system is a three-dimensional coordinate system in which the target plane is the xOy plane, i.e. the z-coordinate of any point on the target plane in the second coordinate system is 0.
Illustratively, the target plane is the ground, and the second point cloud information of the ground is fitted by a principal component analysis method to obtain the plane equation of the ground: ax+by+cz+d=0. The normal vector of the plane is (a, b, c), which is a unit vector perpendicular to the plane. The origin of the camera that collects the image data is taken as the origin of the ground coordinate system, and the normal vector of the ground plane is taken as the Z axis of the ground coordinate system. Any vector that is not parallel to the normal vector of the ground plane is then taken, the outer product of this vector and the ground-plane normal vector is solved, and the unit vector (e, f, g) of the outer product is taken as the X axis of the ground coordinate system. Finally, the outer product of the vector (a, b, c) and the vector (e, f, g) is solved, and the unit vector (i, h, j) of this outer product is taken as the Y axis of the ground coordinate system, thereby constructing the ground coordinate system.
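The plane fitting and axis construction described above can be sketched as follows. This is a minimal Python/NumPy illustration; the function names and the choice of reference vector for the outer product are assumptions for the example, not part of the original disclosure:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane ax+by+cz+d=0 to an (N, 3) point set via principal
    component analysis: the normal is the eigenvector of the covariance
    matrix with the smallest eigenvalue, and d follows from the centroid
    lying on the plane."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest-eigenvalue direction
    normal /= np.linalg.norm(normal)
    d = -normal.dot(centroid)
    return normal, d

def build_ground_frame(normal):
    """Build the ground frame described above: Z is the plane normal,
    X is the unit outer product of an arbitrary non-parallel vector with
    the normal, and Y is the outer product of Z and X."""
    ref = np.array([1.0, 0.0, 0.0])
    if abs(normal.dot(ref)) > 0.9:          # nearly parallel; pick another vector
        ref = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(ref, normal)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(normal, x_axis)       # already unit length
    return x_axis, y_axis, normal
```

The returned axes are the (e, f, g), (i, h, j) and (a, b, c) vectors of the patent's ground coordinate system.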
In this way, the first point cloud information of the target object and the second point cloud information of the target plane are acquired, a second coordinate system with the target plane as a reference plane is constructed, the first point cloud information is converted into the second coordinate system, and the size information of the target object is determined in the second coordinate system. Compared with the prior art, images do not need to be acquired from multiple angles or at a designated angle, which simplifies the steps of acquiring an object size through a robot.
As shown in fig. 2, in the above embodiment, acquiring first point cloud information of a target object in a first coordinate system and second point cloud information of a target plane in the first coordinate system includes:
step 202, acquiring a first image and a second image of a target scene;
the first image is a color image, the second image is a depth image, and the target scene comprises a target object and a target plane;
step 204, determining a first mask map and a second mask map through the first image, wherein the first mask map is matched with the target object, and the second mask map is matched with the target plane;
step 206, determining first point cloud information and second point cloud information according to the first mask map, the second mask map and the second image.
In an embodiment, an image acquisition device is installed in the robot, and a first image and a second image corresponding to a target scene can be acquired through the image acquisition device in the running process of the robot. The first image is a color image, and the second image is a depth image.
The robot collects image data through the same image collecting device, and the image collecting device is arranged on the robot and moves together with the robot. In acquiring image data, the top surface of the target object needs to be photographed completely, and the image data needs to contain the target plane.
In the running process of the robot, a color image corresponding to the target scene, namely a first image, can be acquired through the image acquisition device, and a depth image of the target scene, namely a second image, can be acquired through the image acquisition device. The first image and the second image are matched in image content, and the first image and the second image comprise a target object and a target plane.
The first image and the second image are collected by the same image acquisition device, which is mounted on the robot and moves together with the robot, so the first image and the second image capture the same scene content. The image acquisition device may be a depth camera.
It should be noted that, in the process of acquiring the first image and the second image, the top surface of the target object needs to be completely photographed, and the first image and the second image each need to include the target plane. The first image and the second image have the same size. The second image is a single-channel depth image whose stored depth values are greater than or equal to 0; a region with a depth value of 0 in the second image indicates that no valid depth value could be acquired for that region.
In this embodiment, after the first image is acquired, a first mask map corresponding to the target object and a second mask map corresponding to the target plane are determined according to the first image. The first mask map is obtained by masking the target object in the first image, and the second mask map is obtained by masking the target plane in the first image. Once the first mask map and the second mask map are obtained, the first point cloud information of the target object in the depth camera coordinate system and the second point cloud information of the target plane in the depth camera coordinate system can be determined according to the first mask map, the second mask map and the second image.
The first mask pattern is a single-channel mask pattern, and the second mask pattern is a single-channel mask pattern.
For example, the color value of each position in the first mask map and the second mask map is 255 or 0, and the first mask map is taken as an example for explanation, the region with the color value of 255 in the first mask map is the region where the target object is located, and the other regions are the regions where the non-target object is located. Taking the second mask diagram as an example, the region with the color value of 255 in the second mask diagram is the region where the target plane is located, and the other regions are the regions where the non-target plane is located.
As shown in fig. 3, the target object in the color map is a bed, the color map is subjected to example segmentation, and each image feature in the color map is identified, so that a first mask map and a second mask map are obtained, wherein the first mask map is a bed mask map, and the second mask map obtained according to the color map is a ground mask map.
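Deriving the two single-channel 255/0 masks from a per-pixel segmentation result can be sketched as follows. The class identifiers are hypothetical values for illustration; in practice they would come from an instance segmentation model such as Mask R-CNN or YOLACT:

```python
import numpy as np

# Hypothetical per-pixel class ids for illustration only; a real
# instance-segmentation model would produce this label map.
BED_ID, FLOOR_ID = 1, 2

def masks_from_labels(label_map):
    """Turn a per-pixel label map into the two single-channel mask maps
    described above: 255 where the class is present, 0 elsewhere."""
    first_mask = np.where(label_map == BED_ID, 255, 0).astype(np.uint8)
    second_mask = np.where(label_map == FLOOR_ID, 255, 0).astype(np.uint8)
    return first_mask, second_mask
```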
In this embodiment, a first image and a second image of the target scene are acquired, and mask processing is performed on the first image to obtain a first mask map corresponding to the target object and a second mask map corresponding to the target plane. According to the first mask map, the second mask map and the second image containing the depth information, together with the depth camera parameters corresponding to the second image, the first point cloud information of the target object and the second point cloud information of the target plane can be determined without manual operation by a user.
As shown in fig. 4, in any of the above embodiments, determining the first mask map and the second mask map from the first image includes:
step 402, identifying a first image feature and a second image feature in the first image, wherein the first image feature is matched with the target object, and the second image feature is matched with the target plane;
step 404, generating a first mask map from the first image features and generating a second mask map from the second image features.
In this embodiment, the first image features are matched to the target object in the first image and the second image features are matched to the target plane in the first image. A first mask map matched with the target object is generated according to the first image features, and a second mask map matched with the target plane is generated according to the second image features.
Specifically, the first image is segmented by an instance segmentation algorithm to obtain the first mask map and the second mask map. Exemplary instance segmentation algorithms include Mask R-CNN and YOLACT. The first mask map and the second mask map have the same size, and both are single-channel images.
According to the embodiment, the first image of the color image and the second image of the depth image are obtained through the depth camera, and the first mask image and the second mask image corresponding to the target object and the target plane are obtained through the instance segmentation algorithm, so that the first mask image and the second mask image in the first image and the second image are automatically identified, the mask image in the image is not required to be manually selected by a user, and the operation required by the user is simplified.
As shown in fig. 5, in any of the above embodiments, determining the first point cloud information and the second point cloud information according to the first mask map, the second mask map, and the second image includes:
step 502, screening a first coordinate set in a first mask graph and a second coordinate set in a second mask graph according to a preset rule according to a depth value in a second image;
step 504, determining first point cloud information according to the first coordinate set and the depth camera parameter, wherein the depth camera parameter is a parameter of a depth camera for acquiring the second image, and determining second point cloud information according to the second coordinate set and the depth camera parameter.
In the embodiment of the application, the first coordinate set in the first mask image is screened, the second coordinate set in the second mask image is screened, and then the first coordinate set, the second coordinate set and the depth camera parameters are calculated, so that corresponding first point cloud information and second point cloud information are obtained.
Specifically, the first mask map and the second mask map are screened according to the same screening rule, the obtained first coordinate set is the coordinate set of the target object in the first mask map, and the second coordinate set is the coordinate set of the target plane in the second mask map.
Taking the first mask diagram as an example for screening, the first mask diagram only comprises two different color values, namely 255 and 0, the region with the color value 255 is determined as the region where the target object is located, and the region with the color value 0 is determined as the background region. And screening the first coordinate information in the first mask according to the color value, and determining the coordinate information with the color value of 255 in the first mask as a target object.
Taking the second mask map as an example, the second mask map only comprises two different color values, namely 255 and 0; the region with the color value 255 is determined as the region where the target plane is located, and the region with the color value 0 is determined as the background region. The second coordinate information in the second mask map is screened according to the color value, and the coordinate information with the color value of 255 in the second mask map is determined as belonging to the target plane.
In this embodiment, the first coordinate information is the coordinates of the target object in the first mask map, and the second coordinate information is the coordinates of the target plane in the second mask map. The first coordinate information and the second coordinate information need to be converted into the depth camera coordinate system to obtain the corresponding first point cloud information and second point cloud information.
The depth camera parameters are parameters of the depth camera used to acquire the image data; when the image data includes a first image and a second image, the first image and the second image can be synchronously acquired by the depth camera. These parameters are intrinsic to the depth camera, including but not limited to the scale factors of the depth camera in the u-axis and v-axis directions, and the coordinates of the principal point of the depth camera in the image coordinate system.
For example, the point cloud information can be calculated from the coordinate information by the following formula, wherein the coordinate information includes first coordinate information and second coordinate information, the point cloud information includes first point cloud information and second point cloud information, the first coordinate information corresponds to the first point cloud information, and the second coordinate information corresponds to the second point cloud information.
X = d × (u − Cx) / fx;
Y = d × (v − Cy) / fy;
Z = d;
wherein X, Y and Z are the point cloud coordinates, d is the depth value, u and v are the coordinate information, Cx and Cy are the coordinates of the principal point of the depth camera in the image coordinate system, and fx and fy are the scale factors of the depth camera in the u-axis and v-axis directions.
In this embodiment, coordinate points of the target object and the target plane under the depth camera coordinate system can be calculated through the above formula, and the set of the coordinate points is used as corresponding point cloud information.
The first coordinate information and the second coordinate information are coordinate points in the first mask map and the second mask map, i.e., two-dimensional coordinates. The first point cloud information and the second point cloud information are point clouds in a depth camera coordinate system, namely three-dimensional coordinates. D in the above formula is a depth value of the coordinate point in the second image corresponding to the first coordinate information and the second coordinate information.
According to the method and the device, the first point cloud information and the second point cloud information of the target object and the target plane in the depth camera coordinate system can be obtained through calculation through the first coordinate information, the second coordinate information and the depth camera parameters, so that a user does not need to control the depth camera to collect image data for many times, and the data collection process is further simplified.
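The back-projection formulas above can be sketched as follows, assuming the screened pixel coordinates are given as NumPy arrays (the function name is illustrative):

```python
import numpy as np

def backproject(us, vs, depth, fx, fy, cx, cy):
    """Back-project pixel coordinates (u, v) into the depth camera
    coordinate system using X = d*(u-cx)/fx, Y = d*(v-cy)/fy, Z = d."""
    d = depth[vs, us].astype(float)   # depth value at each (v, u) pixel
    X = d * (us - cx) / fx
    Y = d * (vs - cy) / fy
    return np.stack([X, Y, d], axis=-1)   # (N, 3) point cloud
```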
In any of the above embodiments, the depth camera parameters include at least one of: the scale factor of the depth camera, the coordinates of the image coordinate system of the principal point of the depth camera.
In this embodiment, the parameters of the depth camera are intrinsic to the depth camera. The scale factors of the depth camera are its scale factors in the u-axis and v-axis directions of the image coordinate system of the acquired image data. The coordinates of the principal point of the depth camera are its coordinates in the image coordinate system of the image data acquired by the depth camera. The principal point of the depth camera is a sampling point of the depth camera.
According to the embodiment, through the depth camera parameters, according to two-dimensional coordinate points in the first mask map and the second mask map, first point cloud information and second point cloud information of a corresponding target object and a corresponding target plane under an image coordinate system of image data acquired by the depth camera can be determined.
In any of the above embodiments, the preset rule includes that the depth value is greater than the preset depth value, and the color value is a preset color value.
In this embodiment, the first coordinate information in the first mask map is screened in combination with the color value of each coordinate point in the first mask map and the depth information in the second image, and the second coordinate information in the second mask map is screened in combination with the color value of each coordinate point in the second mask map and the depth information of the second image.
In this embodiment, the first coordinate information in the first mask map and the second coordinate information in the second mask map are filtered by the same preset rule.
Illustratively, the preset depth value is 0 and the preset color value is 255.
Taking the first mask diagram as an example for screening, the first mask diagram only comprises two different color values, namely 255 and 0, and coordinate points with the color value of 255 and the depth value of more than 0 in the first mask diagram are screened as first coordinate information.
Taking the second mask diagram as an example for screening, the second mask diagram only comprises two different color values, namely 255 and 0, and coordinate points with the color value of 255 and the depth value of more than 0 in the second mask diagram are screened as second coordinate information.
Note that a coordinate point with a depth value of 0 in the first mask map or the second mask map indicates that no valid depth value could be acquired at that position, so only coordinate points with a depth value greater than 0 are used as the first coordinate information and the second coordinate information.
According to the embodiment, the color values in the first mask image and the second mask image are the preset color values, and the coordinates with the depth values larger than the preset depth values are used as the first coordinate information of the target object and the second coordinate information of the target plane, so that the accuracy of the first coordinate information and the second coordinate information is improved.
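The screening rule above (preset color value 255 and depth value greater than 0) can be sketched with a boolean mask; the function name is illustrative:

```python
import numpy as np

def filter_coords(mask, depth, color_value=255):
    """Select pixel coordinates where the mask equals the preset color
    value and the depth value is valid (greater than 0)."""
    valid = (mask == color_value) & (depth > 0)
    vs, us = np.nonzero(valid)   # row (v) and column (u) indices
    return us, vs
```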
As shown in fig. 6, in any of the above embodiments, constructing a second coordinate system according to the second point cloud information includes:
step 602, determining a first plane equation according to the second point cloud information, wherein the first plane equation is matched with the target plane;
step 604, constructing a second coordinate system according to the first plane equation.
In this embodiment, the second point cloud information is the point cloud information of the target plane. A second coordinate system with the target plane as a reference plane is constructed based on the second point cloud information: a plane equation of the target plane is obtained by fitting the second point cloud information, and a coordinate system rotation matrix is generated according to the fitted first plane equation, where the coordinate system rotation matrix rotates the depth camera coordinate system into the plane coordinate system of the target plane.
Specifically, the plane equation of the target plane obtained by fitting is ax+by+cz+d=0, with plane normal vector (a, b, c). The origin of the depth camera is taken as the origin of the second coordinate system, and the normal vector of the target plane is taken as the Z axis of the second coordinate system. Any vector that is not parallel to the normal vector of the target plane is selected, the outer product of this vector and the normal vector is calculated (the result is a vector), and the unit vector (e, f, g) of the outer product is taken as the X axis of the second coordinate system. The outer product of the vector (a, b, c) and the vector (e, f, g) is then calculated, and the unit vector (i, h, j) of this outer product is taken as the Y axis of the second coordinate system, thereby constructing the second coordinate system.
According to the embodiment, a plane equation of the target plane is obtained through fitting, the target plane pointed by the plane equation is used as a reference plane of a second coordinate system, the second coordinate system is constructed, the second coordinate system is the coordinate system taking the target plane as the reference plane, and the size information of the target object can be conveniently determined according to the coordinates of the first point cloud information in the second coordinate system.
As shown in fig. 7, in any of the above embodiments, determining the size information of the target object according to the first point cloud information and the second coordinate system includes:
Step 702, acquiring a coordinate system rotation matrix of a first coordinate system and a second coordinate system;
step 704, determining third coordinate information of the first point cloud information in the second coordinate system through the coordinate system rotation matrix;
step 706, determining the size information of the target object according to the third coordinate information.
In this embodiment, after a second coordinate system with the target plane as a reference plane is constructed, a coordinate system rotation matrix between the first coordinate system and the second coordinate system is determined, and by using the coordinate system rotation matrix, first point cloud information in the first coordinate system can be mapped into the second coordinate system, so as to obtain corresponding third coordinate information, where the third coordinate information is a coordinate point set of the target object in the second coordinate system, and by calculating the third coordinate information, size information of the target object can be determined. Since the size information of the target object is calculated in the second coordinate system taking the target plane as the reference plane, the accuracy of determining the size information of the target object is improved.
Illustratively, the target object is an indoor household product and the target plane is an indoor floor.
Illustratively, each point in the first point cloud information is mapped into the second coordinate system to obtain the third coordinate information by the following formula:
[x_w]   [e  f  g]   [x]
[y_w] = [i  h  j] · [y]
[z_w]   [a  b  c]   [z]

wherein (e, f, g) is the X axis of the second coordinate system, (i, h, j) is the Y axis of the second coordinate system, (a, b, c) is the normal vector of the target plane, (x, y, z) are the coordinates of each point in the first point cloud information, and (x_w, y_w, z_w) are the coordinates of the corresponding point in the third coordinate information.
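The mapping of the first point cloud information into the second coordinate system can be sketched as follows, assuming the three axes of the second coordinate system are available as unit vectors (the function name is illustrative):

```python
import numpy as np

def to_ground_frame(points, x_axis, y_axis, normal):
    """Map camera-frame points into the second coordinate system: the
    rows of the rotation matrix are the frame's X, Y and Z axes,
    i.e. (e, f, g), (i, h, j) and (a, b, c)."""
    R = np.stack([x_axis, y_axis, normal])   # 3x3 rotation matrix
    return points @ R.T                      # (N, 3) third coordinate info
```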
Specifically, by calculating the distance value between the highest coordinate point in the third coordinate information and the target plane, the height value of the target object can be determined. The length value and the width value of the target object can be determined by calculating coordinate points of the outer contour of the target object in the third coordinate information.
In this embodiment, the third coordinate information is obtained by converting the first point cloud information of the target object into the second coordinate system of the target plane, and the size information of the target object is determined according to the third coordinate information in the second coordinate system, thereby improving the accuracy of determining the size information of the target object.
As shown in fig. 8, in any of the above embodiments, the size information includes a height value of the target object; determining size information of the target object according to the third coordinate information, including:
step 802, obtaining a distance value between each coordinate point in the third coordinate information and the target plane;
Step 804, screening a maximum distance value among the plurality of distance values as a height value of the target object.
In this embodiment, the target object is located at the target plane, so the distance value between the highest coordinate point in the target object and the target plane is the height value of the target object. And determining the maximum value of the plurality of distance values as the height value of the target object by calculating the distance value of each coordinate point in the third coordinate information of the target object from the target plane.
Illustratively, each coordinate point in the third coordinate information is (x_w, y_w, z_w). Since the target plane is the plane with Z = 0 in the second coordinate system, i.e. the xOy plane, the z_w of each coordinate point is taken as its distance value, and the maximum z_w among the coordinate points is screened as the height value of the target object.
Illustratively, the distance value between each point and the target plane is calculated by the first point cloud information, specifically, the distance value between each coordinate point and the target plane in the third coordinate information is calculated by the following formula:
s = |ax + by + cz + d| / √(a² + b² + c²);

where s is the distance value, a, b, c and d are the parameters of the target plane, and x, y and z are the coordinates in the first point cloud information. The parameters a, b, c and d of the target plane are obtained by fitting the plane equation to the plane point cloud.
According to the method and the device, the distance value of each coordinate point of the target object from the target plane is calculated, and the maximum distance value in the distance values is determined to be the height of the target object, so that the accuracy of determining the height value of the target object is improved.
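The point-to-plane distance formula and the selection of the maximum distance as the height value can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def height_above_plane(points, a, b, c, d):
    """Height of the object: the maximum distance of the (N, 3)
    camera-frame points to the plane ax+by+cz+d=0, computed as
    s = |ax + by + cz + d| / sqrt(a^2 + b^2 + c^2)."""
    n = np.array([a, b, c], dtype=float)
    s = np.abs(points @ n + d) / np.linalg.norm(n)
    return s.max()
```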
As shown in fig. 9, in any of the above embodiments, the size information includes a width value and a length value of the target object; determining size information of the target object according to the third coordinate information, including:
step 902, determining fourth coordinate information corresponding to the target object according to the third coordinate information, wherein the fourth coordinate information is the coordinate information of the circumscribed rectangle of the target object;
step 904, determining the width value and the length value of the target object according to the fourth coordinate information.
In this embodiment, after determining the third coordinate information of the first point cloud information in the second coordinate system, the fourth coordinate information of the circumscribed rectangle of the target object is obtained, and according to the fourth coordinate information of the circumscribed rectangle of the target object, the width value and the length value of the circumscribed rectangle can be obtained, and the width value and the length value are determined as the width value and the length value of the target object.
The fourth coordinate information is the coordinate information of the minimum circumscribed rectangle of the target object in the second coordinate system.
According to the method, the minimum circumscribed rectangle of the target object is determined, fourth coordinate information of the minimum circumscribed rectangle in the second coordinate system is determined, and the width value and the length value of the target object can be accurately calculated according to the fourth coordinate information. According to the method and the device, the width value and the length value of the minimum circumscribed rectangle of the target object are used as the width value and the length value of the target object, so that the accuracy of the determined width value and length value of the target object is further improved.
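The patent does not fix an algorithm for the minimum circumscribed (bounding) rectangle; a common library choice is OpenCV's `cv2.minAreaRect`. As a self-contained, hedged sketch, the classical result that the minimum-area rectangle shares an edge direction with the convex hull can be exploited directly in NumPy (all function names are hypothetical):

```python
import numpy as np

def convex_hull(pts):
    """Andrew's monotone chain convex hull (counter-clockwise) of 2-D points."""
    pts = sorted(map(tuple, pts))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def min_area_rect_size(pts):
    """(width, length) of the minimum-area bounding rectangle of 2-D points."""
    hull = convex_hull(pts)
    best_area, best_size = np.inf, None
    for i in range(len(hull)):
        ex, ey = hull[(i + 1) % len(hull)] - hull[i]
        u = np.array([ex, ey]) / np.hypot(ex, ey)   # unit vector along a hull edge
        v = np.array([-u[1], u[0]])                 # perpendicular unit vector
        w = np.ptp(hull @ u)                        # extent along the edge
        h = np.ptp(hull @ v)                        # extent across the edge
        if w * h < best_area:
            best_area, best_size = w * h, (min(w, h), max(w, h))
    return best_size
```

Here the object's points would first be projected onto the target plane (their x and y coordinates in the second coordinate system), and the returned pair gives the width value and length value of the target object.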
In one embodiment according to the present application, as shown in fig. 10, a size acquisition apparatus 1000 is proposed, the size acquisition apparatus 1000 including:
the obtaining module 1002 is configured to obtain first point cloud information of a target object in a first coordinate system, and second point cloud information of a target plane in the first coordinate system, where the first coordinate system is a camera coordinate system, and the target object is located on the target plane;
a construction module 1004, configured to construct a second coordinate system according to the second point cloud information, where the second coordinate system is a coordinate system corresponding to the target plane;
a determining module 1006, configured to determine size information of the target object according to the first point cloud information and the second coordinate system.
In this way, the first point cloud information of the target object and the second point cloud information of the target plane are acquired, a second coordinate system with the target plane as the reference plane is constructed, the first point cloud information is converted into the second coordinate system through coordinate system conversion, and the size information of the target object is determined in the second coordinate system. Compared with the prior art, images do not need to be acquired from multiple angles or at a designated angle, so the steps for acquiring the object size by the robot are simplified.
In the above embodiment, the obtaining module 1002 is further configured to obtain a first image and a second image of a target scene, where the first image is a color image, and the second image is a depth image, and the target scene includes a target object and a target plane;
the determining module 1006 is further configured to determine, from the first image, a first mask map and a second mask map, where the first mask map matches the target object, and the second mask map matches the target plane;
a determining module 1006, configured to determine first point cloud information and second point cloud information according to the first mask map, the second mask map, and the second image.
In this embodiment, a first image and a second image of the target scene are acquired, and mask processing is performed on the first image to obtain a first mask map corresponding to the target object and a second mask map corresponding to the target plane. According to the first mask map, the second mask map and the second image including the depth information, together with the depth camera parameters corresponding to the second image, the first point cloud information of the target object and the second point cloud information of the target plane can be determined without manual operation by a user.
In any of the above embodiments, the size acquisition apparatus 1000 includes:
The identification module is used for identifying first image features and second image features in the first image, the first image features are matched with the target object, and the second image features are matched with the target plane;
and the generating module is used for generating a first mask image through the first image features and generating a second mask image through the second image features.
In this embodiment, the first image (a color image) and the second image (a depth image) are obtained through the depth camera, and the first mask map and the second mask map corresponding to the target object and the target plane are obtained through an instance segmentation algorithm. The mask maps are thus identified automatically, the user does not need to select them manually, and the operations required of the user are simplified.
In any of the above embodiments, the size acquisition apparatus 1000 includes:
the screening module is used for screening the first coordinate set in the first mask graph and the second coordinate set in the second mask graph according to the depth value in the second image and a preset rule;
the determining module 1006 is configured to determine first point cloud information according to the first coordinate set and the depth camera parameter, where the depth camera parameter is a parameter of a depth camera that collects the second image, and determine second point cloud information according to the second coordinate set and the depth camera parameter.
In this embodiment, the first point cloud information and the second point cloud information of the target object and the target plane in the depth camera coordinate system can be calculated from the first coordinate set, the second coordinate set and the depth camera parameters, so that the user does not need to control the depth camera to collect image data multiple times, which further simplifies the data collection process.
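The back-projection step described here follows the standard pinhole camera model. As a hedged sketch (function and parameter names are my own; the patent's "preset rule" color check is abstracted into the boolean mask), assuming intrinsics fx, fy and principal point (cx, cy):

```python
import numpy as np

def backproject(mask, depth, fx, fy, cx, cy, depth_scale=1000.0, min_depth=0.0):
    """Back-project masked pixels of a depth image to camera-frame 3-D points.

    mask:  boolean HxW array selecting the object (or plane) pixels,
           i.e. pixels whose mask color equals the preset color value.
    depth: HxW depth image in raw sensor units; depth_scale converts to metres.
    fx, fy, cx, cy: depth camera intrinsics (focal lengths, principal point).
    """
    v, u = np.nonzero(mask)              # pixel rows (v) and columns (u)
    z = depth[v, u] / depth_scale
    keep = z > min_depth                 # preset rule: depth above threshold
    u, v, z = u[keep], v[keep], z[keep]
    x = (u - cx) * z / fx                # pinhole-model back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # N x 3 point cloud in the camera frame
```

Calling this once with the first mask map yields the first point cloud information, and once with the second mask map yields the second point cloud information.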
In any of the above embodiments, the depth camera parameters include at least one of: the scale factor of the depth camera, the coordinates of the principal point of the depth camera in the image coordinate system of the second image.
In this embodiment, through the depth camera parameters, the first point cloud information and the second point cloud information of the target object and the target plane in the camera coordinate system can be determined from the two-dimensional coordinate points in the first mask map and the second mask map.
In any of the above embodiments, the preset rule includes that the depth value is greater than the preset depth value, and the color value is a preset color value.
In this embodiment, the coordinates whose color values in the first mask map and the second mask map equal the preset color value and whose depth values are greater than the preset depth value are used as the first coordinate information of the target object and the second coordinate information of the target plane, which improves the accuracy of the first coordinate information and the second coordinate information.
In any of the foregoing embodiments, the determining module 1006 is configured to determine a first plane equation according to the second point cloud information, where the first plane equation matches the target plane;
a construction module 1004 is configured to construct the second coordinate system according to a first plane equation.
According to the embodiment, a plane equation of the target plane is obtained through fitting, the target plane pointed by the plane equation is used as a reference plane of a second coordinate system, the second coordinate system is constructed, the second coordinate system is the coordinate system taking the target plane as the reference plane, and the size information of the target object can be conveniently determined according to the coordinates of the first point cloud information in the second coordinate system.
In any of the foregoing embodiments, the obtaining module 1002 is configured to obtain a coordinate system rotation matrix of the first coordinate system and the second coordinate system;
a determining module 1006, configured to determine third coordinate information of the first point cloud information in the second coordinate system through the coordinate system rotation matrix;
a determining module 1006, configured to determine size information of the target object according to the third coordinate information.
In this embodiment, the third coordinate information is obtained by converting and projecting the first point cloud information of the target object into the second coordinate system of the target plane. The size information of the target object is then determined according to the third coordinate information in the second coordinate system, which improves the accuracy of determining the size information of the target object.
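One way to obtain the coordinate system rotation matrix is to build an orthonormal frame whose z axis is the fitted plane normal; the axis construction below is my own illustrative choice, not one prescribed by the patent:

```python
import numpy as np

def plane_frame_rotation(normal):
    """Rotation whose rows are the second-coordinate-system axes (z = plane normal).

    R @ p maps a camera-frame point p into the plane frame, so the third
    coordinate of the result is the signed offset along the plane normal.
    """
    z = np.asarray(normal, dtype=float)
    z = z / np.linalg.norm(z)
    # Seed the in-plane x axis with any vector not parallel to the normal.
    seed = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = seed - (seed @ z) * z            # Gram-Schmidt: remove the normal component
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                   # completes the right-handed frame
    return np.stack([x, y, z])
```

Applying this rotation (after subtracting a point on the plane) to the first point cloud information yields the third coordinate information in the second coordinate system.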
In any of the above embodiments, the size information includes a height value of the target object.
An obtaining module 1002, configured to obtain a distance value between each coordinate point in the third coordinate information and the target plane;
and the screening module is used for screening the maximum distance value in the plurality of distance values as the height value of the target object.
According to the method and the device, the distance value of each coordinate point of the target object from the target plane is calculated, and the maximum distance value in the distance values is determined to be the height of the target object, so that the accuracy of determining the height value of the target object is improved.
In any of the foregoing embodiments, the determining module 1006 is configured to determine fourth coordinate information corresponding to the target object according to the third coordinate information, where the fourth coordinate information is coordinate information of a circumscribed rectangle of the target object;
a determining module 1006, configured to determine a width value and a length value of the target object according to the fourth coordinate information.
According to the method, the minimum circumscribed rectangle of the target object is determined, fourth coordinate information of the minimum circumscribed rectangle in the second coordinate system is determined, and the width value and the length value of the target object can be accurately calculated according to the fourth coordinate information. According to the method and the device, the width value and the length value of the minimum circumscribed rectangle of the target object are used as the width value and the length value of the target object, so that the accuracy of the determined width value and length value of the target object is further improved.
In one embodiment according to the present application, as shown in fig. 11, there is provided a size acquisition apparatus including: a processor 1102 and a memory 1104, the memory 1104 having stored therein programs or instructions; the processor 1102 executes a program or instructions stored in the memory 1104 to implement steps of the size acquisition method according to any one of the first aspects, so as to have all the advantages of the size acquisition method according to any one of the first aspects, which will not be described in detail herein.
In an embodiment according to the present application, a readable storage medium is provided, on which a program or an instruction is stored, which when executed by a processor, implements the steps of the size acquisition method as in any of the above-mentioned first aspects. Therefore, the method has all the advantages of the method for obtaining the dimension in any of the above-mentioned first aspects, and will not be described in detail herein.
In one embodiment according to the present application, as shown in fig. 12, a robot 1200 is proposed, comprising: the size acquisition device 1100 as defined in the second aspect or the third aspect and/or the readable storage medium 1202 as defined in the fourth aspect, and it therefore has all the advantageous technical effects of the size acquisition device 1100 of the second aspect or the third aspect and/or the readable storage medium 1202 of the fourth aspect, which will not be repeated here.
In the above technical solution, the robot further includes: a depth camera for acquiring image data, the image data comprising a color image and/or a depth image.
In this technical solution, the depth camera, which is mounted on the robot and moves together with it, collects the image data; the image data may include a depth image and a color image.
It is to be understood that in the claims, specification and drawings of the present invention, the term "plurality" means two or more. Unless otherwise explicitly defined, the orientation or positional relationship indicated by terms such as "upper" and "lower" is based on the orientation or positional relationship shown in the drawings, is used only for convenience of description, and does not indicate or imply that the apparatus or element in question must have the particular orientation described or be constructed and operated in that orientation; such descriptions should therefore not be construed as limiting the present invention. The terms "connected", "mounted", "secured" and the like are to be construed broadly: a connection may, for example, be fixed, removable or integral, and objects may be connected directly or indirectly through an intermediate medium. The specific meaning of these terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the claims, specification, and drawings of the present invention, the descriptions of terms "one embodiment," "some embodiments," "particular embodiments," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In the claims, specification and drawings of the present invention, the schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A size acquisition method, comprising:
acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system, wherein the first coordinate system is a camera coordinate system, and the target object is positioned on the target plane;
Constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane;
and determining the size information of the target object according to the first point cloud information and the second coordinate system.
2. The size acquisition method according to claim 1, wherein the acquiring first point cloud information of the target object in the first coordinate system and second point cloud information of the target plane in the first coordinate system includes:
acquiring a first image and a second image of a target scene, wherein the first image is a color image, the second image is a depth image, and the target scene comprises the target object and the target plane;
determining a first mask map and a second mask map through the first image, wherein the first mask map is matched with the target object, and the second mask map is matched with the target plane;
and determining the first point cloud information and the second point cloud information according to the first mask map, the second mask map and the second image.
3. The method of claim 2, wherein determining a first mask map and a second mask map from the first image comprises:
Identifying a first image feature and a second image feature in the first image, the first image feature being matched to the target object, the second image feature being matched to the target plane;
generating the first mask map through the first image features;
and generating a second mask map through the second image feature.
4. The size acquisition method according to claim 2, wherein the determining the first point cloud information and the second point cloud information from the first mask map, the second mask map, and the second image includes:
screening a first coordinate set in the first mask graph and a second coordinate set in the second mask graph according to a preset rule according to the depth value in the second image;
determining the first point cloud information according to the first coordinate set and a depth camera parameter, wherein the depth camera parameter is a parameter of a depth camera for acquiring the second image;
and determining the second point cloud information according to the second coordinate set and the depth camera parameters.
5. The size acquisition method of claim 4 wherein the depth camera parameters include at least one of: and the scale factor of the depth camera and the coordinates of the principal point of the depth camera in an image coordinate system.
6. The size acquisition method according to claim 5, wherein the preset rule includes: the depth value is greater than a predetermined depth value, and the color value is a predetermined color value.
7. The size acquisition method according to any one of claims 1 to 6, characterized in that the constructing a second coordinate system from the second point cloud information includes:
determining a first plane equation according to the second point cloud information, wherein the first plane equation is matched with the target plane;
and constructing the second coordinate system according to the first plane equation.
8. The size acquisition method according to claim 7, wherein the determining the size information of the target object according to the first point cloud information and the second coordinate system includes:
acquiring a coordinate system rotation matrix of the first coordinate system and the second coordinate system;
determining third coordinate information of the first point cloud information in the second coordinate system through the coordinate system rotation matrix;
and determining the size information of the target object according to the third coordinate information.
9. The size acquisition method according to claim 8, wherein the size information includes a height value of the target object;
The determining the size information of the target object according to the third coordinate information includes:
obtaining a distance value between each coordinate point in the third coordinate information and the target plane;
and screening a maximum distance value in a plurality of distance values as the height value of the target object.
10. The size acquisition method according to claim 8, wherein the size information includes a width value and a length value of the target object;
the determining the size information of the target object according to the third coordinate information includes:
determining fourth coordinate information corresponding to the target object according to the third coordinate information, wherein the fourth coordinate information is coordinate information of a circumscribed rectangle of the target object;
and determining the width value and the length value of the target object according to the fourth coordinate information.
11. A size acquisition apparatus, comprising:
the system comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring first point cloud information of a target object under a first coordinate system and second point cloud information of a target plane under the first coordinate system, the first coordinate system is a camera coordinate system, and the target object is positioned on the target plane;
The construction module is used for constructing a second coordinate system according to the second point cloud information, wherein the second coordinate system is a coordinate system corresponding to the target plane;
and the determining module is used for determining the size information of the target object according to the first point cloud information and the second coordinate system.
12. A size acquisition apparatus, comprising:
a memory having stored thereon programs or instructions;
processor for implementing the steps of the size acquisition method according to any one of claims 1 to 10 when executing said program or instructions.
13. A readable storage medium having stored thereon a program or instructions, which when executed by a processor, implement the steps of the size acquisition method according to any one of claims 1 to 10.
14. A robot, comprising:
the size acquisition device according to claim 11 or 12; or (b)
The readable storage medium of claim 13.
15. The robot of claim 14, further comprising:
a depth camera for acquiring image data, the image data comprising a color image and/or a depth image.
CN202211555133.6A 2022-12-06 2022-12-06 Size acquisition method, device, robot and readable storage medium Pending CN116412756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211555133.6A CN116412756A (en) 2022-12-06 2022-12-06 Size acquisition method, device, robot and readable storage medium

Publications (1)

Publication Number Publication Date
CN116412756A true CN116412756A (en) 2023-07-11

Family

ID=87050354

Country Status (1)

Country Link
CN (1) CN116412756A (en)

Similar Documents

Publication Publication Date Title
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
Herráez et al. 3D modeling by means of videogrammetry and laser scanners for reverse engineering
CN110473221B (en) Automatic target object scanning system and method
Kovalev et al. Texture anisotropy in 3-D images
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN115176274A (en) Heterogeneous image registration method and system
Han et al. Automated monitoring of operation-level construction progress using 4D BIM and daily site photologs
CN113344990B (en) Hole site representation projection system and self-adaptive fitting hole site alignment method
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN111724446B (en) Zoom camera external parameter calibration method for three-dimensional reconstruction of building
CN111950440A (en) Method, device and storage medium for identifying and positioning door
CN116152697A (en) Three-dimensional model measuring method and related device for concrete structure cracks
CN110044358B (en) Mobile robot positioning method based on field line characteristics
CN113837204A (en) Hole shape recognition method, computer equipment and storage medium
CN111260735B (en) External parameter calibration method for single-shot LIDAR and panoramic camera
CN116412756A (en) Size acquisition method, device, robot and readable storage medium
CN111898552A (en) Method and device for distinguishing person attention target object and computer equipment
CN111612844A (en) Three-dimensional laser scanner and camera calibration method based on sector features
CN116612091A (en) Construction progress automatic estimation method based on multi-view matching
CN115717865A (en) Method for measuring full-field deformation of annular structure
CN214410073U (en) Three-dimensional detection positioning system combining industrial camera and depth camera
Hoover et al. Range image segmentation: The user's dilemma
Gerogiannis et al. Fast and efficient vanishing point detection in indoor images
JP2021174216A (en) Facility inspection system, facility inspection method
CN114494431A (en) Beam appearance photographing detection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination