CN111127422B - Image labeling method, device, system and host


Info

Publication number
CN111127422B
Authority
CN
China
Prior art keywords
camera
coordinate system
three-dimensional model
pose
Prior art date
Legal status
Active
Application number
CN201911333503.XA
Other languages
Chinese (zh)
Other versions
CN111127422A (en)
Inventor
王昌龙
付兴银
皮若言
孙斯瑾
李广
Current Assignee
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201911333503.XA
Publication of CN111127422A
Application granted
Publication of CN111127422B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Quality & Reliability
  • Computer Graphics
  • Geometry
  • Software Systems
  • Image Processing

Abstract

The invention provides an image labeling method, device, system and host, and relates to the technical field of image processing. Pose parameters of a three-dimensional model of a target object corresponding to each two-dimensional image are calculated according to the pose transformation relationship between the three-dimensional model and a first camera, where the three-dimensional model is either a model created when the target object was modeled or a model constructed based on three-dimensional point cloud data of the target object. Defect labeling information of the three-dimensional model is obtained and projected onto each two-dimensional image according to the corresponding pose parameters, yielding a labeling result for that image. The method and device can effectively improve labeling efficiency, reduce labeling cost, and better unify the quality of labeling results, thereby helping to improve the defect detection effect for parts.

Description

Image labeling method, device, system and host
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image labeling method, device, system, and host.
Background
Currently, deep learning algorithms are widely used for defect detection on industrial parts, for example detecting scratches on metal parts. In general, a neural network must be trained to detect part defects. During training, labeled images of defective parts serve as the training data, and the quantity and labeling quality of these images directly affect the network's detection performance.
At present, image labeling relies mainly on manual work: every part image to be labeled must be annotated by hand. Labeling images one at a time in this way incurs a high labeling cost and lowers data labeling efficiency; moreover, it is difficult to keep the labeling quality uniform across images, which further degrades detection performance.
Disclosure of Invention
In view of the above, the present invention aims to provide an image labeling method, device, system and host that can effectively improve labeling efficiency, reduce labeling cost, and better unify the quality of labeling results, thereby helping to improve the defect detection effect for parts.
To achieve the above object, the technical solutions adopted by the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides an image labeling method, where the method is performed by a host connected to a first camera, and the method includes: acquiring, through the first camera, a plurality of to-be-labeled two-dimensional images of a target object from different angles; calculating pose parameters of a three-dimensional model of the target object corresponding to each two-dimensional image according to a pose transformation relationship between the three-dimensional model and the first camera, where the three-dimensional model is a model created when the target object was modeled or a model constructed based on three-dimensional point cloud data of the target object; obtaining defect labeling information of the three-dimensional model; and projecting the defect labeling information of the three-dimensional model onto the two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each two-dimensional image, to obtain labeling results of the two-dimensional images.
Further, the step of calculating pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model and the first camera includes: determining pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional images.
Further, the first camera is mounted at the end of the mechanical arm; the host is also connected to a second camera, the second camera is used to acquire three-dimensional point cloud data of the target object, and the second camera is a depth camera; the pose transformation relationship between the three-dimensional model of the target object and the first camera is determined in advance by: obtaining pose parameters of the checkerboard corner points in a first camera coordinate system, a world coordinate system, and a second camera coordinate system, respectively, using a checkerboard calibration method; calibrating a second pose transformation relationship between the first camera coordinate system and a mechanical arm coordinate system based on a first pose transformation relationship preset between the mechanical arm coordinate system and the world coordinate system and the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system; calibrating a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system; calibrating a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system; and determining the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third, and fourth pose transformation relationships.
Further, if the three-dimensional model is a model created when the target object was modeled, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system together with the first and second pose transformation relationships, or based on those pose parameters together with the third and fourth pose transformation relationships.
Further, if the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship; if the three-dimensional model is a model created when the target object was modeled, the method further comprises: registering the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain the pose parameters of the three-dimensional model in the second camera coordinate system; the pose transformation relationship between the three-dimensional model and the first camera is then determined based on those pose parameters and the third pose transformation relationship.
Further, the defect labeling information comprises defect positions and defect types; the step of projecting the defect labeling information of the three-dimensional model onto the two-dimensional image to obtain a labeling result of the two-dimensional image comprises the following steps: projecting a defect position of the three-dimensional model onto the two-dimensional image; determining a defect boundary box according to the projection position of the defect position on the two-dimensional image; and marking the defect boundary box and the defect type on the two-dimensional image to obtain a marking result of the two-dimensional image.
Further, the step of obtaining defect labeling information of the three-dimensional model includes: and generating defect labeling information of the three-dimensional model according to the triangular patches selected by the user in the three-dimensional model and the labeling information added to the triangular patches.
In a second aspect, an embodiment of the present invention further provides an image labeling apparatus, where the apparatus is applied to a host connected to a first camera, and the apparatus includes: an image acquisition module, configured to acquire, through the first camera, a plurality of to-be-labeled two-dimensional images of a target object from different angles; a pose parameter calculation module, configured to calculate pose parameters of a three-dimensional model of the target object corresponding to each two-dimensional image according to a pose transformation relationship between the three-dimensional model and the first camera, where the three-dimensional model is a model created when the target object was modeled or a model constructed based on three-dimensional point cloud data of the target object; a defect labeling information acquisition module, configured to acquire defect labeling information of the three-dimensional model; and an image labeling module, configured to project the defect labeling information of the three-dimensional model onto the two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each two-dimensional image, to obtain labeling results of the two-dimensional images.
In a third aspect, an embodiment of the present invention provides a host, including: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring a two-dimensional image; the storage means has stored thereon a computer program which, when executed by the processor, performs the method according to any of the first aspects.
In a fourth aspect, an embodiment of the present invention provides an image labeling system, where the system includes a host according to the third aspect, and further includes a first camera connected to the host.
Further, the first camera is mounted at the end of the mechanical arm; the host is also connected to a second camera, which is a depth camera.
In a fifth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the first aspects described above.
The embodiment of the invention provides an image labeling method, device, system and host. A plurality of to-be-labeled two-dimensional images of a target object are acquired from different angles through a first camera; pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image are then calculated according to the pose transformation relationship between the three-dimensional model and the first camera, and the obtained defect labeling information of the three-dimensional model is projected onto the two-dimensional images based on these pose parameters to obtain the labeling results of the two-dimensional images. Compared with the manual labeling mode in the prior art, the approach of this embodiment only needs the defect labeling information of the three-dimensional model to be obtained once; that information can then be mapped onto a large number of to-be-labeled two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves labeling efficiency and reduces labeling cost. Moreover, the uniform quality of the three-dimensional model's defect labeling information carries over to the labeling results, helping to improve the defect detection effect for parts.
Additional features and advantages of the invention will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the technology of the disclosure.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 2 shows an application scenario schematic diagram of an image labeling method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image labeling method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram showing a calibration process of a pose transformation relationship according to an embodiment of the present invention;
Fig. 5 shows a block diagram of an image labeling apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The existing manual image labeling approach suffers from high labeling cost, low labeling efficiency, and non-uniform quality across labeled images, which in turn degrades the defect detection effect for parts. To improve on at least one of these problems, embodiments of the present invention provide an image labeling method, device, system, and host that can effectively improve labeling efficiency, reduce labeling cost, and better unify the quality of labeling results, thereby helping to improve the defect detection effect for parts. The technique can be applied in industries such as electronics, chemical engineering, and aerospace to implement part defect detection. For ease of understanding, embodiments of the present invention are described in detail below.
Embodiment one:
First, an exemplary electronic device 100 for implementing the image labeling method and apparatus according to the embodiments of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, and that the electronic device may have some of the components shown in fig. 1 or may have other components and structures not shown in fig. 1, as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement client functions and/or other desired functions in the embodiments of the present invention described below. Various applications and data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture images (e.g., photographs, videos, etc.) desired by the user and store the captured images in the storage device 104 for use by other components.
For example, an exemplary electronic device for implementing an image labeling method and apparatus according to an embodiment of the present invention may be implemented on an intelligent terminal such as a tablet computer, a computer, and a server.
Embodiment two:
First, to facilitate understanding, this embodiment presents an image labeling system and illustrates a practical application scenario of the image labeling method. Referring to fig. 2, the image labeling system includes a host, and a two-dimensional image acquisition device and a three-dimensional data acquisition device connected to the host; for convenience of description, the two-dimensional image acquisition device is also referred to as the first camera, and the three-dimensional data acquisition device as the second camera. In practical applications, the first camera may be a monocular camera, a binocular camera, or a depth camera; to improve its flexibility during image acquisition, the first camera can be mounted on the flange at the end of the mechanical arm. Considering cost and control complexity together, the first camera may be a monocular camera. The second camera is typically a depth camera. In this embodiment, the first camera mainly collects two-dimensional images of a target object (such as an industrial part) from different angles and sends them to the host; the second camera mainly collects three-dimensional point cloud data of the target object and sends it to the host; and the host, for example a computer or a server, reconstructs the three-dimensional model from the point cloud data, obtains manually provided defect labeling information for the model, and projects the defects onto the two-dimensional images for output.
When the first camera is also a depth camera, the first and second cameras may be the same device. However, considering the different requirements on shooting distance, scanning accuracy, and the like for two-dimensional images and three-dimensional point cloud data in practical applications, the first and second cameras are generally configured as different depth cameras.
Referring to the flowchart of an image labeling method shown in fig. 3, the method can be applied to various devices used in industry for part defect detection, such as the image labeling system described above. In combination with that system, the image labeling method can be executed by the host connected to the first camera; referring to fig. 3, the method specifically includes the following steps S302 to S308:
In step S302, a plurality of to-be-labeled two-dimensional images of the target object are acquired from different angles through the first camera. The target object is, for example, an industrial part.
In general, the first camera is moved by the mechanical arm so as to acquire a plurality of to-be-labeled two-dimensional images of the target object from different angles. In addition, when a two-dimensional image is captured, the parameters of the first camera can be recorded at the same time, including its intrinsic parameters (such as the focal length and distortion coefficients) and its extrinsic parameters; the extrinsic parameters describe the position and orientation of the first camera in three-dimensional space, characterizing the motion of the first camera in a static scene or the rigid motion of the target object when the first camera is stationary.
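To make the roles of these parameters concrete, the following minimal Python sketch applies the standard pinhole model that underlies the projections used throughout this embodiment; the matrix values are illustrative placeholders, not parameters of any actual first camera.

    import numpy as np

    # Intrinsic matrix K (focal lengths fx, fy and principal point cx, cy); placeholder values.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    # Extrinsic parameters: rotation R and translation t taking world coordinates to camera coordinates.
    R = np.eye(3)
    t = np.array([0.0, 0.0, 0.5])

    def project(point_world):
        """Project a 3D point given in the world frame to pixel coordinates."""
        p_cam = R @ point_world + t   # world frame -> camera frame (extrinsic parameters)
        uvw = K @ p_cam               # camera frame -> homogeneous pixel coordinates (intrinsic parameters)
        return uvw[:2] / uvw[2]       # perspective division

    print(project(np.array([0.1, 0.05, 1.0])))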
Step S304, according to the pose transformation relation between the three-dimensional model of the target object and the first camera, calculating pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image; the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object.
Step S306, defect labeling information of the three-dimensional model is obtained. In practical application, the defect labeling information is manually labeled by a user, and the defect labeling information of the three-dimensional model can be generally generated according to the triangular patches selected by the user in the three-dimensional model and the labeling information added to the triangular patches.
Step S308, projecting the defect labeling information of the three-dimensional model onto the two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each two-dimensional image, to obtain the labeling result of each two-dimensional image.
In this embodiment, the defect labeling information includes defect positions and defect types. A defect position is the location of a defect on the three-dimensional model; defect types include, for example, scratches and bruises on metal parts, and black spots, missing material, and shrinkage defects on finished plastic products. In one way of obtaining the labeling result, the defect position of the three-dimensional model is first projected onto the two-dimensional image; a defect bounding box is then determined from the projected position on the image; finally, the defect bounding box and the defect type are labeled on the two-dimensional image to obtain its labeling result.
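As a hedged sketch of this projection-and-bounding-box step in Python with OpenCV, assuming the defect position is available as a set of three-dimensional points already expressed in the first camera coordinate system (the point values and the defect type string are illustrative only):

    import numpy as np
    import cv2

    # Assumed input: 3D defect points already expressed in the first camera coordinate system.
    defect_points_cam = np.array([[0.01, 0.02, 0.50],
                                  [0.03, 0.02, 0.51],
                                  [0.02, 0.04, 0.50]])
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # distortion assumed negligible in this sketch

    # Points are already in the camera frame, so the rotation and translation are zero.
    pixels, _ = cv2.projectPoints(defect_points_cam, np.zeros(3), np.zeros(3), K, dist)
    pixels = pixels.reshape(-1, 2)

    # Defect bounding box from the projected defect positions.
    x_min, y_min = pixels.min(axis=0)
    x_max, y_max = pixels.max(axis=0)
    bbox = (int(x_min), int(y_min), int(x_max), int(y_max))
    defect_type = "scratch"  # the defect type labeled alongside the box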
According to the image labeling method provided by the embodiment of the present invention, a plurality of to-be-labeled two-dimensional images of a target object are acquired from different angles through the first camera; pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image are then calculated according to the pose transformation relationship between the three-dimensional model and the first camera, and the obtained defect labeling information of the three-dimensional model is projected onto the two-dimensional images based on these pose parameters to obtain the labeling results of the two-dimensional images. Compared with the manual labeling mode in the prior art, the approach of this embodiment only needs the defect labeling information of the three-dimensional model to be obtained once; that information can then be mapped onto a large number of to-be-labeled two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves labeling efficiency and reduces labeling cost. Moreover, the uniform quality of the three-dimensional model's defect labeling information carries over to the labeling results, helping to improve the defect detection effect for parts.
In another scenario for implementing the image labeling method, the host may further be connected to a second camera for acquiring three-dimensional point cloud data of the target object; the second camera is typically a depth camera. The depth camera captures an RGB image of the target object together with per-pixel depth; from each pixel's coordinates in the pixel coordinate system, its depth value, and the depth camera's intrinsic parameters, the pixel's coordinates in the second camera coordinate system (also called the depth camera coordinate system) can be computed. The resulting points for all pixels constitute the three-dimensional point cloud data of the target object.
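A minimal numpy sketch of this back-projection, assuming a depth map aligned with the RGB image and the depth camera's intrinsic matrix K; this is one common formulation, not necessarily the exact computation used by the second camera's driver:

    import numpy as np

    def depth_to_point_cloud(depth, K):
        """Back-project a depth map (in meters) into the second (depth) camera coordinate system."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        v, u = np.indices(depth.shape)     # v: row index, u: column index of each pixel
        z = depth
        x = (u - cx) * z / fx              # pinhole back-projection along the x axis
        y = (v - cy) * z / fy              # and along the y axis
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]    # discard pixels with no valid depth reading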
In one embodiment, the three-dimensional model may be constructed from the three-dimensional point cloud data of the target object as follows: first, the point cloud data are stitched and denoised; the processed point cloud is then rendered to obtain the three-dimensional model of the target object. The three-dimensional model is a three-dimensional mesh model that reconstructs the complete surface contour of the target object, so a user can select triangular patches at the corresponding positions on the mesh according to the actual positions of surface defects, thereby labeling the defect type and the three-dimensional coordinate region of each defect (i.e., the defect position) on the model.
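The embodiment does not name a specific toolchain for the stitching, denoising, and rendering steps; the following sketch uses Open3D (an assumed choice), with statistical outlier removal followed by Poisson surface reconstruction, as one plausible realization, and the file names are purely illustrative:

    import open3d as o3d

    # Assumed input file: the stitched scans of the target object.
    pcd = o3d.io.read_point_cloud("target_object.ply")

    # Denoising: remove statistical outliers from the merged point cloud.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Surface reconstruction into a triangle mesh whose triangular patches
    # can later be selected by the user for defect labeling.
    pcd.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    o3d.io.write_triangle_mesh("target_object_mesh.ply", mesh)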
It is to be understood that, when the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, the above-described step of constructing the three-dimensional model is performed before the above-described step of calculating pose parameters of the three-dimensional model of the target object corresponding to the respective two-dimensional images.
Of course, the three-dimensional model may instead be a model created when the target object was modeled, for example a model obtained directly from the target object's design drawings.
Whether the three-dimensional model is a model created when the target object was modeled or a model constructed based on the target object's three-dimensional point cloud data, for a fixed, pre-built image labeling system in practical applications (as shown in fig. 2), the pose transformation relationship between the three-dimensional model and the first camera in step S304 is usually predetermined, and the predetermined relationship can be applied directly in the image labeling method. For ease of understanding, this embodiment provides a method for determining the pose transformation relationship between the three-dimensional model and the first camera, referring to steps (1) to (5) below:
Step (1): acquire pose parameters of the checkerboard corner points in the first camera coordinate system, the robot world coordinate system (hereinafter the world coordinate system), and the second camera coordinate system, respectively, using a checkerboard calibration method. Referring to fig. 4, based on these corner pose parameters, the determination of the pose transformation relationship between the three-dimensional model and the first camera can be divided into two branches; the first branch calibrates the mechanical arm coordinate system against the first camera coordinate system, see step (2) below.
Step (2): calibrate a second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system based on a first pose transformation relationship preset between the mechanical arm coordinate system and the world coordinate system and the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system. In a specific implementation, the intrinsic calibration of the first camera can first be carried out by a conventional method, such as Zhang's calibration method. Next, with the intrinsically calibrated first camera, several groups of checkerboard corner pose parameters in the first camera coordinate system and the world coordinate system are collected by moving the mechanical arm multiple times. The first pose transformation relationship preset between the mechanical arm coordinate system and the world coordinate system is then read directly from the mechanical arm, and, taking this relationship and the collected groups of corner pose parameters as constraints, the second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system is calibrated by solving an optimization equation.
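A hedged OpenCV sketch of this two-stage procedure: Zhang's method for the intrinsic calibration, then cv2.calibrateHandEye to solve the second pose transformation relationship from paired robot and checkerboard poses. The function name and inputs are illustrative; in particular, the robot poses are assumed to be read from the arm controller as described above, and views where corner detection fails are assumed to be filtered from both lists.

    import numpy as np
    import cv2

    def calibrate_first_camera(images, board_size, square_size, R_gripper2base, t_gripper2base):
        """Intrinsics via Zhang's method, then eye-in-hand calibration.

        images: grayscale checkerboard views captured at several arm positions.
        R_gripper2base, t_gripper2base: matching end-effector poses read from the
        robot controller, aligned one-to-one with the successful detections.
        """
        objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

        obj_pts, img_pts = [], []
        for img in images:
            found, corners = cv2.findChessboardCorners(img, board_size)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        # Zhang's method: intrinsic matrix K, distortion coefficients, and the
        # checkerboard pose in the first camera frame for every view.
        _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, images[0].shape[::-1], None, None)

        # Hand-eye calibration: solves the camera pose relative to the arm end,
        # i.e. the second pose transformation relationship.
        R_target2cam = [cv2.Rodrigues(r)[0] for r in rvecs]
        R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
            R_gripper2base, t_gripper2base, R_target2cam, list(tvecs))
        return K, dist, R_cam2gripper, t_cam2gripper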
The second branch calibrates the second camera coordinate system against the world coordinate system. This calibration can be implemented either in the first manner shown in step (3) below or in the second manner shown in step (4).
Manner one, step (3): calibrate a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in those two coordinate systems. Specifically, these corner pose parameters can be used as constraints to solve for the third pose transformation relationship. It will be appreciated that once the first, second, and third pose transformation relationships are determined, the transformation between the second camera coordinate system and the world coordinate system can be derived from them and used as the fourth pose transformation relationship; more generally, when any three of the four pose transformation relationships are determined, the remaining one can be derived from the other three.
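One standard way to solve this constrained fit is an SVD-based rigid alignment (the Kabsch/Umeyama method) of the corresponding corner coordinates; the following numpy sketch is one possible solver, not necessarily the optimization equation actually used in the patent:

    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rotation R and translation t such that Q ~ R @ P + t.

        P: Nx3 checkerboard corner points in the first camera coordinate system.
        Q: the same corner points in the second camera coordinate system.
        """
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:    # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cQ - R @ cP
        return R, t                 # the third pose transformation relationship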
Manner two, step (4): calibrate a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in those two coordinate systems.
Of course, the pose transformation relationships between the above coordinate systems are merely exemplary and should not be considered limiting. Moreover, based on the four pose transformation relationships above, further relationships between coordinate systems can be derived; for example, a fifth pose transformation relationship between the first camera coordinate system and the world coordinate system can be determined from the third and fourth pose transformation relationships.
Step (5): determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third, and fourth pose transformation relationships. Since there are multiple pose transformation relationships among the coordinate systems, the relationship between the three-dimensional model and the first camera can be determined in multiple ways, and the resulting relationship can likewise take multiple forms, so a user can flexibly choose according to actual requirements in different application scenarios.
In this embodiment, if the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, since the three-dimensional point cloud data constructing the three-dimensional model is directly acquired by the second camera, the pose transformation relationship between the three-dimensional model and the first camera may be determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship.
If the three-dimensional model is a model created when the target object was modeled, then since such a model is usually built in the world coordinate system and pose transformation relationships exist between the coordinate systems, the pose transformation relationship between the three-dimensional model and the first camera can be determined in several ways, for example the following three examples:
Example one: the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system together with the first and second pose transformation relationships. In practical application, the pose parameters of the three-dimensional model in the world coordinate system are first converted into pose parameters in the mechanical arm coordinate system according to the first pose transformation relationship, and then into pose parameters in the first camera coordinate system according to the second pose transformation relationship, thereby realizing the pose transformation between the three-dimensional model and the first camera.
Example two: the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system together with the third and fourth pose transformation relationships. In practical application, the pose parameters of the three-dimensional model in the world coordinate system are first converted into pose parameters in the second camera coordinate system according to the fourth pose transformation relationship, and then into pose parameters in the first camera coordinate system according to the third pose transformation relationship, thereby realizing the pose transformation between the three-dimensional model and the first camera.
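Expressed with 4x4 homogeneous matrices, both examples reduce to composing the calibrated relations; a minimal sketch with illustrative matrix names, where identity placeholders stand in for the real calibrated values:

    import numpy as np

    def to_homogeneous(R, t):
        """Pack a rotation matrix and translation vector into a 4x4 transform."""
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    # Placeholder calibrated relations (identity here; real values come from steps (1) to (5)).
    T_arm_world = T_cam1_arm = T_cam2_world = T_cam1_cam2 = np.eye(4)
    T_world_model = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 1.0]))  # model pose in the world frame

    # Example one: world -> mechanical arm (first relation), then arm -> first camera (second relation).
    T_cam1_model = T_cam1_arm @ T_arm_world @ T_world_model

    # Example two: world -> second camera (fourth relation), then second camera -> first camera (third relation).
    T_cam1_model = T_cam1_cam2 @ T_cam2_world @ T_world_model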
Considering that both the three-dimensional point cloud data acquired by the second camera and the design-time three-dimensional model describe the target object and have the same dimensionality (both are three-dimensional), this embodiment may further provide the following: registering the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain the pose parameters of the three-dimensional model in the second camera coordinate system. Registration can be understood as transforming the model's point cloud into the coordinate system of the scanned point cloud; since the scanned data are acquired directly by the second camera, the registration yields the model's pose parameters in the second camera coordinate system.
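A sketch of this registration using Open3D's point-to-point ICP, under the assumptions that Open3D is the chosen library (the embodiment does not specify one), that a rough initial alignment is available, and that the file names are illustrative:

    import numpy as np
    import open3d as o3d

    model_pcd = o3d.io.read_point_cloud("design_model_points.ply")  # points sampled from the 3D model
    scan_pcd = o3d.io.read_point_cloud("second_camera_scan.ply")    # cloud acquired by the depth camera

    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scan_pcd,
        max_correspondence_distance=0.01,  # meters; tune to the scan resolution
        init=np.eye(4),                    # rough initial alignment assumed available
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # result.transformation is the 4x4 pose of the model in the second camera
    # coordinate system, i.e. the registered pose parameters described above.
    T_cam2_model = result.transformation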
Based on the pose parameters of the registered three-dimensional model, the following example three can be provided: the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship. In practical application, the pose parameters of the three-dimensional model in the second camera coordinate system are directly converted into pose parameters in the first camera coordinate system according to the third pose transformation relationship, thereby realizing the pose transformation between the three-dimensional model and the first camera.
After the pose transformation relationship between the three-dimensional model of the target object and the first camera has been determined in the above manner, the pose parameters of the three-dimensional model corresponding to each two-dimensional image can be calculated with reference to the following embodiment: determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model and the first camera and the perspective projection relationship between the first camera and the two-dimensional images.
The pose parameters of the three-dimensional model corresponding to each two-dimensional image can be determined specifically by the following steps. Step A: according to the pose transformation relationship between the three-dimensional model of the target object and the first camera, convert the pose parameters of the three-dimensional model in the second camera coordinate system or the world coordinate system into first pose parameters in the first camera coordinate system. Step B: according to the perspective projection relationship between the first camera and the two-dimensional image, generate second pose parameters, in the first camera coordinate system, from the true pose parameters of each pixel on the two-dimensional image, and obtain a matching result between the second pose parameters and the first pose parameters. Step C: determine the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the matching result.
Alternatively, in another implementation, step B may be replaced by the following step B' to obtain the matching result. Step B': according to the perspective projection relationship between the first camera and the two-dimensional image, generate third pose parameters by projecting the first pose parameters onto the two-dimensional image, and obtain a matching result between the third pose parameters and the true pose parameters of each pixel on the two-dimensional image.
In the above embodiments of determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image, the pose parameters of a pixel can be understood as its horizontal and vertical pixel coordinates.
In addition, the reconstructed three-dimensional model, the various pose transformation relationships, the manually labeled defect positions on the model, and the like may each introduce some error, so the defect bounding box in the projected labeling result may deviate somewhat; such deviation can be judged directly by observing the positional offset between the labeling result on the two-dimensional image and the true defect in that image. In this case, for a labeling result with errors, a reminder can be generated to prompt the user to adjust the labeled defect position on the three-dimensional model; projection is then performed again based on the adjusted defect labeling information to obtain new labeling results for the two-dimensional images. It can be understood that the adjusted defect labeling information acts directly on all the two-dimensional images, so as its quality improves, the new labeling results for all the two-dimensional images improve as a whole, ensuring uniform labeling quality.
In summary, the image labeling approach provided by this embodiment only requires the defect labeling information of the three-dimensional model to be obtained once; that information can be mapped onto a large number of to-be-labeled two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves labeling efficiency and reduces labeling cost. Moreover, the uniform quality of the three-dimensional model's defect labeling information carries over to the labeling results, helping to improve the defect detection effect for parts.
Embodiment III:
Referring to fig. 5, a block diagram of an image labeling apparatus is shown; the apparatus is applied to a host connected to a first camera, and includes:
the image acquisition module 502 acquires a plurality of two-dimensional images to be annotated of different angles of the target object by using the first camera.
The pose parameter calculation module 504 is configured to calculate pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to a pose transformation relationship between the three-dimensional model of the target object and the first camera; the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object.
The defect labeling information obtaining module 506 is configured to obtain defect labeling information of the three-dimensional model.
The image labeling module 508 is configured to project defect labeling information of the three-dimensional model onto the two-dimensional image according to pose parameters of the three-dimensional model corresponding to each two-dimensional image, so as to obtain a labeling result of the two-dimensional image.
According to the image labeling apparatus provided by the embodiment of the present invention, a plurality of to-be-labeled two-dimensional images of a target object are acquired from different angles through the first camera; pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image are then calculated according to the pose transformation relationship between the three-dimensional model and the first camera, and the obtained defect labeling information of the three-dimensional model is projected onto the two-dimensional images based on these pose parameters to obtain the labeling results of the two-dimensional images. Compared with the manual labeling mode in the prior art, the approach of this embodiment only needs the defect labeling information of the three-dimensional model to be obtained once; that information can then be mapped onto a large number of to-be-labeled two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves labeling efficiency and reduces labeling cost. Moreover, the uniform quality of the three-dimensional model's defect labeling information carries over to the labeling results, helping to improve the defect detection effect for parts.
In some embodiments, the pose parameter calculation module 504 is further configured to determine the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional images.
In some embodiments, the host is further connected to a second camera, where the second camera is used to obtain three-dimensional point cloud data of the target object and is a depth camera. The image labeling apparatus further includes a pose transformation relationship determination module (not shown in the figure), configured to: obtain pose parameters of the checkerboard corner points in the first camera coordinate system, the world coordinate system, and the second camera coordinate system, respectively, using a checkerboard calibration method; calibrate a second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system based on a first pose transformation relationship preset between the mechanical arm coordinate system and the world coordinate system and the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system; calibrate a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in those two coordinate systems; calibrate a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in those two coordinate systems; and determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third, and fourth pose transformation relationships.
In some embodiments, if the three-dimensional model is a model created when the target object was modeled, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system together with the first and second pose transformation relationships, or based on those pose parameters together with the third and fourth pose transformation relationships.
In some embodiments, if the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship. If the three-dimensional model is a model created when the target object was modeled, the image labeling apparatus further includes a registration module (not shown), configured to register the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain the pose parameters of the three-dimensional model in the second camera coordinate system; the pose transformation relationship between the three-dimensional model and the first camera is then determined based on those pose parameters and the third pose transformation relationship.
In some embodiments, the defect labeling information includes defect positions and defect types; the image annotation module 508 is further configured to: projecting the defect position of the three-dimensional model onto a two-dimensional image; determining a defect boundary box according to the projection position of the defect position on the two-dimensional image; and marking the defect boundary box and the defect type on the two-dimensional image to obtain a marking result of the two-dimensional image.
In some embodiments, the defect labeling information obtaining module 506 is further configured to: and generating defect labeling information of the three-dimensional model according to the triangular patch selected by the user in the three-dimensional model and the labeling information added to the triangular patch.
The device provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for brevity, reference may be made to the corresponding contents of the second embodiment.
Embodiment four:
Based on the foregoing embodiments, this embodiment provides a host, which includes: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring a two-dimensional image; the storage device has stored thereon a computer program which, when executed by a processor, performs the image annotation method as provided in embodiment two.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not described herein again.
Further, the present embodiment also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processing device, performs the steps of the image labeling method provided in the second embodiment.
Embodiments of the present invention further provide a computer program product for the method, apparatus, system, and host described above, comprising a computer-readable storage medium storing program code; the instructions included in the program code can be used to perform the image labeling method of the foregoing method embodiments, and specific implementations can be found in the method embodiments and are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention, and are not intended to limit the scope of the present invention, but it should be understood by those skilled in the art that the present invention is not limited thereto, and that the present invention is described in detail with reference to the foregoing examples: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. An image labeling method, the method being performed by a host connected to a first camera, the method comprising:
Acquiring a plurality of two-dimensional images to be marked of different angles of a target object through the first camera;
according to the pose transformation relation between the three-dimensional model of the target object and the first camera, pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image are calculated; the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object;
Obtaining defect labeling information of the three-dimensional model;
Projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to pose parameters of the three-dimensional model corresponding to each two-dimensional image to obtain a labeling result of the two-dimensional image;
The host is further connected with a second camera, and the second camera is used for acquiring three-dimensional point cloud data of the target object; the pose transformation relationship between the three-dimensional model of the target object and the first camera is determined in advance by:
Obtaining pose parameters of the checkerboard corner under a first camera coordinate system, a world coordinate system and a second camera coordinate system respectively by adopting a checkerboard calibration method;
determining a pose transformation relationship between the three-dimensional model of the target object and the first camera based on pose parameters of the checkerboard corner under the first camera coordinate system, the robot world coordinate system and the second camera coordinate system respectively;
The first camera is arranged at the tail end of the mechanical arm; the second camera is a depth camera;
The pose transformation relationship between the three-dimensional model of the target object and the first camera is determined in advance by:
Calibrating a second pose transformation relation between a first camera coordinate system and a mechanical arm coordinate system based on a first pose transformation relation preset between the mechanical arm coordinate system and the world coordinate system and pose parameters of the checkerboard corner points under the first camera coordinate system and the world coordinate system;
Calibrating a third pose transformation relation between the first camera coordinate system and the second camera coordinate system according to pose parameters of the checkerboard corner under the first camera coordinate system and the second camera coordinate system;
Calibrating a fourth pose transformation relation between the second camera coordinate system and the world coordinate system according to pose parameters of the checkerboard corner under the second camera coordinate system and the world coordinate system;
a pose transformation relationship between the three-dimensional model of the target object and the first camera is determined based on at least one of the first pose transformation relationship, the second pose transformation relationship, the third pose transformation relationship, and the fourth pose transformation relationship.
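For illustration only, and not part of the claims: a minimal Python sketch of one way the checkerboard calibration and transform chain above could be realized. It assumes OpenCV and NumPy, writes T_a_b for the 4x4 homogeneous transform mapping frame b into frame a, and treats the pattern size, square size, intrinsics K, and the preset arm-to-world transform as example inputs rather than values taken from the patent.

    import cv2
    import numpy as np

    def checkerboard_pose(gray_image, pattern=(9, 6), square=0.02, K=None, dist=None):
        # Pose of the checkerboard in the observing camera's frame via solvePnP.
        found, corners = cv2.findChessboardCorners(gray_image, pattern)
        if not found:
            raise RuntimeError("checkerboard not detected")
        # 3-D corner coordinates in the board's own frame (z = 0 plane)
        obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
        _, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
        T = np.eye(4)
        T[:3, :3] = cv2.Rodrigues(rvec)[0]
        T[:3, 3] = tvec.ravel()
        return T  # T_cam_board

    # Second relation (first camera <-> mechanical arm), from the preset first
    # relation T_world_arm and the board pose known in both frames:
    #   T_arm_cam1 = inv(T_world_arm) @ T_world_board @ inv(T_cam1_board)
    # Third relation (first camera <-> second camera), from the board seen by both:
    #   T_cam1_cam2 = T_cam1_board @ inv(T_cam2_board)
    # Fourth relation (second camera <-> world):
    #   T_cam2_world = T_cam2_board @ inv(T_world_board)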
2. The method according to claim 1, wherein the step of calculating the pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera comprises:
determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional images.
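A hedged sketch of the combination in claim 2: the model-to-camera pose transform followed by the pinhole (perspective) projection of the first camera. K is an assumed, pre-calibrated 3x3 intrinsic matrix, and the function name is this example's own.

    import numpy as np

    def project_model_point(X_model, T_cam1_model, K):
        # Transform a 3-D model point into the first camera frame, then apply
        # the perspective projection to get pixel coordinates in the 2-D image.
        X_cam = (T_cam1_model @ np.append(X_model, 1.0))[:3]
        u, v, w = K @ X_cam
        return np.array([u / w, v / w])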
3. The method according to claim 1, wherein:
if the three-dimensional model is a model used when modeling the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on pose parameters of the three-dimensional model in the world coordinate system, the first pose transformation relationship, and the second pose transformation relationship, or based on pose parameters of the three-dimensional model in the world coordinate system, the third pose transformation relationship, and the fourth pose transformation relationship.
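Under the same assumed T_a_b notation as above, claim 3's two alternative routes to the model-to-first-camera transform might look like this (a sketch, not the patented computation):

    import numpy as np

    def model_to_cam1_via_arm(T_world_model, T_world_arm, T_arm_cam1):
        # Route 1: world-frame model pose + first and second relations.
        return np.linalg.inv(T_world_arm @ T_arm_cam1) @ T_world_model

    def model_to_cam1_via_cam2(T_world_model, T_cam1_cam2, T_cam2_world):
        # Route 2: world-frame model pose + third and fourth relations.
        return T_cam1_cam2 @ T_cam2_world @ T_world_model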
4. The method according to claim 1, wherein:
if the three-dimensional model is a model constructed based on the three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship;
if the three-dimensional model is a model used when modeling the target object, the method further comprises:
registering point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain the pose parameters of the three-dimensional model in the second camera coordinate system; and
the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship.
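One way to realize the registration step of claim 4 is point-to-point ICP; the sketch below uses Open3D for that purpose, with the file names and the 5 mm correspondence threshold as assumptions of this example (the claim does not mandate a particular registration algorithm).

    import open3d as o3d

    model = o3d.io.read_point_cloud("model_points.ply")  # point cloud sampled from the 3-D model
    scan = o3d.io.read_point_cloud("depth_scan.ply")     # point cloud from the second (depth) camera

    result = o3d.pipelines.registration.registration_icp(
        model, scan, 0.005,  # 5 mm max correspondence distance (assumed)
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    T_cam2_model = result.transformation  # model pose in the second camera coordinate system
    # Combined with the third relation: T_cam1_model = T_cam1_cam2 @ T_cam2_model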
5. The method according to claim 1, wherein the defect labeling information comprises a defect position and a defect type;
and the step of projecting the defect labeling information of the three-dimensional model onto the two-dimensional image to obtain the labeling result of the two-dimensional image comprises:
projecting the defect position of the three-dimensional model onto the two-dimensional image;
determining a defect bounding box according to the projected position of the defect on the two-dimensional image; and
marking the defect bounding box and the defect type on the two-dimensional image to obtain the labeling result of the two-dimensional image.
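A minimal sketch of claim 5's bounding-box step, reusing the project_model_point helper sketched under claim 2; defect_pts (the defect's 3-D positions on the model) and defect_type are assumed inputs of this example.

    import numpy as np

    def label_defect(defect_pts, defect_type, T_cam1_model, K):
        # Project every 3-D defect point into the image, then take the
        # axis-aligned bounding box of the projected positions.
        px = np.array([project_model_point(p, T_cam1_model, K) for p in defect_pts])
        x0, y0 = px.min(axis=0)
        x1, y1 = px.max(axis=0)
        return {"bbox": (float(x0), float(y0), float(x1), float(y1)),
                "type": defect_type}  # one labeling result for this image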
6. The method according to claim 1, wherein the step of obtaining the defect labeling information of the three-dimensional model comprises:
generating the defect labeling information of the three-dimensional model according to triangular patches selected by a user in the three-dimensional model and labeling information added to the triangular patches.
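Claim 6 could be realized along the following lines; the mesh layout (a vertex array plus per-face vertex indices) and all field names are assumptions of this sketch.

    def defect_info_from_patches(vertices, faces, selected_face_ids, user_label):
        # Collect the 3-D positions of the triangular patches the user selected
        # and attach the labeling information the user added to them.
        positions = [vertices[vid] for fid in selected_face_ids for vid in faces[fid]]
        return {"positions": positions, "type": user_label}  # e.g. user_label = "scratch"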
7. An image labeling device, applied to a host connected to a first camera, the device comprising:
an image acquisition module, used to acquire, through the first camera, a plurality of to-be-labeled two-dimensional images of a target object from different angles;
a pose parameter calculation module, used to calculate, according to a pose transformation relationship between a three-dimensional model of the target object and the first camera, pose parameters of the three-dimensional model corresponding to each two-dimensional image, wherein the three-dimensional model is a model used when modeling the target object or a model constructed based on three-dimensional point cloud data of the target object;
a defect labeling information acquisition module, used to obtain defect labeling information of the three-dimensional model; and
an image labeling module, used to project the defect labeling information of the three-dimensional model onto the two-dimensional images according to the pose parameters of the three-dimensional model corresponding to each two-dimensional image, to obtain labeling results of the two-dimensional images;
wherein the host is further connected to a second camera, and the second camera is used to acquire the three-dimensional point cloud data of the target object; the device further comprises:
a pose transformation relationship determining module, used to obtain, by a checkerboard calibration method, pose parameters of checkerboard corner points in a first camera coordinate system, a world coordinate system, and a second camera coordinate system, respectively, and to determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on the pose parameters of the checkerboard corner points in the first camera coordinate system, the world coordinate system, and the second camera coordinate system;
wherein the first camera is arranged at the end of a mechanical arm, and the second camera is a depth camera; the pose transformation relationship determining module is specifically used to: calibrate a second pose transformation relationship between the first camera coordinate system and a mechanical arm coordinate system based on a preset first pose transformation relationship between the mechanical arm coordinate system and the world coordinate system and on the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system; calibrate a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system; calibrate a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system; and determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first pose transformation relationship, the second pose transformation relationship, the third pose transformation relationship, and the fourth pose transformation relationship.
8. A host, comprising an image acquisition device, a processor, and a storage device;
wherein the image acquisition device is used to acquire two-dimensional images; and
the storage device stores a computer program which, when executed by the processor, performs the method of any one of claims 1 to 6.
9. An image labeling system, comprising the host of claim 8 and a first camera connected to the host.
10. The system of claim 9, wherein the first camera is arranged at the end of a mechanical arm; the host is further connected to a second camera, and the second camera is a depth camera.
11. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 6.
CN201911333503.XA 2019-12-19 2019-12-19 Image labeling method, device, system and host Active CN111127422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911333503.XA CN111127422B (en) 2019-12-19 2019-12-19 Image labeling method, device, system and host

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911333503.XA CN111127422B (en) 2019-12-19 2019-12-19 Image labeling method, device, system and host

Publications (2)

Publication Number Publication Date
CN111127422A CN111127422A (en) 2020-05-08
CN111127422B (en) 2024-06-04

Family

ID=70501019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911333503.XA Active CN111127422B (en) 2019-12-19 2019-12-19 Image labeling method, device, system and host

Country Status (1)

Country Link
CN (1) CN111127422B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862199B (en) * 2020-06-17 2024-01-09 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN111951330B (en) * 2020-08-27 2024-09-13 北京小马慧行科技有限公司 Labeling updating method, labeling updating device, storage medium, processor and carrier
CN112258574B (en) * 2020-09-21 2024-10-18 北京沃东天骏信息技术有限公司 Method and device for labeling pose information and computer readable storage medium
CN112288878B (en) * 2020-10-29 2024-01-26 字节跳动有限公司 Augmented reality preview method and preview device, electronic equipment and storage medium
CN114531580B (en) * 2020-11-23 2023-11-21 北京四维图新科技股份有限公司 Image processing method and device
CN112837424B (en) * 2021-02-04 2024-02-06 脸萌有限公司 Image processing method, apparatus, device and computer readable storage medium
CN113033426B (en) * 2021-03-30 2024-03-01 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
CN113096094B * 2021-04-12 2024-05-17 吴俊 Three-dimensional object surface defect detection method
CN113205447A (en) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road picture marking method and device for lane line identification
CN113570708A (en) * 2021-07-30 2021-10-29 重庆市特种设备检测研究院 Defect three-dimensional modeling method and device and computer readable storage medium
CN114387346A (en) * 2022-03-25 2022-04-22 阿里巴巴达摩院(杭州)科技有限公司 Image recognition and prediction model processing method, three-dimensional modeling method and device
CN114596363B (en) * 2022-05-10 2022-07-22 北京鉴智科技有限公司 Three-dimensional point cloud marking method and device and terminal
CN114898044B (en) * 2022-05-19 2024-01-23 同方威视技术股份有限公司 Imaging method, device, equipment and medium for detection object
CN115423934B (en) * 2022-08-12 2024-03-08 北京城市网邻信息技术有限公司 House type diagram generation method and device, electronic equipment and storage medium
CN115205471B (en) * 2022-09-13 2022-12-30 青岛艾德软件有限公司 Labeling method and system suitable for automatic drawing of assembly modeling
CN115880690B (en) * 2022-11-23 2023-08-11 郑州大学 Method for quickly labeling objects in point cloud under assistance of three-dimensional reconstruction
CN115661493B (en) * 2022-12-28 2023-07-04 航天云机(北京)科技有限公司 Method, device, equipment and storage medium for determining object pose

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460398A (en) * 2017-12-27 2018-08-28 达闼科技(北京)有限公司 Image processing method, device, cloud processing equipment and computer program product
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN109934931A (en) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Acquisition image, the method and device for establishing target object identification model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214980B (en) * 2017-07-04 2023-06-23 阿波罗智能技术(北京)有限公司 Three-dimensional attitude estimation method, three-dimensional attitude estimation device, three-dimensional attitude estimation equipment and computer storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN109934931A (en) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Acquisition image, the method and device for establishing target object identification model
CN108460398A (en) * 2017-12-27 2018-08-28 达闼科技(北京)有限公司 Image processing method, device, cloud processing equipment and computer program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
《移动机器人及室内环境三维模型重建技术》 (Mobile Robot and Indoor-Environment 3D Model Reconstruction Technology). *

Also Published As

Publication number Publication date
CN111127422A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN111127422B (en) Image labeling method, device, system and host
US11288492B2 (en) Method and device for acquiring 3D information of object
TWI566204B (en) Three dimensional object recognition
US10008005B2 (en) Measurement system and method for measuring multi-dimensions
CN110070564B (en) Feature point matching method, device, equipment and storage medium
JP5029618B2 (en) Three-dimensional shape measuring apparatus, method and program by pattern projection method
KR100793838B1 (en) Appratus for findinng the motion of camera, system and method for supporting augmented reality in ocean scene using the appratus
CN107564069A (en) The determination method, apparatus and computer-readable recording medium of calibrating parameters
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
TW201520540A (en) Inspection apparatus, method, and computer program product for machine vision inspection
CN113689578B (en) Human body data set generation method and device
Olesen et al. Real-time extraction of surface patches with associated uncertainties by means of kinect cameras
CN109740487B (en) Point cloud labeling method and device, computer equipment and storage medium
JPWO2020188799A1 (en) Camera calibration device, camera calibration method, and program
CN109934873B (en) Method, device and equipment for acquiring marked image
CN115205128A (en) Depth camera temperature drift correction method, system, equipment and medium based on structured light
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN111462246A (en) Equipment calibration method of structured light measurement system
CN104655041B (en) A kind of industrial part contour line multi-feature extraction method of additional constraint condition
CN111553969B (en) Texture mapping method, medium, terminal and device based on gradient domain
CN116469101A (en) Data labeling method, device, electronic equipment and storage medium
JP2016206909A (en) Information processor, and information processing method
CN112634439B (en) 3D information display method and device
CN112652056A (en) 3D information display method and device
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant