CN111127422A - Image annotation method, device, system and host

Image annotation method, device, system and host

Info

Publication number
CN111127422A
CN111127422A (application number CN201911333503.XA; granted publication CN111127422B)
Authority
CN
China
Prior art keywords
camera
dimensional
dimensional model
pose
coordinate system
Prior art date
Legal status
Granted
Application number
CN201911333503.XA
Other languages
Chinese (zh)
Other versions
CN111127422B (en)
Inventor
王昌龙
付兴银
皮若言
孙斯瑾
李广
Current Assignee
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201911333503.XA
Publication of CN111127422A
Application granted
Publication of CN111127422B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image annotation method, apparatus, system and host, relating to the technical field of image processing. The method includes: acquiring, through a first camera, a plurality of two-dimensional images of a target object to be annotated at different angles; calculating the pose parameter of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model and the first camera, where the three-dimensional model is either the model from when the target object was modeled or a model constructed based on three-dimensional point cloud data of the target object; acquiring defect annotation information of the three-dimensional model; and projecting the defect annotation information of the three-dimensional model onto each two-dimensional image according to the corresponding pose parameter to obtain the annotation result of that image. The invention effectively improves annotation efficiency, reduces annotation cost, and better unifies the quality of annotation results, thereby helping to improve the defect detection performance for parts.

Description

Image annotation method, device, system and host
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image annotation method, apparatus, system, and host.
Background
At present, deep learning algorithms are widely applied to defect detection of industrial parts, for example to detect scratches on metal parts. Typically, a neural network is trained to detect part defects. During training, annotated images of defective parts serve as training data, and both their quantity and their annotation quality directly affect the detection performance of the network.
At present, image annotation mainly relies on manual work: every part image to be annotated must be labeled by hand. Annotating images one by one in this way incurs a high annotation cost and lowers data annotation efficiency; moreover, it is difficult to keep the annotation quality uniform across images, which in turn degrades the detection performance.
Disclosure of Invention
In view of the above, the present invention provides an image annotation method, device, system and host, which can effectively improve annotation efficiency, reduce annotation cost, and better unify the quality of annotation results, thereby facilitating the improvement of defect detection effect of parts.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image annotation method, where the method is performed by a host, where the host is connected to a first camera, and the method includes: acquiring a plurality of to-be-labeled two-dimensional images of a target object at different angles through the first camera; calculating a pose parameter of the three-dimensional model of the target object corresponding to each two-dimensional image according to a pose transformation relation between the three-dimensional model of the target object and the first camera; wherein the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object; acquiring defect marking information of the three-dimensional model; and projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to the two-dimensional images to obtain a labeling result of the two-dimensional image.
Further, the step of calculating the pose parameters of the three-dimensional model of the target object corresponding to the respective two-dimensional images according to the pose transformation relationship between the three-dimensional model of the target object and the first camera includes: determining the pose parameters of the three-dimensional model corresponding to the two-dimensional images according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional images.
Further, the first camera is arranged at the end of a mechanical arm; the host is further connected with a second camera, the second camera is used for acquiring three-dimensional point cloud data of the target object, and the second camera is a depth camera; the pose transformation relationship between the three-dimensional model of the target object and the first camera is determined in advance by: acquiring, with a checkerboard calibration method, pose parameters of checkerboard corner points in a first camera coordinate system, a world coordinate system and a second camera coordinate system respectively; calibrating a second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system based on a preset first pose transformation relationship between the mechanical arm coordinate system and the world coordinate system and on the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system; calibrating a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system; calibrating a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system; and determining the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third and fourth pose transformation relationships.
Further, if the three-dimensional model is a model when modeling the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameter of the three-dimensional model in the world coordinate system, the first pose transformation relationship and the second pose transformation relationship, or is determined based on the pose parameter of the three-dimensional model in the world coordinate system, the third pose transformation relationship and the fourth pose transformation relationship.
Further, if the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship; if the three-dimensional model is the model when modeling the target object, the method further comprises: registering the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain a pose parameter of the three-dimensional model in a second camera coordinate system; and the pose transformation relation between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relation.
Further, the defect labeling information comprises a defect position and a defect type; the step of projecting the defect labeling information of the three-dimensional model onto the two-dimensional image to obtain a labeling result of the two-dimensional image comprises the following steps: projecting the defect position of the three-dimensional model onto the two-dimensional image; determining a defect boundary frame according to the projection position of the defect position on the two-dimensional image; and marking the defect boundary frame and the defect type on the two-dimensional image to obtain a marking result of the two-dimensional image.
Further, the step of obtaining the defect labeling information of the three-dimensional model includes: and generating the defect labeling information of the three-dimensional model according to a triangular patch selected by a user in the three-dimensional model and the labeling information added to the triangular patch.
In a second aspect, an embodiment of the present invention further provides an image annotation apparatus, where the apparatus is applied to a host, and the host is connected to a first camera, and the apparatus includes: the image acquisition module is used for acquiring a plurality of to-be-labeled two-dimensional images of the target object at different angles through the first camera; a pose parameter calculation module, configured to calculate a pose parameter of the three-dimensional model of the target object corresponding to each of the two-dimensional images according to a pose transformation relationship between the three-dimensional model of the target object and the first camera; wherein the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object; the defect labeling information acquisition module is used for acquiring defect labeling information of the three-dimensional model; and the image labeling module is used for projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to the two-dimensional images to obtain the labeling result of the two-dimensional image.
In a third aspect, an embodiment of the present invention provides a host, where the host includes: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring a two-dimensional image; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of the first aspects.
In a fourth aspect, an embodiment of the present invention provides an image annotation system, where the system includes the host according to the third aspect, and further includes a first camera connected to the host.
Further, the first camera is arranged at the tail end of the mechanical arm; the host is also connected with a second camera, and the second camera is a depth camera.
In a fifth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
The embodiments of the invention provide an image annotation method, apparatus, system and host. First, a plurality of two-dimensional images of a target object to be annotated at different angles are acquired through a first camera; then, the pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image are calculated according to the pose transformation relationship between the three-dimensional model and the first camera, and the acquired defect annotation information of the three-dimensional model is projected onto the two-dimensional images based on these pose parameters to obtain the annotation results. Compared with the manual annotation of the prior art, this approach needs the defect annotation information to be provided only once, on the three-dimensional model, and can then map it onto a large number of two-dimensional images to be annotated according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves annotation efficiency and reduces annotation cost; moreover, because all annotation results derive from the same defect annotation information on the three-dimensional model, their quality is better unified, which helps improve the defect detection performance for parts.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a schematic view illustrating an application scenario of an image annotation method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image annotation method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a calibration process of pose transformation relationships according to an embodiment of the present invention;
fig. 5 is a block diagram illustrating a structure of an image annotation apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Existing manual image annotation suffers from high annotation cost, low annotation efficiency, and nonuniform annotation quality across images, which leads to poor defect detection for parts. To alleviate at least one of these problems, embodiments of the present invention provide an image annotation method, apparatus, system and host, which can effectively improve annotation efficiency, reduce annotation cost, and better unify the quality of annotation results, thereby helping to improve the defect detection performance for parts. The technique can be applied in industries such as electronics, chemical engineering and aerospace to implement defect detection of parts. For ease of understanding, embodiments of the present invention are described in detail below.
The first embodiment is as follows:
first, an example electronic device 100 for implementing the image annotation method and apparatus according to the embodiment of the invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are only exemplary and not limiting, and the electronic device may have some of the components shown in fig. 1 and may also have other components and structures not shown in fig. 1, as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute them to implement the client functionality (implemented by the processor) and/or other desired functionality of the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
Illustratively, an example electronic device for implementing an image annotation method and apparatus according to an embodiment of the present invention may be implemented on an intelligent terminal, such as a tablet computer, a computer, and a server.
Example two:
First, for ease of understanding, this embodiment provides an image annotation system and illustrates a practical application scenario of the image annotation method. Referring to fig. 2, the image annotation system includes a host, and a two-dimensional image acquisition device and a three-dimensional data acquisition device connected to the host; for convenience of description, the two-dimensional image acquisition device is also referred to as the first camera, and the three-dimensional data acquisition device as the second camera. In practical applications, the first camera may be a monocular camera, a binocular camera, or a depth camera; considering cost and ease of control, a monocular camera is typical. To make image acquisition more flexible, the first camera may be mounted on the flange at the end of the mechanical arm. The second camera is typically a depth camera. In this embodiment, the first camera mainly acquires two-dimensional images of a target object (such as an industrial part) at different angles and sends them to the host; the second camera mainly acquires three-dimensional point cloud data of the target object and sends it to the host; the host, for example a computer or a server, reconstructs the three-dimensional model from the point cloud data, receives manually provided defect annotation information on the three-dimensional model, and projects the defects onto the two-dimensional images based on that information.
When the first camera is also a depth camera, the first camera and the second camera may be one and the same. However, since the two-dimensional images and the three-dimensional point cloud data typically differ in their requirements on shooting distance, scanning accuracy and so on, the first camera and the second camera are generally two different depth cameras.
Referring to fig. 3, a flow chart of an image annotation method is shown; the method can be applied to various apparatuses for detecting defects of industrial parts, such as the image annotation system described above. In combination with that system, the image annotation method can be executed by the host, with the host connected to the first camera; referring to fig. 3, the method specifically includes the following steps S302 to S308:
step S302, a plurality of two-dimensional images to be annotated of the target object at different angles are acquired through a first camera. Wherein the target object is, for example, an industrial part.
Generally, the mechanical arm moves the first camera so as to acquire a plurality of two-dimensional images of the target object to be annotated at different angles. In addition, when each two-dimensional image is acquired, the pose parameters of the first camera can be recorded at the same time; these may include the intrinsic and extrinsic parameters of the first camera. The intrinsic parameters are quantities such as the focal length and distortion coefficients of the camera, while the extrinsic parameters refer to the position and orientation of the first camera in three-dimensional space and describe the motion of the first camera in a static scene or, when the first camera is fixed, the rigid motion of the target object.
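For illustration only (the patent itself gives no code), the per-view camera parameters described above can be held as a camera matrix plus a 4x4 homogeneous pose; the following is a minimal NumPy sketch in which the matrix values and the record_view helper are hypothetical.

```python
import numpy as np

# Illustrative intrinsic parameters: focal lengths, principal point and
# distortion coefficients (values are placeholders, not from the patent).
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)

def record_view(image, R, t):
    """Pair one captured two-dimensional image with the extrinsic parameters
    (rotation R, translation t) of the first camera at capture time."""
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = R
    T_cam_world[:3, 3] = t
    return {"image": image, "K": K, "dist": dist, "T_cam_world": T_cam_world}
```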
Step S304, calculating the pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relation between the three-dimensional model of the target object and the first camera; the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object.
And step S306, acquiring defect marking information of the three-dimensional model. In practical application, the defect labeling information is manually labeled by a user, and the defect labeling information of the three-dimensional model can be generated according to a triangular patch selected by the user in the three-dimensional model and labeling information added to the triangular patch.
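As a rough sketch of how such annotation information might be structured (the function and field names below are hypothetical, and NumPy arrays are assumed for the mesh data), the vertices of the user-selected triangular patches can be collected as the defect position together with the defect-type label:

```python
import numpy as np

def make_defect_annotation(vertices, triangles, selected_tri_ids, defect_type):
    """Gather the 3D vertices of the user-selected triangular patches as the
    defect position, paired with the user-supplied defect type."""
    tri = triangles[np.asarray(selected_tri_ids)]  # (n, 3) vertex indices
    points = vertices[tri.reshape(-1)]             # (3n, 3) defect-region points
    return {"defect_type": defect_type, "defect_points": points}
```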
And S308, projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to the two-dimensional images to obtain a labeling result of the two-dimensional image.
In this embodiment, the defect annotation information includes a defect position and a defect type, where the defect position is the location of the defect on the three-dimensional model, and the defect type is, for example, a scratch, abrasion or black-spot defect on a metal part, or a missing-material or shrinkage defect on a plastic product. In one way of obtaining the annotation result, the defect position of the three-dimensional model is projected onto the two-dimensional image; a defect bounding box is determined from the projected position on the two-dimensional image; finally, the defect bounding box and the defect type are marked on the two-dimensional image to obtain the annotation result of the two-dimensional image.
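One plausible realization of this projection step, sketched in NumPy under a standard pinhole camera model (the patent does not prescribe an implementation): project the 3D defect points into the image and bound the projections with an axis-aligned box. Here T_cam_model denotes the assumed model-frame-to-first-camera transform and K the first camera's intrinsic matrix.

```python
import numpy as np

def project_defect_to_bbox(defect_points, K, T_cam_model):
    """Project the 3D defect points onto the image plane and take the
    min/max of the projections as the defect bounding box."""
    pts_h = np.hstack([defect_points, np.ones((len(defect_points), 1))])
    pts_cam = (T_cam_model @ pts_h.T)[:3]   # model frame -> first camera frame
    uv = K @ pts_cam                        # perspective projection
    uv = (uv[:2] / uv[2]).T                 # (n, 2) pixel coordinates
    x_min, y_min = uv.min(axis=0)
    x_max, y_max = uv.max(axis=0)
    return x_min, y_min, x_max, y_max
```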
The image annotation method provided by the embodiment of the invention first acquires, through the first camera, a plurality of two-dimensional images of the target object to be annotated at different angles; it then calculates the pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model and the first camera, and projects the acquired defect annotation information of the three-dimensional model onto the two-dimensional images based on these pose parameters to obtain the annotation results. Compared with the manual annotation of the prior art, this approach needs the defect annotation information to be provided only once, on the three-dimensional model, and can then map it onto a large number of two-dimensional images to be annotated according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves annotation efficiency and reduces annotation cost; moreover, because all annotation results derive from the same defect annotation information on the three-dimensional model, their quality is better unified, which helps improve the defect detection performance for parts.
In another scenario for implementing the image annotation method, the host may further be connected to a second camera used to obtain three-dimensional point cloud data of the target object; the second camera is generally a depth camera. The depth camera captures an RGB image of the target object together with per-pixel depth; from the pixel coordinates of each point in the image, the measured depth, and the intrinsic parameters of the depth camera, the coordinates of each pixel in the second camera coordinate system (also called the depth camera coordinate system) can be obtained, and these points constitute the three-dimensional point cloud data of the target object.
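The back-projection just described is the standard pinhole inversion; a short NumPy sketch, assuming a depth map aligned with the RGB image and the depth camera intrinsic matrix K:

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project every pixel (u, v) with measured depth z into the
    second (depth) camera coordinate system."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)     # pixel grid
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels without a depth reading
```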
In one embodiment, the three-dimensional point cloud data of the target object may be used to construct a three-dimensional model, whereby the three-dimensional model may be constructed as follows: firstly, splicing and denoising the three-dimensional point cloud data, and then rendering the processed three-dimensional point cloud data to obtain a three-dimensional model of the target object. The three-dimensional model is a three-dimensional mesh model, and can construct a complete surface contour of a target object, so that a user can select a triangular patch at a corresponding position on the three-dimensional mesh model for marking according to the actual position of the surface defect of the target object, so as to obtain the defect type and a three-dimensional coordinate area (namely the defect position) of the defect on the three-dimensional model.
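The patent does not name a reconstruction library; assuming Open3D as one possible choice, the stitch, denoise and surface-reconstruction steps might look roughly as follows (parameter values are illustrative):

```python
import open3d as o3d

def build_mesh(point_clouds, poses):
    """Stitch per-view scans into one frame, denoise, and reconstruct a
    triangle mesh of the target object."""
    merged = o3d.geometry.PointCloud()
    for pts, T in zip(point_clouds, poses):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        merged += pcd.transform(T)                 # stitch into a common frame
    merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    merged.estimate_normals()                      # needed by Poisson meshing
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        merged, depth=8)
    return mesh
```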
It is to be understood that, when the three-dimensional model is a model constructed based on the three-dimensional point cloud data of the target object, the step of constructing the three-dimensional model is performed before the step of calculating the pose parameters of the three-dimensional model of the target object corresponding to the respective two-dimensional images.
Of course, the three-dimensional model may also be a model when modeling the target object, such as a model directly obtained from a design drawing of the target object.
Whether the three-dimensional model is the model from when the target object was modeled or a model constructed based on the three-dimensional point cloud data of the target object, for a fixed image annotation system built in advance (as shown in fig. 2) the pose transformation relationship between the three-dimensional model and the first camera in step S304 is usually predetermined, and this predetermined relationship can be applied directly in the image annotation method. For ease of understanding, this embodiment provides one way of determining the pose transformation relationship between the three-dimensional model and the first camera; see the following steps (1) to (5):
the method comprises the following steps of (I) acquiring pose parameters of checkerboard angular points under a first camera coordinate system, a robot world coordinate system (hereinafter referred to as a world coordinate system) and a second camera coordinate system respectively by adopting a checkerboard calibration method. Referring to fig. 4, based on the pose parameters of the checkered corner points in each coordinate system, the determination process of the pose transformation relationship between the three-dimensional model and the first camera may be mainly performed in two branches, where the first branch is used to calibrate the coordinate system of the mechanical arm and the coordinate system of the first camera, and the following step (ii) is referred to.
Step (2): calibrate the second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system, based on the preset first pose transformation relationship between the mechanical arm coordinate system and the world coordinate system and on the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system. In a specific implementation, the intrinsic parameters of the first camera are first calibrated by a conventional method such as Zhang's calibration method. Next, with the intrinsically calibrated first camera, several groups of checkerboard corner pose parameters in the first camera coordinate system and the world coordinate system are collected by moving the mechanical arm several times. The preset first pose transformation relationship between the mechanical arm coordinate system and the world coordinate system is then read directly from the mechanical arm; using this relationship and the collected corner pose parameters as constraints, the second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system is calibrated by solving an optimization problem.
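A sketch of these two calibration steps, assuming OpenCV (whose calibrateCamera implements Zhang's method and whose calibrateHandEye solves the arm-camera optimization); the wrapper function and its argument names are illustrative, not from the patent:

```python
import cv2

def calibrate_first_camera(obj_pts, img_pts, image_size,
                           R_gripper2base, t_gripper2base):
    """Zhang's method for the intrinsic parameters, then hand-eye calibration
    for the second pose transformation relationship (first camera <-> arm)."""
    # Intrinsic calibration from checkerboard corners (Zhang's method).
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)
    # rvecs/tvecs give the board pose in the first camera frame;
    # R_gripper2base/t_gripper2base are read from the mechanical arm.
    R_board2cam = [cv2.Rodrigues(r)[0] for r in rvecs]
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_board2cam, tvecs)
    return K, dist, R_cam2gripper, t_cam2gripper
```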
The second branch for determining the pose transformation relationship between the three-dimensional model and the first camera calibrates the second camera coordinate system against the world coordinate system. This calibration can be realized in the first way, shown in step (3), or the second way, shown in step (4).
Way one, step (3): calibrate a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system. Specifically, these corner pose parameters can be used as constraints to solve for the third pose transformation relationship. It can be understood that once the first, second and third pose transformation relationships are determined, the transformation between the second camera coordinate system and the world coordinate system can be calibrated accordingly and taken as the fourth pose transformation relationship; more generally, whenever any three of the pose transformation relationships are determined, the remaining one can be determined from them.
Way two, step (4): calibrate the fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system.
Of course, the pose transformation relationships between the coordinate systems above are merely exemplary and should not be taken as limiting. Moreover, based on the four pose transformation relationships between the coordinate systems, further relationships between other coordinate systems can be derived; for example, a fifth pose transformation relationship between the first camera coordinate system and the world coordinate system can be determined from the third pose transformation relationship and the fourth pose transformation relationship.
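If each relationship is held as a 4x4 homogeneous matrix T_a_b (mapping coordinates from frame b to frame a), such derived relationships are plain matrix products; a trivial NumPy sketch of the fifth relationship just mentioned, with identity placeholders standing in for calibrated values:

```python
import numpy as np

T_cam1_cam2 = np.eye(4)    # third pose transformation relationship (placeholder)
T_cam2_world = np.eye(4)   # fourth pose transformation relationship (placeholder)

# Fifth relationship (first camera <- world) by chaining the two above.
T_cam1_world = T_cam1_cam2 @ T_cam2_world
```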
Step (5): determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third and fourth pose transformation relationships. Since multiple pose transformation relationships exist among the coordinate systems, the pose transformation relationship between the three-dimensional model and the first camera can be determined in multiple ways, and the resulting relationships are correspondingly varied, so users can choose flexibly according to actual needs in more application scenarios.
In this embodiment, if the three-dimensional model is a model constructed based on the three-dimensional point cloud data of the target object, since the three-dimensional point cloud data for constructing the three-dimensional model is directly acquired by the second camera, the pose transformation relationship between the three-dimensional model and the first camera can be determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship.
If the three-dimensional model is the model from when the target object was modeled, then, since the three-dimensional model of the target object is usually built in the world coordinate system and pose transformation relationships exist between the world coordinate system and each of the other coordinate systems, the pose transformation relationship between the three-dimensional model and the first camera can be determined in various ways, such as the following three examples:
example one: the pose transformation relation between the three-dimensional model and the first camera is determined based on the pose parameters, the first pose transformation relation and the second pose transformation relation of the three-dimensional model in the world coordinate system. In practical application, the pose parameters of the three-dimensional model in the world coordinate system can be converted into the pose parameters in the mechanical arm coordinate system according to the first pose transformation relation, and then the pose parameters of the three-dimensional model in the mechanical arm coordinate system can be converted into the pose parameters in the first camera coordinate system according to the second pose transformation relation; therefore, pose transformation between the three-dimensional model and the first camera is realized.
Example two: the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system, the third pose transformation relationship and the fourth pose transformation relationship. In practical application, the pose parameters of the three-dimensional model in the world coordinate system can first be converted into pose parameters in the second camera coordinate system according to the fourth pose transformation relationship, and these can then be converted into pose parameters in the first camera coordinate system according to the third pose transformation relationship; the pose transformation between the three-dimensional model and the first camera is thereby realized.
Considering that the three-dimensional point cloud data obtained by the second camera and the three-dimensional model built when modeling the target object both describe the target object and share the same dimensionality (both are three-dimensional), this embodiment further provides the following: register the point cloud data of the three-dimensional model against the three-dimensional point cloud data of the target object to obtain the pose parameters of the three-dimensional model in the second camera coordinate system. Registration can be understood as transforming the point cloud data of the three-dimensional model into the coordinate system of the three-dimensional point cloud data; since the latter is acquired directly by the second camera, the registration yields the pose parameters of the three-dimensional model in the second camera coordinate system.
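Registration of this kind is commonly performed with ICP; the sketch below assumes Open3D (the patent does not prescribe an algorithm or library), where model_pcd holds the point cloud sampled from the three-dimensional model and scan_pcd the point cloud from the second camera:

```python
import open3d as o3d

def register_model_to_scan(model_pcd, scan_pcd, threshold=0.005):
    """ICP registration of the model point cloud onto the second camera's
    scan; the result is the model pose in the second camera frame."""
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scan_pcd, threshold,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 pose of the model in the camera frame
```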
Based on the registered pose parameters of the three-dimensional model, example three follows: the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship. In practical application, the pose parameters of the three-dimensional model in the second camera coordinate system are converted directly into pose parameters in the first camera coordinate system according to the third pose transformation relationship; the pose transformation between the three-dimensional model and the first camera is thereby realized.
After the pose transformation relationship between the three-dimensional model of the target object and the first camera has been determined in the above manner, the pose parameters of the three-dimensional model corresponding to each two-dimensional image can be calculated as follows: determine the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional image.
When specifically determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image, the following steps may be used. Step A: according to the pose transformation relationship between the three-dimensional model of the target object and the first camera, convert the pose parameters of the three-dimensional model in the second camera coordinate system or the world coordinate system into first pose parameters in the first camera coordinate system. Step B: according to the perspective projection relationship between the first camera and the two-dimensional image, generate second pose parameters in the first camera coordinate system from the true pose parameters of each pixel on the two-dimensional image, and obtain a matching result between the second pose parameters and the first pose parameters. Step C: determine the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the matching result.
Alternatively, step B may be replaced by the following step B' to obtain the matching result. Step B': according to the perspective projection relationship between the first camera and the two-dimensional image, project the first pose parameters onto the two-dimensional image to obtain third pose parameters, and obtain a matching result between the third pose parameters and the true pose parameters of each pixel on the two-dimensional image.
In the above procedure for determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image, the pose parameters can be understood as the horizontal and vertical coordinates of the pixels.
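A rough NumPy sketch of steps A and B' above, reusing the T_a_b matrix convention from the calibration sketch; the argument names are illustrative: T_cam2_model is the model pose in the second camera frame and T_cam1_cam2 the third pose transformation relationship.

```python
import numpy as np

def project_model_points(T_cam2_model, T_cam1_cam2, K, model_points):
    """Step A: move the model pose into the first camera frame; step B':
    project with the perspective projection relationship to obtain pixel
    coordinates that can be matched against the two-dimensional image."""
    T_cam1_model = T_cam1_cam2 @ T_cam2_model
    pts_h = np.hstack([model_points, np.ones((len(model_points), 1))])
    pts_cam1 = (T_cam1_model @ pts_h.T)[:3]
    uv = K @ pts_cam1
    return (uv[:2] / uv[2]).T      # (n, 2) projected pixel coordinates
```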
In addition, the reconstructed three-dimensional model, the various pose transformation relationships above, and the manually marked defect position on the three-dimensional model may all carry some error, causing the projected defect bounding box in the annotation result to deviate somewhat; the deviation can be judged directly by comparing the annotation result on the two-dimensional image with the position of the real defect on that image. In this case, prompt information can be generated for erroneous annotation results to remind the user to adjust the marked defect position on the three-dimensional model; the adjusted defect annotation information is then projected again to obtain new annotation results for the two-dimensional images. It can be understood that the adjusted defect annotation information acts directly on all the two-dimensional images, so as its quality improves, the new annotation results for all the two-dimensional images improve as a whole, ensuring uniform annotation quality.
In summary, the image annotation method provided by the above embodiments needs the defect annotation information of the three-dimensional model to be acquired only once; that information can then be mapped onto a large number of two-dimensional images to be annotated according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves annotation efficiency and reduces annotation cost. Moreover, because the annotation results all derive from the defect annotation information on the three-dimensional model, their quality is better unified, which helps improve the defect detection performance for parts.
Example three:
referring to fig. 5, a block diagram of an image annotation apparatus is shown, the apparatus is applied to a host, the host is connected to a first camera, and the apparatus includes:
the image obtaining module 502 obtains a plurality of two-dimensional images to be labeled of the target object at different angles by using the first camera.
A pose parameter calculation module 504, configured to calculate pose parameters of the three-dimensional model of the target object corresponding to the two-dimensional images according to a pose transformation relationship between the three-dimensional model of the target object and the first camera; the three-dimensional model is a model when the target object is modeled or a model constructed based on three-dimensional point cloud data of the target object.
And a defect labeling information obtaining module 506, configured to obtain defect labeling information of the three-dimensional model.
And the image labeling module 508 is configured to project the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to each two-dimensional image, so as to obtain a labeling result of the two-dimensional image.
The image annotation apparatus provided by the embodiment of the invention first acquires, through the first camera, a plurality of two-dimensional images of the target object to be annotated at different angles; it then calculates the pose parameters of the three-dimensional model of the target object corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model and the first camera, and projects the acquired defect annotation information of the three-dimensional model onto the two-dimensional images based on these pose parameters to obtain the annotation results. Compared with the manual annotation of the prior art, this approach needs the defect annotation information to be provided only once, on the three-dimensional model, and can then map it onto a large number of two-dimensional images to be annotated according to the pose parameters of the three-dimensional model corresponding to each image, which effectively improves annotation efficiency and reduces annotation cost; moreover, because all annotation results derive from the same defect annotation information on the three-dimensional model, their quality is better unified, which helps improve the defect detection performance for parts.
In some embodiments, the pose parameter calculation module 504 is further configured to: determine the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional image.
In some embodiments, the host is further connected with a second camera, the second camera is used for acquiring three-dimensional point cloud data of the target object, and the second camera is a depth camera; the image annotation apparatus further includes a pose transformation relationship determination module (not shown in the figure) configured to: acquire, with a checkerboard calibration method, pose parameters of the checkerboard corner points in the first camera coordinate system, the world coordinate system and the second camera coordinate system respectively; calibrate a second pose transformation relationship between the first camera coordinate system and the mechanical arm coordinate system based on a preset first pose transformation relationship between the mechanical arm coordinate system and the world coordinate system and on the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system; calibrate a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system; calibrate a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system; and determine the pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third and fourth pose transformation relationships.
In some embodiments, if the three-dimensional model is a model of the target object when modeled, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters, the first pose transformation relationship, and the second pose transformation relationship of the three-dimensional model in the world coordinate system, or is determined based on the pose parameters, the third pose transformation relationship, and the fourth pose transformation relationship of the three-dimensional model in the world coordinate system.
In some embodiments, if the three-dimensional model is a model constructed based on three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship; if the three-dimensional model is a model when modeling the target object, the image annotation apparatus further includes a registration module (not shown in the figure) for: registering the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain a pose parameter of the three-dimensional model under a second camera coordinate system; and the pose transformation relation between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relation.
In some embodiments, the defect labeling information includes a defect position and a defect type; the image annotation module 508 is further configured to: projecting the defect position of the three-dimensional model onto a two-dimensional image; determining a defect boundary frame according to the projection position of the defect position on the two-dimensional image; and marking the defect boundary frame and the defect type on the two-dimensional image to obtain a marking result of the two-dimensional image.
In some embodiments, the defect labeling information obtaining module 506 is further configured to: and generating the defect labeling information of the three-dimensional model according to the triangular patch selected by the user in the three-dimensional model and the labeling information added to the triangular patch.
The device provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for the sake of brief description, reference may be made to corresponding contents in the foregoing embodiment.
Example four:
based on the foregoing embodiments, the present embodiment provides a host, including: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring a two-dimensional image; the storage device stores thereon a computer program which, when executed by the processor, performs the image annotation method as provided in embodiment two.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, this embodiment also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processing device, the steps of the image annotation method provided in the second embodiment are executed.
The computer program product of the image annotation method, apparatus, system and host provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions in the program code can be used to execute the image annotation method of the foregoing method embodiments. For specific implementation, reference may be made to the method embodiments, which are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. An image annotation method, performed by a host, the host being coupled to a first camera, the method comprising:
acquiring a plurality of to-be-labeled two-dimensional images of a target object at different angles through the first camera;
calculating a pose parameter of the three-dimensional model of the target object corresponding to each two-dimensional image according to a pose transformation relationship between the three-dimensional model of the target object and the first camera; wherein the three-dimensional model is a model created when the target object was modeled, or a model constructed from three-dimensional point cloud data of the target object;
acquiring defect marking information of the three-dimensional model;
and projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to the two-dimensional images to obtain a labeling result of the two-dimensional image.
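As an illustrative aid only (not part of the claim language), the projection step of claim 1 can be sketched in Python under the assumption of a standard pinhole camera model; K, T, and the function name are assumed symbols, not terms defined by the patent.

    # Minimal sketch of claim 1's projection step (pinhole model assumed).
    import numpy as np

    def project_defect_points(points_3d, K, T):
        # points_3d: Nx3 defect points in the model frame; T: 4x4 pose of the
        # model in the first camera's frame for one image; K: 3x3 intrinsics.
        pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
        pts_cam = (T @ pts_h.T)[:3]        # 3xN points in the camera frame
        proj = K @ pts_cam                 # homogeneous pixel coordinates
        return (proj[:2] / proj[2]).T      # Nx2 (u, v) pixel positions

The returned pixel positions are where the model's defect labeling information lands on that two-dimensional image.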
2. The method according to claim 1, wherein the step of calculating the pose parameters of the three-dimensional model of the target object corresponding to the respective two-dimensional images according to the pose transformation relationship between the three-dimensional model of the target object and the first camera comprises:
determining the pose parameters of the three-dimensional model corresponding to each two-dimensional image according to the pose transformation relationship between the three-dimensional model of the target object and the first camera and the perspective projection relationship between the first camera and the two-dimensional image.
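For orientation, the perspective projection relationship referred to here is the standard pinhole identity (a textbook relation, not language from the claim):

    s [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T

where [R | t] is the pose transformation of the three-dimensional model relative to the first camera, K is the first camera's intrinsic matrix, (X, Y, Z) is a model point, (u, v) is its pixel position, and s is a scale factor.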
3. The method of claim 1, wherein the first camera is disposed at a distal end of a robotic arm; the host is further connected to a second camera, the second camera is configured to acquire three-dimensional point cloud data of the target object, and the second camera is a depth camera;
the pose transformation relationship between the three-dimensional model of the target object and the first camera is determined in advance by:
adopting a checkerboard calibration method to obtain pose parameters of checkerboard corner points in a first camera coordinate system, a world coordinate system, and a second camera coordinate system, respectively;
calibrating a second pose transformation relationship between the first camera coordinate system and the robotic arm coordinate system based on a preset first pose transformation relationship between the robotic arm coordinate system and the world coordinate system and the pose parameters of the checkerboard corner points in the first camera coordinate system and the world coordinate system;
calibrating a third pose transformation relationship between the first camera coordinate system and the second camera coordinate system according to the pose parameters of the checkerboard corner points in the first camera coordinate system and the second camera coordinate system;
calibrating a fourth pose transformation relationship between the second camera coordinate system and the world coordinate system according to the pose parameters of the checkerboard corner points in the second camera coordinate system and the world coordinate system;
determining a pose transformation relationship between the three-dimensional model of the target object and the first camera based on at least one of the first, second, third, and fourth pose transformation relationships.
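As an editorial sketch only, the calibration chain of claim 3 could be realized with OpenCV roughly as follows; the pattern size, square size, and the preset arm-to-world transform are assumed inputs, and the transform names are illustrative rather than patent terminology.

    # Illustrative sketch of claim 3's calibration chain, using OpenCV.
    import cv2
    import numpy as np

    def board_pose(gray, K, dist, pattern=(9, 6), square=0.02):
        # Pose of the checkerboard corner frame in one camera's coordinates.
        found, corners = cv2.findChessboardCorners(gray, pattern)
        assert found, "checkerboard not detected"
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
        _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        T = np.eye(4)
        T[:3, :3] = cv2.Rodrigues(rvec)[0]
        T[:3, 3] = tvec.ravel()
        return T  # pose of the board in this camera's frame

    # With the board also at a known world pose T_world_from_board and a preset
    # arm-to-world transform T_world_from_arm (the first pose relationship):
    # T_cam1_from_world = T_cam1_from_board @ inv(T_world_from_board)
    # T_cam1_from_arm   = T_cam1_from_world @ T_world_from_arm        (second)
    # T_cam1_from_cam2  = T_cam1_from_board @ inv(T_cam2_from_board)  (third)
    # T_cam2_from_world = T_cam2_from_board @ inv(T_world_from_board) (fourth)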
4. The method of claim 3,
if the three-dimensional model is a model created when the target object was modeled, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the world coordinate system, the first pose transformation relationship, and the second pose transformation relationship, or based on the pose parameters of the three-dimensional model in the world coordinate system, the third pose transformation relationship, and the fourth pose transformation relationship.
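Purely for illustration, the two alternative routes of claim 4 amount to composing 4x4 homogeneous transforms; the names below are assumptions, not patent terminology.

    # Illustrative composition of claim 4's two routes (assumed names).
    import numpy as np

    def model_in_cam1_via_world(T_world_model, T_world_arm, T_cam1_arm):
        # Route 1: model pose in world + first and second pose relationships.
        T_cam1_world = T_cam1_arm @ np.linalg.inv(T_world_arm)
        return T_cam1_world @ T_world_model

    def model_in_cam1_via_cam2(T_world_model, T_cam1_cam2, T_cam2_world):
        # Route 2: model pose in world + third and fourth pose relationships.
        return T_cam1_cam2 @ T_cam2_world @ T_world_model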
5. The method of claim 3,
if the three-dimensional model is a model constructed from the three-dimensional point cloud data of the target object, the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship;
if the three-dimensional model is the model created when the target object was modeled, the method further comprises:
registering the point cloud data of the three-dimensional model with the three-dimensional point cloud data of the target object to obtain pose parameters of the three-dimensional model in the second camera coordinate system;
and the pose transformation relationship between the three-dimensional model and the first camera is determined based on the pose parameters of the three-dimensional model in the second camera coordinate system and the third pose transformation relationship.
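As one possible (but not mandated) implementation of the registration step, Open3D's ICP could estimate the model's pose in the second camera coordinate system; the voxel size and the identity initial transform are assumptions.

    # Illustrative registration for claim 5 using Open3D ICP.
    import numpy as np
    import open3d as o3d

    def model_pose_in_cam2(model_points, scene_points, voxel=0.005):
        # model_points: Nx3 model point cloud; scene_points: Mx3 point cloud
        # of the target object captured by the second (depth) camera.
        source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
        target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_correspondence_distance=10 * voxel,
            init=np.eye(4),
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation  # model pose in the second camera frame

Composing this pose with the third pose transformation relationship then yields the model-to-first-camera pose, as the claim states.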
6. The method according to claim 1, wherein the defect labeling information includes a defect position and a defect type;
the step of projecting the defect labeling information of the three-dimensional model onto the two-dimensional image to obtain a labeling result of the two-dimensional image comprises the following steps:
projecting the defect position of the three-dimensional model onto the two-dimensional image;
determining a defect boundary frame according to the projection position of the defect position on the two-dimensional image;
and marking the defect boundary frame and the defect type on the two-dimensional image to obtain a marking result of the two-dimensional image.
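As an illustrative sketch, the bounding-box step of claim 6 can be derived from the projected defect points (for example, the output of a projection routine like the one sketched after claim 1); the clipping to image bounds is an assumed detail.

    # Illustrative bounding box from projected defect points (claim 6).
    import numpy as np

    def defect_bbox(uv, image_shape):
        # uv: Nx2 projected pixel coordinates of the defect position.
        h, w = image_shape[:2]
        x0, y0 = np.clip(uv.min(axis=0), 0, [w - 1, h - 1]).astype(int)
        x1, y1 = np.clip(uv.max(axis=0), 0, [w - 1, h - 1]).astype(int)
        return (x0, y0, x1, y1)  # drawn on the image together with the defect type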
7. The method of claim 1, wherein the step of obtaining defect labeling information of the three-dimensional model comprises:
generating the defect labeling information of the three-dimensional model according to a triangular patch selected by a user in the three-dimensional model and the labeling information added to the triangular patch.
8. An image annotation device, the device being applied to a host, the host being connected to a first camera, the device comprising:
the image acquisition module is used for acquiring a plurality of to-be-labeled two-dimensional images of the target object at different angles through the first camera;
a pose parameter calculation module, configured to calculate a pose parameter of the three-dimensional model of the target object corresponding to each of the two-dimensional images according to a pose transformation relationship between the three-dimensional model of the target object and the first camera; wherein the three-dimensional model is a model created when the target object was modeled, or a model constructed from three-dimensional point cloud data of the target object;
the defect labeling information acquisition module is used for acquiring defect labeling information of the three-dimensional model;
and the image labeling module is used for projecting the defect labeling information of the three-dimensional model onto the two-dimensional image according to the pose parameters of the three-dimensional model corresponding to the two-dimensional images to obtain the labeling result of the two-dimensional image.
9. A host, comprising: an image acquisition device, a processor, and a storage device;
the image acquisition device is used for acquiring a two-dimensional image;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any one of claims 1 to 7.
10. An image annotation system comprising the host of claim 9, and further comprising a first camera coupled to the host.
11. The system of claim 10, wherein the first camera is disposed at a distal end of a robotic arm; the host is also connected with a second camera, and the second camera is a depth camera.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1 to 7.
CN201911333503.XA 2019-12-19 2019-12-19 Image labeling method, device, system and host Active CN111127422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911333503.XA CN111127422B (en) 2019-12-19 2019-12-19 Image labeling method, device, system and host

Publications (2)

Publication Number Publication Date
CN111127422A true CN111127422A (en) 2020-05-08
CN111127422B (en) 2024-06-04

Family

ID=70501019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911333503.XA Active CN111127422B (en) 2019-12-19 2019-12-19 Image labeling method, device, system and host

Country Status (1)

Country Link
CN (1) CN111127422B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012807A1 (en) * 2017-07-04 2019-01-10 Baidu Online Network Technology (Beijing) Co., Ltd.. Three-dimensional posture estimating method and apparatus, device and computer storage medium
CN109584295A (en) * 2017-09-29 2019-04-05 阿里巴巴集团控股有限公司 The method, apparatus and system of automatic marking are carried out to target object in image
CN109934931A (en) * 2017-12-19 2019-06-25 阿里巴巴集团控股有限公司 Acquisition image, the method and device for establishing target object identification model
CN108460398A (en) * 2017-12-27 2018-08-28 达闼科技(北京)有限公司 Image processing method, device, cloud processing equipment and computer program product

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862199B (en) * 2020-06-17 2024-01-09 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN111862199A (en) * 2020-06-17 2020-10-30 北京百度网讯科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN111951330A (en) * 2020-08-27 2020-11-17 北京小马慧行科技有限公司 Label updating method and device, storage medium, processor and vehicle
CN111951330B (en) * 2020-08-27 2024-09-13 北京小马慧行科技有限公司 Labeling updating method, labeling updating device, storage medium, processor and carrier
CN112258574A (en) * 2020-09-21 2021-01-22 北京沃东天骏信息技术有限公司 Method and device for marking pose information and computer readable storage medium
CN112288878A (en) * 2020-10-29 2021-01-29 字节跳动有限公司 Augmented reality preview method and preview device, electronic device and storage medium
CN112288878B (en) * 2020-10-29 2024-01-26 字节跳动有限公司 Augmented reality preview method and preview device, electronic equipment and storage medium
CN114531580A (en) * 2020-11-23 2022-05-24 北京四维图新科技股份有限公司 Image processing method and device
CN114531580B (en) * 2020-11-23 2023-11-21 北京四维图新科技股份有限公司 Image processing method and device
CN112837424A (en) * 2021-02-04 2021-05-25 脸萌有限公司 Image processing method, device, equipment and computer readable storage medium
CN112837424B (en) * 2021-02-04 2024-02-06 脸萌有限公司 Image processing method, apparatus, device and computer readable storage medium
CN113033426B (en) * 2021-03-30 2024-03-01 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
CN113033426A (en) * 2021-03-30 2021-06-25 北京车和家信息技术有限公司 Dynamic object labeling method, device, equipment and storage medium
CN113096094B (en) * 2021-04-12 2024-05-17 吴俊 Three-dimensional object surface defect detection method
CN113096094A (en) * 2021-04-12 2021-07-09 成都市览图科技有限公司 Three-dimensional object surface defect detection method
CN113313755A (en) * 2021-04-16 2021-08-27 中科创达软件股份有限公司 Method, device and equipment for determining pose of target object and storage medium
CN113205447A (en) * 2021-05-11 2021-08-03 北京车和家信息技术有限公司 Road picture marking method and device for lane line identification
CN113570708A (en) * 2021-07-30 2021-10-29 重庆市特种设备检测研究院 Defect three-dimensional modeling method and device and computer readable storage medium
CN114387346A (en) * 2022-03-25 2022-04-22 阿里巴巴达摩院(杭州)科技有限公司 Image recognition and prediction model processing method, three-dimensional modeling method and device
CN114596363B (en) * 2022-05-10 2022-07-22 北京鉴智科技有限公司 Three-dimensional point cloud marking method and device and terminal
CN114596363A (en) * 2022-05-10 2022-06-07 北京鉴智科技有限公司 Three-dimensional point cloud labeling method and device and terminal
CN114898044B (en) * 2022-05-19 2024-01-23 同方威视技术股份有限公司 Imaging method, device, equipment and medium for detection object
CN114898044A (en) * 2022-05-19 2022-08-12 同方威视技术股份有限公司 Method, apparatus, device and medium for imaging detection object
CN115423934A (en) * 2022-08-12 2022-12-02 北京城市网邻信息技术有限公司 House type graph generation method and device, electronic equipment and storage medium
CN115423934B (en) * 2022-08-12 2024-03-08 北京城市网邻信息技术有限公司 House type diagram generation method and device, electronic equipment and storage medium
CN115205471A (en) * 2022-09-13 2022-10-18 青岛艾德软件有限公司 Labeling method and system suitable for automatic drawing of assembly modeling
CN115205471B (en) * 2022-09-13 2022-12-30 青岛艾德软件有限公司 Labeling method and system suitable for automatic drawing of assembly modeling
CN115880690B (en) * 2022-11-23 2023-08-11 郑州大学 Method for quickly labeling objects in point cloud under assistance of three-dimensional reconstruction
CN115880690A (en) * 2022-11-23 2023-03-31 郑州大学 Method for quickly marking object in point cloud under assistance of three-dimensional reconstruction
CN115661493A (en) * 2022-12-28 2023-01-31 航天云机(北京)科技有限公司 Object pose determination method and device, equipment and storage medium

Also Published As

Publication number Publication date
CN111127422B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN111127422B (en) Image labeling method, device, system and host
CN107564069B (en) Method and device for determining calibration parameters and computer readable storage medium
JP6465789B2 (en) Program, apparatus and method for calculating internal parameters of depth camera
CN112183171B (en) Method and device for building beacon map based on visual beacon
CN110070564B (en) Feature point matching method, device, equipment and storage medium
KR100793838B1 (en) Appratus for findinng the motion of camera, system and method for supporting augmented reality in ocean scene using the appratus
CN109479082B (en) Image processing method and apparatus
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN113689578B (en) Human body data set generation method and device
TW201520540A (en) Inspection apparatus, method, and computer program product for machine vision inspection
US20130258060A1 (en) Information processing apparatus that performs three-dimensional shape measurement, information processing method, and storage medium
JP2009134509A (en) Device for and method of generating mosaic image
JPWO2020188799A1 (en) Camera calibration device, camera calibration method, and program
CN115205128A (en) Depth camera temperature drift correction method, system, equipment and medium based on structured light
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
CN114494388A (en) Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment
CN111383264A (en) Positioning method, positioning device, terminal and computer storage medium
CN117495975A (en) Zoom lens calibration method and device and electronic equipment
CN117788686A (en) Three-dimensional scene reconstruction method and device based on 2D image and electronic equipment
CN117522963A (en) Corner positioning method and device of checkerboard, storage medium and electronic equipment
CN115205129A (en) Depth camera based on structured light and method of use
JP2007034964A (en) Method and device for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter, and program for restoring movement of camera viewpoint and three-dimensional information and estimating lens distortion parameter
CN112241984A (en) Binocular vision sensor calibration method and device, computer equipment and storage medium
CN111915666A (en) Volume measurement method and device based on mobile terminal
CN112634439B (en) 3D information display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant