CN116266402A - Automatic object labeling method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116266402A
Authority
CN
China
Prior art keywords: image, marked, marker, information, sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111510608.5A
Other languages
Chinese (zh)
Inventor
税国知
钟传琦
李扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202111510608.5A priority Critical patent/CN116266402A/en
Priority to PCT/CN2022/135979 priority patent/WO2023103883A1/en
Publication of CN116266402A publication Critical patent/CN116266402A/en
Legal status: Pending

Classifications

    • G06V: Image or video recognition or understanding (G: Physics; G06: Computing; calculating or counting)
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V 20/64: Three-dimensional objects

Abstract

The embodiments of the present application provide an automatic object labeling method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a first image to be labeled that contains a marker and an object to be labeled; performing marker recognition on the first image to obtain target marker information; determining, according to a preset correspondence between marker information and three-dimensional object models, a target three-dimensional model corresponding to the target marker information; performing contour matching on the first image based on the target three-dimensional model to determine the position information of the object to be labeled in the first image; and labeling the object to be labeled in the first image according to that position information. Automatic labeling of objects is thus realized, which can reduce the cost and increase the efficiency of image labeling.

Description

Automatic object labeling method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an automatic object labeling method and apparatus, an electronic device, and a storage medium.
Background
With the development of artificial intelligence, computer vision technology, especially computer vision based on deep learning models, has advanced rapidly. In computer vision, a deep learning model needs to be trained on large amounts of labeled data; for example, when vehicles need to be recognized by computer vision, a large number of sample images labeled with vehicles are selected to train the deep learning model.
In the training of deep learning models, labeling the massive number of sample images has become the largest cost of the training process, and reducing the cost of manually labeling sample images is a technical problem that needs to be solved.
Disclosure of Invention
An object of the embodiment of the application is to provide an automatic object labeling method, an automatic object labeling device, electronic equipment and a storage medium, so that automatic labeling of objects in an image is realized, and the labeling cost of the image is reduced. The specific technical scheme is as follows:
in a first aspect, the present application provides an automatic object labeling method, where the method includes:
acquiring a first image to be marked, which contains a marker and an object to be marked;
carrying out marker identification on the first image to be marked to obtain target marker information;
determining a target three-dimensional model corresponding to the target marker information according to the corresponding relation between the preset marker information and the three-dimensional model of the object;
performing contour matching on the first image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the first image to be marked;
and labeling the object to be labeled in the first image to be labeled according to the position information of the object to be labeled in the first image to be labeled.
In one possible implementation manner, the marker is a two-dimensional code, the target marker information is target two-dimensional code information, and the corresponding relationship is a corresponding relationship between the two-dimensional code information and a three-dimensional model of the object;
the step of identifying the marker of the first image to be marked to obtain target marker information comprises the following steps:
and carrying out two-dimensional code recognition on the first image to be marked by using a two-dimensional code recognition technology to obtain target two-dimensional code information in the first image to be marked.
In one possible embodiment, the method further comprises:
acquiring a plurality of sample images which are acquired by image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images;
determining the position of the marker in each sample image respectively;
acquiring pose information when the image acquisition equipment acquires each sample image;
for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system according to pose information when the image acquisition device acquires the sample image and the position of the marker in the sample image;
establishing a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system;
and acquiring the marker information of the marker, and establishing a corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked.
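The geometric core of the steps above, recovering a marker's world-coordinate position from the camera pose and the marker's position in a sample image, can be sketched as follows. This is a minimal pinhole-camera sketch under stated assumptions: the intrinsics K and the camera-to-world pose (R_wc, t_wc) are known, and the depth along the optical axis is given directly, whereas the SLAM-based pipeline described here would recover depth by triangulating the marker across several sample images. The function name and conventions are illustrative, not the application's notation.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R_wc, t_wc):
    """Back-project a marker's pixel position (u, v) to world coordinates.

    K is the 3x3 pinhole intrinsic matrix; (R_wc, t_wc) is the
    camera-to-world pose such as a SLAM front end would report;
    `depth` is the distance along the camera's optical axis.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    p_cam = depth * ray_cam                             # 3-D point in the camera frame
    return R_wc @ p_cam + t_wc                          # transform to the world frame

# Example: identity pose, pixel at the principal point -> point on the optical axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
p = pixel_to_world(320, 240, 2.0, K, np.eye(3), np.zeros(3))
```

Collecting such world-frame marker positions across all sample images yields the point set from which the three-dimensional model of the object to be marked is built.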
In one possible implementation manner, the acquiring pose information when the image acquisition device acquires each sample image includes:
and determining pose information when the image acquisition equipment acquires each sample image by utilizing a synchronous positioning and mapping SLAM algorithm according to each sample image.
In one possible implementation manner, the determining, for each sample image, the position of the marker corresponding to the sample image in the world coordinate system according to pose information when the image acquisition device acquires the sample image and the position of the marker in the sample image includes:
and for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system by utilizing a SLAM algorithm according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image.
In one possible embodiment, the method further comprises:
setting the marker at a key point of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
adjusting the pose of the image acquisition equipment and/or the position of the marker at the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
the above steps are repeatedly performed: and adjusting the position of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment until the acquisition termination condition is met.
In one possible embodiment, the method further comprises:
determining the positions of the key points of the object to be marked in the image coordinate system of the image acquisition equipment according to the obtained positions of the markers in the world coordinate system and the current pose information of the image acquisition equipment to obtain the image positions of the key points;
fitting to obtain a rectangular frame based on the obtained key point image position;
and displaying the key point image position and the rectangular frame in a display screen corresponding to the image acquisition equipment.
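The display step above, projecting the key points' world positions into the current view and fitting an enclosing rectangle, can be sketched as follows. This is a minimal pinhole-projection sketch under assumptions: the pose is given in world-to-camera convention, all points lie in front of the camera, and the function name is illustrative.

```python
import numpy as np

def project_and_fit_box(keypoints_w, K, R_cw, t_cw):
    """Project key-point world positions into the current camera view and
    fit the axis-aligned rectangle enclosing the projected pixels.

    Returns (pixel positions, (x_min, y_min, x_max, y_max)).
    """
    pts = np.asarray(keypoints_w, dtype=float)
    cam = (R_cw @ pts.T).T + t_cw           # world frame -> camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide -> pixel coords
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    return uv, (x0, y0, x1, y1)

# Example: two key points straddling the optical axis at depth 10.
K = np.array([[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 1.0]])
kps = [(-1.0, -1.0, 10.0), (1.0, 1.0, 10.0)]
uv, box = project_and_fit_box(kps, K, np.eye(3), np.zeros(3))
```

The returned pixel positions and rectangle are what would be drawn on the display screen of the image acquisition equipment.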
In one possible embodiment, the method further comprises:
acquiring a second image to be marked, which contains the object to be marked and does not contain a marker;
performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked;
and labeling the object to be labeled in the second image to be labeled according to the position information of the object to be labeled in the second image to be labeled.
In a second aspect, an embodiment of the present application provides an automatic object labeling apparatus, where the apparatus includes:
the image to be marked acquisition module is used for acquiring a first image to be marked containing a marker and an object to be marked;
the marker information identification module is used for carrying out marker identification on the first image to be marked to obtain target marker information;
the three-dimensional model determining module is used for determining a target three-dimensional model corresponding to the target marker information according to the corresponding relation between the preset marker information and the three-dimensional model of the object;
the position information determining module is used for carrying out contour matching on the first image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the first image to be marked;
and the object to be annotated annotating module is used for annotating the object to be annotated in the first image to be annotated according to the position information of the object to be annotated in the first image to be annotated.
In one possible implementation manner, the marker is a two-dimensional code, the target marker information is target two-dimensional code information, and the corresponding relationship is a corresponding relationship between the two-dimensional code information and a three-dimensional model of the object;
the marker information identification module is specifically configured to: and carrying out two-dimensional code recognition on the first image to be marked by using a two-dimensional code recognition technology to obtain target two-dimensional code information in the first image to be marked.
In one possible embodiment, the apparatus further comprises:
the sample image acquisition module is used for acquiring a plurality of sample images which are acquired by the image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images;
the marker position determining module is used for determining the positions of the markers in each sample image respectively;
the pose information acquisition module is used for acquiring pose information when the image acquisition equipment acquires each sample image;
the world coordinate determining module is used for determining the position of the marker corresponding to each sample image in a world coordinate system according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image;
the three-dimensional model building module is used for building a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system;
the corresponding relation establishing module is used for acquiring the marker information of the marker and establishing the corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked.
In a possible implementation manner, the pose information obtaining module is specifically configured to: and determining pose information when the image acquisition equipment acquires each sample image by utilizing a synchronous positioning and mapping SLAM algorithm according to each sample image.
In one possible implementation manner, the world coordinate determining module is specifically configured to: and for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system by utilizing a SLAM algorithm according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image.
In one possible embodiment, the apparatus further comprises:
the marker setting module is used for setting the marker at the key point of the object to be marked and collecting a sample image containing the marker and the object to be marked by using the image collecting equipment;
the sample image acquisition module is used for adjusting the pose of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
and the acquisition completion judging module is used for calling the sample image acquisition module to repeatedly acquire the sample image until the acquisition termination condition is met.
In one possible embodiment, the apparatus further comprises:
the rectangular frame display module is used for: determining, according to the obtained positions of the markers in the world coordinate system and the current pose information of the image acquisition equipment, the positions of the key points of the object to be marked in the image coordinate system of the image acquisition equipment, to obtain key point image positions; fitting a rectangular frame based on the obtained key point image positions; and displaying the key point image positions and the rectangular frame in a display screen corresponding to the image acquisition equipment.
In one possible implementation manner, the image acquisition module to be annotated is further configured to: acquiring a second image to be marked, which contains the object to be marked and does not contain a marker;
the position information determining module is further used for performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked;
the object to be marked marking module is further configured to mark the object to be marked in the second image to be marked according to the position information of the object to be marked in the second image to be marked.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is used for realizing any one of the automatic object labeling methods in the application when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where a computer program is stored in the computer readable storage medium, where the computer program when executed by a processor implements a method for automatically labeling an object according to any one of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions, which when executed on a computer, cause the computer to perform the method for automatic labeling of objects described in any of the present applications.
The beneficial effects of the embodiment of the application are that:
According to the automatic object labeling method and apparatus, the electronic device, and the storage medium provided by the embodiments of the application, a first image to be labeled containing a marker and an object to be labeled is acquired; marker recognition is performed on the first image to obtain target marker information; a target three-dimensional model corresponding to the target marker information is determined according to the preset correspondence between marker information and three-dimensional object models; contour matching is performed on the first image based on the target three-dimensional model to determine the position information of the object to be labeled in the first image; and the object to be labeled is labeled in the first image according to that position information. Automatic object labeling is thus realized, which can reduce the cost and increase the efficiency of image labeling. In addition, because the three-dimensional model corresponding to the object to be labeled is retrieved via the marker, the model can be obtained automatically without manual setup, reducing manual workload and further increasing labeling efficiency. Of course, not all of the above advantages need be achieved simultaneously by any product or method practicing the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a first schematic diagram of an automatic object labeling method according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of an automatic object labeling method according to an embodiment of the present application;
FIG. 3 is a third schematic diagram of an automatic object labeling method according to an embodiment of the present application;
FIG. 4 is a fourth schematic diagram of an automatic object labeling method according to an embodiment of the present application;
FIG. 5 is a first schematic diagram of an object to be labeled according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of an object to be labeled according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a three-dimensional sparse point cloud model according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an automatic object labeling apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on these embodiments, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of this disclosure.
The embodiment of the application provides an automatic object labeling method, referring to fig. 1, the method comprises the following steps:
s101, a first image to be marked, which contains a marker and an object to be marked, is obtained.
The automatic object labeling method of the embodiments of the application may be implemented by an electronic device with an image processing function. In one example, the electronic device may be a handheld electronic device, for example a smart camera, a hard-disk video recorder, or a smartphone; in one example, the electronic device may also be a personal computer or a server.
The first image to be labeled contains a marker and an object to be labeled. The marker needs to have distinctive appearance features so that it can be accurately recognized in the image by computer vision techniques; its specific type can be set according to the actual situation, and it may be a two-dimensional code, a black-and-white checkerboard pattern, or another specific image. The object to be labeled is any object that needs to be labeled, for example a vehicle, a building, an industrial part, or an animal or plant. The object to be labeled here may be the same object as the object to be labeled in the embodiment shown in fig. 3, or a different object of the same type, for example two automobiles of the same model.
S102, carrying out marker identification on the first image to be marked to obtain target marker information.
Marker recognition is performed on the first image to be labeled using computer vision techniques to obtain the marker information of the marker contained in the first image, referred to as the target marker information. In one example, the marker information may be an identifier set in advance for each marker. In one example, markers with the same visual characteristics share the same marker information, and markers with different visual characteristics have different marker information.
In a possible implementation, the marker is a two-dimensional code and the target marker information is target two-dimensional code information. Performing marker recognition on the first image to be labeled to obtain the target marker information then comprises: performing two-dimensional code recognition on the first image to be labeled using a two-dimensional code recognition technique to obtain the target two-dimensional code information in the image. In one example, the two-dimensional code information may be character information that uniquely corresponds to the three-dimensional model of a class of objects; this correspondence is the preset correspondence between marker information and three-dimensional object models. The two-dimensional code information may also be address information or index information that uniquely points to the three-dimensional model of an object; this pointing relationship is likewise the preset correspondence between marker information and three-dimensional object models.
S103, determining a target three-dimensional model corresponding to the target marker information according to the corresponding relation between the preset marker information and the three-dimensional model of the object.
In one example, the correspondence relationship is a correspondence relationship between two-dimensional code information and a three-dimensional model of an object, for example, two-dimensional code information a corresponds to a three-dimensional model of a vehicle, two-dimensional code information B corresponds to a three-dimensional model of a signal lamp, and the like. And inquiring the corresponding relation according to the target marker information, thereby obtaining a three-dimensional model corresponding to the target marker information, which is called a target three-dimensional model.
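The correspondence lookup in S103 can be sketched as a simple registry keyed by decoded marker payloads. The payload strings and model paths below are illustrative assumptions; in practice the decoded string might come from a detector such as OpenCV's QRCodeDetector, and the registry could equally map to index or address information as described above.

```python
# Hypothetical registry mapping decoded two-dimensional-code payloads to
# pre-built three-dimensional object models (paths are illustrative).
MODEL_REGISTRY = {
    "vehicle-sedan-A": "models/sedan_a.ply",
    "signal-lamp-B": "models/signal_lamp_b.ply",
}

def lookup_model(marker_info):
    """Resolve target marker information to its target three-dimensional
    model, or None if no correspondence has been registered."""
    return MODEL_REGISTRY.get(marker_info)
```

A dictionary keeps the lookup O(1) and makes registering a new object class a one-line change.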
And S104, performing contour matching on the first image to be annotated based on the target three-dimensional model, and determining the position information of the object to be annotated in the first image to be annotated.
Contour matching is performed on the first image to be labeled using the target three-dimensional model, thereby determining the position information of the object to be labeled in the first image. In one example, two-dimensional models of the object under multiple viewing angles can be obtained from the target three-dimensional model, and each two-dimensional model is compared against image regions in the first image to obtain the position of the object to be labeled. In one example, taking brute-force matching as an example: the target three-dimensional model is rotated by one preset unit step at a time and projected, at its current angle, onto a two-dimensional plane to obtain a two-dimensional model representing the object's two-dimensional contour; this two-dimensional model is then contour-matched against image regions in the first image to be labeled. If the matching succeeds, the position information of the object to be labeled in the first image is obtained; if it fails, the model is rotated by another unit step and contour matching is performed again, until matching succeeds or the model has been matched at every angle. In one example, the position information of the object to be labeled may be the pixel area corresponding to the object.
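The brute-force matching loop described above can be sketched as follows. This is a toy version under assumptions: it steps only over yaw, uses an orthographic projection, and scores candidate angles by mean point distance rather than a real contour-matching metric; point sets, step size, and function names are illustrative.

```python
import numpy as np

def yaw_project(points, theta):
    """Rotate the model by yaw angle `theta` and drop the depth axis:
    a stand-in for projecting the 3-D model onto a 2-D plane."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return (R @ np.asarray(points, dtype=float).T).T[:, :2]

def brute_force_match(model_pts, observed_2d, step_deg=10):
    """Step the model through unit-angle increments, project at each step,
    and keep the angle whose projection best matches the observed outline."""
    best_score, best_deg = np.inf, None
    for deg in range(0, 360, step_deg):
        proj = yaw_project(model_pts, np.radians(deg))
        score = np.abs(proj - observed_2d).mean()   # crude matching score
        if score < best_score:
            best_score, best_deg = score, deg
    return best_deg

# Synthetic check: the "observed" outline is the model seen at 30 degrees.
model = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
observed = yaw_project(model, np.radians(30))
angle = brute_force_match(model, observed)
```

A real implementation would use perspective projection, search over all three rotation axes, and score candidates with an actual contour- or edge-matching measure.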
S105, labeling the object to be labeled in the first image to be labeled according to the position information of the object to be labeled in the first image to be labeled.
After the position information of the object to be marked in the first image to be marked is obtained, marking of the object to be marked in the first image to be marked can be completed according to the position information. In one example, the object to be annotated in the first image to be annotated may be annotated by a rectangular annotation box.
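The labeling step in S105 can be sketched as follows: the pixel area found for the object is turned into a rectangular annotation box and a simple label record. The record layout and class name are illustrative assumptions, not a format the application specifies.

```python
def bounding_box(pixel_region):
    """Fit the rectangular annotation box (x_min, y_min, x_max, y_max)
    enclosing the pixel area determined for the object to be labeled."""
    xs = [x for x, _ in pixel_region]
    ys = [y for _, y in pixel_region]
    return min(xs), min(ys), max(xs), max(ys)

# Hypothetical pixel area for a detected object, and the resulting label.
label = {"class": "vehicle", "bbox": bounding_box([(12, 40), (88, 40), (50, 90)])}
```

Records of this shape can be written out directly as training annotations for a deep learning model.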
In the embodiment of the application, the automatic labeling of the object is realized, the labeling cost of the image can be reduced, and the labeling efficiency of the image can be increased; in addition, the three-dimensional model corresponding to the object to be marked is obtained by using the marker, so that the three-dimensional model of the object to be marked can be automatically obtained, manual setting is not needed, the manual workload is reduced, and the marking efficiency of the image is increased.
In one possible embodiment, referring to fig. 2, the method further comprises:
s201, a second image to be marked which contains the object to be marked and does not contain a marker is obtained.
After the target three-dimensional model corresponding to the object to be marked is determined by using the marker, the image acquisition equipment can be used for acquiring a second image to be marked, which contains the object to be marked and does not contain the marker.
S202, performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked.
And performing contour matching on the second image to be marked by using the target three-dimensional model, so that the position information of the object to be marked is determined in the second image to be marked.
S203, labeling the object to be labeled in the second image to be labeled according to the position information of the object to be labeled in the second image to be labeled.
For example, in the process of labeling a vehicle, the marker corresponding to the vehicle is first used to retrieve the vehicle's target three-dimensional model. In subsequent labeling of the vehicle, the already-retrieved target three-dimensional model is used continuously, so the marker is no longer needed to retrieve it; in this case the target three-dimensional model can be used directly to perform contour matching on second images to be labeled that contain no marker, thereby labeling the object.
In this embodiment of the application, contour matching is performed with the target three-dimensional model on the second image to be labeled, which does not contain the marker, so that objects can be labeled in images without markers. Labeled images containing no markers can thus be obtained, which reduces the influence of markers on training results during subsequent model training.
The three-dimensional model of the object can be established by modeling modes such as manual modeling, three-dimensional laser scanning modeling, two-dimensional image deepening information modeling or SLAM (Simultaneous Localization And Mapping) algorithm modeling. In one possible embodiment, referring to fig. 3, the method further comprises:
s301, acquiring a plurality of sample images which are acquired by image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images.
The image acquisition device may be a monocular camera, a binocular camera, or a smartphone with a camera. Each sample image contains at least one marker. The positions of the markers on the object to be marked may be the same or different across sample images, but taken together, the markers in all sample images must be able to represent the positions of the plurality of key points on the object to be marked. In one example, to prevent repeated acquisition, the positions of the markers and of the object to be marked are not all identical across the sample images. In one example, the markers are placed at different positions on the object to be marked in different sample images, and/or the object to be marked is acquired from different angles in different sample images. The key points of the object to be marked can be set according to the actual situation; they are used to represent the contour of the object to be marked and may be points on that contour. In one example, distinctive corner positions on the object to be marked may be selected as key points. It can be understood that placing the markers involves a certain error in a real scene: a marker may be set exactly on a key point or a small distance away from it, as long as the contour of the object to be marked can still be represented.
S302, determining the positions of the markers in the sample images respectively.
The position of the marker in each sample image can be determined using computer vision techniques. In one example, the markers are two-dimensional codes, and two-dimensional code recognition can be performed on each sample image using a two-dimensional code recognition technique to obtain the position of the two-dimensional code in each sample image.
S303, acquiring pose information when the image acquisition equipment acquires each sample image.
The pose information of the image acquisition device may include position information (e.g., its position in the world coordinate system) and attitude information (e.g., the shooting angle when the sample image was acquired). In one example, one or more of a gyroscope, a geomagnetic sensor, and an acceleration sensor are installed in the image acquisition device to acquire its pose information.
In one possible implementation manner, the acquiring pose information when the image acquisition device acquires each sample image includes: and determining pose information when the image acquisition equipment acquires each sample image by utilizing a SLAM algorithm according to each sample image.
The SLAM algorithm is also known as the CML (Concurrent Mapping and Localization) algorithm. SLAM refers to placing a robot at an unknown position in an unknown environment and having it gradually build a map of the environment while moving. Specifically, the SLAM algorithm can use two-dimensional images acquired by the image acquisition device to model the unknown environment, obtain the position and attitude of the image acquisition device within that environment, and obtain the position of each object in that environment. Using the SLAM algorithm requires the object to be marked to be static, i.e., it must not move or deform in the world coordinate system. For the specific calculation process of the SLAM algorithm, reference may be made to SLAM implementations in the related art, which this application does not specifically limit.
S304, for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system according to pose information and the position of the marker in the sample image when the image acquisition device acquires the sample image.
The world coordinate system in the embodiment of the application refers to the real-world coordinate system in which the object to be marked is located; a longitude-latitude-altitude coordinate system may be adopted, or a three-dimensional coordinate system custom-built for the scene in which the object to be marked is located.
In one example, the extrinsic parameters of the image acquisition device can be obtained, and the position of the marker in the three-dimensional coordinate system of the image acquisition device can be derived from the position of the marker in the sample image, the attitude information when the image acquisition device acquired the sample image, and the extrinsic parameters of the image acquisition device. The position of the marker in the world coordinate system is then obtained from the position information of the image acquisition device in the world coordinate system and the position of the marker in the device's three-dimensional coordinate system.
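The two-step conversion described above can be sketched under a standard pinhole-camera assumption (which the application does not spell out): a pixel with known depth is back-projected into the camera frame using the intrinsic matrix, and the camera pose then maps it into the world frame. The function names, and the use of an intrinsic matrix and depth, are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def pixel_to_camera(u, v, depth, K):
    """Back-project pixel (u, v) at a known depth into the camera frame,
    assuming a pinhole model with intrinsic matrix K (3x3)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return depth * ray

def camera_to_world(p_cam, R_wc, t_wc):
    """Transform a camera-frame point into the world frame, where R_wc is
    the camera's orientation and t_wc its position in world coordinates."""
    return R_wc @ p_cam + t_wc
```

Chaining the two functions gives the marker's world-coordinate position from its position in one sample image plus the device pose for that image.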
In one example, SLAM algorithms may be utilized to derive the location of markers in the world coordinate system. In one possible implementation manner, the determining, for each sample image, the position of the marker corresponding to the sample image in the world coordinate system according to pose information when the image acquisition device acquires the sample image and the position of the marker in the sample image includes: and determining the position of the marker corresponding to each sample image in a world coordinate system by utilizing a SLAM algorithm according to the pose information of the sample image acquired by the image acquisition equipment and the position of the marker in the sample image.
S305, establishing a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system.
The markers are arranged at key points of the object to be marked, and the markers in the plurality of sample images may be arranged at different key points. The positions of the markers in the world coordinate system are therefore the positions of the key points of the object to be marked in the world coordinate system. Accordingly, a three-dimensional model of the object to be marked can be built in the world coordinate system from the positions of its key points. In one example, the three-dimensional model here is a three-dimensional sparse point cloud model, e.g., a model composed of the key points represented by the markers.
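One minimal way to realize such a sparse point cloud model, purely as an illustration, is to collect the world-coordinate position estimated for each key point across the sample images and average repeated observations of the same key point. The function name and input format below are assumptions, not the application's method.

```python
import numpy as np

def build_sparse_model(observations):
    """Aggregate marker observations into a sparse key point model.

    observations: list of (keypoint_id, xyz) pairs, where xyz is the
    marker's estimated world-coordinate position in one sample image.
    Repeated observations of the same key point are averaged.
    """
    grouped = {}
    for kp_id, xyz in observations:
        grouped.setdefault(kp_id, []).append(np.asarray(xyz, dtype=float))
    # The resulting dict of key point id -> 3D position is the sparse
    # point cloud model of the object to be marked.
    return {kp_id: np.mean(pts, axis=0) for kp_id, pts in grouped.items()}
```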
S306, obtaining the marker information of the marker, and establishing the corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked.
By establishing the correspondence between the marker information of the marker and the three-dimensional model of the object to be marked, the three-dimensional model can subsequently be retrieved directly according to the marker, so that the object to be marked can be labeled quickly and automatically.
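The correspondence itself can be as simple as a lookup table keyed by marker information. The class name and the example marker string below are hypothetical illustrations of this idea.

```python
class ModelRegistry:
    """Minimal marker-info -> three-dimensional-model correspondence table,
    an illustrative stand-in for the correspondence described above."""

    def __init__(self):
        self._models = {}

    def register(self, marker_info, model):
        # Called once the model of the object to be marked is built.
        self._models[marker_info] = model

    def lookup(self, marker_info):
        # Called during labeling: the decoded marker information directly
        # retrieves the three-dimensional model of the object to be marked.
        return self._models[marker_info]
```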
In the embodiment of the application, by combining two-dimensional codes with the SLAM algorithm, the positions of the key points of the object to be marked in the three-dimensional world coordinate system can be obtained from two-dimensional images; fusing SLAM with two-dimensional code recognition solves the problem of difficult interaction between two-dimensional images and three-dimensional scenes. Compared with automatic labeling based on image tracking, the method of combining two-dimensional codes with the SLAM algorithm greatly reduces engineering and labor costs; the key points of the labeled object are obtained from the two-dimensional codes, an accurate contour description is used, and the labeling results are highly accurate.
The process of acquiring a sample image is described below, and in one possible embodiment, referring to fig. 4, the method further includes:
S401, setting the marker at the key point of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment.
In one example, referring to fig. 5, taking a natural gas pipeline interface as the object to be marked, a preset two-dimensional code is set at a key point of the object to be marked as the marker, for example as shown in fig. 6.
S402, adjusting the pose of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment.
By adjusting the angle and position from which the image acquisition equipment shoots the object to be marked, sample images of the marked object under different poses are obtained; by placing the markers on different key points of the object to be marked, the positions of those different key points are obtained.
S403, repeatedly executing step S402 (adjusting the pose of the image acquisition equipment and/or the position of the marker on the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment) until the acquisition termination condition is met.
Step S402 is repeatedly performed until the acquisition termination condition is satisfied. The acquisition termination condition can be set according to the actual situation. For example, it may be that a preset number of sample images has been acquired, where the preset number can likewise be set according to the actual situation, provided that this number of sample images is sufficient to establish the three-dimensional model of the object to be marked. As another example, the acquisition termination condition may be a user-triggered instruction to stop acquisition. In one example, a three-dimensional sparse point cloud model of eight key points of a natural gas pipeline interface may be as shown in fig. 7.
In the embodiment of the application, the three-dimensional model is obtained by sampling two-dimensional images, so it can be generated automatically. The approach adapts well to different scenes: as long as the camera can image the scene, it works under bright or dim lighting, and it has great advantages both indoors and outdoors.
In one possible implementation, as shown in fig. 4, the automatic object labeling method can label the object in the images to be labeled (including the first image to be labeled and the second image to be labeled) in real time while they are being sampled: once an image to be labeled is acquired, the object in it can be labeled automatically. In the automatic labeling process, labeling can be performed on site using the two-dimensional code; apart from preparing the two-dimensional code in advance, almost no preparation work is needed, so labeling is complete as soon as sampling finishes, without later re-labeling.
In order to facilitate the user to perceive the effect of creating the three-dimensional model, in one possible implementation manner, the method further includes:
Step one, according to the obtained position of the marker in the world coordinate system and the current pose information of the image acquisition device, determining the position of the key point of the object to be marked in the image coordinate system of the image acquisition device, to obtain the key point image position.
The marker represents a key point of the object to be marked, so the position of the marker in the world coordinate system is the position of that key point in the real-world coordinate system. From the real-time pose information of the image acquisition device, the transformation between the image coordinate system of the image acquisition device and the world coordinate system can be obtained, and thus the position of the key point in the image coordinate system, i.e., the key point image position.
And step two, fitting to obtain a rectangular frame based on the obtained key point image position.
In one example, when there is only one key point image position, no rectangular frame is fitted. When there are at least two key point image positions, a rectangle can be fitted to the key point image positions to obtain a rectangular frame. For fitting a rectangle to a plurality of points, reference may be made to rectangle fitting methods in the related art. In one example, the key point image positions can be used as corner points to fit the largest rectangular frame, so that every key point image position falls inside or on the rectangular frame.
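Steps one and two can be sketched together, again under an assumed pinhole model: the key points of the sparse model are projected into the current image, and an axis-aligned enclosing rectangle is taken as one simple fitting choice. All names here are illustrative, not the application's implementation.

```python
import numpy as np

def project_points(points_w, K, R_cw, t_cw):
    """Project world-frame key points into the image with a pinhole model.
    R_cw and t_cw map world coordinates into the camera frame."""
    pixels = []
    for p in points_w:
        p_cam = R_cw @ np.asarray(p, dtype=float) + t_cw
        uvw = K @ p_cam
        pixels.append((uvw[0] / uvw[2], uvw[1] / uvw[2]))
    return pixels

def fit_box(pixels):
    """Fit the axis-aligned rectangle such that every key point image
    position falls inside or on the frame (one simple fitting choice)."""
    us, vs = zip(*pixels)
    return (min(us), min(vs), max(us), max(vs))
```

The returned rectangle and the projected key point positions would then be drawn on the display screen, as described in step three below.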
And thirdly, displaying the key point image position and the rectangular frame in a display screen corresponding to the image acquisition equipment.
The display screen corresponding to the image acquisition equipment may be a display screen built into the image acquisition equipment or an external display screen connected to it. Displaying the key point image positions and the rectangular frame on this display screen lets the user intuitively perceive the effect of building the three-dimensional model and the rectangular-frame labeling result, making it convenient for the user to adjust the positions of the markers in real time and obtain a three-dimensional model with a better labeling effect.
The embodiment of the application also provides an automatic object labeling device, referring to fig. 8, the device comprises:
the image to be annotated acquisition module 801 is configured to acquire a first image to be annotated including a marker and an object to be annotated;
the marker information identifying module 802 is configured to identify a marker of the first image to be marked, so as to obtain target marker information;
the three-dimensional model determining module 803 is configured to determine a target three-dimensional model corresponding to the target marker information according to a preset correspondence between the marker information and the three-dimensional model of the object;
the location information determining module 804 is configured to perform contour matching on the first image to be annotated based on the target three-dimensional model, and determine location information of the object to be annotated in the first image to be annotated;
The to-be-annotated object annotation module 805 is configured to annotate the to-be-annotated object in the first to-be-annotated image according to the position information of the to-be-annotated object in the first to-be-annotated image.
In one possible implementation manner, the marker is a two-dimensional code, the target marker information is target two-dimensional code information, and the corresponding relationship is a corresponding relationship between the two-dimensional code information and a three-dimensional model of the object;
the marker information identification module is specifically configured to: and carrying out two-dimensional code recognition on the first image to be marked by using a two-dimensional code recognition technology to obtain target two-dimensional code information in the first image to be marked.
In one possible embodiment, the apparatus further comprises:
the sample image acquisition module is used for acquiring a plurality of sample images which are acquired by the image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images;
the marker position determining module is used for determining the positions of the markers in each sample image respectively;
the pose information acquisition module is used for acquiring pose information when the image acquisition equipment acquires each sample image;
The world coordinate determining module is used for determining the position of the marker corresponding to each sample image in a world coordinate system according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image;
the three-dimensional model building module is used for building a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system;
the corresponding relation establishing module is used for acquiring the marker information of the marker and establishing the corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked.
In a possible implementation manner, the pose information obtaining module is specifically configured to: and determining pose information when the image acquisition equipment acquires each sample image by utilizing a synchronous positioning and mapping SLAM algorithm according to each sample image.
In one possible implementation manner, the world coordinate determining module is specifically configured to: and for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system by utilizing a SLAM algorithm according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image.
In one possible embodiment, the apparatus further comprises:
the marker setting module is used for setting the marker at the key point of the object to be marked and collecting a sample image containing the marker and the object to be marked by using the image collecting equipment;
the sample image acquisition module is used for adjusting the pose of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
and the acquisition completion judging module is used for calling the sample image acquisition module to repeatedly acquire the sample image until the acquisition termination condition is met.
In one possible embodiment, the apparatus further comprises:
the rectangular frame display module is used for determining the position of the key point of the object to be marked in the image coordinate system of the image acquisition device to obtain the image position of the key point according to the obtained position of the marker in the world coordinate system and the current pose information of the image acquisition device; fitting to obtain a rectangular frame based on the obtained key point image position; and displaying the key point image position and the rectangular frame in a display screen corresponding to the image acquisition equipment.
In one possible implementation manner, the image acquisition module to be annotated is further configured to: acquiring a second image to be marked, which contains the object to be marked and does not contain a marker;
the position information determining module is further used for performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked;
the object to be marked marking module is further configured to mark the object to be marked in the second image to be marked according to the position information of the object to be marked in the second image to be marked.
The embodiment of the application also provides electronic equipment, which comprises: a processor and a memory;
the memory is used for storing a computer program;
the processor is used for implementing any one of the automatic object labeling methods in the application when executing the computer program stored in the memory.
Optionally, referring to fig. 9, the electronic device of the embodiment of the present application further includes a communication interface 902 and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 complete communication with each other through the communication bus 904.
The communication bus mentioned for the above electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include RAM (Random Access Memory) or NVM (Non-Volatile Memory), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the automatic object labeling method is realized.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform the method for automatic labeling of objects as described in any of the above embodiments.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, the computer instructions produce, in whole or in part, the flows or functions according to the embodiments of the present application. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It should be noted that, in this document, the technical features of the alternatives may be combined to form solutions so long as they are not contradictory, and all such solutions are within the scope of the disclosure of the present application. Relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for embodiments of the apparatus, electronic device and storage medium, the description is relatively simple as it is substantially similar to the method embodiments, where relevant see the section description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (13)

1. An automatic labeling method for objects, characterized in that the method comprises the following steps:
acquiring a first image to be marked, which contains a marker and an object to be marked;
carrying out marker identification on the first image to be marked to obtain target marker information;
determining a target three-dimensional model corresponding to the target marker information according to the corresponding relation between the preset marker information and the three-dimensional model of the object;
performing contour matching on the first image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the first image to be marked;
and labeling the object to be labeled in the first image to be labeled according to the position information of the object to be labeled in the first image to be labeled.
2. The method of claim 1, wherein the marker is a two-dimensional code, the target marker information is target two-dimensional code information, and the correspondence is between the two-dimensional code information and a three-dimensional model of the object;
The step of identifying the marker of the first image to be marked to obtain target marker information comprises the following steps:
and carrying out two-dimensional code recognition on the first image to be marked by using a two-dimensional code recognition technology to obtain target two-dimensional code information in the first image to be marked.
3. The method according to claim 1, wherein the method further comprises:
acquiring a plurality of sample images which are acquired by image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images;
determining the position of the marker in each sample image respectively;
acquiring pose information when the image acquisition equipment acquires each sample image;
for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system according to pose information when the image acquisition device acquires the sample image and the position of the marker in the sample image;
establishing a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system;
And acquiring the marker information of the marker, and establishing a corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked.
4. A method according to claim 3, wherein said acquiring pose information when said image acquisition device acquires each of said sample images comprises:
and determining pose information when the image acquisition equipment acquires each sample image by utilizing a synchronous positioning and mapping SLAM algorithm according to each sample image.
5. The method according to claim 4, wherein determining, for each sample image, the position of the marker corresponding to the sample image in the world coordinate system according to pose information when the image acquisition device acquires the sample image and the position of the marker in the sample image comprises:
and for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system by utilizing a SLAM algorithm according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image.
6. A method according to claim 3, characterized in that the method further comprises:
Setting the marker at a key point of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
adjusting the pose of the image acquisition equipment and/or the position of the marker at the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
the above steps are repeatedly performed: and adjusting the position of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment until the acquisition termination condition is met.
7. The method according to any one of claims 3-6, further comprising:
determining the positions of the key points of the object to be marked in the image coordinate system of the image acquisition equipment according to the obtained positions of the markers in the world coordinate system and the current pose information of the image acquisition equipment to obtain the image positions of the key points;
fitting to obtain a rectangular frame based on the obtained key point image position;
And displaying the key point image position and the rectangular frame in a display screen corresponding to the image acquisition equipment.
8. The method according to claim 1, wherein the method further comprises:
acquiring a second image to be marked, which contains the object to be marked and does not contain a marker;
performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked;
and labeling the object to be labeled in the second image to be labeled according to the position information of the object to be labeled in the second image to be labeled.
9. An automatic object labeling apparatus, comprising:
the image to be marked acquisition module is used for acquiring a first image to be marked containing a marker and an object to be marked;
the marker information identification module is used for carrying out marker identification on the first image to be marked to obtain target marker information;
the three-dimensional model determining module is used for determining a target three-dimensional model corresponding to the target marker information according to the corresponding relation between the preset marker information and the three-dimensional model of the object;
the position information determining module is used for performing contour matching on the first image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the first image to be marked; and
the object annotating module is used for annotating the object to be annotated in the first image to be annotated according to the position information of the object to be annotated in the first image to be annotated.
10. The apparatus of claim 9, wherein the marker is a two-dimensional code, the target marker information is target two-dimensional code information, and the correspondence is between two-dimensional code information and a three-dimensional model of the object;
the marker information identification module is specifically configured to perform two-dimensional code recognition on the first image to be marked by using a two-dimensional code recognition technology, to obtain the target two-dimensional code information in the first image to be marked;
the apparatus further comprises:
the sample image acquisition module is used for acquiring a plurality of sample images which are acquired by the image acquisition equipment and contain the markers and the objects to be marked, wherein the markers are arranged at a plurality of key points of the objects to be marked in the plurality of sample images;
the marker position determining module is used for respectively determining the position of the marker in each sample image;
the pose information acquisition module is used for acquiring pose information when the image acquisition equipment acquires each sample image;
the world coordinate determining module is used for determining the position of the marker corresponding to each sample image in a world coordinate system according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image;
the three-dimensional model building module is used for building a three-dimensional model of the object to be marked according to the positions of the markers corresponding to the sample images in the world coordinate system;
the corresponding relation establishing module is used for acquiring the marker information of the marker and establishing the corresponding relation between the marker information of the marker and the three-dimensional model of the object to be marked;
the pose information acquisition module is specifically configured to determine, from each sample image, pose information of the image acquisition equipment at the time of acquiring that sample image by using a simultaneous localization and mapping (SLAM) algorithm;
the world coordinate determining module is specifically configured to: for each sample image, determining the position of the marker corresponding to the sample image in a world coordinate system by utilizing a SLAM algorithm according to pose information when the image acquisition equipment acquires the sample image and the position of the marker in the sample image;
the apparatus further comprises:
the marker setting module is used for setting the marker at the key point of the object to be marked and collecting a sample image containing the marker and the object to be marked by using the image collecting equipment;
the sample image acquisition module is used for adjusting the pose of the image acquisition equipment and/or the position of the marker at the position of the object to be marked, and acquiring a sample image containing the marker and the object to be marked by using the image acquisition equipment;
the acquisition completion judging module is used for calling the sample image acquisition module to repeatedly acquire sample images until acquisition termination conditions are met;
the apparatus further comprises:
the rectangular frame display module is used for: determining, according to the obtained positions of the markers in the world coordinate system and the current pose information of the image acquisition equipment, the positions of the key points of the object to be marked in the image coordinate system of the image acquisition equipment, to obtain key point image positions; fitting a rectangular frame based on the obtained key point image positions; and displaying the key point image positions and the rectangular frame on a display screen corresponding to the image acquisition equipment;
the image to be marked acquisition module is further used for acquiring a second image to be marked, which contains the object to be marked and does not contain a marker;
the position information determining module is further used for performing contour matching on the second image to be marked based on the target three-dimensional model, and determining the position information of the object to be marked in the second image to be marked;
the object annotating module is further used for annotating the object to be annotated in the second image to be marked according to the position information of the object to be annotated in the second image to be marked.
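The world coordinate determining module recited above maps a marker's image position, together with the SLAM-estimated camera pose, to a world-frame position. A minimal back-projection sketch under a pinhole model with an assumed known depth along the viewing ray (all numeric values are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_world(uv, depth, R, t, K):
    """Recover a world-frame point from its pixel position, given depth, pose, intrinsics."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    cam = depth * (np.linalg.inv(K) @ uv1)   # back-project into the camera frame
    return R.T @ (cam - t)                   # camera frame -> world frame

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 5.])     # hypothetical SLAM-estimated pose
world = pixel_to_world((160.0, 80.0), depth=5.0, R=R, t=t, K=K)
```

Repeating this for the marker in every sample image yields the set of world-frame key points from which the three-dimensional model of the object is built.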
11. An electronic device, comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method for automatically labeling objects according to any one of claims 1 to 8 when executing the program stored in the memory.
12. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the method for automatically labeling objects according to any one of claims 1-8 is implemented.
13. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of automatic labeling of objects according to any of claims 1-8.
CN202111510608.5A 2021-12-10 2021-12-10 Automatic object labeling method and device, electronic equipment and storage medium Pending CN116266402A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111510608.5A CN116266402A (en) 2021-12-10 2021-12-10 Automatic object labeling method and device, electronic equipment and storage medium
PCT/CN2022/135979 WO2023103883A1 (en) 2021-12-10 2022-12-01 Automatic object annotation method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN116266402A true CN116266402A (en) 2023-06-20

Family

ID=86729648

Country Status (2)

Country Link
CN (1) CN116266402A (en)
WO (1) WO2023103883A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117235831B (en) * 2023-11-13 2024-02-23 北京天圣华信息技术有限责任公司 Automatic part labeling method, device, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102589443B (en) * 2012-01-31 2013-03-13 华中科技大学 System and method for intelligently detecting duct piece splicing quality based on image identification
US10489651B2 (en) * 2017-04-14 2019-11-26 Microsoft Technology Licensing, Llc Identifying a position of a marker in an environment
CN109948671B (en) * 2019-03-04 2021-11-30 腾讯医疗健康(深圳)有限公司 Image classification method, device, storage medium and endoscopic imaging equipment
CN110287934B (en) * 2019-07-02 2022-12-02 北京搜狐互联网信息服务有限公司 Object detection method and device, client and server
CN113627248A (en) * 2021-07-05 2021-11-09 深圳拓邦股份有限公司 Method, system, lawn mower and storage medium for automatically selecting recognition model

Also Published As

Publication number Publication date
WO2023103883A1 (en) 2023-06-15

Similar Documents

Publication Publication Date Title
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN110568447B (en) Visual positioning method, device and computer readable medium
CN111325796B (en) Method and apparatus for determining pose of vision equipment
CN112258567B (en) Visual positioning method and device for object grabbing point, storage medium and electronic equipment
CN110176032B (en) Three-dimensional reconstruction method and device
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN109520500A (en) One kind is based on the matched accurate positioning of terminal shooting image and streetscape library acquisition method
CN111986214B (en) Construction method of pedestrian crossing in map and electronic equipment
CN109345567B (en) Object motion track identification method, device, equipment and storage medium
WO2023103883A1 (en) Automatic object annotation method and apparatus, electronic device and storage medium
CN109712197B (en) Airport runway gridding calibration method and system
CN109034214B (en) Method and apparatus for generating a mark
US11048345B2 (en) Image processing device and image processing method
CN116858215B (en) AR navigation map generation method and device
CN111724432A (en) Object three-dimensional detection method and device
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN110827340B (en) Map updating method, device and storage medium
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
Uchiyama et al. Photogrammetric system using visible light communication
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN114140771A (en) Automatic annotation method and system for image depth data set
CN111738906B (en) Indoor road network generation method and device, storage medium and electronic equipment
CN114089836A (en) Labeling method, terminal, server and storage medium
CN111210471B (en) Positioning method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination