CN117274370A - Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium - Google Patents

Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium

Info

Publication number
CN117274370A
CN117274370A CN202311228340.5A
Authority
CN
China
Prior art keywords
line segment
point cloud
image
cloud model
line segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311228340.5A
Other languages
Chinese (zh)
Inventor
李皓
邹智康
叶晓青
谭啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202311228340.5A
Publication of CN117274370A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

The disclosure provides a three-dimensional pose determination method and apparatus, an electronic device, and a medium, and relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, virtual reality, deep learning, large models and the like; it can be applied to scenarios such as autonomous driving. The implementation scheme is as follows: acquiring a 3D point cloud model of a target scene; acquiring a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene; extracting 3D line segments in the 3D point cloud model, and determining the main directions of the 3D point cloud model; extracting 2D line segments in the 2D image, and determining the main directions of the 2D image; matching the main directions of the 3D point cloud model with the main directions of the 2D image to determine a plurality of matching modes; determining a rotation matrix and a translation vector of the photographing device for any matching mode; calculating the number of matching points between the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and determining the pose of the photographing device based on the matching points corresponding to the matching mode with the most matching points.

Description

Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, virtual reality, deep learning, large models and the like, and can be applied to scenarios such as autonomous driving. It relates in particular to a three-dimensional pose determination method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline that studies how to make a computer mimic certain human mental processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning, etc.); it involves both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning technology, big data processing technology, knowledge graph technology, and the like.
Technologies that combine computer-generated graphics with image information acquired by a camera in the real physical world have the new characteristics of virtual-real fusion and real-time interaction; they can give users a brand-new experience and improve insight into objects and physical phenomena in the real world. Such technologies therefore impose certain requirements on the positioning accuracy of the camera with respect to the acquired image information.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a three-dimensional pose determination method, apparatus, electronic device, computer-readable storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided a three-dimensional pose determining method, including: acquiring a 3D point cloud model of a target scene; acquiring a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene; extracting 3D line segments in the 3D point cloud model, and determining the main directions of the 3D point cloud model based on the extracted 3D line segments; extracting 2D line segments in the 2D image, and determining the main directions of the 2D image based on the extracted 2D line segments; matching the main directions of the 3D point cloud model with the main directions of the 2D image to determine a plurality of matching modes between the main directions of the 3D point cloud model and the main directions of the 2D image; determining a rotation matrix of the photographing device for any matching mode; calculating a translation vector based on the rotation matrix; calculating the number of matching points between the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and determining the pose of the photographing device based on the matching points corresponding to the matching mode with the most matching points.
According to another aspect of the present disclosure, there is provided a three-dimensional pose determining apparatus including: the first acquisition module is configured to acquire a 3D point cloud model of the target scene; a second acquisition module configured to acquire a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene; a first determination module configured to extract 3D line segments in the 3D point cloud model and determine a main direction of the 3D point cloud model based on the extracted 3D line segments; a second determination module configured to extract 2D line segments in the 2D image and determine a main direction of the 2D image based on the extracted 2D line segments; and a third determining module configured to match a principal direction of the 3D point cloud model with a principal direction of the 2D image to determine a plurality of matching ways between the principal direction of the 3D point cloud model and the principal direction of the 2D image; a fourth determining module configured to determine a rotation matrix of the photographing device for any matching manner; the first calculation module is configured to calculate a translation vector according to any matching mode based on a rotation matrix corresponding to the matching mode; a second calculation module configured to calculate the number of matching points in the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and a fifth determining module configured to determine a pose of the photographing device based on a matching point corresponding to a matching manner having the most matching points.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, implements the above method.
According to one or more embodiments of the present disclosure, a three-dimensional pose determining method is provided, and by matching a 2D line segment in a 2D image with a 3D line segment in a constructed 3D point cloud model, decoupling of rotation and translation of a pose of a photographing device is achieved, so that accurate 3D positioning of the photographing device can be effectively achieved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 is a schematic diagram illustrating an example system in which various methods described herein may be implemented, according to an example embodiment;
FIG. 2 illustrates a flow chart of a three-dimensional pose determination method according to an embodiment of the present disclosure;
FIG. 3 shows a flow chart of a portion of a process of a three-dimensional pose determination method according to an embodiment of the present disclosure;
FIG. 4A shows a schematic diagram of a 3D point cloud model according to an embodiment of the present disclosure;
FIG. 4B shows a schematic diagram of a 3D line segment in a main direction of a 3D point cloud model, according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a structure of a three-dimensional pose determination apparatus according to an embodiment of the present disclosure; and
FIG. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the related art, three-dimensional pose determination of the photographing device can be realized through WiFi and INS, or through visual SLAM or lidar SLAM. However, although a positioning scheme based on WiFi and INS can directly obtain the absolute 3D pose of the photographing device, its positioning accuracy is greatly influenced by the quality of the WiFi signal and is relatively low. Schemes based on visual SLAM or lidar SLAM obtain the absolute pose by estimating the relative pose between two frames, which introduces accumulated error; relying on loop closure for correction gives poor robustness; and a monocular SLAM scheme cannot obtain absolute scale information. In addition, such algorithms are complex, which makes them unfavorable for deployment on embedded devices.
In order to solve the problems, the present disclosure provides a three-dimensional pose determining method, which realizes decoupling of rotation and translation of a pose of a photographing device by matching a 2D line segment in a 2D image with a 3D line segment in a constructed 3D point cloud model, and can effectively realize accurate 3D positioning of the photographing device.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of the three-dimensional pose determination method.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the methods described herein and is not intended to be limiting.
The user may perform the three-dimensional pose determination method using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system, intended to overcome the drawbacks of difficult management and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 shows a flowchart of a three-dimensional pose determination method according to an embodiment of the present disclosure.
As shown in fig. 2, the three-dimensional pose determination method 200 includes:
step S201, acquiring a 3D point cloud model of a target scene;
step S202, acquiring a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene;
step S203, extracting a 3D line segment in the 3D point cloud model, and determining a main direction of the 3D point cloud model based on the extracted 3D line segment;
step S204, 2D line segments in the 2D image are extracted, and the main direction of the 2D image is determined based on the extracted 2D line segments;
step S205, matching the main direction of the 3D point cloud model with the main direction of the 2D image to determine a plurality of matching modes between the main direction of the 3D point cloud model and the main direction of the 2D image;
step S206, for any matching mode,
step S206-1, determining a rotation matrix of the shooting device;
step S206-2, calculating a translation vector based on the rotation matrix;
step S206-3, calculating the number of matching points in the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector;
step S207, determining the pose of the photographing device based on the matching points corresponding to the matching mode with the most matching points.
It will be appreciated that the pose of the photographing device is typically described by six degrees of freedom, namely the position and the orientation of the photographing device, where the position is represented by three-dimensional position coordinates. In the case that the current viewpoint of the photographing device is unknown, the positioning of the photographing device in the target scene can be achieved by matching the 2D line segments in the 2D image with the 3D line segments in the constructed 3D point cloud model. The photographing device may be, for example, a camera.
Specifically, the main directions of the 3D point cloud model are obtained by extracting and clustering the 3D line segments in the 3D point cloud model, the main directions of the 2D image are obtained by extracting and clustering the 2D line segments in the 2D image, and the main directions of the 3D point cloud model are then matched with the main directions of the 2D image. Introducing the main directions of the line segments decouples the rotation and translation of the photographing device pose, so that the positioning problem of the photographing device is converted into calculating the rotation matrix and translation vector corresponding to the photographing device; that is, the pose of the photographing device can be represented by the rotation matrix and the translation vector.
By matching the main directions of the 3D point cloud model with the main directions of the 2D image, multiple matching modes can be obtained. A rotation matrix is calculated for each matching mode, and for each rotation matrix a corresponding translation vector is calculated according to a specified matching relation between 2D line segments and 3D line segments, thereby obtaining a candidate pose of the photographing device. The number of points that can be matched between the 2D image and the 3D point cloud model is then verified for each candidate pose, so as to determine whether the pose was calculated under the correct main-direction matching mode. Finally, the translation vector is recalculated based on the matching points corresponding to the matching mode with the most matching points, so that the pose of the photographing device is determined based on the rotation matrix and the recalculated translation vector of that matching mode. In this way, real-time and accurate positioning in an indoor scene is realized by matching the 2D line segments of the 2D image with the 3D line segments of the 3D point cloud model.
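For illustration, the rotation matrix R and translation vector t obtained in this way can be packed into a single 4x4 homogeneous transform. The following is a minimal Python sketch; the helper name pose_matrix is a hypothetical placeholder and not part of the disclosure.

```python
import numpy as np

def pose_matrix(R, t):
    """Assemble the 6-DoF pose of the photographing device as a 4x4
    homogeneous transform T from rotation matrix R and translation vector t."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, dtype=float)
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

# Example: identity rotation with a 1 m offset along x.
print(pose_matrix(np.eye(3), [1.0, 0.0, 0.0]))
```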
According to some embodiments, the 3D point cloud model and the 2D image each have three main directions. It will be appreciated that the real physical world has three mutually perpendicular principal directions, which are typically represented by x, y and z in a three-dimensional coordinate system. In the real physical world, walls are generally perpendicular to the ground, most objects are placed parallel to the ground, and the objects themselves are also generally composed of mutually perpendicular line segments. Therefore, most 3D line segments in the 3D point cloud model are parallel to one of the three main directions, and most 2D line segments in the 2D image are likewise parallel to one of the three main directions, so decoupling of the rotation and translation of the photographing device pose can be achieved by extracting and matching the three main directions of the 3D point cloud model and the three main directions of the 2D image.
According to some embodiments, the target scene is an indoor scene.
According to some embodiments, step S203 comprises: extracting the 3D line segments in the 3D point cloud model by using a point cloud 3D line segment detector. For example, the extracted 3D line segments may be represented as L^{3D} = { l_n^{3D} | n = 1, ..., N }, where N and n are both positive integers and P represents a point in the 3D point cloud model, each 3D line segment being composed of such points.
In one example, other 3D line segment detection algorithms may also be employed to extract 3D line segments in the 3D point cloud model.
According to some embodiments, step S203 further comprises: clustering the 3D line segments based on the directions of the extracted 3D line segments by using a random sample consensus algorithm to obtain a first plurality of 3D line segment clusters; and performing non-maximum suppression on the first plurality of 3D line segment clusters based on the number of 3D line segments in each 3D line segment cluster and the included angles between the 3D line segments of different clusters, so as to determine the main directions of the 3D point cloud model.
In one example, a first 3D line segment is randomly selected from the extracted set of 3D line segments and compared with the remaining 3D line segments in the set, so that the 3D line segments whose included angle with the first 3D line segment is smaller than a threshold value are clustered together; a second 3D line segment is then selected and clustered with the remaining 3D line segments, and so on until all 3D line segments in the set have been clustered, thereby obtaining the first plurality of 3D line segment clusters, from which the main directions of the 3D point cloud model are then determined.
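A minimal Python sketch of this greedy, angle-threshold clustering follows. It treats line directions as sign-invariant (a line and its reverse are the same direction); the function name and the 5-degree threshold are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def cluster_directions(dirs, angle_thresh_deg=5.0):
    """Greedily cluster line directions: pick an unassigned direction,
    group every remaining direction whose included angle with it is below
    the threshold, then repeat with the next unassigned direction."""
    dirs = np.asarray(dirs, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    unassigned = list(range(len(dirs)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed]
        for i in unassigned[:]:
            # |dot| makes the test sign-invariant for undirected lines.
            if abs(np.dot(dirs[seed], dirs[i])) > cos_thresh:
                members.append(i)
                unassigned.remove(i)
        clusters.append(members)
    return clusters  # one list of segment indices per direction cluster

# Example: 30 noisy directions around the three coordinate axes.
rng = np.random.default_rng(0)
noisy = np.concatenate([np.eye(3)[i] + 0.02 * rng.normal(size=(10, 3))
                        for i in range(3)])
print([len(c) for c in cluster_directions(noisy)])  # roughly [10, 10, 10]
```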
Fig. 3 shows a flowchart of a part of a procedure of a three-dimensional pose determination method according to an embodiment of the present disclosure.
As shown in fig. 3, the process 300 of determining the principal direction of the 3D point cloud model includes:
step S301, taking a 3D line segment cluster with the most 3D line segments among the first plurality of 3D line segment clusters as a first 3D line segment cluster;
step S302, comparing the included angle between the first 3D line segment cluster and each other 3D line segment cluster in the first plurality of 3D line segment clusters, and removing the 3D line segment clusters whose included angle with the first 3D line segment cluster is smaller than a threshold value, to obtain a second plurality of 3D line segment clusters;
step S303, taking the 3D line segment cluster with the most 3D line segments except the first 3D line segment cluster in the second plurality of 3D line segment clusters as a second 3D line segment cluster;
step S304, comparing the included angle between the second 3D line segment cluster and each other 3D line segment cluster in the second plurality of 3D line segment clusters, and removing the 3D line segment clusters whose included angle with the second 3D line segment cluster is smaller than a threshold value, to obtain a third plurality of 3D line segment clusters;
step S305, taking a 3D line segment cluster having the most 3D line segments except the first 3D line segment cluster and the second 3D line segment cluster in the third plurality of 3D line segment clusters as a third 3D line segment cluster; and
Step S306, taking the directions of the first 3D line segment cluster, the second 3D line segment cluster and the third 3D line segment cluster as the main directions of the 3D point cloud model.
Thus, line segment clusters whose directions are too close to an already-selected main direction can be removed by non-maximum suppression, and the directions of the three line segment clusters with the largest numbers of line segments are finally retained as the main directions. For example, the three 3D line segment clusters with the largest numbers of 3D line segments may be represented as C_1^{3D}, C_2^{3D} and C_3^{3D}, and the three main directions of the 3D point cloud model can then be expressed as { d_1^{3D}, d_2^{3D}, d_3^{3D} }.
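The selection of the three main directions (steps S301 to S306) can be sketched in Python as follows; here cluster_dirs would hold one representative direction per cluster (e.g., the mean direction), and the 30-degree separation threshold is an illustrative assumption.

```python
import numpy as np

def select_main_directions(cluster_dirs, cluster_sizes, sep_thresh_deg=30.0):
    """Non-maximum suppression over direction clusters: repeatedly keep the
    largest remaining cluster and discard clusters whose included angle
    with an already-kept cluster is below the separation threshold."""
    cluster_dirs = np.asarray(cluster_dirs, dtype=float)
    cluster_dirs = cluster_dirs / np.linalg.norm(cluster_dirs, axis=1,
                                                 keepdims=True)
    cos_thresh = np.cos(np.radians(sep_thresh_deg))
    order = np.argsort(cluster_sizes)[::-1]  # largest cluster first
    kept = []
    for i in order:
        # Keep i only if it is well separated from every kept direction.
        if all(abs(np.dot(cluster_dirs[i], cluster_dirs[j])) < cos_thresh
               for j in kept):
            kept.append(i)
        if len(kept) == 3:
            break
    return cluster_dirs[kept]  # the three main directions d_1, d_2, d_3
```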
Fig. 4A shows a schematic diagram of a 3D point cloud model, and fig. 4B shows a schematic diagram of a 3D line segment in a main direction of the 3D point cloud model, according to an embodiment of the present disclosure.
According to some embodiments, the camera is a spherical camera and the 2D image is a 2D spherical image.
According to some embodiments, step S204 comprises: extracting 2D line segments in the 2D spherical image by using a spherical Hough transform algorithm. The extracted 2D line segments can be represented as L^{2D} = { l_m^{2D} | m = 1, ..., M }, where M and m are both positive integers. The representation of a 2D line segment on the unit sphere can be obtained by equation (1), which maps each point p of the 2D image onto the unit sphere based on the 2D image width W and the 2D image height H.
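The exact form of equation (1) is not reproduced above, so the sketch below uses the common equirectangular convention for spherical images as an assumption: the horizontal pixel coordinate u gives the longitude and the vertical coordinate v gives the polar angle.

```python
import numpy as np

def pixel_to_unit_sphere(u, v, W, H):
    """Map a pixel p = (u, v) of a W x H spherical image to a point on the
    unit sphere (assumed equirectangular convention, not necessarily the
    patent's verbatim equation (1))."""
    theta = 2.0 * np.pi * u / W  # longitude in [0, 2*pi)
    phi = np.pi * v / H          # polar angle in [0, pi]
    return np.array([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)])

print(pixel_to_unit_sphere(0, 0, 1024, 512))  # pixel (0, 0) maps to the pole (0, 0, 1)
```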
According to some embodiments, step S204 further comprises: clustering the 2D line segments based on the directions of the extracted 2D line segments by using a random sample consensus algorithm to obtain a plurality of 2D line segment clusters; and performing non-maximum suppression on the plurality of 2D line segment clusters based on the number of 2D line segments in each 2D line segment cluster and the included angles between the 2D line segments of different clusters, so as to determine the main directions of the 2D image.
It can be appreciated that the clustering manner of the 2D line segments is similar to that of the 3D line segments, and the process of determining the main direction of the 2D image is similar to the process 300 of determining the main direction of the 3D point cloud model, which is not described herein.
For example, the three 2D line segment clusters with the largest numbers of 2D line segments may be represented as C_1^{2D}, C_2^{2D} and C_3^{2D}, and the three main directions of the 2D image can be expressed as { d_i^{2D} }, where i = 1, 2, 3.
According to some embodiments, step S205 comprises: respectively matching the three main directions of the 3D point cloud model with the three main directions of the 2D image, and removing the matching modes which do not accord with the right-hand rule, so as to determine a plurality of matching modes between the main directions of the 3D point cloud model and the main directions of the 2D image.
It can be appreciated that when the three main directions { d_1^{3D}, d_2^{3D}, d_3^{3D} } of the 3D point cloud model and the three main directions { d_1^{2D}, d_2^{2D}, d_3^{2D} } of the 2D image are matched by permutation and combination, each direction can additionally be taken as positive or negative, so there are 3! x 2^3 = 48 matching modes in total, and for each matching mode a corresponding rotation matrix R can be expressed in terms of the matched directions.
Among the 48 matching modes, 24 do not accord with the right-hand rule; these can be eliminated by calculating the determinant of the corresponding rotation matrix (a proper, right-handed rotation has determinant +1). Subsequent calculation is performed only for the rotation matrices corresponding to the remaining 24 matching modes, which effectively reduces the amount of computation and improves positioning efficiency.
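A Python sketch of this enumeration and right-hand-rule filtering: assuming both main-direction triples are (approximately) orthonormal, each of the 48 sign-and-permutation matchings yields a candidate matrix, and the determinant check keeps the 24 proper rotations. The construction R = D2 @ D3.T is an illustrative choice, not necessarily the computation used in the disclosure.

```python
import numpy as np
from itertools import permutations, product

def candidate_rotations(dirs_3d, dirs_2d):
    """Enumerate the 3! x 2^3 = 48 matchings between the main directions
    of the 3D point cloud model and of the 2D image, keeping the 24 that
    satisfy the right-hand rule (determinant +1)."""
    def unit(d):
        d = np.asarray(d, dtype=float)
        return d / np.linalg.norm(d)

    D3 = np.column_stack([unit(d) for d in dirs_3d])  # columns: d_i^{3D}
    d2 = [unit(d) for d in dirs_2d]
    rotations = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            # Column i of D2 is the signed 2D direction matched to d_i^{3D}.
            D2 = np.column_stack([s * d2[p] for p, s in zip(perm, signs)])
            R = D2 @ D3.T  # maps each 3D main direction onto its match
            if np.linalg.det(R) > 0:  # keep proper (right-handed) rotations
                rotations.append(R)
    return rotations

print(len(candidate_rotations(np.eye(3), np.eye(3))))  # -> 24
```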
According to some embodiments, step S206-2 includes: for each matching mode, the following random operations are performed multiple times: three 3D line segments in three main directions are randomly selected from the 3D point cloud model, and three 2D line segments in three main directions are randomly selected from the 2D image; and calculating a translation vector corresponding to each random operation based on the three randomly selected 3D line segments, the three randomly selected 2D line segments and the rotation matrix corresponding to the matching mode.
For example, the translation vector t may be calculated according to formula (2): p = R · P + t, where p represents a point in the 2D image, P represents a point in the 3D point cloud model, and R is the rotation matrix. A corresponding translation vector can be calculated for each matching mode, so that the pose T of the photographing device is obtained based on the rotation matrix R and the translation vector t corresponding to that matching mode, thereby realizing the positioning of the photographing device.
Specifically, for each matching mode, the rotation matrix R corresponding to that matching mode can be determined. One 3D line segment is randomly selected in each of the three main directions of the 3D point cloud model, yielding { l_1^{3D}, l_2^{3D}, l_3^{3D} }, and one 2D line segment is randomly selected in each of the three main directions of the 2D image, yielding { l_1^{2D}, l_2^{2D}, l_3^{2D} }. The matching relation between the three 3D line segments and the three 2D line segments is then specified, so that the translation vector t can be calculated according to formula (2). After the rotation matrix R and the translation vector t are obtained, a corresponding image point p can be calculated according to formula (2) for each point P in the 3D point cloud model, and the calculated results are compared with the points in the 2D image to obtain the number of matched pairs (p, P), namely the number of matching points.
The above operations of randomly selecting 3D line segments and 2D line segments are repeated a plurality of times; for each random operation, the corresponding translation vector is calculated and the number of matching points is determined. Thus, the number of matching points corresponding to each random operation under each matching mode can be obtained. Based on the rotation matrix corresponding to the matching mode with the most matching points and on those matching points, the final translation vector can be recalculated according to formula (2), so that a higher-precision pose of the photographing device is obtained to realize its positioning.
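The translation step and the matching-point count can be sketched as follows, assuming formula (2), p = R · P + t, relates metrically comparable coordinates; the sampling of point pairs from the matched line segments and the 0.05 distance threshold are illustrative assumptions.

```python
import numpy as np

def solve_translation(R, pts_3d, pts_2d):
    """Least-squares translation from formula (2), p = R * P + t, given
    tentatively matched 3D/2D point pairs (e.g., points sampled from the
    three matched 3D and 2D line segments)."""
    pts_3d = np.asarray(pts_3d, dtype=float)
    pts_2d = np.asarray(pts_2d, dtype=float)
    return np.mean(pts_2d - pts_3d @ R.T, axis=0)

def count_matching_points(R, t, cloud_pts, image_pts, dist_thresh=0.05):
    """Project every point P of the 3D point cloud model with formula (2)
    and count how many land within the threshold of some 2D image point."""
    projected = np.asarray(cloud_pts, dtype=float) @ R.T + t
    image_pts = np.asarray(image_pts, dtype=float)
    count = 0
    for q in projected:
        if np.min(np.linalg.norm(image_pts - q, axis=1)) < dist_thresh:
            count += 1
    return count
```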
Therefore, accurate 3D positioning in an indoor environment can be effectively realized. Compared with schemes based on visual/lidar SLAM, on the one hand, the method of the present disclosure avoids the problem of error accumulation caused by estimating the relative pose between two frames, and can estimate an absolute 3D pose with absolute scale from a single-frame 2D image; on the other hand, the method of the present disclosure is simpler and more efficient, and is thus better suited for deployment on embedded devices.
According to another aspect of the present disclosure, a three-dimensional pose determination apparatus is provided. As shown in fig. 5, the three-dimensional pose determination apparatus 500 includes: a first acquisition module 501 configured to acquire a 3D point cloud model of a target scene; a second acquisition module 502 configured to acquire a 2D image obtained by a camera, wherein the camera is located in the target scene; a first determining module 503 configured to extract 3D line segments in the 3D point cloud model and determine a main direction of the 3D point cloud model based on the extracted 3D line segments; a second determination module 504 configured to extract 2D line segments in the 2D image and determine a main direction of the 2D image based on the extracted 2D line segments; and a third determining module 505 configured to match a principal direction of the 3D point cloud model with a principal direction of the 2D image to determine a plurality of matching ways between the principal direction of the 3D point cloud model and the principal direction of the 2D image; a fourth determining module 506 configured to determine, for any matching manner, a rotation matrix of the photographing device; a first calculation module 507 configured to calculate, for any matching manner, a translation vector based on a rotation matrix corresponding to the matching manner; a second calculation module 508 configured to calculate the number of matching points in the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and a fifth determining module 509 configured to determine a pose of the photographing device based on a matching point corresponding to a matching manner having the most matching points.
The first determining module 503 extracts and clusters the 3D line segments in the 3D point cloud model so as to obtain the main directions of the 3D point cloud model, the second determining module 504 extracts and clusters the 2D line segments in the 2D image so as to obtain the main directions of the 2D image, and the third determining module 505 matches the main directions of the 3D point cloud model with the main directions of the 2D image. Introducing the main directions of the line segments decouples the rotation and translation of the photographing device pose, so that the positioning problem of the photographing device is converted into calculating the rotation matrix and translation vector corresponding to the photographing device; that is, the pose of the photographing device can be represented by the rotation matrix and the translation vector.
The third determining module 505 performs permutation and combination matching between the main direction of the 3D point cloud model and the main direction of the 2D image to obtain multiple matching modes, the fourth determining module 506 calculates rotation matrices for each matching mode, and the first calculating module 507 calculates corresponding translation vectors for each rotation matrix according to the specified matching relationship between the 2D line segment and the 3D line segment, so as to obtain the pose of the photographing device. The second calculation module 508 verifies the number of points that can be matched in the 2D image and the 3D point cloud model according to the obtained pose, so as to determine whether the pose is a pose calculated by the correct main direction matching mode. Finally, the fifth determining module 509 recalculates the translation vector based on the matching point corresponding to the matching mode with the most matching points, so as to determine the pose of the photographing device based on the rotation matrix and the translation vector in the matching mode. Therefore, the real-time accurate positioning of the indoor scene is realized by matching the 2D line segment of the 2D image with the 3D line segment of the 3D point cloud model.
According to some embodiments, the first determining module 503 is further configured to: extract the 3D line segments in the 3D point cloud model by using a point cloud 3D line segment detector. For example, the extracted 3D line segments may be represented as L^{3D} = { l_n^{3D} | n = 1, ..., N }, where N and n are both positive integers and P represents a point in the 3D point cloud model.
In one example, the first determination module 503 may also employ other 3D line segment detection algorithms to extract 3D line segments in the 3D point cloud model.
According to some embodiments, the first determining module 503 comprises: a first clustering unit configured to cluster the 3D line segments based on the directions of the extracted 3D line segments by using a random sample consensus algorithm to obtain a first plurality of 3D line segment clusters; and a first determining unit configured to perform non-maximum suppression on the first plurality of 3D line segment clusters based on the number of 3D line segments in each 3D line segment cluster and the included angles between the 3D line segments of different clusters, so as to determine the main directions of the 3D point cloud model.
In one example, the first clustering unit randomly selects a first 3D line segment from the extracted set of 3D line segments and compares it with the remaining 3D line segments in the set, so that the 3D line segments whose included angle with the first 3D line segment is smaller than a threshold value are clustered together; the first clustering unit then selects a second 3D line segment and clusters it with the remaining 3D line segments in the set, and so on until all 3D line segments in the set have been clustered, thereby obtaining the first plurality of 3D line segment clusters, from which the main directions of the 3D point cloud model are then determined.
According to some embodiments, the first determining unit comprises: a first determining subunit configured to take the 3D line segment cluster having the most 3D line segments among the first plurality of 3D line segment clusters as a first 3D line segment cluster; a second determining subunit configured to compare the included angle between the first 3D line segment cluster and each other 3D line segment cluster in the first plurality of 3D line segment clusters, and remove the 3D line segment clusters whose included angle with the first 3D line segment cluster is smaller than a threshold value, to obtain a second plurality of 3D line segment clusters; a third determining subunit configured to take, as a second 3D line segment cluster, the 3D line segment cluster having the most 3D line segments among the second plurality of 3D line segment clusters except the first 3D line segment cluster; a fourth determining subunit configured to compare the included angle between the second 3D line segment cluster and each other 3D line segment cluster in the second plurality of 3D line segment clusters, and remove the 3D line segment clusters whose included angle with the second 3D line segment cluster is smaller than a threshold value, to obtain a third plurality of 3D line segment clusters; a fifth determining subunit configured to take, as a third 3D line segment cluster, the 3D line segment cluster having the most 3D line segments among the third plurality of 3D line segment clusters except the first 3D line segment cluster and the second 3D line segment cluster; and a sixth determining subunit configured to take the directions of the first 3D line segment cluster, the second 3D line segment cluster, and the third 3D line segment cluster as the main directions of the 3D point cloud model.
Thus, line segment clusters whose directions are too close to an already-selected main direction can be removed by non-maximum suppression, and the directions of the three line segment clusters with the largest numbers of line segments are finally retained as the main directions. For example, the three 3D line segment clusters with the largest numbers of 3D line segments may be represented as C_1^{3D}, C_2^{3D} and C_3^{3D}, and the three main directions of the 3D point cloud model can be expressed as { d_1^{3D}, d_2^{3D}, d_3^{3D} }.
According to some embodiments, the camera is a spherical camera and the 2D image is a 2D spherical image.
According to some embodiments, the second determination module 504 is further configured to: and extracting 2D line segments in the 2D spherical image by using a spherical Hough transform algorithm.
According to some embodiments, the second determining module 504 includes: a second clustering unit configured to cluster the 2D line segments based on the directions of the extracted 2D line segments by using a random sample consensus algorithm to obtain a plurality of 2D line segment clusters; and a second determining unit configured to perform non-maximum suppression on the plurality of 2D line segment clusters based on the number of 2D line segments in each of the plurality of 2D line segment clusters and the included angles between the 2D line segments of different clusters, so as to determine the main directions of the 2D image.
It can be appreciated that the clustering manner of the second clustering unit on the 2D line segments is similar to that of the first clustering unit on the 3D line segments, and the process of determining the main direction of the 2D image by the second determining unit is similar to that of the process 300 of determining the main direction of the 3D point cloud model by the first determining unit, which is not described herein.
According to some embodiments, the third determination module 505 is further configured to: respectively match the three main directions of the 3D point cloud model with the three main directions of the 2D image, and remove the matching modes which do not accord with the right-hand rule, so as to determine a plurality of matching modes between the main directions of the 3D point cloud model and the main directions of the 2D image.
It can be appreciated that when the three main directions { d_1^{3D}, d_2^{3D}, d_3^{3D} } of the 3D point cloud model and the three main directions { d_1^{2D}, d_2^{2D}, d_3^{2D} } of the 2D image are matched by permutation and combination, each direction can additionally be taken as positive or negative, so there are 3! x 2^3 = 48 matching modes in total, and for each matching mode a corresponding rotation matrix R can be expressed in terms of the matched directions.
Among the 48 matching modes, 24 do not accord with the right-hand rule; these can be eliminated by calculating the determinant of the corresponding rotation matrix. Subsequent calculation is performed only for the rotation matrices corresponding to the remaining 24 matching modes, which effectively reduces the amount of computation and improves positioning efficiency.
According to some embodiments, the first computing module 507 comprises: a random unit configured to perform the following random operations a plurality of times for each matching pattern: three 3D line segments in three main directions are randomly selected from the 3D point cloud model, and three 2D line segments in three main directions are randomly selected from the 2D image; and a calculating unit configured to calculate, for each random operation, a translation vector corresponding to the random operation based on the three randomly selected 3D line segments, the three randomly selected 2D line segments, and the rotation matrix corresponding to the matching manner.
For example, the calculation unit may calculate the translation vector t according to formula (2): p = R · P + t, where p represents a point in the 2D image, P represents a point in the 3D point cloud model, and R is the rotation matrix. The calculation unit can calculate the corresponding translation vector for each matching mode, so that the pose T of the photographing device is obtained based on the rotation matrix R and the translation vector t corresponding to that matching mode, thereby realizing the positioning of the photographing device.
Specifically, for each matching mode, the rotation matrix R corresponding to that matching mode can be determined. The random unit randomly selects one 3D line segment in each of the three main directions of the 3D point cloud model, yielding { l_1^{3D}, l_2^{3D}, l_3^{3D} }, and randomly selects one 2D line segment in each of the three main directions of the 2D image, yielding { l_1^{2D}, l_2^{2D}, l_3^{2D} }. The matching relation between the three 3D line segments and the three 2D line segments is then specified, so that the calculation unit calculates the translation vector t according to formula (2). After the rotation matrix R and the translation vector t are obtained, a corresponding image point p can be calculated according to formula (2) for each point P in the 3D point cloud model, and the calculated results are compared with the points in the 2D image to obtain the number of matched pairs (p, P), namely the number of matching points.
The random unit repeats the above operations of randomly selecting 3D line segments and 2D line segments a plurality of times, and the calculation unit calculates a corresponding translation vector for each random operation and determines the number of matching points. Thus, the number of matching points corresponding to each random operation under each matching mode can be obtained. Based on the rotation matrix corresponding to the matching mode with the most matching points and on those matching points, the final translation vector can be recalculated according to formula (2), so that a higher-precision pose of the photographing device is obtained to realize its positioning.
Therefore, the three-dimensional pose determination apparatus 500 can effectively realize accurate 3D positioning in an indoor environment. Compared with schemes based on visual/lidar SLAM, on the one hand, the problem of error accumulation caused by estimating the relative pose between two frames is avoided, and an absolute 3D pose with absolute scale can be estimated from a single-frame 2D image; on the other hand, the steps that the three-dimensional pose determination apparatus 500 is configured to perform are simpler and more efficient, making it better suited for deployment on embedded devices.
According to another aspect of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a three-dimensional pose determination method.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a three-dimensional pose determination method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program when executed by a processor implements a three-dimensional pose determination method.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the electronic device 600; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 608 may include, but is not limited to, magnetic disks and optical disks. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver and/or chipset, e.g., a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a three-dimensional pose determination method. For example, in some embodiments, the three-dimensional pose determination method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the three-dimensional pose determination method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the three-dimensional pose determination method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that the various forms of flow shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples, but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (23)

1. A three-dimensional pose determination method, comprising:
acquiring a 3D point cloud model of a target scene;
acquiring a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene;
extracting 3D line segments in the 3D point cloud model, and determining a main direction of the 3D point cloud model based on the extracted 3D line segments;
extracting 2D line segments in the 2D image and determining a main direction of the 2D image based on the extracted 2D line segments;
matching a main direction of the 3D point cloud model with a main direction of the 2D image to determine a plurality of matching modes between the main direction of the 3D point cloud model and the main direction of the 2D image;
for any one of the matching modes:
determining a rotation matrix of the photographing device;
calculating a translation vector based on the rotation matrix;
calculating the number of matching points in the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and
determining the pose of the photographing device based on the matching points corresponding to the matching mode with the most matching points.
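Illustrative sketch (not part of the claims): the matching-point count used in the last two steps of claim 1 can be computed, for a spherical camera, by projecting the point cloud with a candidate rotation matrix R and translation vector t and counting points that land close to an observed image ray. The angular threshold and the brute-force nearest-ray search below are assumptions made for this sketch, not details fixed by the patent.

import numpy as np

def count_matching_points(points_3d, rays_2d, R, t, ang_thresh=0.01):
    # points_3d: (N, 3) world-frame points from the 3D point cloud model.
    # rays_2d: (M, 3) unit viewing rays of 2D image feature points.
    # Returns how many 3D points project within ang_thresh radians of a ray.
    cam = (R @ points_3d.T).T + t                  # world -> camera frame
    cam = cam / np.linalg.norm(cam, axis=1, keepdims=True)
    cos = cam @ rays_2d.T                          # (N, M) pairwise cosines
    return int(np.sum(cos.max(axis=1) > np.cos(ang_thresh)))

The candidate (R, t) with the largest count would be kept, and its matching points used to determine the final pose.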
2. The method of claim 1, wherein the extracting 3D line segments in the 3D point cloud model comprises:
extracting the 3D line segments in the 3D point cloud model by using a point cloud 3D line segment detector.
3. The method of claim 1 or 2, wherein the determining a main direction of the 3D point cloud model based on the extracted 3D line segments comprises:
clustering the 3D line segments based on the directions of the extracted 3D line segments by using a random sampling consistency algorithm to obtain a first plurality of 3D line segment clusters; and
performing non-maximum suppression on the first plurality of 3D line segment clusters based on the number of 3D line segments in each 3D line segment cluster of the first plurality of 3D line segment clusters and the included angles between 3D line segment clusters, so as to determine the main direction of the 3D point cloud model.
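Illustrative sketch (not part of the claims): a simplified random-sampling-consistency clustering of line directions, as in claim 3, can proceed by repeatedly hypothesising a direction from a randomly chosen line and collecting all lines nearly parallel to it. The angle threshold, iteration count, and minimum cluster size below are assumed values for illustration.

import numpy as np

def cluster_directions(dirs, ang_thresh_deg=5.0, iters=200, min_size=10):
    # dirs: (N, 3) unit direction vectors of extracted 3D line segments.
    # Returns a list of clusters of (nearly) parallel line directions.
    cos_thr = np.cos(np.deg2rad(ang_thresh_deg))
    rng = np.random.default_rng(0)
    remaining, clusters = dirs.copy(), []
    while len(remaining) >= min_size:
        best = None
        for _ in range(iters):
            cand = remaining[rng.integers(len(remaining))]
            mask = np.abs(remaining @ cand) > cos_thr  # |cos|: d and -d are parallel
            if best is None or mask.sum() > best.sum():
                best = mask
        if best.sum() < min_size:
            break
        clusters.append(remaining[best])   # keep the largest consensus set
        remaining = remaining[~best]
    return clusters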
4. The method of claim 3, wherein the performing non-maximum suppression on the first plurality of 3D line segment clusters based on the number of 3D line segments in each 3D line segment cluster of the first plurality of 3D line segment clusters and the included angles between 3D line segment clusters, so as to determine the main direction of the 3D point cloud model, comprises:
taking the 3D line segment cluster with the most 3D line segments in the first plurality of 3D line segment clusters as a first 3D line segment cluster;
comparing the included angle between the first 3D line segment cluster and each 3D line segment cluster in the first plurality of 3D line segment clusters, and removing the 3D line segment clusters whose included angle with the first 3D line segment cluster is smaller than a threshold, to obtain a second plurality of 3D line segment clusters;
taking the 3D line segment cluster with the most 3D line segments in the second plurality of 3D line segment clusters, other than the first 3D line segment cluster, as a second 3D line segment cluster;
comparing the included angle between the second 3D line segment cluster and each 3D line segment cluster in the second plurality of 3D line segment clusters, and removing the 3D line segment clusters whose included angle with the second 3D line segment cluster is smaller than the threshold, to obtain a third plurality of 3D line segment clusters;
taking the 3D line segment cluster with the most 3D line segments in the third plurality of 3D line segment clusters, other than the first 3D line segment cluster and the second 3D line segment cluster, as a third 3D line segment cluster; and
taking the directions of the first 3D line segment cluster, the second 3D line segment cluster, and the third 3D line segment cluster as the main directions of the 3D point cloud model.
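Illustrative sketch (not part of the claims): the suppression of claim 4 is a greedy selection over cluster size; the loop below generalises the two suppression rounds of the claim to however many are needed to obtain three directions. The 30-degree separation threshold is an assumed value.

import numpy as np

def principal_directions_nms(clusters, ang_thresh_deg=30.0):
    # clusters: list of (k, 3) arrays of unit line directions.
    # Returns up to three well-separated mean directions, largest cluster first.
    def mean_dir(c):
        flipped = np.where((c @ c[0])[:, None] < 0.0, -c, c)  # align signs
        m = flipped.mean(axis=0)
        return m / np.linalg.norm(m)

    cos_thr = np.cos(np.deg2rad(ang_thresh_deg))
    pool = sorted(clusters, key=len, reverse=True)
    picked = []
    while pool and len(picked) < 3:
        picked.append(mean_dir(pool.pop(0)))
        # drop clusters whose angle to the newly kept direction is below threshold
        pool = [c for c in pool if abs(mean_dir(c) @ picked[-1]) < cos_thr]
    return np.stack(picked)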
5. The method of any of claims 1-4, wherein the camera is a spherical camera and the 2D image is a 2D spherical image.
6. The method of claim 5, wherein the extracting 2D line segments in the 2D image comprises:
extracting the 2D line segments in the 2D spherical image by using a spherical Hough transform algorithm.
7. The method of any of claims 1-6, wherein the determining a main direction of the 2D image based on the extracted 2D line segments comprises:
clustering the 2D line segments based on the directions of the extracted 2D line segments by using a random sampling consistency algorithm to obtain a plurality of 2D line segment clusters; and
performing non-maximum suppression on the plurality of 2D line segment clusters based on the number of 2D line segments in each 2D line segment cluster and the included angles between 2D line segment clusters, so as to determine the main direction of the 2D image.
8. The method of any of claims 1-7, wherein the 3D point cloud model and the 2D image each have three main directions, and wherein the matching of the main direction of the 3D point cloud model with the main direction of the 2D image to determine a plurality of matching modes between the main direction of the 3D point cloud model and the main direction of the 2D image comprises:
matching the three main directions of the 3D point cloud model with the three main directions of the 2D image respectively, and removing the matching modes that do not conform to the right-hand rule, so as to determine a plurality of matching modes between the main directions of the 3D point cloud model and the main directions of the 2D image.
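Illustrative sketch (not part of the claims): matching three main directions against three main directions amounts to enumerating the 6 permutations and 8 sign assignments, solving a rotation for each, and discarding left-handed combinations. Reading the right-hand-rule test as requiring det(R) = +1 is our interpretation for this sketch, not a quotation from the patent.

import numpy as np
from itertools import permutations, product

def candidate_rotations(d3, d2):
    # d3, d2: (3, 3) arrays, one unit main direction per row, assumed
    # roughly orthonormal. Returns rotation candidates with R @ d3[i]
    # aligned to a (possibly flipped) image main direction.
    rotations = []
    for perm in permutations(range(3)):
        for signs in product([1.0, -1.0], repeat=3):
            target = d2[list(perm)] * np.array(signs)[:, None]
            R = target.T @ d3            # solves R @ d3[i] = target[i]
            U, _, Vt = np.linalg.svd(R)  # snap to the nearest orthogonal matrix
            R = U @ Vt
            if np.linalg.det(R) > 0.0:   # right-hand rule: proper rotations only
                rotations.append(R)
    return rotations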
9. The method of claim 8, wherein calculating a translation vector based on the rotation matrix for each matching mode comprises:
for each matching mode, performing the following random operation multiple times: randomly selecting three 3D line segments in the three main directions from the 3D point cloud model, and randomly selecting three 2D line segments in the three main directions from the 2D image; and
for each random operation, calculating a translation vector corresponding to the random operation based on the three randomly selected 3D line segments, the three randomly selected 2D line segments, and the rotation matrix corresponding to the matching mode.
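Illustrative sketch (not part of the claims): claim 9 does not spell out the per-sample translation solve. One workable formulation for a spherical camera, under the assumption (ours, not the patent's) that each 2D line is stored as the unit normal of its great-circle plane, is that every 3D-2D line match imposes n · (R p + t) = 0 for any point p on the 3D line, so three matches yield a linear system in t:

import numpy as np

def translation_from_line_triple(R, P, N):
    # R: (3, 3) rotation matrix from the current matching mode.
    # P: (3, 3), one sample world-frame point per selected 3D line.
    # N: (3, 3), one unit great-circle normal per matched 2D line.
    # Each match gives n_i . (R p_i + t) = 0, i.e. N @ t = -(n_i . R p_i).
    b = -np.einsum('ij,ij->i', N, (R @ P.T).T)
    t, *_ = np.linalg.lstsq(N, b, rcond=None)
    return t

In the random-operation loop of claim 9, this solve would run once per sampled triple, and each resulting translation vector would then be scored by the matching-point count described in claim 1.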
10. The method of any of claims 1-9, wherein the target scene is an indoor scene.
11. A three-dimensional pose determination device, comprising:
a first acquisition module configured to acquire a 3D point cloud model of a target scene;
a second acquisition module configured to acquire a 2D image obtained by a photographing device, wherein the photographing device is located in the target scene;
a first determination module configured to extract 3D line segments in the 3D point cloud model and determine a main direction of the 3D point cloud model based on the extracted 3D line segments;
a second determination module configured to extract 2D line segments in the 2D image and determine a main direction of the 2D image based on the extracted 2D line segments;
a third determining module configured to match a main direction of the 3D point cloud model with a main direction of the 2D image to determine a plurality of matching modes between the main direction of the 3D point cloud model and the main direction of the 2D image;
a fourth determining module configured to determine, for any one of the matching modes, a rotation matrix of the photographing device;
a first calculation module configured to calculate, for any one of the matching modes, a translation vector based on the rotation matrix corresponding to the matching mode;
a second calculation module configured to calculate the number of matching points in the 3D point cloud model and the 2D image based on the rotation matrix and the translation vector; and
a fifth determining module configured to determine the pose of the photographing device based on the matching points corresponding to the matching mode with the most matching points.
12. The apparatus of claim 11, wherein the first determination module is further configured to:
extract the 3D line segments in the 3D point cloud model by using a point cloud 3D line segment detector.
13. The apparatus of claim 11 or 12, wherein the first determination module comprises:
a first clustering unit configured to cluster the 3D line segments based on the directions of the extracted 3D line segments by using a random sampling consistency algorithm to obtain a first plurality of 3D line segment clusters; and
the first determining unit is configured to perform non-maximum suppression on the first plurality of 3D line segment clusters based on the number of 3D line segments in each 3D line segment cluster in the first plurality of 3D line segment clusters and included angles between 3D line segments among the 3D line segment clusters, so as to determine a main direction of the 3D point cloud model.
14. The apparatus of claim 13, wherein the first determining unit comprises:
a first determining subunit configured to take a 3D line segment cluster having the most 3D line segments among the first plurality of 3D line segment clusters as a first 3D line segment cluster;
a second determining subunit configured to compare the included angle between the first 3D line segment cluster and each 3D line segment cluster in the first plurality of 3D line segment clusters, and remove the 3D line segment clusters whose included angle with the first 3D line segment cluster is smaller than a threshold, to obtain a second plurality of 3D line segment clusters;
a third determining subunit configured to take, as a second 3D line segment cluster, a 3D line segment cluster having the most 3D line segments among the second plurality of 3D line segment clusters except the first 3D line segment cluster;
a fourth determining subunit configured to compare the included angle between the second 3D line segment cluster and each 3D line segment cluster in the second plurality of 3D line segment clusters, and remove the 3D line segment clusters whose included angle with the second 3D line segment cluster is smaller than the threshold, to obtain a third plurality of 3D line segment clusters;
a fifth determining subunit configured to take, as a third 3D line segment cluster, a 3D line segment cluster having the most 3D line segments among the third plurality of 3D line segment clusters except the first 3D line segment cluster and the second 3D line segment cluster; and
a sixth determining subunit configured to take the directions of the first 3D line segment cluster, the second 3D line segment cluster, and the third 3D line segment cluster as the main directions of the 3D point cloud model.
15. The apparatus of any of claims 11-14, wherein the camera is a spherical camera and the 2D image is a 2D spherical image.
16. The apparatus of claim 15, wherein the second determination module is further configured to:
extract the 2D line segments in the 2D spherical image by using a spherical Hough transform algorithm.
17. The apparatus of any of claims 11-16, wherein the second determination module comprises:
a second clustering unit configured to cluster the 2D line segments based on the directions of the extracted 2D line segments by using a random sampling consistency algorithm to obtain a plurality of 2D line segment clusters; and
and the second determining unit is configured to perform non-maximum suppression on the plurality of 2D line segment clusters based on the number of 2D line segments in each 2D line segment cluster and included angles among 2D line segments among the 2D line segment clusters so as to determine the main direction of the 2D image.
18. The apparatus of any of claims 11-17, wherein the 3D point cloud model and the 2D image each have three main directions, and wherein the third determining module is further configured to:
match the three main directions of the 3D point cloud model with the three main directions of the 2D image respectively, and remove the matching modes that do not conform to the right-hand rule, so as to determine a plurality of matching modes between the main directions of the 3D point cloud model and the main directions of the 2D image.
19. The apparatus of claim 18, wherein the first calculation module comprises:
a random unit configured to perform, for each matching mode, the following random operation multiple times: randomly selecting three 3D line segments in the three main directions from the 3D point cloud model, and randomly selecting three 2D line segments in the three main directions from the 2D image; and
a calculating unit configured to calculate, for each random operation, a translation vector corresponding to the random operation based on the three randomly selected 3D line segments, the three randomly selected 2D line segments, and the rotation matrix corresponding to the matching mode.
20. The apparatus of any of claims 11-19, wherein the target scene is an indoor scene.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-10.
CN202311228340.5A 2023-09-21 2023-09-21 Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium Pending CN117274370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311228340.5A CN117274370A (en) 2023-09-21 2023-09-21 Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311228340.5A CN117274370A (en) 2023-09-21 2023-09-21 Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117274370A 2023-12-22

Family

ID=89207596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311228340.5A Pending CN117274370A (en) 2023-09-21 2023-09-21 Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117274370A (en)

Similar Documents

Publication Title
CN115147558B (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method and device
CN115631418B (en) Image processing method and device and training method of nerve radiation field
CN114972958B (en) Key point detection method, neural network training method, device and equipment
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
CN115239888B (en) Method, device, electronic equipment and medium for reconstructing three-dimensional face image
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
CN115082740B (en) Target detection model training method, target detection device and electronic equipment
CN114627268A (en) Visual map updating method and device, electronic equipment and medium
CN115578515B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
CN116402844A (en) Pedestrian tracking method and device
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN115359309A (en) Training method, device, equipment and medium of target detection model
CN114596476A (en) Key point detection model training method, key point detection method and device
CN114882587A (en) Method, apparatus, electronic device, and medium for generating countermeasure sample
CN117274370A (en) Three-dimensional pose determining method, three-dimensional pose determining device, electronic equipment and medium
CN114494797A (en) Method and apparatus for training image detection model
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN115797455B (en) Target detection method, device, electronic equipment and storage medium
CN116580212B (en) Image generation method, training method, device and equipment of image generation model
CN114821233B (en) Training method, device, equipment and medium of target detection model
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN115512131B (en) Image detection method and training method of image detection model
CN116246026B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
CN115423827B (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination