CN111862180A - Camera group pose acquisition method and device, storage medium and electronic equipment - Google Patents

Camera group pose acquisition method and device, storage medium and electronic equipment

Info

Publication number
CN111862180A
Authority
CN
China
Prior art keywords
camera
dimensional
pose
feature points
camera group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010723102.1A
Other languages
Chinese (zh)
Other versions
CN111862180B (en)
Inventor
郭科
周庆亮
焦继超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengjing Intelligent Technology Jiaxing Co ltd
Original Assignee
Sany Heavy Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sany Heavy Industry Co Ltd filed Critical Sany Heavy Industry Co Ltd
Priority to CN202010723102.1A priority Critical patent/CN111862180B/en
Publication of CN111862180A publication Critical patent/CN111862180A/en
Application granted granted Critical
Publication of CN111862180B publication Critical patent/CN111862180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a camera group pose acquisition method and device, a storage medium, and electronic equipment. Two-dimensional feature points are extracted from the two-dimensional image acquired by each camera at the current moment, and are then matched against the three-dimensional feature points in a map to screen out two-dimensional target feature points and three-dimensional target feature points. The camera group pose is determined by combining the multiple groups of two-dimensional and three-dimensional target feature points with the pose transformation matrices and internal reference matrices of all cameras, so that the current camera group pose thus obtained is more accurate and a simultaneous localization and mapping system built from it is more stable.

Description

Camera group pose acquisition method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a camera group pose acquisition method, a camera group pose acquisition device, a storage medium, and an electronic device.
Background
Visual SLAM has been widely developed and applied in recent years. Traditional positioning methods mostly rely on expensive lidar, whereas pure visual SLAM using only cameras can achieve positioning in specific scenes at low cost. However, because of complex lighting and scene changes in real environments, the performance of most visual SLAM algorithms falls short of the requirements of practical applications. During navigation, whether the size and direction of a forward obstacle can be accurately perceived determines whether intelligent obstacle avoidance can succeed, and the ability to accurately perceive the size and direction of the forward obstacle depends on the stability of the SLAM system.
However, current visual SLAM systems use only a forward-facing binocular camera to complete the positioning and obstacle avoidance tasks, and often cannot complete positioning stably in open scenes or under large scene changes, which is a major reason visual SLAM is difficult to put into practical use.
Disclosure of Invention
An object of the present application is to provide a camera group pose acquisition method, apparatus, storage medium, and electronic device, so as to solve the above problems.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, an embodiment of the present application provides a camera group pose acquisition method, where the camera group includes at least 3 cameras, and relative positions of the at least 3 cameras are fixed, the method includes:
extracting two-dimensional feature points in two-dimensional images acquired by each camera at the current moment;
matching the two-dimensional feature points with three-dimensional feature points in a map respectively, and screening out two-dimensional target feature points and three-dimensional target feature points, wherein the two-dimensional target feature points and the three-dimensional target feature points are matched points;
and determining the pose of the camera group according to the two-dimensional target feature points, the three-dimensional target feature points, the pose transformation matrix and the internal reference matrix of each camera, wherein the pose of the camera group represents the difference between the current coordinate system of the camera group and the initial coordinate system of the camera group, and the pose transformation matrix is the transformation matrix between the pose of the camera group and the pose of each camera.
In a second aspect, an embodiment of the present application provides a camera group pose acquisition apparatus, where the camera group includes at least 3 cameras, and the relative positions of the at least 3 cameras are fixed, the apparatus includes:
the characteristic extraction unit is used for extracting two-dimensional characteristic points in the two-dimensional images acquired by the cameras at the current moment;
the processing unit is used for respectively matching the two-dimensional feature points with three-dimensional feature points in a map and screening out two-dimensional target feature points and three-dimensional target feature points, wherein the two-dimensional target feature points and the three-dimensional target feature points are matched points;
the processing unit is further configured to determine a camera group pose according to the two-dimensional target feature points, the three-dimensional target feature points, a pose transformation matrix and the internal reference matrices of the cameras, where the camera group pose represents a difference between a current coordinate system of the camera group and an initial coordinate system of the camera group, and the pose transformation matrix is a transformation matrix between the camera group pose and the pose of each camera.
In a third aspect, the present application provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method described above.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a processor and memory for storing one or more programs; the one or more programs, when executed by the processor, implement the methods described above.
Compared with the prior art, the camera group pose acquisition method and device, the storage medium, and the electronic equipment provided by the embodiments of the present application have the following beneficial effects: two-dimensional feature points are extracted from the two-dimensional image acquired by each camera at the current moment, and are then matched against the three-dimensional feature points in a map to screen out two-dimensional target feature points and three-dimensional target feature points. The camera group pose is determined by combining the multiple groups of two-dimensional and three-dimensional target feature points with the pose transformation matrices and internal reference matrices of all cameras, so that the current camera group pose thus obtained is more accurate and a simultaneous localization and mapping system built from it is more stable.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and it will be apparent to those skilled in the art that other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a camera group pose acquisition method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a camera group pose acquisition method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a camera group pose acquisition method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a camera group pose acquisition method according to an embodiment of the present application;
fig. 6 is a schematic unit diagram of a camera group pose acquisition apparatus according to an embodiment of the present application.
In the figure: 10-a processor; 11-a memory; 12-a bus; 13-a communication interface; 14-a camera; 201-a feature extraction unit; 202-processing unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that the terms "upper", "lower", "inner", "outer", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings or orientations or positional relationships conventionally found in use of products of the application, and are used only for convenience in describing the present application and for simplification of description, but do not indicate or imply that the referred devices or elements must have a specific orientation, be constructed in a specific orientation, and be operated, and thus should not be construed as limiting the present application.
In the description of the present application, it is also to be noted that, unless otherwise explicitly specified or limited, the terms "disposed" and "connected" are to be interpreted broadly, e.g., as being either fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
The embodiment of the application provides an electronic device which can be a vehicle-mounted monitoring device. Please refer to fig. 1, a schematic structural diagram of an electronic device. The electronic device includes a processor 10, a memory 11, a camera 14, and a bus 12. The processor 10, the memory 11 and the camera 14 are connected by a bus 12, the processor 10 being adapted to execute executable modules, such as computer programs, stored in the memory 11.
The processor 10 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the camera group pose acquisition method can be implemented by integrated logic circuits of hardware in the processor 10 or instructions in the form of software. The Processor 10 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
The Memory 11 may comprise a high-speed Random Access Memory (RAM) and may further comprise a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The bus 12 may be an ISA (Industry Standard architecture) bus, a PCI (peripheral component interconnect) bus, an EISA (extended Industry Standard architecture) bus, or the like. Only one bi-directional arrow is shown in fig. 1, but this does not indicate only one bus 12 or one type of bus 12.
The memory 11 is used for storing programs, such as programs corresponding to the camera group pose acquisition apparatus. The camera group pose acquisition device comprises at least one software functional module which can be stored in a memory 11 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device. The processor 10 executes the program to implement the camera group pose acquisition method after receiving the execution instruction.
The camera 14 is an image capturing device and the electronic device comprises at least 3 cameras, although the electronic device may comprise more cameras, for example a combination of a binocular camera and a full view camera.
Possibly, the electronic device provided by the embodiment of the present application further includes a communication interface 13. The communication interface 13 is connected to the processor 10 via a bus. The electronic device may receive a control instruction of the other terminal or transmit a message to the other terminal through the communication interface 13.
It should be understood that the structure shown in fig. 1 is merely a structural schematic diagram of a portion of an electronic device, which may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The camera group pose acquisition method provided by the embodiments of the present application can be applied to, but is not limited to, the electronic device shown in fig. 1. Please refer to fig. 2:
and S105, extracting two-dimensional feature points in the two-dimensional images acquired by the cameras at the current moment.
Specifically, the camera group in the embodiment of the present application includes at least 3 cameras 14, and the relative positions of the at least 3 cameras 14 are fixed. Possibly, each camera 14 is mounted to the vehicle body, the structure of the vehicle body is not changed, and the relative position of each camera 14 is fixed.
Each camera 14 acquires a two-dimensional image at the current time and transmits the two-dimensional image to the processor 10, and the processor 10 extracts two-dimensional feature points in each two-dimensional image.
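By way of illustration only (the patent does not name a specific detector), the extraction in S105 could use a standard feature detector such as ORB; the OpenCV calls below are an assumed implementation, not part of the original disclosure:

```python
# Illustrative sketch of S105: extract 2D feature points (with descriptors)
# from each camera's frame at the current moment, here using OpenCV ORB.
import cv2

def extract_2d_features(frames):
    """frames: list of grayscale images, one per camera, taken at the same moment."""
    orb = cv2.ORB_create(nfeatures=1000)
    features = []
    for frame in frames:
        keypoints, descriptors = orb.detectAndCompute(frame, None)
        features.append((keypoints, descriptors))
    return features
```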
S106, matching the two-dimensional feature points respectively with the three-dimensional feature points in the map, and screening out two-dimensional target feature points and three-dimensional target feature points, wherein the two-dimensional target feature points and the three-dimensional target feature points are matched points.
Specifically, take the two-dimensional images obtained by camera A as an example. When the pose of camera A changes, two images captured consecutively by camera A contain both common feature points (1, 2, 3 and 4) and different feature points (5, 6, 7 and 8). All feature points in the earlier image are recorded into the map, and three-dimensional coordinates in the world coordinate system are established for them. When all feature points in the later image are matched against the feature points in the map, the corresponding matched three-dimensional feature points (1, 2, 3 and 4) can be found in the map: the two-dimensional target feature points are points (1, 2, 3 and 4) in the later image, and the map feature points matched with them are the three-dimensional target feature points.
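A minimal sketch of the screening in S106, assuming the map stores one binary descriptor per three-dimensional feature point and that ambiguous matches are rejected with a Lowe ratio test (the ratio test is an assumption; the patent only requires that matched point pairs be screened out):

```python
# Illustrative sketch of S106: match the current frame's 2D descriptors against
# the map's 3D feature point descriptors and keep the unambiguous matches.
import cv2
import numpy as np

def screen_target_points(keypoints, descriptors, map_points, map_descriptors):
    """map_points: (N, 3) world coordinates; map_descriptors: (N, 32) uint8 ORB."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(descriptors, map_descriptors, k=2)
    pts_2d, pts_3d = [], []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            pts_2d.append(keypoints[pair[0].queryIdx].pt)  # 2D target feature point
            pts_3d.append(map_points[pair[0].trainIdx])    # matched 3D target point
    return np.asarray(pts_2d), np.asarray(pts_3d)
```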
And S107, determining the pose of the camera group according to the two-dimensional target feature points, the three-dimensional target feature points and the pose transformation matrix corresponding to each camera and the internal reference matrix of each camera.
The camera group pose represents the difference between the current coordinate system of the camera group and the initial coordinate system of the camera group, and the pose transformation matrix is the transformation matrix between the camera group pose and the pose of each camera.
Specifically, whether a simultaneous localization and mapping (SLAM) system in the electronic device is stable depends on whether the current camera group pose can be obtained accurately. The camera group pose has six degrees of freedom, comprising the world coordinates XYZ of the camera optical center and the inclination angles about three axes. If the camera group pose has errors, the current coordinate system of the camera group is inaccurate, so the positioning result deviates and the built map differs from the actual environment. In the prior art, a binocular camera is used to monitor the environment; a binocular camera is more susceptible to interference and cannot obtain the pose stably. In the embodiments of the present application, more cameras 14, even a full-view camera group, are added, and the current camera group pose is acquired from the two-dimensional images captured by all the cameras together with the established map, so the pose is accurate and the system is stable.
In summary, in the camera group pose acquisition method provided by the embodiments of the present application, two-dimensional feature points are extracted from the two-dimensional image acquired by each camera at the current moment, and are then matched against the three-dimensional feature points in a map to screen out two-dimensional target feature points and three-dimensional target feature points. The camera group pose is determined by combining the multiple groups of two-dimensional and three-dimensional target feature points with the pose transformation matrices and internal reference matrices of all cameras, so that the current camera group pose thus obtained is more accurate and a simultaneous localization and mapping system built from it is more stable.
On the basis of fig. 2, regarding how to obtain the camera group pose, an embodiment of the present application further provides a possible implementation, in which the camera group pose is determined according to the following equation:
$$[R, t] = \mathop{\arg\min}_{[R,\,t]} \sum_{j=1}^{m} \sum_{i=1}^{n_j} \left\| u_{ij} - \frac{1}{s_{ij}} K_j \, T_0^j \, [R, t] \, P_{ij} \right\|^2$$
wherein $[R, t]$ represents the camera group pose; $m$ represents the total number of cameras; $n_j$ represents the total number of two-dimensional target feature points in the two-dimensional image corresponding to the $j$-th camera; $u_{ij}$ represents the two-dimensional coordinates of the $i$-th two-dimensional target feature point of the $j$-th camera; $s_{ij}$ represents the feature point depth of the $i$-th three-dimensional target feature point of the $j$-th camera; $K_j$ represents the internal reference matrix of the $j$-th camera; $T_0^j$ represents the pose transformation matrix between the camera group pose and the pose of the $j$-th camera; $P_{ij}$ represents the three-dimensional coordinates corresponding to the $i$-th three-dimensional target feature point of the $j$-th camera; and $\arg\min$ represents solving for the values of the variables that minimize the expression.
Preferably, j is equal to or greater than 3 and i is equal to or greater than 4. The more cameras there are, the larger the covered field of view and the more three-dimensional feature points can be matched at each moment. The more constraints participate in the least-squares optimization, the more stable the camera group pose obtained by integrating all the information.
The feature point depth is the perpendicular distance from the three-dimensional feature point to the camera imaging plane. Possibly, the feature point depth is the Z-axis value obtained by multiplying the three-dimensional coordinates of the feature point by the camera group pose.
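A minimal sketch of solving the least-squares problem above, assuming the camera group pose is parameterized as an axis-angle rotation plus a translation and minimized with SciPy; the solver and parameterization are assumptions, not the patent's prescription:

```python
# Illustrative sketch: minimize the multi-camera reprojection error over the
# camera group pose [R, t]. obs[j] = (u_j, P_j, K_j, T0_j) holds, for camera j,
# the 2D target points u_ij (n_j, 2), the matched 3D points P_ij (n_j, 3), the
# internal reference matrix K_j (3, 3), and the 4x4 pose transformation matrix T0_j.
import cv2
import numpy as np
from scipy.optimize import least_squares

def residuals(x, obs):
    R, _ = cv2.Rodrigues(x[:3])                     # axis-angle -> rotation matrix
    T_group = np.eye(4)
    T_group[:3, :3], T_group[:3, 3] = R, x[3:6]     # camera group pose [R, t]
    errs = []
    for u, P, K, T0 in obs:
        P_h = np.hstack([P, np.ones((len(P), 1))])  # homogeneous P_ij
        P_cam = (T0 @ T_group @ P_h.T)[:3]          # points in camera j's frame
        s = P_cam[2]                                # feature point depth s_ij
        proj = (K @ (P_cam / s))[:2].T              # (1/s_ij) K_j projection
        errs.append((u - proj).ravel())
    return np.concatenate(errs)

def solve_group_pose(obs, x0=np.zeros(6)):
    return least_squares(residuals, x0, args=(obs,)).x  # [rvec (3), t (3)]
```

Each camera contributes 2·n_j residuals, so adding cameras (j ≥ 3) or matched points (i ≥ 4) adds constraints and, as noted above, stabilizes the solution.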
On the basis of fig. 2, regarding a manner of obtaining an internal reference matrix, a possible implementation manner is further provided in the embodiment of the present application, please refer to fig. 3, where the method for obtaining the pose of the camera group further includes:
s101, calibrating an internal reference matrix of each camera according to the calibration board, wherein the internal reference matrix comprises the focal length and distortion parameters of the cameras.
A calibration board (calibration target) is used in machine vision, image measurement, photogrammetry, three-dimensional reconstruction, and other applications to correct lens distortion, to determine the conversion relationship between physical size and pixels, and to determine the relationship between the three-dimensional geometric position of a point on the surface of a space object and its corresponding point in the image; for this, a geometric model of camera imaging must be established. The camera photographs a flat plate bearing an array of fixed-pitch patterns, and a calibration algorithm computes the geometric model of the camera, including its focal length and distortion parameters. This flat plate with an array of fixed-pitch patterns is the calibration board.
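As one concrete possibility, a chessboard-style board together with OpenCV's calibration routine could implement S101; the board geometry and square size below are illustrative assumptions:

```python
# Illustrative sketch of S101: estimate a camera's internal reference matrix K
# and distortion coefficients from several views of a chessboard calibration board.
import cv2
import numpy as np

def calibrate_intrinsics(images, board=(9, 6), square=0.025):
    """images: grayscale views of the board; board: inner-corner grid size;
    square: corner pitch in meters (both values are illustrative)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, images[0].shape[::-1], None, None)
    return K, dist   # internal reference matrix and distortion parameters
```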
On the basis of fig. 3, regarding an obtaining manner of the pose transformation matrix, an embodiment of the present application further provides a possible implementation manner, please refer to fig. 4, where the camera group pose obtaining method further includes:
and S102, obtaining a calibration image corresponding to each camera, wherein the calibration image is an image obtained by shooting a target determined by each position in a calibration field by the camera.
Specifically, a plurality of targets at determined positions are provided within the calibration field. With the relative positions of the cameras kept unchanged, the calibration image corresponding to each camera is acquired at the same moment.
S103, determining external parameters according to the positions of the targets in the calibration images corresponding to the cameras and the internal reference matrices, wherein the external parameters comprise transformation matrices between the coordinate system of one camera and the coordinate systems of all the other cameras.
Specifically, the external parameters can be determined by combining the positions of the targets in the calibration images acquired by the cameras with the internal reference matrices corresponding to the cameras.
S104, determining the pose transformation matrix according to the external parameters.
Specifically, the external parameters include the transformation matrices between the coordinate system of one of the cameras and the coordinate systems of all the other cameras, so the pose transformation matrix between the camera group pose and the pose of each camera can be obtained.
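A minimal sketch of S102 to S104, under the assumption that each camera's pose in the calibration field is recovered by a PnP solve against the known targets and that the pose transformation matrices are formed relative to one reference camera (the choice of reference is an assumption):

```python
# Illustrative sketch of S102-S104: recover each camera's pose from the known
# calibration targets, then form the pose transformation matrices T0_j.
import cv2
import numpy as np

def camera_pose_in_field(world_pts, image_pts, K, dist):
    """world_pts: (N, 3) float target positions in the calibration field;
    image_pts: (N, 2) float pixel positions in this camera's calibration image."""
    _, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T                                 # field (world) -> camera transform

def pose_transformation_matrices(T_cams, ref=0):
    """T0_j maps the camera group frame (taken here as camera `ref`) to camera j."""
    T_ref_inv = np.linalg.inv(T_cams[ref])
    return [T_j @ T_ref_inv for T_j in T_cams]
```

Because the relative positions of the cameras are fixed, the matrices T0_j computed once in the calibration field remain valid at runtime.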
On the basis of fig. 2, regarding the construction of the map, the embodiment of the present application further provides a possible implementation manner, please refer to fig. 5, the method for acquiring the pose of the camera group further includes:
and S107, calculating the three-dimensional coordinates of the new two-dimensional feature points by combining a triangulation algorithm and the camera group position and posture, wherein the new two-dimensional feature points do not have three-dimensional feature points matched with the new two-dimensional feature points in the current map.
Referring to the foregoing example, the feature points (5, 6, 7 and 8) corresponding to camera A are such new two-dimensional feature points.
S109, generating corresponding new three-dimensional feature points in the map according to the three-dimensional coordinates of the new two-dimensional feature points.
In this way, the map content is continuously enriched and the map is perfected, facilitating subsequent matching and positioning.
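A minimal sketch of S108 and S109, assuming a new two-dimensional feature point is observed by two cameras whose projection matrices follow from the solved camera group pose and the pose transformation matrices; the OpenCV triangulation call is an assumed implementation:

```python
# Illustrative sketch of S108-S109: triangulate world coordinates for new 2D
# feature points seen by two cameras, then add them to the map as 3D points.
import cv2
import numpy as np

def triangulate_new_points(K1, T1, K2, T2, pts1, pts2):
    """T1, T2: 4x4 world -> camera transforms (group pose composed with T0_j);
    pts1, pts2: (N, 2) float matched new 2D feature points in the two images."""
    P1 = K1 @ T1[:3]                                     # 3x4 projection matrix
    P2 = K2 @ T2[:3]
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous points
    return (X_h[:3] / X_h[3]).T                          # (N, 3) new map points

def add_to_map(map_points, new_points):
    return np.vstack([map_points, new_points])           # enrich the map (S109)
```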
Possibly, the embodiments of the present application combine binocular, visual perception, and full-view information to construct a complete visual SLAM system. Compared with existing methods, the system can monitor the size and direction of a forward obstacle in real time and thus achieve intelligent obstacle avoidance, and it uses full-view information of the surrounding environment to improve the accuracy and stability of visual positioning, greatly improving the performance of the visual SLAM system in practical applications.
Referring to fig. 6, fig. 6 shows a camera group pose acquisition apparatus according to an embodiment of the present application; optionally, the camera group pose acquisition apparatus is applied to the electronic device described above.
The camera group pose acquisition apparatus comprises: a feature extraction unit 201 and a processing unit 202.
A feature extraction unit 201, configured to extract two-dimensional feature points in two-dimensional images acquired by the respective cameras at the current time. Specifically, the feature extraction unit 201 may perform S105 described above.
The processing unit 202 is configured to match the two-dimensional feature points with the three-dimensional feature points in the map and screen out two-dimensional target feature points and three-dimensional target feature points, where the two-dimensional target feature points and the three-dimensional target feature points are matched points; and is further configured to determine the camera group pose according to the two-dimensional target feature points, the three-dimensional target feature points, the pose transformation matrix corresponding to each camera, and the internal reference matrix of each camera, where the camera group pose represents the difference between the current coordinate system of the camera group and the initial coordinate system of the camera group, and the pose transformation matrix is the transformation matrix between the camera group pose and the pose of each camera. Specifically, the processing unit 202 may execute S106 and S107 described above.
Further, the camera group pose is determined according to the following equation:
$$[R, t] = \mathop{\arg\min}_{[R,\,t]} \sum_{j=1}^{m} \sum_{i=1}^{n_j} \left\| u_{ij} - \frac{1}{s_{ij}} K_j \, T_0^j \, [R, t] \, P_{ij} \right\|^2$$
wherein $[R, t]$ represents the camera group pose; $m$ represents the total number of cameras; $n_j$ represents the total number of two-dimensional target feature points in the two-dimensional image corresponding to the $j$-th camera; $u_{ij}$ represents the two-dimensional coordinates of the $i$-th two-dimensional target feature point of the $j$-th camera; $s_{ij}$ represents the feature point depth of the $i$-th three-dimensional target feature point of the $j$-th camera; $K_j$ represents the internal reference matrix of the $j$-th camera; $T_0^j$ represents the pose transformation matrix between the camera group pose and the pose of the $j$-th camera; $P_{ij}$ represents the three-dimensional coordinates corresponding to the $i$-th three-dimensional target feature point of the $j$-th camera; and $\arg\min$ represents solving for the values of the variables that minimize the expression.
Further, the processing unit 202 is configured to calculate the three-dimensional coordinates of a new two-dimensional feature point through a triangulation algorithm in combination with the camera group pose, where the new two-dimensional feature point does not have a matching three-dimensional feature point in the current map, and to generate a corresponding new three-dimensional feature point in the map according to the three-dimensional coordinates of the new two-dimensional feature point. Specifically, the processing unit 202 may execute S108 and S109 described above.
It should be noted that the camera group pose acquisition apparatus provided in this embodiment may execute the method flows shown in the above method flow embodiments to achieve the corresponding technical effects. For the sake of brevity, the corresponding contents in the above embodiments may be referred to where not mentioned in this embodiment.
The embodiment of the invention also provides a storage medium, wherein the storage medium stores computer instructions and a program, and the computer instructions and the program execute the camera group position and posture acquisition method of the embodiment when being read and run. The storage medium may include memory, flash memory, registers, or a combination thereof, etc.
The following provides an electronic device, which may be a vehicle-mounted monitoring device, and as shown in fig. 1, the electronic device may implement the above-mentioned camera group pose acquisition method; specifically, the electronic device includes: processor 10, memory 11, bus 12. The processor 10 may be a CPU. The memory 11 is used for storing one or more programs, and when the one or more programs are executed by the processor 10, the camera group pose acquisition method of the above-described embodiment is performed.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A camera group pose acquisition method is characterized in that a camera group comprises at least 3 cameras, the relative positions of the at least 3 cameras are fixed, and the method comprises the following steps:
extracting two-dimensional feature points in two-dimensional images acquired by each camera at the current moment;
matching the two-dimensional feature points with three-dimensional feature points in a map respectively, and screening out two-dimensional target feature points and three-dimensional target feature points, wherein the two-dimensional target feature points and the three-dimensional target feature points are matched points;
and determining the pose of the camera group according to the two-dimensional target feature points, the three-dimensional target feature points, the pose transformation matrix and the internal reference matrix of each camera, wherein the pose of the camera group represents the difference between the current coordinate system of the camera group and the initial coordinate system of the camera group, and the pose transformation matrix is the transformation matrix between the pose of the camera group and the pose of each camera.
2. The camera group pose acquisition method according to claim 1, wherein the camera group pose is determined according to the following equation;
$$[R, t] = \mathop{\arg\min}_{[R,\,t]} \sum_{j=1}^{m} \sum_{i=1}^{n_j} \left\| u_{ij} - \frac{1}{s_{ij}} K_j \, T_0^j \, [R, t] \, P_{ij} \right\|^2$$
wherein $[R, t]$ represents the camera group pose; $m$ represents the total number of cameras; $n_j$ represents the total number of two-dimensional target feature points in the two-dimensional image corresponding to the $j$-th camera; $u_{ij}$ represents the two-dimensional coordinates of the $i$-th two-dimensional target feature point of the $j$-th camera; $s_{ij}$ represents the feature point depth of the $i$-th three-dimensional target feature point of the $j$-th camera; $K_j$ represents the internal reference matrix of the $j$-th camera; $T_0^j$ represents the pose transformation matrix between the camera group pose and the pose of the $j$-th camera; $P_{ij}$ represents the three-dimensional coordinates corresponding to the $i$-th three-dimensional target feature point of the $j$-th camera; and $\arg\min$ represents solving for the values of the variables that minimize the expression.
3. The camera group pose acquisition method according to claim 1, wherein before the extracting of the two-dimensional feature points in the two-dimensional images acquired by each camera at the current moment, the method further comprises:
and calibrating the internal reference matrix of each camera according to the calibration board, wherein the internal reference matrix comprises the focal length and distortion parameters of the camera.
4. The camera group pose acquisition method of claim 3, wherein after said calibrating the internal reference matrix of each camera according to the calibration board, the method further comprises:
acquiring a calibration image corresponding to each camera, wherein the calibration image is an image obtained by shooting a target determined by each position in a calibration field by the camera;
determining external parameters according to the positions of the targets in the calibration images corresponding to the cameras and the internal parameter matrix, wherein the external parameters comprise conversion matrixes between the coordinate system of one camera and the coordinate systems of any other cameras;
and determining the pose transformation matrix according to the external parameters.
5. The camera group pose acquisition method according to claim 1, wherein after the determining of the camera group pose according to the two-dimensional target feature points, the three-dimensional target feature points, the pose transformation matrix corresponding to each camera, and the internal reference matrix of each camera, the method further comprises:
calculating the three-dimensional coordinates of a new two-dimensional feature point through a triangulation algorithm combined with the camera group pose, wherein the new two-dimensional feature point does not have a three-dimensional feature point matched with it in the current map;
and generating a corresponding new three-dimensional feature point in the map according to the three-dimensional coordinate of the new two-dimensional feature point.
6. A camera group pose acquisition device is characterized in that a camera group comprises at least 3 cameras, the relative positions of the at least 3 cameras are fixed, and the device comprises:
the characteristic extraction unit is used for extracting two-dimensional characteristic points in the two-dimensional images acquired by the cameras at the current moment;
the processing unit is used for respectively matching the two-dimensional feature points with three-dimensional feature points in a map and screening out two-dimensional target feature points and three-dimensional target feature points, wherein the two-dimensional target feature points and the three-dimensional target feature points are matched points;
the processing unit is further configured to determine a camera group pose according to the two-dimensional target feature points, the three-dimensional target feature points, a pose transformation matrix and the internal reference matrices of the cameras, where the camera group pose represents a difference between a current coordinate system of the camera group and an initial coordinate system of the camera group, and the pose transformation matrix is a transformation matrix between the camera group pose and the pose of each camera.
7. The camera group pose acquisition apparatus according to claim 6, wherein the camera group pose is determined according to the following equation;
$$[R, t] = \mathop{\arg\min}_{[R,\,t]} \sum_{j=1}^{m} \sum_{i=1}^{n_j} \left\| u_{ij} - \frac{1}{s_{ij}} K_j \, T_0^j \, [R, t] \, P_{ij} \right\|^2$$
wherein $[R, t]$ represents the camera group pose; $m$ represents the total number of cameras; $n_j$ represents the total number of two-dimensional target feature points in the two-dimensional image corresponding to the $j$-th camera; $u_{ij}$ represents the two-dimensional coordinates of the $i$-th two-dimensional target feature point of the $j$-th camera; $s_{ij}$ represents the feature point depth of the $i$-th three-dimensional target feature point of the $j$-th camera; $K_j$ represents the internal reference matrix of the $j$-th camera; $T_0^j$ represents the pose transformation matrix between the camera group pose and the pose of the $j$-th camera; $P_{ij}$ represents the three-dimensional coordinates corresponding to the $i$-th three-dimensional target feature point of the $j$-th camera; and $\arg\min$ represents solving for the values of the variables that minimize the expression.
8. The camera group pose acquisition apparatus according to claim 6, wherein the processing unit is further configured to calculate three-dimensional coordinates of a new two-dimensional feature point by using a triangulation algorithm in combination with the camera group pose, wherein the new two-dimensional feature point does not have a matching three-dimensional feature point in a current map; and generating a corresponding new three-dimensional feature point in the map according to the three-dimensional coordinate of the new two-dimensional feature point.
9. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
10. An electronic device, comprising: a processor and memory for storing one or more programs; the one or more programs, when executed by the processor, implement the method of any of claims 1-5.
CN202010723102.1A 2020-07-24 2020-07-24 Camera set pose acquisition method and device, storage medium and electronic equipment Active CN111862180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010723102.1A CN111862180B (en) 2020-07-24 2020-07-24 Camera set pose acquisition method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010723102.1A CN111862180B (en) 2020-07-24 2020-07-24 Camera set pose acquisition method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111862180A true CN111862180A (en) 2020-10-30
CN111862180B CN111862180B (en) 2023-11-17

Family

ID=72950909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010723102.1A Active CN111862180B (en) 2020-07-24 2020-07-24 Camera set pose acquisition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111862180B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014095553A * 2012-11-07 2014-05-22 Nippon Telegr & Teleph Corp <Ntt> Camera pose estimation device and camera pose estimation program
WO2015120910A1 (en) * 2014-02-17 2015-08-20 Longsand Limited Determining pose and focal length
CN108648240A * 2018-05-11 2018-10-12 东南大学 Non-overlapping field of view camera pose calibration method based on point cloud feature map registration
CN109242913A * 2018-09-07 2019-01-18 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for calibrating relative parameters of a collector
CN110473262A * 2019-08-22 2019-11-19 北京双髻鲨科技有限公司 External parameter calibration method and device for multi-camera system, storage medium and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张慧智; 高箴; 周健: "Pose measurement and error analysis of moving targets based on laser vision technology", Laser Journal (激光杂志), no. 04 *
王晨宇: "Research on intelligent calibration methods for lidar and multi-camera fusion", China Masters' Theses Full-text Database, Engineering Science and Technology II (《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》), no. 02 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419429A (en) * 2021-01-25 2021-02-26 中国人民解放军国防科技大学 Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN112419429B (en) * 2021-01-25 2021-08-10 中国人民解放军国防科技大学 Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN112927273A (en) * 2021-01-28 2021-06-08 北京字节跳动网络技术有限公司 Three-dimensional video processing method, equipment and storage medium
CN112907671A (en) * 2021-03-31 2021-06-04 深圳市慧鲤科技有限公司 Point cloud data generation method and device, electronic equipment and storage medium
WO2022237787A1 (en) * 2021-05-10 2022-11-17 武汉联影智融医疗科技有限公司 Robot positioning and pose adjustment method and system
CN113611143A (en) * 2021-07-29 2021-11-05 同致电子科技(厦门)有限公司 Novel memory parking system and map building system thereof
CN113611143B (en) * 2021-07-29 2022-10-18 同致电子科技(厦门)有限公司 Parking memory system and map building system thereof
CN114098980A (en) * 2021-11-19 2022-03-01 武汉联影智融医疗科技有限公司 Camera pose adjusting method, space registration method, system and storage medium
CN114098980B (en) * 2021-11-19 2024-06-11 武汉联影智融医疗科技有限公司 Camera pose adjustment method, space registration method, system and storage medium
WO2024093635A1 (en) * 2022-11-04 2024-05-10 深圳市其域创新科技有限公司 Camera pose estimation method and apparatus, and computer-readable storage medium

Also Published As

Publication number Publication date
CN111862180B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
CN111862180A (en) Camera group pose acquisition method and device, storage medium and electronic equipment
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN110956660B (en) Positioning method, robot, and computer storage medium
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN110568447A (en) Visual positioning method, device and computer readable medium
CN110501036A (en) The calibration inspection method and device of sensor parameters
CN110738703B (en) Positioning method and device, terminal and storage medium
CN110095089B (en) Method and system for measuring rotation angle of aircraft
CN110779491A (en) Method, device and equipment for measuring distance of target on horizontal plane and storage medium
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN114387347B (en) Method, device, electronic equipment and medium for determining external parameter calibration
CN112613381A (en) Image mapping method and device, storage medium and electronic device
CN112229323A (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN114979956A (en) Unmanned aerial vehicle aerial photography ground target positioning method and system
CN114926538A (en) External parameter calibration method and device for monocular laser speckle projection system
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
CN112288813B (en) Pose estimation method based on multi-view vision measurement and laser point cloud map matching
CN113406604A (en) Device and method for calibrating positions of laser radar and camera
CN113034615B (en) Equipment calibration method and related device for multi-source data fusion
CN114092564B (en) External parameter calibration method, system, terminal and medium for non-overlapping vision multi-camera system
CN112115930B (en) Method and device for determining pose information
CN115018922A (en) Distortion parameter calibration method, electronic device and computer readable storage medium
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
CN112750165B (en) Parameter calibration method, intelligent driving method, device, equipment and storage medium thereof
CN113792645A (en) AI eyeball fusing image and laser radar

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230110

Address after: 314506 room 116, building 4, No. 288, development avenue, Tongxiang Economic Development Zone, Tongxiang City, Jiaxing City, Zhejiang Province

Applicant after: Shengjing Intelligent Technology (Jiaxing) Co.,Ltd.

Address before: 102200 5th floor, building 6, No.8 Beiqing Road, Changping District, Beijing

Applicant before: SANY HEAVY INDUSTRY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant