WO2021115331A1 - Triangulation-based coordinate positioning method, apparatus, device, and storage medium - Google Patents

Triangulation-based coordinate positioning method, apparatus, device, and storage medium

Info

Publication number
WO2021115331A1
WO2021115331A1 · PCT/CN2020/134947 · CN2020134947W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional coordinates
coordinates
initial
point
unit
Prior art date
Application number
PCT/CN2020/134947
Other languages
English (en)
French (fr)
Inventor
吴昆临
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司
Publication of WO2021115331A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Definitions

  • This application belongs to the field of computer technology, and in particular relates to a coordinate positioning method, device, equipment and storage medium based on triangulation.
  • Triangulation is widely used in coordinate positioning in real-time motion capture systems.
  • The projection of the marker point's target spatial position predicted by triangulation onto each motion capture camera differs from the projection point actually acquired by that camera, so the distance error is optimized to obtain an accurate spatial position for the marker point.
  • The optimized three-dimensional coordinate value must be output within a preset time. As the distance between the motion capture camera and the marker point grows, the distance error between the predicted projection point and the actual projection point also grows, which increases the amount of calculation required for coordinate optimization; as a result, when the preset time arrives, the problem of a large coordinate positioning error inevitably occurs.
  • the embodiments of the present application provide a coordinate positioning method, device, equipment, and storage medium based on triangulation to solve the technical problem of large errors in the coordinate positioning method based on triangulation in the prior art.
  • an embodiment of the present application provides a coordinate positioning method based on triangulation, including:
  • the optimized three-dimensional coordinates of the target marking point are obtained.
  • obtaining the initial three-dimensional coordinates of the target marking point according to the multiple image coordinates includes:
  • determining the angular positioning error of the target marking point according to the initial three-dimensional coordinates and multiple image coordinates includes:
  • calculating and obtaining the reference angle positioning error of the target marker point projected onto the projection point of the camera unit includes:
  • the obtaining the optimized three-dimensional coordinates of the target marking point according to the initial three-dimensional coordinates and the angular positioning error includes:
  • the gradient direction in the gradient descent algorithm is the direction in which the angular positioning error declines the fastest.
  • the iterative calculation of the initial three-dimensional coordinates based on the gradient descent algorithm until the preset conditions are met includes:
  • the optimized three-dimensional coordinates are calculated
  • determining whether the iteration result meets a preset condition includes:
  • an embodiment of the present application provides a coordinate positioning device based on triangulation, including:
  • An acquisition module configured to acquire the image coordinates of the target marker point projected onto the imaging planes of multiple camera units, and obtain the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates;
  • the determining module is used to determine the angular positioning error of the target marking point according to the initial three-dimensional coordinates and multiple image coordinates;
  • the positioning module is used to obtain the optimized three-dimensional coordinates of the target marking point according to the initial three-dimensional coordinates and angular positioning error.
  • an embodiment of the present application provides a coordinate positioning device based on triangulation, including a memory, a processor, and a computer program stored in the memory and running on the processor.
  • the processor, when executing the computer program, implements the steps of any one of the methods of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, and the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any one of the methods in the first aspect are implemented.
  • the embodiments of the present application provide a computer program product, which when the computer program product runs on a terminal device, causes the terminal device to execute the method in any one of the above-mentioned first aspects.
  • the coordinate positioning method based on triangulation provided by the embodiments of the application obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates, determines the angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates, and then obtains the optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error.
  • the coordinate positioning method based on triangulation provided by the embodiments of the present application corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point. Compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is not affected by the distance between the camera unit and the target marker point, and improves the accuracy of coordinate positioning while satisfying the positioning-speed requirement.
  • Figure 1 is a schematic diagram of triangulation
  • FIG. 2 is a schematic flowchart of a coordinate positioning method based on triangulation provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a process of obtaining the initial three-dimensional coordinates of a target marker according to an embodiment of the present application
  • FIG. 4 is a projection relationship diagram of the three-dimensional coordinates of the marker points and the image coordinates provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of determining the angular positioning error of a target mark point according to an embodiment of the present application
  • FIG. 6 is a schematic flowchart of iterative calculation of initial three-dimensional coordinates based on a gradient descent algorithm according to an embodiment of the application;
  • FIG. 7 is a schematic structural diagram of a coordinate positioning device based on triangulation provided by an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of a coordinate positioning device based on triangulation provided by an embodiment of the present application.
  • Triangulation is a method of determining the distance of the target point or the position of the target point by measuring the angle between the target point and the known end point of a fixed reference line. Because it does not need to directly perform trilateral measurement (distance measurement) to determine the position of the target point, it is widely used in real-time motion capture systems.
  • the real-time motion capture system includes multiple motion capture cameras, at least one data processing workstation, and multiple optical mark recognition points.
  • optical marking points are pasted on key parts of moving objects (such as the joints of the human body, etc.).
  • Multiple motion capture cameras detect the optical marker points from different angles in real time and obtain the image coordinates of the optical marker points projected on the different motion capture cameras.
  • the image coordinates of the optical marker recognition points are transmitted to the data processing workstation in real time.
  • the data processing workstation receives the multiple image coordinates of the target marker point sent by each motion capture camera, locates the optical marker point in spatial coordinates according to the principle of triangulation, and then calculates the degrees-of-freedom motion of the bones from the principles of biokinematics.
  • the principle diagram of triangulation can refer to Figure 1.
  • P is the optical marking point.
  • Two cameras R 0 and R 1 shoot the optical marking point P from different angles.
  • the projection points of P on the two cameras are X 0 and X 1 respectively .
  • the three-dimensional coordinates of the optical marker point P are obtained as the coordinates of the intersection A between the first vector V0 and the second vector V1; the first vector V0 is the vector passing through the optical center C0 of camera R0 and the projection point X0, and the second vector V1 is the vector passing through the optical center C1 of camera R1 and the projection point X1.
  • the projection point of the intersection point A (the target spatial position predicted by triangulation) on each motion capture camera is different from the position of the P point on the motion capture camera projection point X 0 or X 1 .
  • the distance error needs to be optimized, and then the accurate three-dimensional coordinates (spatial position) of the optical marking point are calculated.
  • as the distance between the motion capture camera and the optical marker point increases, the distance error between the actual projection point of P and the projection point of intersection A also increases, which greatly increases the amount of calculation in coordinate positioning methods that optimize the coordinate position based on the distance error. To ensure the timeliness of the spatial coordinate positioning of the optical marker point, the technical problem of large coordinate positioning errors cannot be avoided.
  • FIG. 2 is a schematic flow chart of a coordinate positioning method based on triangulation provided by an embodiment of the application. As shown in FIG. 2, the coordinate positioning method based on triangulation includes:
  • S201 Obtain the image coordinates of the target marker point projected to the imaging planes of the multiple camera units, and obtain the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates.
  • the image coordinates are the position coordinates on the image coordinate system after the target marker point is projected onto the imaging plane of the camera unit, and are two-dimensional position coordinates.
  • the camera unit may be a motion capture camera in a motion capture system.
  • acquiring the image coordinates of the target marker point projected to the multiple camera units may be receiving multiple image coordinates of the target marker point sent by each motion capture camera.
  • obtaining the initial three-dimensional coordinates of the target marking point based on multiple image coordinates includes determining the initial three-dimensional coordinates of the target marking point based on the multiple image coordinates based on the principle of triangulation.
  • the initial three-dimensional coordinates are the coordinates of the world coordinate system.
  • the initial three-dimensional coordinates of the target marker point are determined according to the two image coordinates of the target marker point projected on the two camera units.
  • Fig. 1 is a schematic diagram of coordinate positioning based on triangulation.
  • P is the target mark point.
  • the two cameras R 0 and R 1 shoot the target mark point P from different angles.
  • the projection points of P on the two cameras are X0 and X1 respectively; the two-dimensional position coordinates of X0 and X1 on the imaging planes of their respective cameras are the image coordinates.
  • the three-dimensional coordinates of the target marker point P are the coordinates of the intersection A between the first vector V0 and the second vector V1; the first vector V0 is the vector passing through the optical center C0 of camera R0 and the projection point X0, and the second vector V1 is the vector passing through the optical center C1 of camera R1 and the projection point X1.
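The two-camera triangulation of Figure 1 can be sketched as follows (an illustrative Python sketch, not part of the disclosure; `triangulate_pair` is a hypothetical helper name, and since two rays rarely intersect exactly in practice, the midpoint of their shortest connecting segment stands in for intersection A):

```python
import numpy as np

def triangulate_pair(c0, v0, c1, v1):
    """c0, c1: optical centers; v0, v1: directions of the first and
    second vectors through the respective projection points (they are
    normalized internally). Returns the midpoint of the shortest
    segment between the two rays as the predicted 3-D marker position."""
    d0 = v0 / np.linalg.norm(v0)
    d1 = v1 / np.linalg.norm(v1)
    w = c0 - c1
    b = d0 @ d1                       # cosine of the angle between rays
    d, e = d0 @ w, d1 @ w
    denom = 1.0 - b * b               # zero only for parallel rays
    s = (b * e - d) / denom           # parameter along ray 0
    t = (e - b * d) / denom           # parameter along ray 1
    p0 = c0 + s * d0                  # closest point on ray 0
    p1 = c1 + t * d1                  # closest point on ray 1
    return (p0 + p1) / 2              # "intersection" A
```

If the rays do intersect exactly, the midpoint coincides with the true intersection.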
  • S202 Determine the angular positioning error of the target marking point according to the initial three-dimensional coordinates and multiple image coordinates.
  • determining the angular positioning error of the target marker point according to the initial three-dimensional coordinates and multiple image coordinates includes: for each image coordinate, obtaining the angular error of the target marker point relative to that image coordinate, and then determining the angular positioning error of the target marker point according to all of its angular errors.
  • the angular error of the target marker point relative to a given image coordinate is the angle between the line connecting the initial three-dimensional coordinates of the target marker point to the optical center of the camera unit corresponding to that image coordinate and the line connecting that image coordinate to the same optical center.
  • P is the target mark point
  • the projection point of P on the camera R 0 is X 0
  • the initial three-dimensional coordinates of the target mark point P are the three-dimensional coordinates of point B (It can be predicted according to the triangulation method)
  • the angular error of the target marker point P with respect to the projection point X0 is the angle between the first vector V0 and the line connecting C0 and B.
  • determining the angular positioning error of the target marking point according to all the angular errors of the target marking point includes determining the angular positioning error of the target marking point according to the average of all the angular errors.
  • S203 Obtain optimized three-dimensional coordinates of the target marking point according to the initial three-dimensional coordinates and the angular positioning error.
  • obtaining the optimized three-dimensional coordinates of the target mark point is to optimize the initial three-dimensional coordinates through the angular positioning error.
  • optimization methods include, but are not limited to, gradient descent algorithm, singular value decomposition method, least square method, etc.
  • optimizing the initial three-dimensional coordinates based on the gradient descent algorithm includes taking the three-dimensional coordinates of the target marker point as the target value of the loss function and iteratively calculating the initial three-dimensional coordinates with the gradient descent algorithm until the preset conditions are met; the three-dimensional coordinates at the current moment are then used as the optimized three-dimensional coordinates of the target marker point. The gradient direction in the gradient descent algorithm is the direction in which the angular positioning error declines the fastest.
  • the coordinate positioning method based on triangulation provided by the embodiment of the application obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates, determines the angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates, and then obtains the optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error.
  • the coordinate positioning method based on triangulation provided by the embodiments of the present application corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point. Compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is not affected by the distance between the camera unit and the target marker point, and improves the accuracy of coordinate positioning while ensuring the coordinate positioning speed.
  • in some embodiments, more than one intersection may be generated by the projection points on different pairs of camera units, and the initial three-dimensional coordinates can be determined from the average of the coordinates of the multiple intersections.
  • FIG. 3 is a schematic diagram of the process of obtaining the initial three-dimensional coordinates of the target marker point according to an embodiment of the application, and describes a possible implementation of obtaining the initial three-dimensional coordinates of the target marker point according to multiple image coordinates in step 201 of the embodiment shown in FIG. 2.
  • the method for obtaining the initial three-dimensional coordinates of the target marking point includes:
  • the coordinates of the optical center of each camera unit are the coordinates of the optical center of the camera unit in the world coordinate system.
  • S302 Determine the first unit vector of each camera unit according to the optical center coordinates of each camera unit and the image coordinates corresponding to the camera unit.
  • the image coordinates are the position coordinates of the target marker point projected on the imaging plane (image coordinate system) of the camera unit, which are two-dimensional coordinates, and each camera unit corresponds to an image coordinate.
  • FIG. 4 is a projection relationship diagram of the three-dimensional coordinates of the target mark point and the image coordinates (two-dimensional coordinates).
  • O C --X C Y C Z C is the camera coordinate system
  • o-xy is the image coordinate system
  • the origin of the camera coordinate system O C is the optical center of the camera unit
  • the image The origin o of the coordinate system is the projection of the optical center of the camera unit on the image plane
  • the distance between O C and o is the focal length f of the camera.
  • P is the target mark point
  • the imaging point p is obtained after the point P is projected onto the camera.
  • the coordinates of p in the image coordinate system are (x, y).
  • the optical center of the camera is connected with the point p to obtain the projection line along which the camera shoots the target marker point, that is, the first unit vector.
  • Each camera unit corresponds to a first unit vector.
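The projection relationship of FIG. 4 can be sketched as follows (illustrative Python, assuming the marker is already expressed in camera coordinates with Z_C > 0; `project_and_backproject` is a hypothetical name):

```python
import numpy as np

def project_and_backproject(P_cam, f):
    """P_cam: marker point P in camera coordinates; f: focal length.
    Returns the image coordinates (x, y) of the imaging point p and the
    first unit vector, i.e. the unit direction from the optical center
    O_C through p (which also passes through P)."""
    X, Y, Z = P_cam
    x, y = f * X / Z, f * Y / Z        # pinhole projection onto o-xy
    ray = np.array([x, y, f])          # O_C -> p in camera coordinates
    return (x, y), ray / np.linalg.norm(ray)
```

Because p lies on the line O_C-P, the returned unit vector is parallel to P_cam itself.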
  • the intersection of the first unit vectors of two camera units gives the three-dimensional coordinates of the target marker point as determined by those two camera units; every two first unit vectors correspond to one intersection point, that is, one intersection point can be determined for every two camera units.
  • Obtaining the three-dimensional coordinates of the multiple intersection points between all the first unit vectors may include: selecting a first unit vector A as a reference and obtaining the three-dimensional coordinates of the intersection points between A and each of the other first unit vectors; then taking another first unit vector B as the reference and obtaining the three-dimensional coordinates of the intersection points between B and the remaining first unit vectors; and so on, until the three-dimensional coordinates of all intersection points between all the first unit vectors have been obtained.
  • if N first unit vectors are obtained in step S302, and every two first unit vectors correspond to one intersection, then a total of N(N-1)/2 intersections can be obtained.
  • S304 Perform averaging processing on the three-dimensional coordinates of all intersections to obtain the initial three-dimensional coordinates of the target marked point.
  • the three-dimensional coordinates of all intersections are averaged, and the average value is used as the initial three-dimensional coordinates of the target mark point.
  • the coordinate positioning method based on triangulation determines the initial three-dimensional coordinates from the image coordinates of N camera units, where N is greater than 2, which improves the positioning accuracy of the initial three-dimensional coordinates and the efficiency of subsequently optimizing them.
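Steps S301-S304 can be sketched as follows (illustrative Python; `ray_midpoint` and `initial_coordinates` are hypothetical names, and the image points are assumed to be already lifted into world coordinates so each first unit vector can be formed directly from the optical center):

```python
import numpy as np
from itertools import combinations

def ray_midpoint(c0, d0, c1, d1):
    # Midpoint of the shortest segment between two rays with unit
    # directions d0, d1 -- the pairwise "intersection" of step S303.
    w = c0 - c1
    b = d0 @ d1
    d, e = d0 @ w, d1 @ w
    denom = 1.0 - b * b               # unit directions assumed
    s = (b * e - d) / denom
    t = (e - b * d) / denom
    return (c0 + s * d0 + c1 + t * d1) / 2

def initial_coordinates(centers, image_points):
    """S301-S304 sketch: build the first unit vector of each camera
    from its optical center through its (world-frame) image point,
    intersect all N(N-1)/2 pairs, and average (S304)."""
    dirs = [(p - c) / np.linalg.norm(p - c)
            for c, p in zip(centers, image_points)]
    pts = [ray_midpoint(centers[i], dirs[i], centers[j], dirs[j])
           for i, j in combinations(range(len(centers)), 2)]
    return np.mean(pts, axis=0)
```

With three cameras this averages exactly C(3, 2) = 3 pairwise intersections.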
  • FIG. 5 is a schematic diagram of the process of determining the angular positioning error of the target marker point according to an embodiment of the application, and describes a possible implementation of determining the angular positioning error of the target marker point according to the initial three-dimensional coordinates and multiple image coordinates in step 202 of the embodiment shown in FIG. 2.
  • determining the angular positioning error of the target marker point includes:
  • S501 Determine the second unit vector of each camera unit according to the optical center coordinates of each camera unit and the initial three-dimensional coordinates.
  • Each camera unit corresponds to a second unit vector.
  • S502 For each camera unit, according to the first unit vector of the camera unit and the second unit vector of the camera unit, calculate and obtain the reference angle positioning error of the target marker point projected to the projection point of the camera unit.
  • the first unit vector of the camera unit is the actual projection line of the camera unit shooting the target marker point;
  • the second unit vector of the camera unit is along the line connecting the camera unit to the three-dimensional coordinates of the target marker point, where the three-dimensional coordinates may be the initial three-dimensional coordinates of the target marker point or the three-dimensional coordinates of the target marker point updated in real time.
  • calculating and obtaining the reference angle positioning error of the target marker point projected onto the projection point of the camera unit includes: computing the dot product of the first unit vector and the second unit vector, and using the difference between the unit value and the dot product as the reference angle positioning error of the target marker point projected onto the projection point of the camera unit.
  • the size of the included angle between the first unit vector and the second unit vector corresponds one-to-one with the size of the reference angle positioning error of the target marker point projected onto the projection point of the camera unit.
  • the reference angle positioning error of the target marker point captured by camera unit i can be expressed by the following formula (1): err_i = 1 - V1_i · V2_i, where V1_i is the first unit vector and V2_i is the second unit vector of camera unit i.
  • the dot product of two unit vectors equals the product of their magnitudes and the cosine of the angle between them. Since both magnitudes are 1, the dot product of two unit vectors is simply the cosine of the angle between them. For example, if the angle between two unit vectors is 0°, that is, the vectors are parallel, their dot product is 1 and the difference between the unit value and the dot product is 0; if the angle is 90°, their dot product is 0 and the difference between the unit value and the dot product is 1.
  • in this way, the angle between the first unit vector and the second unit vector can be described by a scalar (their dot product), and subtracting the dot product from the unit value normalizes the angle between the first unit vector and the second unit vector.
  • S503 Perform averaging processing on all the reference angle positioning errors to obtain the angle positioning error of the target mark point.
  • the angular positioning error of the target marker point can be expressed by the following formula (2): E = (1/n) Σ_{i=1}^{n} (1 - V1_i · V2_i)
  • E is the angular positioning error
  • n is the total number of camera units.
  • the first unit vector of the camera unit is the actual projection line on which the camera unit shoots the target mark point.
  • the second unit vector of the imaging unit changes as the three-dimensional coordinates of the target marker point change; therefore, the angular positioning error of the target marker point changes with those three-dimensional coordinates, as expressed by formula (2) above.
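The averaging of steps S502-S503 can be sketched as follows (illustrative Python; the function and parameter names are hypothetical — `image_dirs` holds the precomputed first unit vectors):

```python
import numpy as np

def angular_error(point, centers, image_dirs):
    """E = (1/n) * sum_i (1 - V1_i . V2_i): V1_i is the first unit
    vector of camera unit i (optical center -> projection point) and
    V2_i the second unit vector (optical center -> estimated point)."""
    errs = []
    for c, v1 in zip(centers, image_dirs):
        v2 = (point - c) / np.linalg.norm(point - c)
        errs.append(1.0 - v1 @ v2)    # per-camera reference error, formula (1)
    return np.mean(errs)              # averaged error, formula (2)
```

A point lying exactly on a camera's projection ray contributes 0; a point at 90° to the ray contributes 1.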
  • the gradient direction in the gradient descent algorithm can be set as the direction with the fastest decline in angular positioning error, and then the three-dimensional coordinates of the target marker point can be optimized based on the gradient descent algorithm.
  • a possible implementation of obtaining the optimized three-dimensional coordinates of the target marker point includes optimizing the initial three-dimensional coordinates based on the gradient descent algorithm, taking the three-dimensional coordinates of the target marker point as the variable of the loss function; the iterative calculation of the initial three-dimensional coordinates based on the gradient descent algorithm until the preset condition is met is exemplified below.
  • Fig. 6 is a schematic flowchart of the iterative calculation of initial three-dimensional coordinates based on a gradient descent algorithm provided by an embodiment of the application, and describes a possible implementation of obtaining the optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error in step 203 of the embodiment shown in Fig. 2.
  • S601 Initialize the descent speed and gradient direction of the gradient descent algorithm, and use the initial three-dimensional coordinates as the initial value of the loss function; the loss function describes the three-dimensional coordinates of the target marker point.
  • the loss function can be expressed by the following formula (3): P_{m+1} = P_m - γ_m · d_m, where P_{m+1} is the three-dimensional coordinate at the next moment, P_m is the three-dimensional coordinate at the current moment, γ_m is the descent speed, and d_m is the gradient direction.
  • the gradient direction is the direction in which the angular positioning error declines the fastest.
  • the angular positioning error can be differentiated to obtain the current gradient direction; the decline speed can be a preset value.
  • the angular positioning error of the target mark point can be expressed by the above formula (2).
  • the gradient direction can be obtained as:
  • the initial gradient direction in the gradient descent algorithm can be calculated as follows: according to the initial three-dimensional coordinates and the image coordinates of the target marker point, obtain the first unit vector and the second unit vector of each camera unit, and substitute the first and second unit vectors of all camera units into formula (4) to obtain the initial gradient direction.
  • the descending speed can be expressed by the following formula (5):
  • the descending speed can be initialized according to the preset value.
  • for example, the initial value γ_0 of the descent speed is set to 0.001.
  • S602 Calculate and obtain optimized three-dimensional coordinates according to the current descent speed, gradient direction, and three-dimensional coordinates.
  • after each iteration, the updated 3D coordinates of the target marker point are obtained, that is, the optimized 3D coordinates.
  • in one implementation, judging whether the iteration result meets the preset condition includes: obtaining the angular positioning error of the optimized three-dimensional coordinates according to the optimized three-dimensional coordinates and the multiple image coordinates, and judging whether that angular positioning error is less than a first preset value; if so, the iteration result meets the preset condition, and if not, it does not.
  • the first preset value can be preset.
  • the angular positioning error of obtaining the updated three-dimensional coordinates refers to the above formula (2).
  • in another implementation, judging whether the iteration result meets the preset condition includes: judging whether the error between the optimized three-dimensional coordinates and the three-dimensional coordinates at the previous moment is less than a second preset value; if so, the iteration result meets the preset condition, and if not, it does not.
  • the second preset value can be preset.
  • when both implementations are used, satisfying either condition means the iteration result meets the preset condition; only when neither preset condition is met does the iteration result fail to meet the preset condition.
  • if the iteration result does not meet the preset condition, the first unit vectors and second unit vectors of all camera units are recalculated according to the optimized three-dimensional coordinates and substituted into the above formulas (4) and (5) to update the gradient direction and descent speed, and the process returns to step S602 until the iteration result meets the preset condition.
  • the gradient direction in the gradient descent algorithm is set as the direction in which the angular positioning error declines the fastest, and the three-dimensional coordinates of the target marker point are then optimized based on the gradient descent algorithm, which improves the positioning accuracy of the target marker point while also improving the efficiency of the iterative calculation.
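The iterative optimization of steps S601-S603 can be sketched as follows (illustrative Python; since formulas (4) and (5) are not reproduced in this text, a central finite-difference gradient of E and a fixed descent speed with the stated initial value 0.001 stand in, and the two stopping criteria map to `err_tol` and `step_tol`):

```python
import numpy as np

def optimize_point(p0, centers, image_dirs, rate=0.001,
                   err_tol=1e-8, step_tol=1e-10, max_iter=8000):
    """Gradient descent on the angular positioning error E.
    err_tol plays the role of the first preset value (error threshold),
    step_tol the second (change between successive coordinates)."""
    def E(p):
        # formula (2): mean over cameras of 1 - V1 . V2
        errs = [1.0 - v1 @ ((p - c) / np.linalg.norm(p - c))
                for c, v1 in zip(centers, image_dirs)]
        return float(np.mean(errs))

    p = np.asarray(p0, dtype=float)
    h = 1e-6                                  # finite-difference step
    for _ in range(max_iter):
        grad = np.array([(E(p + h * e) - E(p - h * e)) / (2 * h)
                         for e in np.eye(3)])
        p_next = p - rate * grad              # update in the spirit of formula (3)
        if E(p_next) < err_tol or np.linalg.norm(p_next - p) < step_tol:
            return p_next
        p = p_next
    return p
```

Starting from an initial estimate near the true marker, the iterate moves measurably closer to the point where all projection rays agree.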
  • when m target marker points are seen by N camera units, the iterative cycle time per target marker point becomes long, which can lead to the loss of target marker points and to slow positioning.
  • the positioning of the m target marker points can be performed on a Graphics Processing Unit (GPU). Compared with a central processing unit (CPU), a GPU can run a large number of threads at the same time, so the positioning operations of the m target marker points can be performed simultaneously, accelerating the three-dimensional coordinate positioning.
  • multiple threads may also be used to simultaneously calculate the relevant data (such as the first unit vector) of the N camera units to increase the iteration speed of each target mark point.
  • for example, suppose a total of m target marker points are seen by N camera units.
  • then m thread groups are created; each thread group computes one target marker point, and the m target marker points are positioned simultaneously.
  • each thread group includes at least n threads; each thread computes one per-camera term of formula (4) above, and the sum is then computed by parallel reduction to obtain the current gradient direction.
  • multiple threads computing simultaneously quickly yield the optimized three-dimensional coordinates of one target marker point; multiple thread groups computing simultaneously quickly yield the optimized three-dimensional coordinates of multiple target marker points, accelerating three-dimensional coordinate positioning and greatly increasing its speed.
  • based on the triangulation-based coordinate positioning method provided in the foregoing embodiments, an embodiment of the present application further provides a device embodiment implementing the foregoing method embodiments.
  • FIG. 7 is a schematic structural diagram of a coordinate positioning device based on triangulation provided by an embodiment of the application.
  • the coordinate positioning device 70 based on triangulation includes: an acquisition module 701, a determination module 702, and a positioning module 703.
  • the obtaining module 701 is configured to obtain the image coordinates of the target marker point projected onto the imaging planes of multiple camera units, and obtain the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates.
  • the determining module 702 is configured to determine the angular positioning error of the target marking point according to the initial three-dimensional coordinates and multiple image coordinates.
  • the positioning module 703 is configured to obtain the optimized three-dimensional coordinates of the target marking point according to the initial three-dimensional coordinates and the angular positioning error.
  • the triangulation-based coordinate positioning device obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates of the target marker point from the multiple image coordinates, determines the angular positioning error of the target marker point, and then obtains the optimized three-dimensional coordinates of the target marker point from the initial three-dimensional coordinates and the angular positioning error.
  • the triangulation-based coordinate positioning method provided by the embodiments of this application corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point; compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is unaffected by the distance between the camera units and the target marker point, and improves the accuracy of coordinate positioning while guaranteeing the coordinate positioning speed.
  • the acquisition module 701 is specifically configured to: obtain the optical-center coordinates of each camera unit; determine the first unit vector of each camera unit from its optical-center coordinates and the image coordinates corresponding to that camera unit; obtain the three-dimensional coordinates of the multiple intersection points between all first unit vectors, where every two first unit vectors correspond to one intersection point; and average the three-dimensional coordinates of all intersection points to obtain the initial three-dimensional coordinates of the target marker point.
  • the determination module 702 is specifically configured to: determine the second unit vector of each camera unit from its optical-center coordinates and the initial three-dimensional coordinates; for each camera unit, calculate the reference angular positioning error of the target marker point's projection onto that camera unit from the camera unit's first unit vector and second unit vector; and average all reference angular positioning errors to obtain the angular positioning error of the target marker point.
  • the determination module 702 is also specifically configured to: perform an arithmetic operation to obtain the dot product of the first unit vector and the second unit vector, and take the difference between the unit value and the dot product as the reference angular positioning error.
  • the positioning module 703 is specifically configured to: iteratively compute the initial three-dimensional coordinates based on the gradient descent algorithm until a preset condition is met, and take the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates of the target marker point, where the gradient direction in the gradient descent algorithm is the direction in which the angular positioning error decreases fastest.
  • the positioning module 703 is also specifically configured to: initialize the descent speed and gradient direction of the gradient descent algorithm and take the initial three-dimensional coordinates as the initial value of the loss function; compute the optimized three-dimensional coordinates from the current descent speed, gradient direction, and three-dimensional coordinates; judge whether the iteration result meets the preset condition; and, if not, update the descent speed and gradient direction from the optimized three-dimensional coordinates and repeat until the condition is met.
  • the positioning module 703 is also specifically configured to: determine the optimized angular positioning error from the optimized three-dimensional coordinates and the multiple image coordinates, and judge whether the optimized angular positioning error is less than the first preset value.
  • the gradient direction in the gradient descent algorithm is set to the direction in which the angular positioning error decreases fastest, and the three-dimensional coordinates of the target marker point are then optimized by gradient descent, improving both the positioning accuracy of the target marker point and the efficiency of the iterative calculation.
  • the coordinate positioning device based on triangulation provided in the embodiment shown in FIG. 7 can be used to implement the technical solutions in the foregoing method embodiments, and the implementation principles and technical effects are similar, and the details are not repeated here in this embodiment.
  • Fig. 8 is a schematic diagram of a coordinate positioning device based on triangulation provided by an embodiment of the present application.
  • the coordinate positioning device 80 based on triangulation in this embodiment includes: at least one processor 801, a memory 802, and a computer program stored in the memory 802 and running on the processor 801.
  • the coordinate positioning device based on triangulation further includes a communication component 803, wherein the processor 801, the memory 802, and the communication component 803 are connected by a bus 804.
  • the processor 801 implements the steps in each embodiment of the coordinate positioning method based on triangulation when executing the computer program, such as step S201 to step S203 in the embodiment shown in FIG. 2.
  • the processor 801 implements the functions of the modules/units in the foregoing device embodiments when executing the computer program, such as the functions of the modules 701 to 703 shown in FIG. 7.
  • the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 802 and executed by the processor 801 to complete the application.
  • One or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the coordinate positioning device 80 based on triangulation.
  • FIG. 8 is only an example of a triangulation-based coordinate positioning device and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components, such as input and output devices, network access devices, buses, etc.
  • the processor 801 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 802 can be an internal storage unit of the triangulation-based coordinate positioning device, or an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
  • the memory 802 is used to store the computer program and other programs and data required by the coordinate positioning device based on triangulation.
  • the memory 802 can also be used to temporarily store data that has been output or will be output.
  • the bus can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the buses in the drawings of this application are not limited to only one bus or one type of bus.
  • the embodiments of the present application also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • the embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to carry out the steps in the foregoing method embodiments.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this application implements all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electric carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals or telecommunications signals.
  • the disclosed apparatus/network device and method may be implemented in other ways.
  • the apparatus/network device embodiments described above are only illustrative; for example, the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Abstract

Applicable to the field of computer technology, provided are a triangulation-based coordinate positioning method, apparatus, device and storage medium. The method includes: obtaining the image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtaining the initial three-dimensional coordinates of the target marker point from the multiple image coordinates (S201); determining the angular positioning error of the target marker point from the initial three-dimensional coordinates and the multiple image coordinates (S202); and obtaining the optimized three-dimensional coordinates of the target marker point from the initial three-dimensional coordinates and the angular positioning error (S203). The triangulation-based coordinate positioning method corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point; compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is unaffected by the distance between the camera units and the target marker point and improves the accuracy of coordinate positioning.

Description

Triangulation-based coordinate positioning method, apparatus, device and storage medium
Technical Field
This application belongs to the field of computer technology, and in particular relates to a triangulation-based coordinate positioning method, apparatus, device and storage medium.
Background
Triangulation is widely used for coordinate positioning in real-time motion capture systems.
In current technology, most real-time motion capture systems obtain the two-dimensional projection points of a marker point on two or more surrounding motion capture cameras, and then predict the spatial position (three-dimensional coordinates) of the marker point based on the triangulation principle.
Due to errors of the motion capture cameras themselves, the projections, on each motion capture camera, of the marker point's target spatial position predicted by triangulation differ in position from the projection points actually acquired by each camera; a distance error exists and must be optimized to obtain the accurate spatial position of the marker point. To meet the positioning-speed requirement of a real-time motion capture system, the optimized three-dimensional coordinates must be output within a preset time. When the distance between the motion capture cameras and the marker point grows, the distance error between the projection of the predicted target spatial position and the actual projection point also grows, increasing the computation of the coordinate optimization, so that when the preset time arrives the problem of a large coordinate positioning error inevitably appears.
Summary
In view of this, embodiments of the present application provide a triangulation-based coordinate positioning method, apparatus, device and storage medium, to solve the technical problem that prior-art triangulation-based coordinate positioning methods have large errors.
In a first aspect, an embodiment of the present application provides a triangulation-based coordinate positioning method, including:
obtaining image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtaining initial three-dimensional coordinates of the target marker point according to the multiple image coordinates;
determining an angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates;
obtaining optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error.
In a possible implementation of the first aspect, obtaining the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates includes:
obtaining optical-center coordinates of each camera unit;
determining a first unit vector of each camera unit according to its optical-center coordinates and the image coordinates corresponding to that camera unit;
obtaining three-dimensional coordinates of multiple intersection points between all first unit vectors, where every two first unit vectors correspond to one intersection point;
averaging the three-dimensional coordinates of all intersection points to obtain the initial three-dimensional coordinates of the target marker point.
In a possible implementation of the first aspect, determining the angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates includes:
determining a second unit vector of each camera unit according to its optical-center coordinates and the initial three-dimensional coordinates;
for each camera unit, calculating a reference angular positioning error of the target marker point's projection onto that camera unit according to the camera unit's first unit vector and second unit vector;
averaging all reference angular positioning errors to obtain the angular positioning error of the target marker point.
In a possible implementation of the first aspect, calculating the reference angular positioning error of the target marker point's projection onto the camera unit according to the camera unit's first unit vector and second unit vector includes:
performing an arithmetic operation to obtain the dot product of the first unit vector and the second unit vector, and taking the difference between the unit value and the dot product as the reference angular positioning error of the target marker point's projection onto the camera unit.
In a possible implementation of the first aspect, obtaining the optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error includes:
iteratively computing the initial three-dimensional coordinates based on a gradient descent algorithm until a preset condition is met, and taking the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates of the target marker point; the gradient direction in the gradient descent algorithm is the direction in which the angular positioning error decreases fastest.
In a possible implementation of the first aspect, iteratively computing the initial three-dimensional coordinates based on the gradient descent algorithm until the preset condition is met includes:
initializing the descent speed and gradient direction of the gradient descent algorithm, and taking the initial three-dimensional coordinates as the initial value of a loss function; the loss function is used to describe the three-dimensional coordinates of the target marker point;
computing the optimized three-dimensional coordinates according to the current descent speed, gradient direction, and three-dimensional coordinates;
judging whether the iteration result meets the preset condition;
if not, updating the descent speed and gradient direction according to the optimized three-dimensional coordinates, and returning to the step of computing the optimized three-dimensional coordinates according to the current descent speed, gradient direction, and three-dimensional coordinates, until the iteration result meets the preset condition.
In a possible implementation of the first aspect, judging whether the iteration result meets the preset condition includes:
determining an optimized angular positioning error according to the optimized three-dimensional coordinates and the multiple image coordinates;
judging whether the optimized angular positioning error is less than a first preset value.
In a second aspect, an embodiment of the present application provides a triangulation-based coordinate positioning apparatus, including:
an acquisition module configured to obtain image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtain initial three-dimensional coordinates of the target marker point according to the multiple image coordinates;
a determination module configured to determine an angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates;
a positioning module configured to obtain optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error.
In a third aspect, an embodiment of the present application provides a triangulation-based coordinate positioning device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of any method of the first aspect are implemented.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of any method of the first aspect are implemented.
In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform any method of the first aspect.
The triangulation-based coordinate positioning method provided by the embodiments of this application obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates, determines the angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates, and then obtains the optimized three-dimensional coordinates according to the initial three-dimensional coordinates and the angular positioning error. Because the initial three-dimensional coordinates are corrected according to the angular positioning error of the target marker point, compared with prior-art methods that correct based on a distance error of the three-dimensional coordinates, the method is unaffected by the distance between the camera units and the target marker point and improves the accuracy of coordinate positioning while satisfying the positioning-speed requirement.
It can be understood that, for the beneficial effects of the second to fifth aspects above, reference may be made to the related description in the first aspect, which is not repeated here.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic diagram of the principle of triangulation;
FIG. 2 is a schematic flowchart of a triangulation-based coordinate positioning method provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of obtaining the initial three-dimensional coordinates of a target marker point provided by an embodiment of this application;
FIG. 4 is a projection relationship diagram between the three-dimensional coordinates and the image coordinates of a marker point provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of determining the angular positioning error of a target marker point provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of iteratively computing the initial three-dimensional coordinates based on a gradient descent algorithm provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a triangulation-based coordinate positioning apparatus provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a triangulation-based coordinate positioning device provided by an embodiment of this application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of this application. However, it will be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted so that unnecessary detail does not obscure the description of this application.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with that embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments" and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
Triangulation is a method of determining the distance or position of a target point by measuring the angles to it from the known endpoints of a fixed baseline. Because it can determine the position of a target point without direct trilateration (distance measurement), it is widely used in real-time motion capture systems.
Illustratively, a real-time motion capture system includes multiple motion capture cameras, at least one data processing workstation, and multiple optical marker points. In practical applications, optical marker points are attached to key parts of a moving object (such as the joints of a human body). Multiple motion capture cameras detect an optical marker point in real time from different angles, respectively obtain the image coordinates of the optical marker point projected onto the different motion capture cameras, and transmit these image coordinates to the data processing workstation in real time. The data processing workstation receives the multiple image coordinates of the marker point sent by the motion capture cameras and locates the spatial coordinates of the optical marker point according to the triangulation principle, from which the degree-of-freedom motion of the skeleton is then computed based on biokinematic principles.
The principle of triangulation is shown in FIG. 1. As shown in FIG. 1, P is an optical marker point, two cameras R0 and R1 photograph the optical marker point P from different angles, and the projection points of P on the two cameras are X0 and X1, respectively. According to the triangulation principle, the three-dimensional coordinates of the optical marker point P are the coordinates of the intersection point A between a first vector $\vec{V_0}$ and a second vector $\vec{V_1}$, where the first vector $\vec{V_0}$ passes through the optical center C0 of camera R0 and the projection point X0, and the second vector $\vec{V_1}$ passes through the optical center C1 of camera R1 and the projection point X1.
Due to errors of the motion capture cameras themselves, the projections of the intersection point A (the target spatial position predicted by triangulation) on the motion capture cameras differ in position from the projection points X0 or X1 of P on the cameras; a distance error exists, and this distance error must be optimized to obtain the accurate three-dimensional coordinates (spatial position) of the optical marker point. When the distance between a motion capture camera and the optical marker point grows, the distance error between the actual projection of P and the projection of the intersection point A also grows, greatly increasing the computation of coordinate-position optimization based on the distance error; to guarantee the timeliness of the spatial coordinate positioning of the optical marker point, this unavoidably brings the technical problem of large coordinate positioning errors.
The technical solution of this application and how it solves the above technical problem are illustrated below with specific embodiments. It is worth noting that the specific embodiments listed below can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
FIG. 2 is a schematic flowchart of a triangulation-based coordinate positioning method provided by an embodiment of this application. As shown in FIG. 2, the method includes:
S201: obtain the image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtain the initial three-dimensional coordinates of the target marker point from the multiple image coordinates.
The image coordinates are the position coordinates, in the image coordinate system, of the target marker point projected onto the imaging plane of a camera unit; they are two-dimensional position coordinates.
The camera units may be motion capture cameras in a motion capture system.
In this embodiment, obtaining the image coordinates of the target marker point projected onto the multiple camera units may be receiving the multiple image coordinates of the target marker point sent by the motion capture cameras.
In this embodiment, obtaining the initial three-dimensional coordinates of the target marker point from the multiple image coordinates includes determining the initial three-dimensional coordinates from the multiple image coordinates based on the triangulation principle; the initial three-dimensional coordinates are coordinates in the world coordinate system.
For example, based on the triangulation principle, the initial three-dimensional coordinates of the target marker point are determined from the two image coordinates of its projections on two camera units.
Illustratively, refer to FIG. 1, which is a schematic diagram of coordinate positioning based on triangulation. As shown in FIG. 1, P is the target marker point, two cameras R0 and R1 photograph the target marker point P from different angles, and the projection points of P on the two cameras are X0 and X1; the two-dimensional position coordinates of X0 and X1 on the imaging planes of their respective cameras are the image coordinates.
According to the triangulation principle, the three-dimensional coordinates of the target marker point P are the coordinates of the intersection point A between the first vector $\vec{V_0}$ and the second vector $\vec{V_1}$, where the first vector $\vec{V_0}$ passes through the optical center C0 of camera R0 and the projection point X0, and the second vector $\vec{V_1}$ passes through the optical center C1 of camera R1 and the projection point X1.
S202: determine the angular positioning error of the target marker point from the initial three-dimensional coordinates and the multiple image coordinates.
In this embodiment, this includes: for each image coordinate, obtaining the angular error of the target marker point relative to that image coordinate, and then determining the angular positioning error of the target marker point from all of its angular errors.
The angular error of the target marker point relative to a given image coordinate is the angle between the line connecting the target marker point's initial three-dimensional coordinates with the optical-center coordinates of the camera unit corresponding to that image coordinate, and the line connecting that image coordinate with the corresponding optical center.
Illustratively, as shown in FIG. 1, P is the target marker point, the projection point of P on camera R0 is X0, and the initial three-dimensional coordinates of the target marker point P are the three-dimensional coordinates of point B (which can be predicted by the triangulation method); the angular error of the target marker point P relative to the projection point X0 is then the angle between the first vector $\vec{V_0}$ and the line connecting C0 and B.
In this embodiment, determining the angular positioning error of the target marker point from all of its angular errors includes determining it from the average of all angular errors.
S203: obtain the optimized three-dimensional coordinates of the target marker point from the initial three-dimensional coordinates and the angular positioning error.
In this embodiment, obtaining the optimized three-dimensional coordinates from the initial three-dimensional coordinates and the angular positioning error means optimizing the initial three-dimensional coordinates through the angular positioning error.
Optimization methods include but are not limited to the gradient descent algorithm, singular value decomposition, the least squares method, etc.
Optimizing the initial three-dimensional coordinates based on the gradient descent algorithm includes taking the three-dimensional coordinates of the target marker point as the target value of a loss function, iteratively computing the initial three-dimensional coordinates based on gradient descent until a preset condition is met, and taking the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates; the gradient direction in the gradient descent algorithm is the direction in which the angular positioning error decreases fastest.
The triangulation-based coordinate positioning method provided by this embodiment of the application obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates from the multiple image coordinates, determines the angular positioning error of the target marker point from the initial three-dimensional coordinates and the multiple image coordinates, and then obtains the optimized three-dimensional coordinates from the initial three-dimensional coordinates and the angular positioning error. Because it corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point, compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is unaffected by the distance between the camera units and the target marker point and improves coordinate positioning accuracy while guaranteeing coordinate positioning speed.
In practical applications, to improve the positioning accuracy of the target marker point, there are generally more than two camera units. In that case the intersection points generated by the projection points on different pairs of camera units may not coincide at a single point, and the initial three-dimensional coordinates can be determined from the average of the coordinates of the multiple intersection points. This is illustrated by the embodiment of FIG. 3.
FIG. 3 is a schematic flowchart of obtaining the initial three-dimensional coordinates of the target marker point provided by an embodiment of this application, and describes a possible implementation of obtaining the initial three-dimensional coordinates from the multiple image coordinates in step 201 of the embodiment of FIG. 2. Referring to FIG. 3, the method includes:
S301: obtain the optical-center coordinates of each camera unit.
The optical-center coordinates of each camera unit are the coordinates of that camera unit's optical center in the world coordinate system.
S302: determine the first unit vector of each camera unit from its optical-center coordinates and the image coordinates corresponding to that camera unit.
The image coordinates are the position coordinates of the target marker point projected onto the imaging plane (image coordinate system) of a camera unit; they are two-dimensional, and each camera unit corresponds to one image coordinate.
Refer to FIG. 4, which shows the projection relationship between the three-dimensional coordinates and the image coordinates (two-dimensional coordinates) of the target marker point. Suppose the camera unit is a camera. As shown in FIG. 4, OC-XCYCZC is the camera coordinate system and o-xy is the image coordinate system; the origin OC of the camera coordinate system is the optical center of the camera unit, the origin o of the image coordinate system is the projection of the optical center onto the image plane, and the distance between OC and o is the focal length f of the camera.
As shown in FIG. 4, P is the target marker point; projecting point P onto the camera yields the imaged point p, whose coordinates in the image coordinate system are (x, y). Connecting the camera's optical center with point p yields the projection line along which the camera photographs the target marker point, i.e. the first unit vector.
S303: obtain the three-dimensional coordinates of the multiple intersection points between all first unit vectors, where every two first unit vectors correspond to one intersection point.
Each camera unit corresponds to one first unit vector. When there are only two camera units, by the triangulation principle the intersection point of the two camera units' first unit vectors is the three-dimensional coordinates of the target marker point determined from those two camera units.
In this embodiment, every two first unit vectors corresponding to one intersection point means that every two camera units can determine one intersection point.
Obtaining the three-dimensional coordinates of all intersection points may include: selecting one first unit vector A from all first unit vectors as a reference and obtaining the three-dimensional coordinates of the intersection points between A and the other first unit vectors; then switching to first unit vector B and obtaining the three-dimensional coordinates of the intersection points between B and the other first unit vectors; and so on, until the three-dimensional coordinates of all intersection points between all first unit vectors are obtained.
Illustratively, with N camera units, step S302 yields N first unit vectors; since every two first unit vectors correspond to one intersection point, a total of N(N-1)/2 intersection points can be obtained.
S304: average the three-dimensional coordinates of all intersection points to obtain the initial three-dimensional coordinates of the target marker point.
The three-dimensional coordinates of all intersection points are averaged, and the average is taken as the initial three-dimensional coordinates of the target marker point.
The triangulation-based coordinate positioning method provided by this embodiment of the application determines the initial three-dimensional coordinates from the image coordinates of N camera units, where N is greater than 2, which improves the positioning accuracy of the initial three-dimensional coordinates and can improve the efficiency of optimizing them.
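The intersection-and-averaging procedure of steps S301 to S304 can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation: it assumes the optical centers and first unit vectors are already known, and treats the "intersection" of two skew rays as the midpoint of their closest points, since real rays rarely meet exactly.

```python
import numpy as np
from itertools import combinations

def ray_midpoint(c0, d0, c1, d1):
    # Closest approach of two rays c + t*d (d are unit vectors): solve for
    # t0, t1 minimizing |(c0 + t0*d0) - (c1 + t1*d1)|^2, then take the
    # midpoint of the two closest points as the "intersection point".
    b = np.dot(d0, d1)
    w = c0 - c1
    denom = 1.0 - b * b
    if denom < 1e-12:                 # nearly parallel rays: no usable crossing
        return 0.5 * (c0 + c1)
    t0 = (b * np.dot(d1, w) - np.dot(d0, w)) / denom
    t1 = (np.dot(d1, w) - b * np.dot(d0, w)) / denom
    return 0.5 * ((c0 + t0 * d0) + (c1 + t1 * d1))

def initial_triangulation(centers, directions):
    # centers: (N, 3) optical centers; directions: (N, 3) first unit vectors.
    # Every pair of first unit vectors gives one intersection point,
    # N(N-1)/2 in total; their average is the initial 3-D coordinates.
    points = [ray_midpoint(centers[i], directions[i], centers[j], directions[j])
              for i, j in combinations(range(len(centers)), 2)]
    return np.mean(points, axis=0)
```

With exact, noise-free rays every pairwise midpoint coincides with the marker, so the average is exact; with noisy rays the average serves as the initial three-dimensional coordinates to be refined.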
FIG. 5 is a schematic flowchart of determining the angular positioning error of the target marker point provided by an embodiment of this application, and describes a possible implementation of determining the angular positioning error from the initial three-dimensional coordinates and the multiple image coordinates in step 202 of the embodiment of FIG. 2. Referring to FIG. 5, determining the angular positioning error includes:
S501: determine the second unit vector of each camera unit from its optical-center coordinates and the initial three-dimensional coordinates.
Each camera unit corresponds to one second unit vector.
S502: for each camera unit, calculate the reference angular positioning error of the target marker point's projection onto that camera unit from the camera unit's first unit vector and second unit vector.
In this embodiment, a camera unit's first unit vector is the actual projection line along which the camera unit photographs the target marker point; its second unit vector is the line connecting the camera unit with the three-dimensional coordinates of the target marker point, where those three-dimensional coordinates can be the initial three-dimensional coordinates or the three-dimensional coordinates updated in real time.
In this embodiment, the calculation includes: performing an arithmetic operation to obtain the dot product of the first unit vector and the second unit vector, and taking the difference between the unit value and the dot product as the reference angular positioning error of the target marker point's projection onto that camera unit.
In this embodiment, the magnitude of the angle between the first unit vector and the second unit vector corresponds one-to-one with the magnitude of the reference angular positioning error of the target marker point's projection onto that camera unit.
Illustratively, the reference angular positioning error of the target marker point photographed by a camera unit can be expressed by formula (1):

$$E_i = 1 - \vec{V_i}\cdot\vec{W_i} \qquad (1)$$

where $\vec{V_i}$ is the first unit vector of the i-th camera unit and $\vec{W_i}$ is the second unit vector of the i-th camera unit.
The dot product of two unit vectors equals the product of their magnitudes and the cosine of the angle between them; since the magnitude of a unit vector is 1, the dot product of two unit vectors is the cosine of the angle between them. For example: if the angle between the two unit vectors is 0°, i.e. the two unit vectors are parallel, their dot product is 1 and the difference between the unit value and the dot product is 0; if the angle between them is 90°, their dot product is 0 and the difference between the unit value and the dot product is 1.
In this embodiment, taking the dot product of the first unit vector and the second unit vector describes the angle between them with a single scalar (the dot product), and introducing the unit value normalizes the angle between the first and second unit vectors.
S503: average all reference angular positioning errors to obtain the angular positioning error of the target marker point.
Illustratively, the angular positioning error of the target marker point can be expressed by formula (2):

$$E = \frac{1}{n}\sum_{i=1}^{n}\left(1 - \vec{V_i}\cdot\vec{W_i}\right) \qquad (2)$$

where E is the angular positioning error, $\vec{V_i}$ is the first unit vector of the i-th camera unit, $\vec{W_i}$ is the second unit vector of the i-th camera unit, and n is the total number of camera units.
A camera unit's first unit vector is the actual projection line along which it photographs the target marker point; once the positions of the camera unit and the target marker point are fixed, the first unit vector remains unchanged. The second unit vector changes as the three-dimensional coordinates of the target marker point change. Therefore, the angular positioning error of the target marker point changes with its three-dimensional coordinates, as expressed by formula (2) above.
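The per-camera error of formula (1) and its average in formula (2) reduce to a few lines of vectorized code. A minimal NumPy sketch (the function name and the array layout are choices of this illustration, not the patent's):

```python
import numpy as np

def angular_error(P, centers, first_units):
    # W_i: second unit vectors, from each optical center C_i toward the
    # candidate 3-D point P; first_units holds the fixed V_i vectors.
    W = P - centers
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    # Per-camera reference error: unit value minus dot product, formula (1);
    # the mean over all cameras is the angular positioning error, formula (2).
    return np.mean(1.0 - np.sum(first_units * W, axis=1))
```

The error is 0 when the candidate point lies on every observed ray, and it grows with the angles between the first and second unit vectors, independent of the camera-to-marker distance.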
To improve the positioning accuracy and positioning speed of the target marker point, the gradient direction in the gradient descent algorithm can be set to the direction in which the angular positioning error decreases fastest, and the three-dimensional coordinates of the target marker point can then be optimized by gradient descent. For example, one possible implementation of obtaining the optimized three-dimensional coordinates from the initial three-dimensional coordinates and the angular positioning error includes optimizing the initial three-dimensional coordinates based on the gradient descent algorithm: taking the three-dimensional coordinates of the target marker point as the target value of the loss function, iteratively computing the initial three-dimensional coordinates based on gradient descent until a preset condition is met, and taking the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates. The embodiment of FIG. 6 illustrates this iterative computation.
FIG. 6 is a schematic flowchart of iteratively computing the initial three-dimensional coordinates based on a gradient descent algorithm provided by an embodiment of this application, and describes a possible implementation of obtaining the optimized three-dimensional coordinates from the initial three-dimensional coordinates and the angular positioning error in step 203 of the embodiment of FIG. 2. Referring to FIG. 6, the iterative computation includes:
S601: initialize the descent speed and gradient direction of the gradient descent algorithm, and take the initial three-dimensional coordinates as the initial value of the loss function; the loss function is used to describe the three-dimensional coordinates of the target marker point.
In this embodiment, the loss function can be expressed by formula (3):

$$P_{m+1} = P_m - \varepsilon_m \nabla E_m \qquad (3)$$

where $P_{m+1}$ is the three-dimensional coordinates at the next moment, $P_m$ is the three-dimensional coordinates at the current moment, $\varepsilon_m$ is the descent speed, and $\nabla E_m$ is the gradient direction. The gradient direction is the direction in which the angular positioning error decreases fastest; the current gradient direction can be obtained by differentiating the angular positioning error, and the descent speed can be a preset value.
Each iterative computation performs one update of the moment.
The gradient direction is the direction in which the angular positioning error decreases fastest.
In this embodiment, the angular positioning error of the target marker point can be expressed by formula (2) above; differentiating formula (2) yields the gradient direction:

$$\nabla E_m = -\frac{1}{n}\sum_{i=1}^{n}\frac{\vec{V_i} - \left(\vec{V_i}\cdot\vec{W_i}\right)\vec{W_i}}{\left\lVert P_m - C_i \right\rVert} \qquad (4)$$

where $C_i$ is the optical center of the i-th camera unit.
Initializing the gradient direction in the gradient descent algorithm may consist of computing the first vector and second vector of each camera unit from the initial three-dimensional coordinates and the image coordinates of the target marker point, and substituting the first and second vectors of all camera units into formula (4) to obtain the initial gradient direction.
In this embodiment, the descent speed can be expressed by formula (5):

$$\varepsilon_m = \frac{\left\lvert \left(P_m - P_{m-1}\right)\cdot\left(\nabla E_m - \nabla E_{m-1}\right)\right\rvert}{\left\lVert \nabla E_m - \nabla E_{m-1} \right\rVert^{2}} \qquad (5)$$

where $\nabla E_m$ is the gradient direction at the current moment and $\nabla E_{m-1}$ is the gradient direction at the previous moment. Since there is no $\nabla E_{m-1}$ in the initial state, the descent speed can be initialized from a preset value; for example, the initial value $\varepsilon_0$ of the descent speed is set to 0.001.
S602: compute the optimized three-dimensional coordinates from the current descent speed, gradient direction, and three-dimensional coordinates.
Computing according to formula (3) above yields the three-dimensional coordinates of the target marker point at the next moment, i.e. the optimized three-dimensional coordinates.
S603: judge whether the iteration result meets the preset condition.
Each iteration yields the updated three-dimensional coordinates of the target marker point, i.e. the optimized three-dimensional coordinates.
In one possible implementation, judging whether the iteration result meets the preset condition includes: obtaining the angular positioning error of the optimized three-dimensional coordinates from the optimized three-dimensional coordinates and the multiple image coordinates, and judging whether this angular positioning error is less than a first preset value; if so, the iteration result meets the preset condition, and if not, it does not. The first preset value can be set in advance.
The angular positioning error of the updated three-dimensional coordinates is obtained from the optimized three-dimensional coordinates and the multiple image coordinates with reference to formula (2) above.
In another possible implementation, judging whether the iteration result meets the preset condition includes: judging whether the error between the optimized three-dimensional coordinates and the three-dimensional coordinates at the previous moment is less than a second preset value; if so, the iteration result meets the preset condition, and if not, it does not. The second preset value can be set in advance.
Of the above two implementations, satisfying either one means that the iteration result meets the preset condition; only when the preset conditions of both implementations are unmet does the iteration result fail to meet the preset condition.
S604: if not, update the descent speed and gradient direction from the optimized three-dimensional coordinates, and return to the step of computing the three-dimensional coordinates of the next moment from the descent speed, gradient direction, and three-dimensional coordinates of the current moment.
If the iteration result does not meet the preset condition, the first unit vectors and second unit vectors of all camera units are computed from the optimized three-dimensional coordinates and substituted into formulas (4) and (5) above to update the descent speed and gradient direction; step 602 is then executed again until the iteration result meets the preset condition.
S605: if so, take the optimized three-dimensional coordinates as the optimized three-dimensional coordinates of the target marker point.
The triangulation-based coordinate positioning method provided by this embodiment of the application sets the gradient direction in the gradient descent algorithm to the direction in which the angular positioning error decreases fastest and then optimizes the three-dimensional coordinates of the target marker point by gradient descent, improving both the positioning accuracy of the target marker point and the efficiency of the iterative calculation.
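The loop of steps S601 to S605 can be sketched in NumPy. This is an illustrative sketch, not the patent's code: the gradient follows the analytic differential of formula (2), while the step-size update from the current and previous gradients is an assumption (a Barzilai-Borwein-style rule), since the text only states that the descent speed depends on both gradient directions.

```python
import numpy as np

def refine_point(P0, centers, first_units, eps0=1e-3, tol=1e-10, max_iter=200):
    # Gradient-descent refinement of the initial 3-D coordinates.
    # eps0: preset initial descent speed (the text's example value is 0.001).
    P = np.asarray(P0, dtype=float).copy()
    prev_g = prev_P = None
    for _ in range(max_iter):
        r = P - centers                               # (N, 3)
        dist = np.linalg.norm(r, axis=1, keepdims=True)
        W = r / dist                                  # second unit vectors
        dots = np.sum(first_units * W, axis=1, keepdims=True)
        # Gradient of the mean angular error, formula (2).
        g = -np.mean((first_units - dots * W) / dist, axis=0)
        if prev_g is None:
            eps = eps0                                # initialized descent speed
        else:
            # Assumed step-size rule built from current and previous gradients.
            dg = g - prev_g
            denom = float(np.dot(dg, dg))
            eps = abs(float(np.dot(P - prev_P, dg))) / denom if denom > 0 else 0.0
        prev_g, prev_P = g, P
        P = P - eps * g                               # update, formula (3)
        if np.linalg.norm(P - prev_P) < tol:          # previous-moment criterion
            break
    return P
```

Starting from the averaged initial coordinates, the loop refines toward the point minimizing the angular positioning error; either of the two stopping rules described above (error threshold or displacement threshold) can serve as the preset condition.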
In practical applications there may be multiple target marker points, for example m target marker points seen by N camera units. When computing with the gradient descent algorithm, the iterative loop time per target marker point is long, which makes the positioning of the target marker points slow. To increase the positioning speed, the m target marker points can be positioned on a Graphics Processing Unit (GPU); compared with a central processing unit (CPU), a GPU can run a large number of threads simultaneously, so the m target marker points can be positioned at the same time, accelerating the three-dimensional coordinate positioning.
Optionally, when performing the positioning computation of each target marker point, multiple threads may also be used to compute the relevant data of the N camera units (such as the first unit vectors) simultaneously, to increase the iteration speed for each target marker point.
Illustratively, suppose a total of m target marker points are seen by N camera units. Then m thread groups are created; each thread group computes one target marker point, and the m target marker points are positioned simultaneously. Each thread group includes at least n threads; each thread computes one per-camera term of formula (4) above, and the sum is then computed by parallel reduction to obtain the current gradient direction. Multiple threads computing simultaneously quickly yield the optimized three-dimensional coordinates of one target marker point; multiple thread groups computing simultaneously quickly yield the optimized three-dimensional coordinates of multiple target marker points, accelerating three-dimensional coordinate positioning and greatly increasing its speed.
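As a CPU-side analogue of the GPU scheme described above, the per-marker, per-camera terms of formula (4) can be evaluated for all m markers in one batched operation; the reduction over the camera axis plays the role of the parallel reduction performed inside each thread group. An illustrative NumPy sketch (the array layout is a choice of this illustration):

```python
import numpy as np

def batched_gradient(P, centers, V):
    # P: (m, 3) current coordinates of m target marker points
    # centers: (N, 3) optical centers; V: (m, N, 3) first unit vectors
    r = P[:, None, :] - centers[None, :, :]          # (m, N, 3)
    dist = np.linalg.norm(r, axis=2, keepdims=True)
    W = r / dist                                     # second unit vectors
    dots = np.sum(V * W, axis=2, keepdims=True)
    # Each (marker, camera) term below is what one GPU thread would compute;
    # the mean over the camera axis stands in for the per-group reduction.
    return -np.mean((V - dots * W) / dist, axis=1)   # (m, 3) gradients
```

On a GPU, each of the m x N inner terms maps naturally to one thread of a thread group, and the reduction over the camera axis corresponds to the parallel-reduction sum that yields the current gradient direction of each marker.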
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of this application.
Based on the triangulation-based coordinate positioning method provided in the above embodiments, an embodiment of the present invention further provides a device embodiment implementing the foregoing method embodiments.
FIG. 7 is a schematic structural diagram of a triangulation-based coordinate positioning apparatus provided by an embodiment of this application. As shown in FIG. 7, the triangulation-based coordinate positioning apparatus 70 includes an acquisition module 701, a determination module 702, and a positioning module 703.
The acquisition module 701 is configured to obtain the image coordinates of the target marker point projected onto the imaging planes of multiple camera units, and obtain the initial three-dimensional coordinates of the target marker point from the multiple image coordinates.
The determination module 702 is configured to determine the angular positioning error of the target marker point from the initial three-dimensional coordinates and the multiple image coordinates.
The positioning module 703 is configured to obtain the optimized three-dimensional coordinates of the target marker point from the initial three-dimensional coordinates and the angular positioning error.
The triangulation-based coordinate positioning apparatus provided by this embodiment of the application obtains the image coordinates of the target marker point projected onto multiple camera units, obtains the initial three-dimensional coordinates from the multiple image coordinates, determines the angular positioning error of the target marker point, and then obtains the optimized three-dimensional coordinates from the initial three-dimensional coordinates and the angular positioning error. The triangulation-based coordinate positioning method provided by the embodiments of this application corrects the initial three-dimensional coordinates according to the angular positioning error of the target marker point; compared with prior-art methods that correct a distance error of the three-dimensional coordinates, it is unaffected by the distance between the camera units and the target marker point and improves coordinate positioning accuracy while guaranteeing coordinate positioning speed.
The acquisition module 701 is specifically configured to:
obtain the optical-center coordinates of each camera unit;
determine the first unit vector of each camera unit from its optical-center coordinates and the image coordinates corresponding to that camera unit;
obtain the three-dimensional coordinates of the multiple intersection points between all first unit vectors, where every two first unit vectors correspond to one intersection point;
average the three-dimensional coordinates of all intersection points to obtain the initial three-dimensional coordinates of the target marker point.
The determination module 702 is specifically configured to:
determine the second unit vector of each camera unit from its optical-center coordinates and the initial three-dimensional coordinates;
for each camera unit, calculate the reference angular positioning error of the target marker point's projection onto that camera unit from the camera unit's first unit vector and second unit vector;
average all reference angular positioning errors to obtain the angular positioning error of the target marker point.
The determination module 702 is further specifically configured to:
perform an arithmetic operation to obtain the dot product of the first unit vector and the second unit vector, and take the difference between the unit value and the dot product as the reference angular positioning error of the target marker point's projection onto that camera unit.
The positioning module 703 is specifically configured to:
iteratively compute the initial three-dimensional coordinates based on the gradient descent algorithm until a preset condition is met, and take the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates of the target marker point; the gradient direction in the gradient descent algorithm is the direction in which the angular positioning error decreases fastest.
The positioning module 703 is further specifically configured to:
initialize the descent speed and gradient direction of the gradient descent algorithm, and take the initial three-dimensional coordinates as the initial value of the loss function; the loss function is used to describe the three-dimensional coordinates of the target marker point;
compute the optimized three-dimensional coordinates from the current descent speed, gradient direction, and three-dimensional coordinates;
judge whether the iteration result meets the preset condition;
if not, update the descent speed and gradient direction from the optimized three-dimensional coordinates, and return to the step of computing the optimized three-dimensional coordinates from the current descent speed, gradient direction, and three-dimensional coordinates, until the iteration result meets the preset condition.
The positioning module 703 is further specifically configured to:
determine the optimized angular positioning error from the optimized three-dimensional coordinates and the multiple image coordinates;
judge whether the optimized angular positioning error is less than the first preset value.
On the other hand, the triangulation-based coordinate positioning apparatus provided by this embodiment sets the gradient direction in the gradient descent algorithm to the direction in which the angular positioning error decreases fastest and then optimizes the three-dimensional coordinates of the target marker point by gradient descent, improving both the positioning accuracy of the target marker point and the efficiency of the iterative calculation.
The triangulation-based coordinate positioning apparatus provided in the embodiment of FIG. 7 can be used to execute the technical solutions of the above method embodiments; its implementation principle and technical effects are similar and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of this application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
FIG. 8 is a schematic diagram of a triangulation-based coordinate positioning device provided by an embodiment of this application. As shown in FIG. 8, the triangulation-based coordinate positioning device 80 of this embodiment includes: at least one processor 801, a memory 802, and a computer program stored in the memory 802 and executable on the processor 801. The device further includes a communication component 803, and the processor 801, the memory 802, and the communication component 803 are connected by a bus 804.
When the processor 801 executes the computer program, the steps in each of the above triangulation-based coordinate positioning method embodiments are implemented, such as steps S201 to S203 in the embodiment shown in FIG. 2. Alternatively, when the processor 801 executes the computer program, the functions of the modules/units in the above apparatus embodiments are implemented, such as the functions of modules 701 to 703 shown in FIG. 7.
Illustratively, the computer program can be divided into one or more modules/units, which are stored in the memory 802 and executed by the processor 801 to complete this application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program in the triangulation-based coordinate positioning device 80.
Those skilled in the art can understand that FIG. 8 is only an example of a triangulation-based coordinate positioning device and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices, network access devices, buses, etc.
The processor 801 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor.
The memory 802 can be an internal storage unit of the triangulation-based coordinate positioning device, or an external storage device of the device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, etc. The memory 802 is used to store the computer program and other programs and data required by the triangulation-based coordinate positioning device; the memory 802 can also be used to temporarily store data that has been output or is to be output.
The bus can be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, the buses in the drawings of this application are not limited to only one bus or one type of bus.
The embodiments of this application also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in each of the above method embodiments can be implemented.
The embodiments of this application provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal, when executing it, implements the steps in each of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, this application implements all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program can be stored in a computer-readable storage medium, and when executed by a processor, the steps of each of the above method embodiments can be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium can include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electric carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals and telecommunications signals.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods for each particular application to implement the described functions, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method can be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative; for example, the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through interfaces, apparatuses or units, and can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the protection scope of this application.

Claims (10)

  1. A triangulation-based coordinate positioning method, characterized by comprising:
    obtaining image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtaining initial three-dimensional coordinates of the target marker point according to the multiple image coordinates;
    determining an angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates;
    obtaining optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error.
  2. The triangulation-based coordinate positioning method according to claim 1, characterized in that obtaining the initial three-dimensional coordinates of the target marker point according to the multiple image coordinates comprises:
    obtaining optical-center coordinates of each camera unit;
    determining a first unit vector of each camera unit according to its optical-center coordinates and the image coordinates corresponding to that camera unit;
    obtaining three-dimensional coordinates of multiple intersection points between all first unit vectors, wherein every two first unit vectors correspond to one intersection point;
    averaging the three-dimensional coordinates of all intersection points to obtain the initial three-dimensional coordinates of the target marker point.
  3. The triangulation-based coordinate positioning method according to claim 2, characterized in that determining the angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates comprises:
    determining a second unit vector of each camera unit according to its optical-center coordinates and the initial three-dimensional coordinates;
    for each camera unit, calculating a reference angular positioning error of the target marker point's projection onto the camera unit according to the camera unit's first unit vector and second unit vector;
    averaging all reference angular positioning errors to obtain the angular positioning error of the target marker point.
  4. The triangulation-based coordinate positioning method according to claim 3, characterized in that calculating the reference angular positioning error of the target marker point's projection onto the camera unit according to the camera unit's first unit vector and second unit vector comprises:
    performing an arithmetic operation to obtain the dot product of the first unit vector and the second unit vector, and taking the difference between the unit value and the dot product as the reference angular positioning error of the target marker point's projection onto the camera unit.
  5. The triangulation-based coordinate positioning method according to any one of claims 1 to 4, characterized in that obtaining the optimized three-dimensional coordinates of the target marker point according to the initial three-dimensional coordinates and the angular positioning error comprises:
    iteratively computing the initial three-dimensional coordinates based on a gradient descent algorithm until a preset condition is met, and taking the three-dimensional coordinates at the current moment as the optimized three-dimensional coordinates of the target marker point; wherein the gradient direction of the gradient descent algorithm is the direction in which the angular positioning error decreases fastest.
  6. The triangulation-based coordinate positioning method according to claim 5, characterized in that iteratively computing the initial three-dimensional coordinates based on the gradient descent algorithm until the preset condition is met comprises:
    initializing the descent speed and gradient direction of the gradient descent algorithm, and taking the initial three-dimensional coordinates as the initial value of a loss function; wherein the loss function is used to describe the three-dimensional coordinates of the target marker point;
    computing the optimized three-dimensional coordinates according to the current descent speed, gradient direction, and three-dimensional coordinates;
    judging whether the iteration result meets the preset condition;
    if not, updating the descent speed and the gradient direction according to the optimized three-dimensional coordinates, and returning to the step of computing the optimized three-dimensional coordinates according to the current descent speed, gradient direction, and three-dimensional coordinates, until the iteration result meets the preset condition.
  7. The triangulation-based coordinate positioning method according to claim 6, characterized in that judging whether the iteration result meets the preset condition comprises:
    determining an optimized angular positioning error according to the optimized three-dimensional coordinates and the multiple image coordinates;
    judging whether the optimized angular positioning error is less than a first preset value.
  8. A triangulation-based coordinate positioning apparatus, characterized by comprising:
    an acquisition module configured to obtain image coordinates of a target marker point projected onto the imaging planes of multiple camera units, and obtain initial three-dimensional coordinates of the target marker point according to the multiple image coordinates;
    a determination module configured to determine an angular positioning error of the target marker point according to the initial three-dimensional coordinates and the multiple image coordinates;
    a positioning module configured to obtain optimized three-dimensional coordinates of the target marker point based on a gradient descent algorithm according to the initial three-dimensional coordinates and the angular positioning error.
  9. A triangulation-based coordinate positioning device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
PCT/CN2020/134947 2019-12-13 2020-12-09 基于三角测量的坐标定位方法、装置、设备及存储介质 WO2021115331A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911289442.1A CN111179339B (zh) 2019-12-13 2019-12-13 Triangulation-based coordinate positioning method, apparatus, device and storage medium
CN201911289442.1 2019-12-13

Publications (1)

Publication Number Publication Date
WO2021115331A1 true WO2021115331A1 (zh) 2021-06-17

Family

ID=70652030

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134947 WO2021115331A1 (zh) 2019-12-13 2020-12-09 Triangulation-based coordinate positioning method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN111179339B (zh)
WO (1) WO2021115331A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102020124688A1 (de) * 2019-09-27 2021-04-01 Electronic Theatre Controls, Inc. Systeme und verfahren zur standortbestimmung von leuchten
CN111179339B (zh) * 2019-12-13 2024-03-08 深圳市瑞立视多媒体科技有限公司 基于三角测量的坐标定位方法、装置、设备及存储介质
CN111681268B (zh) * 2020-06-15 2023-06-02 深圳市瑞立视多媒体科技有限公司 光学标记点序号误识别检测方法、装置、设备及存储介质
CN111914359B (zh) * 2020-07-09 2022-09-02 吉林重通成飞新材料股份公司 一种风电叶片后缘间隙模拟方法、系统、设备及存储介质
CN112650250A (zh) * 2020-12-23 2021-04-13 深圳市杉川机器人有限公司 一种地图构建方法及机器人
CN112945240B (zh) * 2021-03-16 2022-06-07 北京三快在线科技有限公司 特征点位置的确定方法、装置、设备及可读存储介质
CN113473834B (zh) * 2021-06-23 2022-04-15 珠海格力电器股份有限公司 异型元件的插装方法、装置、系统、电子设备和存储介质
CN113616350B (zh) * 2021-07-16 2022-04-19 元化智能科技(深圳)有限公司 标记点选取位置的验证方法、装置、终端设备和存储介质
CN113496135B (zh) * 2021-08-31 2023-06-20 北京紫光青藤微系统有限公司 码图的定位方法、装置、电子设备及存储介质
CN114648611B (zh) * 2022-04-12 2023-07-18 清华大学 局域轨道函数的三维重构方法及装置
WO2023237074A1 (zh) * 2022-06-09 2023-12-14 上海市胸科医院 基于超声定位的结节定位方法、装置和电子设备
CN115389246B (zh) * 2022-10-31 2023-03-03 之江实验室 一种动作捕捉系统的速度精度测量方法、系统及装置
CN115546284B (zh) * 2022-11-18 2023-04-28 浙江晶盛机电股份有限公司 晶炉双目三维测量补偿方法、装置、计算机设备和存储介质
CN116499470B (zh) * 2023-06-28 2023-09-05 苏州中德睿博智能科技有限公司 环视相机定位系统优化控制方法、装置及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103983186A (zh) * 2014-04-17 2014-08-13 内蒙古大学 Binocular vision system correction method and correction device
CN105091744A (zh) * 2015-05-07 2015-11-25 中国科学院自动化研究所 Pose detection apparatus and method based on a vision sensor and a laser rangefinder
CN106500619A (zh) * 2016-10-21 2017-03-15 哈尔滨理工大学 Method for separating camera internal image-sensor installation errors based on vision measurement
WO2017101150A1 (zh) * 2015-12-14 2017-06-22 深圳先进技术研究院 Calibration method and apparatus for a structured-light three-dimensional scanning system
CN108839027A (zh) * 2018-08-31 2018-11-20 河南工程学院 Robot automatic alignment control method based on a laser ranging sensor
US20190235047A1 (en) * 2018-01-26 2019-08-01 Easymap Digital Technology Inc. Unmanned aerial vehicle detection system and detection method
CN111179339A (zh) * 2019-12-13 2020-05-19 深圳市瑞立视多媒体科技有限公司 Triangulation-based coordinate positioning method, apparatus, device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1655573B1 (en) * 2003-08-13 2012-07-11 Kabushiki Kaisha TOPCON 3-dimensional measurement device and electronic storage medium
WO2015061750A1 (en) * 2013-10-24 2015-04-30 Ali Kord Motion capture system
JP6975929B2 (ja) * 2017-04-18 2021-12-01 パナソニックIpマネジメント株式会社 Camera calibration method, camera calibration program, and camera calibration device


Also Published As

Publication number Publication date
CN111179339B (zh) 2024-03-08
CN111179339A (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2021115331A1 (zh) Triangulation-based coordinate positioning method, apparatus, device and storage medium
WO2021115071A1 (zh) Three-dimensional reconstruction method and apparatus for monocular endoscope images, and terminal device
CN106558080B (zh) Online calibration method for monocular camera extrinsic parameters
WO2020207190A1 (zh) Three-dimensional information determination method, three-dimensional information determination apparatus, and terminal device
CN110070598B (zh) Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof
CN111612852B (zh) Method and apparatus for verifying camera parameters
WO2022156755A1 (zh) Indoor positioning method, apparatus, device and computer-readable storage medium
WO2021004416A1 (zh) Method and apparatus for building a beacon map based on visual beacons
WO2022160787A1 (zh) Robot hand-eye calibration method and apparatus, readable storage medium, and robot
WO2022267285A1 (zh) Method and apparatus for determining robot pose, robot, and storage medium
CN112509057B (zh) Camera extrinsic parameter calibration method and apparatus, electronic device, and computer-readable medium
CN108364313B (zh) Automatic alignment method, system and terminal device
CN108182708B (zh) Calibration method and calibration apparatus for a binocular camera, and terminal device
CN112967344B (zh) Camera extrinsic calibration method, device, storage medium and program product
CN106570907B (zh) Camera calibration method and apparatus
CN113256718B (zh) Positioning method and apparatus, device, and storage medium
CN111311632A (zh) Object pose tracking method, apparatus and device
US20220327740A1 (en) Registration method and registration apparatus for autonomous vehicle
WO2022222291A1 (zh) Optical-axis calibration method for an optical-axis detection system, apparatus, terminal, system and medium
CN112308925A (zh) Binocular calibration method for a wearable device, device and storage medium
CN112686950A (zh) Pose estimation method, apparatus, terminal device and computer-readable storage medium
CN110930444B (zh) Point cloud matching method based on bilateral optimization, medium, terminal and apparatus
CN113362445B (zh) Method and apparatus for reconstructing an object based on point cloud data
CN111368927A (zh) Annotation result processing method, apparatus, device and storage medium
CN113298870B (zh) Object attitude tracking method, apparatus, terminal device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20898278

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20898278

Country of ref document: EP

Kind code of ref document: A1