WO2023169186A1 - Positioning method and related device

Info

Publication number
WO2023169186A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
positioning
camera
coordinate
target object
Application number
PCT/CN2023/077019
Other languages
French (fr)
Chinese (zh)
Inventor
王伟杰 (Wang Weijie)
郑月 (Zheng Yue)
张小龙 (Zhang Xiaolong)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023169186A1 publication Critical patent/WO2023169186A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C 11/02 - Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/04 - Interpretation of pictures
    • G01C 21/00 - Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 - Instruments for performing navigational calculations
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras

Definitions

  • the present application relates to the field of positioning technology, and in particular, to a positioning method and related equipment.
  • Positioning technology has seen increasingly wide use across many industries in recent years and plays an ever more important role in daily life. From map navigation to social networking, location services play a key role. Among them, indoor positioning is becoming a hot area of academic research and industrial application. Common indoor positioning systems usually combine computer vision technology and wireless communication technology. Specifically, the image information captured by the camera during positioning is compared against a pre-collected visual photo database of the venue, and the location coordinates are then determined through a computer vision algorithm.
  • Embodiments of the present application provide a positioning method and related equipment to facilitate determining the positioning coordinates of any position within the camera coverage in a multi-camera scenario.
  • this application provides a positioning method. Specifically, when the target object is at the first position, the processing device obtains the first positioning coordinates of the mark point on the target object in the first coordinate system of the first camera, and the second positioning coordinates of the mark point on the target object in the second coordinate system of the second camera. When the target object is at the second position, the processing device obtains the third positioning coordinates of the mark point on the target object in the first coordinate system, and the fourth positioning coordinates of the mark point on the target object in the second coordinate system. The first position and the second position are both located in the shooting overlap area of the first camera and the second camera.
  • the processing device determines the first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates. Furthermore, the processing device locates the object to be measured according to the first coordinate system transformation parameter.
  • this application provides a specific implementation method for external parameter calibration of multiple cameras.
  • the coordinate systems of multiple cameras can be unified through the obtained coordinate system transformation parameters, and the positioning coordinates subsequently obtained based on each camera can be represented by a unified coordinate system, which facilitates determining the positioning coordinates of any position within the camera coverage in a multi-camera scenario.
  • the method further includes: the processing device controls the target object to move from the first position to the second position.
  • the processing device obtaining the first positioning coordinates of the mark point on the target object in the first coordinate system and the second positioning coordinates of the mark point on the target object in the second coordinate system includes: when the target object is at the first position, the processing device obtains the first imaging information of the marked point on the target object by the first camera and the second imaging information of the marked point on the target object by the second camera, determines the first positioning coordinates according to the first imaging information, and determines the second positioning coordinates according to the second imaging information.
  • the processing device obtaining the third positioning coordinates of the mark point on the target object in the first coordinate system and the fourth positioning coordinates of the mark point on the target object in the second coordinate system includes: when the target object is at the second position, the processing device obtains the third imaging information of the marked point on the target object by the first camera and the fourth imaging information of the marked point on the target object by the second camera, determines the third positioning coordinates based on the third imaging information, and determines the fourth positioning coordinates based on the fourth imaging information.
  • a specific implementation method for obtaining the positioning coordinates of a target object based on a camera is provided, which enhances the realizability of this solution.
  • the first coordinate system transformation parameters include a rotation matrix and a translation vector, so as to accurately obtain the relative positional relationship between the two camera coordinate systems.
  • the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameters satisfy the formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameters satisfy the same formula.
  • Formulas include: P1 = R·P2 + T, with R = [cos θ, −sin θ; sin θ, cos θ]. Among them, θ represents the rotation angle between the first coordinate system and the second coordinate system, R represents the rotation matrix, T represents the translation vector, P1 indicates the first positioning coordinate or the third positioning coordinate, and P2 indicates the second positioning coordinate or the fourth positioning coordinate.
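The formula images did not survive text extraction; assuming the planar form P1 = R(θ)·P2 + T suggested by the single rotation angle θ, a minimal sketch of applying the transformation (function and variable names are illustrative, not from the patent):

```python
import math

def rotation_matrix(theta):
    """2x2 rotation matrix R(theta) for an angle theta in radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def transform(p2, theta, t):
    """Map a point from the second coordinate system into the first:
    p1 = R(theta) @ p2 + t."""
    R = rotation_matrix(theta)
    return (R[0][0] * p2[0] + R[0][1] * p2[1] + t[0],
            R[1][0] * p2[0] + R[1][1] * p2[1] + t[1])

# Example: camera 2's system is rotated 90 degrees and shifted by (1, 0)
# relative to camera 1's system.
p1 = transform((2.0, 0.0), math.pi / 2, (1.0, 0.0))
```

The same point measured as (2, 0) by camera 2 is expressed as (1, 2) in camera 1's coordinate system.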
  • the processing device locating the object to be measured according to the first coordinate system transformation parameters includes: the processing device unifies the first coordinate system and the second coordinate system according to the first coordinate system transformation parameters to obtain the target coordinate system, Obtain target imaging information of the object to be measured by the first camera or the second camera, and determine the target positioning coordinates of the object to be measured in the target coordinate system based on the target imaging information.
  • the processing device unifying the first coordinate system and the second coordinate system according to the first coordinate system transformation parameters to obtain the target coordinate system includes: the processing device converts the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the first coordinate system.
  • the processing device converts the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the second coordinate system.
  • the coordinate system after unification can be either the first coordinate system or the second coordinate system, which improves the flexibility of this solution.
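Under the same planar assumption as above, one set of calibrated parameters (θ, t) supports unifying in either direction, since the inverse mapping is p2 = R(θ)ᵀ·(p1 − t). A sketch with illustrative names:

```python
import math

def second_to_first(p2, theta, t):
    """p1 = R(theta) @ p2 + t: express a camera-2 coordinate in camera 1's system."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p2[0] - s * p2[1] + t[0], s * p2[0] + c * p2[1] + t[1])

def first_to_second(p1, theta, t):
    """Inverse mapping, p2 = R(theta)^T @ (p1 - t): same parameters, opposite direction."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = p1[0] - t[0], p1[1] - t[1]
    return (c * dx + s * dy, -s * dx + c * dy)

# Round trip: converting either way with the same (theta, t) is consistent.
p2 = (3.0, -1.0)
p1 = second_to_first(p2, 0.3, (0.5, 2.0))
back = first_to_second(p1, 0.3, (0.5, 2.0))
```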
  • the method further includes: when the target object is at the first position, the processing device obtains the fifth positioning coordinates of the marker point on the target object in the third coordinate system of the third camera.
  • the first position is located in a shooting overlap area of the first camera, the second camera and the third camera.
  • the processing device obtains the sixth positioning coordinates of the mark point on the target object in the third coordinate system.
  • the second position is located in the shooting overlap area of the first camera, the second camera and the third camera; the processing device determines, based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates, the second coordinate system transformation parameter between the first coordinate system and the third coordinate system and the third coordinate system transformation parameter between the second coordinate system and the third coordinate system.
  • the processing device unifies the first coordinate system and the third coordinate system according to the transformation parameters of the second coordinate system, and unifies the second coordinate system and the third coordinate system according to the transformation parameters of the third coordinate system.
  • multiple cameras that have completed external parameter calibration can be combined to perform external parameter calibration on another camera, thereby improving the calibration accuracy.
  • the processing device unifying the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter includes: the processing device converts the third coordinate system into the first coordinate according to the second coordinate system transformation parameter. system, or the processing device converts the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter.
  • the processing device unifying the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter includes: the processing device converts the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or converts the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter.
  • different coordinate systems can be used as a unified coordinate system, which improves the scalability of this solution.
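Chaining calibrations as described above can be sketched as transform composition: if p1 = R12·p2 + t12 and p2 = R23·p3 + t23, then p1 = R12·R23·p3 + (R12·t23 + t12). In the planar case the rotation angles simply add. A sketch under that 2D assumption (names illustrative):

```python
import math

def compose(theta12, t12, theta23, t23):
    """Compose a camera2->camera1 transform with a camera3->camera2 transform
    into a camera3->camera1 transform. Angles add; t13 = R12 @ t23 + t12."""
    c, s = math.cos(theta12), math.sin(theta12)
    t13 = (c * t23[0] - s * t23[1] + t12[0],
           s * t23[0] + c * t23[1] + t12[1])
    return theta12 + theta23, t13

# Example: camera2->camera1 is a 90-degree rotation plus (1, 0);
# camera3->camera2 is a pure shift of (2, 0).
theta13, t13 = compose(math.pi / 2, (1.0, 0.0), 0.0, (2.0, 0.0))
```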
  • this application provides a processing device, including: a detection module and a calculation module.
  • the detection module is used to: when the target object is at the first position, obtain the first positioning coordinates of the mark point on the target object in the first coordinate system of the first camera, and the second positioning coordinates of the mark point on the target object in the second coordinate system of the second camera.
  • the first position is located in the overlapping area of the first camera and the second camera.
  • when the target object is at the second position, the third positioning coordinates of the mark point on the target object in the first coordinate system and the fourth positioning coordinates of the mark point on the target object in the second coordinate system are obtained.
  • the second position is located in the overlapping area of the first camera and the second camera; the calculation module is used to: determine the first coordinate system transformation parameter between the first coordinate system and the second coordinate system according to the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates, and position the object to be measured according to the first coordinate system transformation parameter.
  • the processing device further includes a control module. After the detection module obtains the first positioning coordinates and the second positioning coordinates, and before the detection module obtains the third positioning coordinates and the fourth positioning coordinates, the control module is used to: control the target object to move from the first position to the second position.
  • the detection module is specifically configured to: when the target object is at the first position, obtain the first imaging information of the marked point on the target object by the first camera and the second imaging information of the marked point on the target object by the second camera, determine the first positioning coordinates according to the first imaging information, and determine the second positioning coordinates according to the second imaging information.
  • when the target object is at the second position, the third imaging information of the marked point on the target object by the first camera and the fourth imaging information of the marked point on the target object by the second camera are obtained, the third positioning coordinates are determined based on the third imaging information, and the fourth positioning coordinates are determined according to the fourth imaging information.
  • the first coordinate system transformation parameter includes a rotation matrix and a translation vector.
  • the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameters satisfy the formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameters satisfy the same formula.
  • Formulas include: P1 = R·P2 + T, with R = [cos θ, −sin θ; sin θ, cos θ]. Among them, θ represents the rotation angle between the first coordinate system and the second coordinate system, R represents the rotation matrix, T represents the translation vector, P1 indicates the first positioning coordinate or the third positioning coordinate, and P2 indicates the second positioning coordinate or the fourth positioning coordinate.
  • the calculation module is specifically configured to: unify the first coordinate system and the second coordinate system according to the first coordinate system transformation parameters to obtain the target coordinate system, obtain the target imaging information of the object to be measured by the first camera or the second camera, and determine the target positioning coordinates of the object to be measured in the target coordinate system based on the target imaging information.
  • the calculation module is specifically configured to: convert the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the first coordinate system.
  • the first coordinate system is converted into the second coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the second coordinate system.
  • the detection module is further configured to: when the target object is at the first position, obtain the fifth positioning coordinates of the mark point on the target object in the third coordinate system of the third camera.
  • the first position is located in a shooting overlap area of the first camera, the second camera and the third camera.
  • the sixth positioning coordinates of the mark point on the target object in the third coordinate system are obtained.
  • the second position is located in a shooting overlap area of the first camera, the second camera and the third camera.
  • the calculation module is also used to: determine, based on the first positioning coordinate, the second positioning coordinate, the third positioning coordinate, the fourth positioning coordinate, the fifth positioning coordinate and the sixth positioning coordinate, the second coordinate system transformation parameter between the first coordinate system and the third coordinate system and the third coordinate system transformation parameter between the second coordinate system and the third coordinate system, unify the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter, and unify the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter.
  • the calculation module is specifically configured to: convert the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or convert the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter; and convert the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or convert the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter.
  • this application provides a positioning system, including: a processing device, a first camera, a second camera, and a target object, wherein the processing device is used to perform the positioning method described in any embodiment of the first aspect.
  • the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by hardware, it can implement some or all of the steps of any of the methods executed by the processing device in the first aspect.
  • the first camera and the second camera have a shooting overlap area, and the target object can be controlled to move within the shooting overlap area.
  • when the target object is at the first position, the positioning coordinates of the mark point on the target object in the coordinate system of the first camera and the coordinate system of the second camera are respectively obtained.
  • when the target object is at the second position, the positioning coordinates of the mark point in the two coordinate systems are again respectively obtained.
  • the coordinate system transformation parameters between the coordinate system of the first camera and the coordinate system of the second camera are determined based on the obtained multiple positioning coordinates, and the object to be measured is positioned based on the coordinate system transformation parameters.
  • this application provides a specific implementation method for external parameter calibration of multiple cameras.
  • the coordinate systems of multiple cameras can be unified through the obtained coordinate system transformation parameters, and positioning coordinates can then be represented by a unified coordinate system, which makes it easy to determine the positioning coordinates of any position within the camera coverage in a multi-camera scenario.
  • Figure 1 is a schematic diagram of a scene for multi-camera positioning in the embodiment of the present application
  • Figure 2 is a schematic diagram of the coordinate systems of different cameras in the embodiment of the present application.
  • Figure 3 is a schematic diagram of a positioning system provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of an embodiment of the positioning method in the embodiment of the present application.
  • Figure 5 is a schematic diagram of another positioning system provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of another embodiment of the positioning method in the embodiment of the present application.
  • Figure 7 is a schematic structural diagram of a processing device in an embodiment of the present application.
  • Figure 8 is another structural schematic diagram of a processing device in an embodiment of the present application.
  • the embodiments of the present application provide a positioning method and related equipment, which can unify the coordinate systems of multiple cameras according to the coordinate system transformation parameters. Positioning coordinates subsequently obtained based on each camera can be represented by the unified coordinate system, which facilitates determining the positioning coordinates of any location within the camera coverage in a multi-camera scenario.
  • Figure 1 is a schematic diagram of a scene for multi-camera positioning in an embodiment of the present application.
  • camera 1, camera 2 and camera 3 each have their own shooting area. Since the shooting area of a single camera is limited, multiple cameras are usually considered to expand the positioning area.
  • One or more marking points can be installed on the object to be measured to locate the object to be measured through the coordinates of each marking point.
  • the five circular marks on the object to be measured shown in Figure 1 can be regarded as mark points.
  • the relative position relationship between these mark points can be represented by a certain coordinate system. For example, taking one of the mark points as the origin, the coordinates of the other marker points can be determined, and these coordinates are called the calibration coordinates of the marker points.
  • the camera will capture and image the object to be measured, and the pixel coordinates of each marker point can be extracted based on the captured images.
  • combining the calibration coordinates of the marker points and the pixel coordinates of the marker points to perform perspective-n-point (PnP) calculations can determine the positioning coordinates of each marker point.
  • the positioning coordinates of the marker points specifically refer to coordinates in the coordinate system that takes the camera as the origin.
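As a hedged illustration of the model that PnP inverts (the patent gives no intrinsics; the focal length f and principal point (cx, cy) below are hypothetical values), the forward pinhole projection maps a camera-frame point to pixel coordinates:

```python
def project(point_cam, f, cx, cy):
    """Pinhole projection of a camera-frame point (X, Y, Z) to pixels (u, v).
    PnP solves the inverse problem: given the pixel coordinates and the
    calibration coordinates of the marker points, it recovers the marker
    positions in the camera's coordinate system."""
    X, Y, Z = point_cam
    return (f * X / Z + cx, f * Y / Z + cy)

# A marker point 4 m in front of the camera, with f = 100 px and
# principal point (320, 240).
uv = project((1.0, 2.0, 4.0), 100.0, 320.0, 240.0)
```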
  • Figure 2 is a schematic diagram of the coordinate systems of different cameras in the embodiment of the present application.
  • the coordinate system of the camera is the coordinate system with the optical center of the camera as the origin.
  • the optical center of camera 1 is O1, f1 is the focal length of camera 1, and C1 is the principal point, i.e. the projection of the optical center on image plane 1 of camera 1.
  • the two mutually perpendicular directions on image plane 1 of camera 1 can be defined as the X1 axis and the Y1 axis, and the direction passing through the optical center O1 and perpendicular to image plane 1 can be defined as the Z1 axis; then O1-X1Y1Z1 is the coordinate system of camera 1.
  • O 2 -X 2 Y 2 Z 2 is the coordinate system of camera 2. It should be understood that converting the coordinate system of one camera into the coordinate system of another camera requires determining the coordinate system transformation parameters between the two coordinate systems. This process can be called external parameter calibration of the camera. After completing the external parameter calibration of the camera, the positioning coordinates of multiple cameras can be represented by the same coordinate system. This coordinate system can be called a unified coordinate system.
  • FIG 3 is a schematic diagram of a positioning system provided by an embodiment of the present application.
  • the positioning system includes: a processing device, camera 1, camera 2, communication equipment and target objects.
  • the processing device is used to send instructions to the target object through the communication device to control the movement of the target object.
  • camera 1 and camera 2 respectively send the imaging information obtained by shooting the target object to the processing device through the communication device.
  • the processing device can perform analysis and calculation based on the imaging information fed back by camera 1 and camera 2 to determine the coordinate system transformation parameters between the coordinate system of camera 1 and the coordinate system of camera 2, thereby obtaining a unified coordinate system.
  • the positioning coordinates of the object to be measured can be expressed according to the unified coordinate system.
  • information can be transmitted between the processing device, camera 1, camera 2 and the target object through a wireless network or a wired network.
  • the communication device can be a router.
  • the communication device and the processing device may be independent of each other or integrated together, and the details are not limited here.
  • FIG 4 is a schematic diagram of a positioning method according to the embodiment of the present application. This method is specifically implemented based on the positioning system introduced in Figure 3 above. In this example, the positioning method includes the following steps.
  • the processing device obtains the first positioning coordinates of the mark point on the target object in the first coordinate system of camera 1, and the second positioning coordinates of the mark point on the target object in the second coordinate system of camera 2.
  • one or more marking points are provided on the target object, and the first position where the target object stays is in the overlapping area of photography by camera 1 and camera 2 .
  • the camera 1 images the marked points on the target object to obtain first imaging information, and sends the first imaging information to the processing device.
  • the camera 2 images the marked points on the target object to obtain second imaging information, and sends the second imaging information to the processing device.
  • the processing device may determine the first positioning coordinates of the marker point on the target object in the first coordinate system of camera 1 based on the first imaging information, and determine the second positioning coordinates of the marker point in the second coordinate system of camera 2 based on the second imaging information.
  • the processing device obtains the third positioning coordinates of the mark point on the target object in the first coordinate system of camera 1, and the fourth positioning coordinates of the mark point on the target object in the second coordinate system of camera 2.
  • the processing device can control the target object to move from the first position to the second position, and the second position where the target object stays is also in the overlapping area of the camera 1 and the camera 2 .
  • the camera 1 images the marked points on the target object to obtain third imaging information, and sends the third imaging information to the processing device.
  • the camera 2 images the marked point on the target object to obtain fourth imaging information, and sends the fourth imaging information to the processing device.
  • the processing device may determine the third positioning coordinates of the marker point on the target object in the first coordinate system of camera 1 based on the third imaging information, and determine the fourth positioning coordinates of the marker point in the second coordinate system of camera 2 based on the fourth imaging information.
  • for the specific implementation in which the processing device determines the positioning coordinates of the target object in the camera coordinate system based on the camera's imaging information, reference can be made to the relevant introduction in Figure 1 above, which will not be described again here. It should also be understood that when the target object stays at the first position or the second position, camera 1 and camera 2 can photograph the target object at the same time or at different times, which is not limited here.
  • the processing device determines the first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates.
  • the first coordinate system transformation parameters include a rotation matrix and a translation vector.
  • the rotation matrix is a matrix that, when multiplied by a vector, changes the direction of the vector while preserving its magnitude and chirality.
  • the processing device may calculate the first coordinate system transformation parameter according to the following formula: P1 = R·P2 + T, with R = [cos θ, −sin θ; sin θ, cos θ], where θ represents the rotation angle between the first coordinate system and the second coordinate system, R represents the rotation matrix, and T represents the translation vector.
  • the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameters satisfy this formula
  • the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameters also satisfy this formula. That is to say, the first positioning coordinate and the second positioning coordinate are substituted into the formula as one equation, and the third positioning coordinate and the fourth positioning coordinate are substituted into the formula as another equation.
  • the rotation matrix and the translation vector can then be obtained by solving the two equations simultaneously.
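A minimal sketch of combining the two equations, assuming the planar form P1 = R(θ)·P2 + T (subtracting the equation at position b from the one at position a cancels T; all names are illustrative):

```python
import math

def solve_transform(p1_a, p2_a, p1_b, p2_b):
    """Recover (theta, t) with p1 = R(theta) @ p2 + t from the marker's
    coordinates at two positions a and b in both camera coordinate systems."""
    # Subtracting the two equations cancels the translation vector,
    # leaving only the rotation between the displacement vectors.
    d1 = (p1_a[0] - p1_b[0], p1_a[1] - p1_b[1])
    d2 = (p2_a[0] - p2_b[0], p2_a[1] - p2_b[1])
    theta = math.atan2(d1[1], d1[0]) - math.atan2(d2[1], d2[0])
    c, s = math.cos(theta), math.sin(theta)
    # Back-substitute position a to recover the translation.
    t = (p1_a[0] - (c * p2_a[0] - s * p2_a[1]),
         p1_a[1] - (s * p2_a[0] + c * p2_a[1]))
    return theta, t
```

Note this requires the two positions to differ, i.e. d2 must be nonzero; that is why the target object is moved between the measurements.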
  • the first positioning coordinates and the second positioning coordinates can be regarded as the first set of positioning information, and the third positioning coordinates and the fourth positioning coordinates can be regarded as the second set of positioning information.
  • the processing device can also control the target object to move to more locations and collect more sets of positioning information, so as to calculate the first coordinate system transformation parameters based on more sets of positioning information, which can reduce calculation errors.
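With more than two positions, the parameters can be fitted in a least-squares sense. One standard option (not specified by the patent) is planar Procrustes alignment over all collected sets of positioning information:

```python
import math

def fit_transform(points1, points2):
    """Least-squares fit of (theta, t) with p1 ~ R(theta) @ p2 + t over many
    corresponding positions (planar Procrustes alignment)."""
    n = len(points1)
    mx1 = sum(p[0] for p in points1) / n
    my1 = sum(p[1] for p in points1) / n
    mx2 = sum(p[0] for p in points2) / n
    my2 = sum(p[1] for p in points2) / n
    # Optimal rotation from the centered point sets: theta = atan2(cross, dot).
    sxx = sum((q[0] - mx2) * (p[0] - mx1) + (q[1] - my2) * (p[1] - my1)
              for p, q in zip(points1, points2))
    sxy = sum((q[0] - mx2) * (p[1] - my1) - (q[1] - my2) * (p[0] - mx1)
              for p, q in zip(points1, points2))
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the rotated centroid of set 2 with the centroid of set 1.
    t = (mx1 - (c * mx2 - s * my2), my1 - (s * mx2 + c * my2))
    return theta, t
```

Averaging over many positions in this way is what reduces the effect of per-measurement noise on the calibration.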
  • the processing device locates the object to be measured according to the first coordinate system transformation parameters.
  • after the processing device calculates the first coordinate system transformation parameters, it first unifies the first coordinate system and the second coordinate system according to the first coordinate system transformation parameters to obtain the target coordinate system. Afterwards, whether the object to be measured is positioned based on camera 1 or camera 2, the positioning coordinates of the object to be measured will be represented by the target coordinate system. Specifically, the processing device can obtain the target imaging information of the object to be measured by camera 1 or camera 2, and determine the target positioning coordinates of the object to be measured in the target coordinate system based on the target imaging information.
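A sketch of the resulting positioning step, assuming camera 1's coordinate system is chosen as the target coordinate system and the planar parameters (θ, t) for camera 2 have already been calibrated (names illustrative):

```python
import math

def locate_in_unified(camera_id, p_cam, theta, t):
    """Express a measurement from either camera in the target (camera-1) frame.
    theta, t are the calibrated camera2 -> camera1 transform parameters."""
    if camera_id == 1:
        # Camera 1's own system is the target coordinate system.
        return p_cam
    c, s = math.cos(theta), math.sin(theta)
    return (c * p_cam[0] - s * p_cam[1] + t[0],
            s * p_cam[0] + c * p_cam[1] + t[1])

# A detection from camera 2 and one from camera 1, both reported in the
# same unified coordinate system.
u2 = locate_in_unified(2, (2.0, 0.0), math.pi / 2, (1.0, 0.0))
u1 = locate_in_unified(1, (3.0, 4.0), math.pi / 2, (1.0, 0.0))
```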
  • the processing device may convert the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, and the first coordinate system is the target coordinate system.
  • the processing device may convert the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, and the second coordinate system is the target coordinate system.
  • the target coordinate system can also be a custom coordinate system other than the first coordinate system and the second coordinate system.
  • the premise is that the coordinate system transformation parameters between the custom coordinate system and the first coordinate system or the second coordinate system need to be determined first.
  • the above embodiment introduces the process of external parameter calibration between two cameras.
  • multiple cameras that have completed external parameter calibration can also be combined to perform external parameter calibration on another camera, thereby improving the calibration accuracy. Further introduction will be made below on the basis of the above embodiments.
  • FIG. 5 is a schematic diagram of another positioning system provided by an embodiment of the present application.
  • the positioning system provided in this embodiment also includes a camera 3.
  • camera 1, camera 2 and camera 3 respectively send the imaging information obtained by shooting the target object to the processing device through the communication device.
  • the external parameter calibration between camera 1 and camera 2 has been completed according to the method introduced in the embodiment shown in Figure 4 above.
  • the processing device can perform analysis and calculation based on the imaging information fed back by camera 1, camera 2 and camera 3 to determine the coordinate system transformation parameters between the coordinate system of camera 1 and the coordinate system of camera 3, and between the coordinate system of camera 2 and the coordinate system of camera 3, so as to obtain a unified coordinate system.
  • the positioning coordinates of the object to be measured can be expressed according to the unified coordinate system.
  • the first position and the second position in the embodiment shown in FIG. 4 are also located in the shooting area of camera 3.
  • the camera 3 images the marker point on the target object to obtain fifth imaging information, and sends the fifth imaging information to the processing device.
  • the processing device may determine the fifth positioning coordinates of the mark point on the target object in the third coordinate system of the camera 3 based on the fifth imaging information.
  • the camera 3 images the marker point on the target object to obtain sixth imaging information, and sends the sixth imaging information to the processing device.
  • the processing device may determine the sixth positioning coordinates of the mark point on the target object in the third coordinate system of the camera 3 based on the sixth imaging information.
  • the processing device determines the second coordinate system transformation parameters between the first coordinate system and the third coordinate system, and the third coordinate system transformation parameters between the second coordinate system and the third coordinate system, based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates. Furthermore, the processing device unifies the first coordinate system and the third coordinate system according to the second coordinate system transformation parameters, and unifies the second coordinate system and the third coordinate system according to the third coordinate system transformation parameters.
  • the second coordinate system transformation parameters and the third coordinate system transformation parameters also include rotation matrices and translation vectors.
  • the second coordinate system transformation parameter and the third coordinate system transformation parameter can be calculated according to the set of equations provided below.
  • the system of equations includes:

    p^(1) = R_{1,3} · p^(3) + t_{1,3}
    p^(2) = R_{2,3} · p^(3) + t_{2,3}

  • where the rotation matrix from camera i to camera j is expressed as R_{i,j} = [cos θ_{i,j}, −sin θ_{i,j}; sin θ_{i,j}, cos θ_{i,j}], the rotation angle from camera i to camera j is expressed as θ_{i,j}, and the translation variable from camera i to camera j is expressed as t_{i,j}; p^(1) represents the positioning coordinates obtained based on camera 1 (such as the first positioning coordinates or the third positioning coordinates), p^(2) represents the positioning coordinates obtained based on camera 2 (such as the second positioning coordinates or the fourth positioning coordinates), and p^(3) represents the positioning coordinates obtained based on camera 3 (such as the fifth positioning coordinates or the sixth positioning coordinates).
  • the second coordinate system transformation parameters and the third coordinate system transformation parameters can be calculated.
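Pairwise transformation parameters can also be chained: from p^(1) = R_{1,2} p^(2) + t_{1,2} and p^(2) = R_{2,3} p^(3) + t_{2,3} it follows that p^(1) = R_{1,2} R_{2,3} p^(3) + R_{1,2} t_{2,3} + t_{1,2}. A minimal 2-D Python sketch of this composition (names hypothetical; the rotation-angle form matches the formulas in this document):

```python
import math

def apply_transform(theta, t, p):
    """Map point p from the source frame to the destination frame:
    p_dst = R(theta) @ p_src + t (2-D rotation-angle form)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

def compose(theta_ab, t_ab, theta_bc, t_bc):
    """Chain p_a = R_ab p_b + t_ab with p_b = R_bc p_c + t_bc into the
    direct transform p_a = R_ac p_c + t_ac, where R_ac = R_ab R_bc and
    t_ac = R_ab t_bc + t_ab."""
    c, s = math.cos(theta_ab), math.sin(theta_ab)
    theta_ac = theta_ab + theta_bc          # 2-D rotations compose by angle addition
    t_ac = (c * t_bc[0] - s * t_bc[1] + t_ab[0],
            s * t_bc[0] + c * t_bc[1] + t_ab[1])
    return theta_ac, t_ac
```

One plausible reading of the accuracy-improvement claim is that the composed camera1→camera3 parameters (via the already-calibrated camera1→camera2 pair) give an estimate independent of the directly computed one, and the two can be combined to reduce error.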
  • the processing device can convert the third coordinate system into the first coordinate system according to the second coordinate system transformation parameters, or convert the first coordinate system into the third coordinate system according to the second coordinate system transformation parameters, thereby completing the external parameter calibration between camera 1 and camera 3.
  • similarly, the processing device can convert the third coordinate system into the second coordinate system according to the third coordinate system transformation parameters, or convert the second coordinate system into the third coordinate system according to the third coordinate system transformation parameters, thereby completing the external parameter calibration between camera 2 and camera 3. Since the external parameter calibration between camera 1 and camera 2 has already been completed, the external parameter calibration among camera 1, camera 2 and camera 3 can be completed through the above method. In practical applications, the coordinate system of any of the above cameras can be used as the unified coordinate system. After that, no matter which camera is used to locate the object to be measured, the positioning coordinates of the object to be measured will be expressed in the unified coordinate system.
  • FIG. 6 is a schematic diagram of another embodiment of the positioning method in the embodiment of the present application.
  • the positioning method includes the following steps.
  • When the processing device controls the movement of the target object, the target object will first enter the shooting area of a certain camera. For example, if that camera is camera 1, the coordinate system of camera 1 can be determined as the initial coordinate system.
  • step 602. Determine whether the target object enters the overlapping shooting area of multiple cameras and the external parameters between the multiple cameras are not calibrated. If not, perform step 603; if yes, perform step 604.
  • Control the target object to stop for a period of time every time it walks a certain distance to allow each camera to image the target object.
  • the duration of each stop of the target object should be greater than the time the camera needs to capture a frame. For example, if the camera's shooting frame rate is 30 frames per second, each stop of the target object should last longer than about 33 milliseconds.
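The 33-millisecond figure is simply one frame interval at 30 fps. A trivial sketch of that arithmetic (function name hypothetical):

```python
def min_stop_duration_ms(frame_rate_hz):
    """One frame interval in milliseconds; each stop of the target
    object should exceed this so every camera captures at least one frame."""
    return 1000.0 / frame_rate_hz

# At 30 frames per second, one frame interval is 1000/30, roughly 33.3 ms.
```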
  • Each camera takes a picture of the stopped target and sends the acquired imaging information to the processing device.
  • step 605. Determine whether the target object has walked out of the overlapping shooting area of multiple cameras or the number of sets of positioning information has reached the upper limit. If not, continue to step 604; if so, execute step 606.
  • the positioning coordinates obtained based on each camera can be used as one set of positioning information. If the number of sets of positioning information reaches the upper limit, the collected positioning information can be considered sufficient, and subsequent operations will be performed. Of course, once the target object has walked out of the overlapping shooting area of the multiple cameras, no new sets of positioning information can be collected, and the subsequent operations will likewise be performed.
  • step 606. Determine whether the number of groups of positioning information is greater than or equal to 2. If not, perform step 607; if yes, perform step 608.
  • for step 608, reference can be made to the relevant introduction of step 403 and step 404 in the embodiment shown in FIG. 4, and details are not repeated here.
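Steps 602 through 606 above amount to a simple collection loop. The Python sketch below mirrors that control flow (all names are hypothetical, and a bare position stands in for a full set of positioning information):

```python
def collect_calibration_sets(path, in_overlap, max_sets):
    """Walk the target object along `path`; whenever it stands inside the
    overlapping shooting area, pause and record one set of positioning
    information, stopping once the upper limit is reached or the path
    leaves the overlap area (steps 602-605)."""
    sets = []
    for position in path:
        if not in_overlap(position):
            continue           # step 603: keep the target object moving
        sets.append(position)  # step 604: pause so every camera can image it
        if len(sets) >= max_sets:
            break              # step 605: enough positioning information collected
    return sets

def enough_for_calibration(sets):
    """Step 606: at least two sets of positioning information are needed
    to solve for the coordinate system transformation parameters."""
    return len(sets) >= 2
```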
  • this application provides a specific implementation method for external parameter calibration of multiple cameras.
  • the obtained coordinate system transformation parameters can be used to unify the coordinate systems of multiple cameras.
  • the positioning coordinates obtained by each camera can be represented by a unified coordinate system, which facilitates the determination of the positioning coordinates of any position within the camera coverage in a multi-camera scenario.
  • the positioning system provided by this application is a fully automated system that does not require manual intervention and is more practical.
  • the processing device of the above positioning system is introduced below.
  • FIG. 7 is a schematic structural diagram of a processing device in an embodiment of the present application.
  • the processing device includes a detection module 701, a calculation module 702 and a control module 703.
  • the detection module 701 can be used to determine the positioning coordinates of the target object based on the imaging information fed back by each camera.
  • the calculation module 702 is used to calculate coordinate system transformation parameters between cameras and establish a unified coordinate system.
  • the control module 703 is used to control the movement of the target object.
  • the detection module 701 is used to perform step 401 and step 402 in the embodiment shown in FIG. 4, and the calculation module 702 is used to perform step 403 and step 404 in the embodiment shown in FIG. 4.
  • the detection module 701 is used to perform step 602, step 605 and step 606 in the above-mentioned embodiment shown in FIG. 6 .
  • the calculation module 702 is used to execute step 601, step 607 and step 608 in the embodiment shown in FIG. 6 .
  • the control module 703 is used to execute step 603 and step 604 in the embodiment shown in FIG. 6. It should be understood that the above-mentioned detection module 701, calculation module 702 and control module 703 are modules divided only in terms of functional implementation. In practical applications, the detection module 701, calculation module 702 and control module 703 can be independent modules, or they can be integrated together.
  • FIG. 8 is another structural schematic diagram of a processing device in an embodiment of the present application.
  • the processing device includes a processor 801, a memory 802 and a transceiver 803.
  • the processor 801, the memory 802 and the transceiver 803 are connected to each other through lines, where the transceiver 803 is used to exchange data or instructions with the camera and the target object.
  • Memory 802 is used to store program instructions and data.
  • the processor 801 is used to perform the operation steps in the embodiment shown in FIG. 4 and FIG. 6 .
  • the processor shown in Figure 8 above can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or at least one integrated circuit for executing related programs.
  • the memory shown in Figure 8 above can store the operating system and other application programs.
  • the program code used to implement the technical solutions provided by the embodiments of this application is stored in the memory and executed by the processor.
  • the processor may include memory internally.
  • the processor and memory are two separate structures.
  • the above-mentioned processing unit or processor may be a central processing unit, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each specific application, but such implementations should not be considered beyond the scope of this application.
  • the computer program product includes one or more computer instructions.
  • the processes or functions described in the embodiments of the present application are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the available media may be magnetic media (e.g., a floppy disk, hard disk, or magnetic tape), optical media (e.g., a DVD), or semiconductor media (e.g., a solid state disk (SSD)), etc.


Abstract

A positioning method and a related device. The positioning method comprises the following steps: when a target object is in a first position, a processing apparatus acquiring first positioning coordinates of a mark point on the target object in a first coordinate system of a first camera, and second positioning coordinates of the mark point on the target object in a second coordinate system of a second camera; when the target object is in a second position, the processing apparatus acquiring third positioning coordinates of the mark point on the target object in the first coordinate system, and fourth positioning coordinates of the mark point on the target object in the second coordinate system, the first position and the second position being located in a photographing overlap area of the first camera and the second camera; according to the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates, the processing apparatus determining a first coordinate system transformation parameter between the first coordinate system and the second coordinate system; and then, according to the first coordinate system transformation parameter, the processing apparatus positioning an object to be detected. Further provided are a processing apparatus and a positioning system which use the method.

Description

A positioning method and related equipment
This application claims priority to the Chinese patent application No. 202210240303.5, filed with the China National Intellectual Property Administration on March 10, 2022 and entitled "A positioning method and related equipment", which is incorporated herein by reference in its entirety.
Technical field
The present application relates to the field of positioning technology, and in particular, to a positioning method and related equipment.
Background
Positioning technology has been applied more and more widely in many industries over the past few years and plays an increasingly important role in our daily lives. Location services play a key role in applications ranging from map navigation to social networking. Among them, indoor positioning technology is becoming a hot area of academic research and industry application. Common indoor positioning systems usually require computer vision technology and wireless communication technology. Specifically, the image information captured by cameras during positioning is compared with a pre-collected database of visual photos of the positioning venue, and the position coordinates are then determined through computer vision algorithms.
In practical applications, considering that large buildings such as shopping malls, stadiums and medical centers have huge indoor spaces, multiple cameras need to be deployed to expand the positioning area. However, since each camera is placed at a different position, the positioning information determined from the image information collected by different cameras is expressed in different coordinate systems, making it impossible to represent the positioning information of each position in the indoor space with unified coordinates.
Summary
Embodiments of the present application provide a positioning method and related equipment, which facilitate determining the positioning coordinates of any position within camera coverage in a multi-camera scenario.
In a first aspect, this application provides a positioning method. Specifically, when the target object is at a first position, the processing device obtains first positioning coordinates of a mark point on the target object in a first coordinate system of a first camera, and second positioning coordinates of the mark point in a second coordinate system of a second camera. When the target object is at a second position, the processing device obtains third positioning coordinates of the mark point in the first coordinate system, and fourth positioning coordinates of the mark point in the second coordinate system, where both the first position and the second position are located in the shooting overlap area of the first camera and the second camera. The processing device then determines a first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first, second, third and fourth positioning coordinates. Furthermore, the processing device locates the object to be measured according to the first coordinate system transformation parameter.
In this implementation, this application provides a specific way of performing external parameter calibration on multiple cameras. The coordinate systems of the multiple cameras can be unified through the obtained coordinate system transformation parameters, and the positioning coordinates subsequently obtained based on each camera can all be expressed in the unified coordinate system, which facilitates determining the positioning coordinates of any position within camera coverage in a multi-camera scenario.
In some possible implementations, after the processing device obtains the first positioning coordinates and the second positioning coordinates, and before it obtains the third positioning coordinates and the fourth positioning coordinates, the method further includes: the processing device controls the target object to move from the first position to the second position. In this way, the application uses a fully automated system that requires no manual intervention and is more practical.
In some possible implementations, the processing device obtaining the first positioning coordinates of the mark point on the target object in the first coordinate system and the second positioning coordinates of the mark point in the second coordinate system includes: when the target object is at the first position, the processing device obtains first imaging information of the mark point from the first camera and second imaging information of the mark point from the second camera, determines the first positioning coordinates based on the first imaging information, and determines the second positioning coordinates based on the second imaging information. The processing device obtaining the third positioning coordinates of the mark point in the first coordinate system and the fourth positioning coordinates of the mark point in the second coordinate system includes: when the target object is at the second position, the processing device obtains third imaging information of the mark point from the first camera and fourth imaging information of the mark point from the second camera, determines the third positioning coordinates based on the third imaging information, and determines the fourth positioning coordinates based on the fourth imaging information. This implementation provides a specific way of obtaining the positioning coordinates of the target object based on cameras, which enhances the realizability of this solution.
In some possible implementations, the first coordinate system transformation parameters include a rotation matrix and a translation vector, so that the relative positional relationship between the two camera coordinate systems can be accurately obtained.
[Amended in accordance with Rule 26 24.03.2023]
In some possible implementations, the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameters satisfy a formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameters satisfy the same formula. The formula includes:

  p^(1) = R(θ) · p^(2) + t, where R(θ) = [cos θ, −sin θ; sin θ, cos θ]

where θ represents the rotation angle between the first coordinate system and the second coordinate system, R(θ) represents the rotation matrix, t represents the translation vector, p^(1) represents the first positioning coordinate or the third positioning coordinate, and p^(2) represents the second positioning coordinate or the fourth positioning coordinate. In this way, a specific method of calculating the first coordinate system transformation parameters is provided, which further improves the realizability of this solution.
In some possible implementations, the processing device locating the object to be measured according to the first coordinate system transformation parameter includes: the processing device unifies the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain a target coordinate system, obtains target imaging information of the object to be measured from the first camera or the second camera, and determines the target positioning coordinates of the object to be measured in the target coordinate system based on the target imaging information.
In some possible implementations, the processing device unifying the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain the target coordinate system includes: the processing device converts the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the first coordinate system; or, the processing device converts the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, and the target coordinate system is the second coordinate system. In this implementation, the unified coordinate system can be either the first coordinate system or the second coordinate system, which improves the flexibility of this solution.
In some possible implementations, the method further includes: when the target object is at the first position, the processing device obtains fifth positioning coordinates of the mark point on the target object in a third coordinate system of a third camera, where the first position is located in the shooting overlap area of the first camera, the second camera and the third camera; when the target object is at the second position, the processing device obtains sixth positioning coordinates of the mark point in the third coordinate system, where the second position is located in the shooting overlap area of the first camera, the second camera and the third camera. The processing device determines a second coordinate system transformation parameter between the first coordinate system and the third coordinate system and a third coordinate system transformation parameter between the second coordinate system and the third coordinate system based on the first, second, third, fourth, fifth and sixth positioning coordinates. The processing device unifies the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter, and unifies the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter. In this implementation, multiple cameras that have completed external parameter calibration can be combined to perform external parameter calibration on another camera, thereby improving the calibration accuracy.
In some possible implementations, the processing device unifying the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter includes: the processing device converts the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or converts the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter. The processing device unifying the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter includes: the processing device converts the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or converts the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter. In this implementation, different coordinate systems can serve as the unified coordinate system, which improves the scalability of this solution.
In a second aspect, this application provides a processing device, including a detection module and a calculation module. The detection module is configured to: when the target object is at the first position, obtain the first positioning coordinates of the mark point on the target object in the first coordinate system of the first camera and the second positioning coordinates of the mark point in the second coordinate system of the second camera, where the first position is located in the shooting overlap area of the first camera and the second camera; and when the target object is at the second position, obtain the third positioning coordinates of the mark point in the first coordinate system and the fourth positioning coordinates of the mark point in the second coordinate system, where the second position is located in the shooting overlap area of the first camera and the second camera. The calculation module is configured to: determine the first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first, second, third and fourth positioning coordinates, and locate the object to be measured according to the first coordinate system transformation parameter.
In some possible implementations, the processing device further includes a control module. After the detection module obtains the first positioning coordinates and the second positioning coordinates, and before the detection module obtains the third positioning coordinates and the fourth positioning coordinates, the control module is configured to control the target object to move from the first position to the second position.
In some possible implementations, the detection module is specifically configured to: when the target object is at the first position, obtain first imaging information of the marker point on the target object from the first camera and second imaging information of the marker point from the second camera, determine the first positioning coordinates from the first imaging information, and determine the second positioning coordinates from the second imaging information; and when the target object is at the second position, obtain third imaging information of the marker point from the first camera and fourth imaging information of the marker point from the second camera, determine the third positioning coordinates from the third imaging information, and determine the fourth positioning coordinates from the fourth imaging information.
In some possible implementations, the first coordinate system transformation parameter includes a rotation matrix and a translation vector.
[Amended in accordance with Rule 26, 24.03.2023]
In some possible implementations, the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameter satisfy a formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameter satisfy the same formula. The formula includes:

$P^{c_2} = R\,P^{c_1} + t$, with $R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$

where $\theta$ denotes the rotation angle between the first coordinate system and the second coordinate system, $R$ denotes the rotation matrix, $t$ denotes the translation vector, $P^{c_1}$ denotes the first positioning coordinates or the third positioning coordinates, and $P^{c_2}$ denotes the second positioning coordinates or the fourth positioning coordinates.
In some possible implementations, the calculation module is specifically configured to: unify the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain a target coordinate system, obtain target imaging information of the object to be measured from the first camera or the second camera, and determine target positioning coordinates of the object to be measured in the target coordinate system based on the target imaging information.
In some possible implementations, the calculation module is specifically configured to: convert the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, in which case the target coordinate system is the first coordinate system; or convert the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, in which case the target coordinate system is the second coordinate system.
In some possible implementations, the detection module is further configured to: when the target object is at the first position, obtain fifth positioning coordinates of the marker point on the target object in a third coordinate system of a third camera, where the first position is located in the shooting overlap area of the first camera, the second camera and the third camera; and when the target object is at the second position, obtain sixth positioning coordinates of the marker point in the third coordinate system, where the second position is also located in the shooting overlap area of the first camera, the second camera and the third camera. The calculation module is further configured to: determine, based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates, a second coordinate system transformation parameter between the first coordinate system and the third coordinate system and a third coordinate system transformation parameter between the second coordinate system and the third coordinate system; unify the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter; and unify the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter.
In some possible implementations, the calculation module is specifically configured to: convert the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or convert the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter; and convert the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or convert the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter.
In a third aspect, this application provides a positioning system, including a processing device, a first camera, a second camera and a target object, where the processing device is configured to perform the positioning method described in any implementation of the first aspect.
In a fourth aspect, this application provides a computer-readable storage medium storing a computer program. When the computer program is executed by hardware, it can implement some or all of the steps of any method performed by the processing device in the first aspect.
In the embodiments of this application, the first camera and the second camera have a shooting overlap area, and the target object can be controlled to move within it. Specifically, when the target object is at the first position, the positioning coordinates of the marker point on the target object in the coordinate system of the first camera and in the coordinate system of the second camera are respectively obtained. When the target object has moved to the second position, the positioning coordinates of the marker point in the two coordinate systems are respectively obtained again. The coordinate system transformation parameters between the coordinate system of the first camera and that of the second camera are then determined from the multiple positioning coordinates obtained, and the object to be measured is positioned according to these parameters. In this way, this application provides a concrete implementation of extrinsic calibration for multiple cameras: the obtained coordinate system transformation parameters unify the coordinate systems of the multiple cameras, so that positioning coordinates subsequently obtained from each camera can all be expressed in the unified coordinate system, making it easy to determine the positioning coordinates of any position within the cameras' coverage in a multi-camera scenario.
Description of the drawings
Figure 1 is a schematic diagram of a multi-camera positioning scenario according to an embodiment of this application;
Figure 2 is a schematic diagram of the coordinate systems of different cameras according to an embodiment of this application;
Figure 3 is a schematic diagram of a positioning system provided by an embodiment of this application;
Figure 4 is a schematic diagram of an embodiment of the positioning method according to an embodiment of this application;
Figure 5 is a schematic diagram of another positioning system provided by an embodiment of this application;
Figure 6 is a schematic diagram of another embodiment of the positioning method according to an embodiment of this application;
Figure 7 is a schematic structural diagram of a processing device according to an embodiment of this application;
Figure 8 is another schematic structural diagram of a processing device according to an embodiment of this application.
Detailed description of embodiments
The embodiments of this application provide a positioning method and related devices that can unify the coordinate systems of multiple cameras according to coordinate system transformation parameters. Positioning coordinates subsequently obtained from each camera can then be expressed in the unified coordinate system, making it easy to determine the positioning coordinates of any position within the cameras' coverage in a multi-camera scenario.
Figure 1 is a schematic diagram of a multi-camera positioning scenario according to an embodiment of this application. As shown in Figure 1, camera 1, camera 2 and camera 3 each have their own shooting area. Since the shooting area of a single camera is limited, multiple cameras are usually deployed to extend the positioning area. One or more marker points can be mounted on the object to be measured, so that the object can be positioned through the coordinates of each marker point. The five circular marks on the object to be measured shown in Figure 1 can be regarded as marker points. The relative positions of these marker points can be expressed in some coordinate system; for example, taking one of the marker points as the origin, the coordinates of the other marker points can be determined. These coordinates are called the calibration coordinates of the marker points. The camera captures an image of the object to be measured, and the pixel coordinates of each marker point can be extracted from the captured image. Combining the calibration coordinates of the marker points with their pixel coordinates in a perspective-n-point (PnP) calculation yields the positioning coordinates of each marker point, where the positioning coordinates of a marker point are its coordinates in the coordinate system whose origin is the camera. It should be understood that the positioning information determined from image information collected by different cameras is expressed in different coordinate systems, so the positioning information of the various positions in the indoor space cannot be expressed with unified coordinates.
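The calibration coordinates described above can be illustrated with a minimal sketch (the function name is hypothetical and not part of the application; an actual PnP solver would combine such relative marker geometry with the extracted pixel coordinates):

```python
def calibration_coords(markers, origin_index=0):
    """Express marker positions relative to one marker chosen as the origin.

    markers: list of (x, y, z) physical positions of the marker points on
    the object to be measured. The returned relative coordinates are the
    'calibration coordinates' that a PnP calculation combines with the
    pixel coordinates extracted from the captured image.
    """
    ox, oy, oz = markers[origin_index]
    return [(x - ox, y - oy, z - oz) for x, y, z in markers]
```

For example, three coplanar markers with the first taken as the origin yield calibration coordinates that depend only on the markers' relative layout, not on where the object happens to sit.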
Figure 2 is a schematic diagram of the coordinate systems of different cameras according to an embodiment of this application. As shown in Figure 2, a camera's coordinate system takes the camera's optical center as its origin. Taking camera 1 in Figure 2 as an example, the optical center of camera 1 is O1, f1 is the focal length of camera 1, and C1 is the optical center on image plane 1 of camera 1. Two mutually perpendicular directions on image plane 1 of camera 1 can be defined as the X1 axis and the Y1 axis, and the direction passing through the optical center O1 and perpendicular to image plane 1 can be defined as the Z1 axis; O1-X1Y1Z1 is then the coordinate system of camera 1. Likewise, O2-X2Y2Z2 is the coordinate system of camera 2. It should be understood that converting the coordinate system of one camera into that of another requires determining the coordinate system transformation parameters between the two coordinate systems; this process can be called extrinsic calibration of the cameras. Once extrinsic calibration is complete, the positioning coordinates from multiple cameras can be expressed in a single coordinate system, which can be called the unified coordinate system.
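The pinhole relation between a point in a camera coordinate system and its image-plane pixel can be sketched as follows (an illustrative simplification with hypothetical names, not part of the application; real positioning additionally involves lens distortion and a full PnP solve):

```python
def project(point, f, c):
    """Pinhole projection of a point (X, Y, Z), given in the camera
    coordinate system O-XYZ (origin at the optical center, Z along the
    optical axis), onto the image plane. f is the focal length and
    c = (cx, cy) the pixel position of the image-plane optical center."""
    X, Y, Z = point
    return (f * X / Z + c[0], f * Y / Z + c[1])

def back_project(pixel, depth, f, c):
    """Recover camera-frame coordinates from a pixel and a known depth Z.
    (A PnP calculation effectively recovers this depth from several
    markers whose relative geometry is known.)"""
    u, v = pixel
    return ((u - c[0]) * depth / f, (v - c[1]) * depth / f, depth)
```

A round trip through `project` and `back_project` returns the original camera-frame point, which is why the positioning coordinates of each marker are naturally expressed in the coordinate system of whichever camera observed it.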
Figure 3 is a schematic diagram of a positioning system provided by an embodiment of this application. As shown in Figure 3, the positioning system includes a processing device, camera 1, camera 2, a communication device and a target object. Specifically, the processing device sends instructions to the target object through the communication device to control its movement. When the target object is located in the shooting overlap area of camera 1 and camera 2, each camera sends the imaging information it obtains by photographing the target object to the processing device through the communication device. The processing device analyzes the imaging information fed back by camera 1 and camera 2 to determine the coordinate system transformation parameters between the coordinate system of camera 1 and that of camera 2, thereby obtaining a unified coordinate system. Thereafter, when positioning the object to be measured, its positioning coordinates can be expressed in the unified coordinate system.
It should be understood that information between the processing device and camera 1, camera 2 and the target object can be transmitted over a wireless or wired network; for example, the communication device may be a router. In practical applications, the communication device and the processing device may be independent of each other or integrated together, which is not limited here.
The positioning method provided by this application is introduced below with reference to specific implementations.
Figure 4 is a schematic diagram of an embodiment of the positioning method according to an embodiment of this application. The method is implemented based on the positioning system introduced in Figure 3 above. In this example, the positioning method includes the following steps.
401. When the target object is at the first position, the processing device obtains the first positioning coordinates of the marker point on the target object in the first coordinate system of camera 1, and the second positioning coordinates of the marker point in the second coordinate system of camera 2.
In this embodiment, one or more marker points are provided on the target object, and the first position at which the target object stays is in the shooting overlap area of camera 1 and camera 2. Camera 1 images the marker point on the target object to obtain first imaging information and sends it to the processing device. Camera 2 images the marker point to obtain second imaging information and sends it to the processing device. The processing device can determine the first positioning coordinates of the marker point in the first coordinate system of camera 1 from the first imaging information, and the second positioning coordinates of the marker point in the second coordinate system of camera 2 from the second imaging information.
402. When the target object is at the second position, the processing device obtains the third positioning coordinates of the marker point on the target object in the first coordinate system of camera 1, and the fourth positioning coordinates of the marker point in the second coordinate system of camera 2.
The processing device can control the target object to move from the first position to the second position, where the second position is likewise in the shooting overlap area of camera 1 and camera 2. Camera 1 images the marker point on the target object to obtain third imaging information and sends it to the processing device. Camera 2 images the marker point to obtain fourth imaging information and sends it to the processing device. The processing device can determine the third positioning coordinates of the marker point in the first coordinate system of camera 1 from the third imaging information, and the fourth positioning coordinates of the marker point in the second coordinate system of camera 2 from the fourth imaging information.
It should be understood that, in steps 401 and 402, for the specific way in which the processing device determines the positioning coordinates of the target object in a camera's coordinate system from that camera's imaging information, reference can be made to the description of Figure 1 above, which is not repeated here. It should also be understood that when the target object stays at the first or second position, camera 1 and camera 2 may photograph it simultaneously or at different times, which is not limited here.
403. The processing device determines the first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates.
In this embodiment, the first coordinate system transformation parameter includes a rotation matrix and a translation vector. A rotation matrix is a matrix that, when multiplied by a vector, changes the vector's direction without changing its magnitude, while preserving handedness. For example, if the origins of coordinate system 1 and coordinate system 2 coincide but their axes are not parallel, the transformation of a spatial point from coordinate system 1 to coordinate system 2 is represented by a rotation matrix. If coordinate system 1 and coordinate system 2 are parallel but their origins do not coincide, the transformation is represented by a translation vector; that is, the coordinates of a spatial point in coordinate system 1 plus the translation vector give its coordinates in coordinate system 2. Specifically, the processing device can calculate the first coordinate system transformation parameter according to the following formula.
[Amended in accordance with Rule 26, 24.03.2023]
The formula is:

$P^{c_2} = R\,P^{c_1} + t$, with $R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$

where $\theta$ denotes the rotation angle between the first coordinate system and the second coordinate system, $R$ denotes the rotation matrix, $t$ denotes the translation vector, $P^{c_1}$ denotes a positioning coordinate obtained from camera 1 (such as the first positioning coordinates or the third positioning coordinates), and $P^{c_2}$ denotes a positioning coordinate obtained from camera 2 (such as the second positioning coordinates or the fourth positioning coordinates). It should be understood that the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameter satisfy this formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameter also satisfy it. That is, substituting the first and second positioning coordinates into the formula gives one equation, substituting the third and fourth positioning coordinates gives another, and solving the two equations jointly yields the first coordinate system transformation parameter.
It should be understood that the first and second positioning coordinates can be regarded as a first group of positioning information, and the third and fourth positioning coordinates as a second group. At least two groups of positioning information are needed to calculate the first coordinate system transformation parameter. In some possible implementations, the processing device can also control the target object to move to more positions and collect more groups of positioning information; calculating the first coordinate system transformation parameter from more groups of positioning information can reduce calculation error.
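Solving for the rotation angle θ and the translation vector from two or more groups of positioning information can be sketched as a closed-form least-squares (2-D Procrustes) fit (an illustrative sketch, not part of the application; the function name and the planar two-dimensional formulation implied by the single rotation angle θ are assumptions):

```python
import math

def estimate_rigid_transform_2d(p1, p2):
    """Estimate theta and t = (tx, ty) such that each point of p1 maps onto
    the matching point of p2 under p2 ≈ R(theta) @ p1 + t.

    p1, p2: lists of matching (x, y) positioning coordinates observed by
    camera 1 and camera 2, with at least two groups of positioning
    information. Closed-form least-squares solution.
    """
    n = len(p1)
    c1x = sum(x for x, _ in p1) / n          # centroid of camera-1 points
    c1y = sum(y for _, y in p1) / n
    c2x = sum(x for x, _ in p2) / n          # centroid of camera-2 points
    c2y = sum(y for _, y in p2) / n
    # Accumulate dot and cross terms of the centered point pairs; the
    # optimal pure-rotation angle is atan2 of their sums.
    dot = cross = 0.0
    for (x1, y1), (x2, y2) in zip(p1, p2):
        u, v = x1 - c1x, y1 - c1y
        s, w = x2 - c2x, y2 - c2y
        dot += u * s + v * w
        cross += u * w - v * s
    theta = math.atan2(cross, dot)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Translation maps the rotated camera-1 centroid onto the camera-2 one.
    tx = c2x - (cos_t * c1x - sin_t * c1y)
    ty = c2y - (sin_t * c1x + cos_t * c1y)
    return theta, (tx, ty)
```

With exactly two groups the fit reduces to solving the two equations jointly, as described above; additional groups are averaged by the least-squares form, which is how extra positions reduce calculation error.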
404. The processing device positions the object to be measured according to the first coordinate system transformation parameter.
After calculating the first coordinate system transformation parameter, the processing device first unifies the first coordinate system and the second coordinate system according to it to obtain the target coordinate system. Thereafter, whether the object to be measured is positioned based on camera 1 or camera 2, its positioning coordinates are expressed in the target coordinate system. Specifically, the processing device can obtain target imaging information of the object to be measured from camera 1 or camera 2, and determine the target positioning coordinates of the object in the target coordinate system from the target imaging information.
It should be understood that this application does not specifically limit the target coordinate system. For example, the processing device may convert the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, in which case the first coordinate system is the target coordinate system. As another example, the processing device may convert the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, in which case the second coordinate system is the target coordinate system. As yet another example, the target coordinate system may be a custom coordinate system other than the first and second coordinate systems, provided that the coordinate system transformation parameters between the custom coordinate system and the first or second coordinate system are also determined.
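Under the same planar formulation, expressing a positioning coordinate from either camera in a target coordinate system chosen as the first camera's frame can be sketched as follows (illustrative only; the function name is an assumption):

```python
import math

def to_unified(point, camera, theta, t):
    """Express a positioning coordinate in the unified (camera-1) frame.

    point: (x, y) positioning coordinate; camera: 1 or 2, the camera frame
    the point is expressed in; theta, t: calibration result satisfying
    p2 = R(theta) @ p1 + t.
    """
    x, y = point
    if camera == 1:
        return (x, y)  # already in the target coordinate system
    # Invert p2 = R p1 + t  =>  p1 = R^T (p2 - t).
    dx, dy = x - t[0], y - t[1]
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cos_t * dx + sin_t * dy, -sin_t * dx + cos_t * dy)
```

Choosing the second camera's frame, or a custom frame, as the target coordinate system would only change which inverse or forward transform is applied here.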
It should be noted that the above embodiment describes the procedure of extrinsic calibration between two cameras. In some possible implementations, multiple cameras that have already completed extrinsic calibration can also be combined to calibrate another camera, which can improve calibration accuracy. This is further described below on the basis of the above embodiment.
Figure 5 is a schematic diagram of another positioning system provided by an embodiment of this application. As shown in Figure 5, unlike the positioning system shown in Figure 3 above, the positioning system provided in this embodiment further includes camera 3. When the target object is located in the shooting overlap area of camera 1, camera 2 and camera 3, each of the three cameras sends the imaging information it obtains by photographing the target object to the processing device through the communication device. Extrinsic calibration between camera 1 and camera 2 has already been completed according to the method described in the embodiment shown in Figure 4 above. The processing device analyzes the imaging information fed back by camera 1, camera 2 and camera 3 to determine the coordinate system transformation parameters between the coordinate systems of camera 1 and camera 3, and between those of camera 2 and camera 3, thereby obtaining a unified coordinate system. Thereafter, when positioning the object to be measured, its positioning coordinates can be expressed in the unified coordinate system.
Specifically, the first position and the second position in the embodiment shown in Figure 4 are also in the shooting area of camera 3. When the target object is at the first position, camera 3 images the marker point on the target object to obtain fifth imaging information and sends it to the processing device, which can determine the fifth positioning coordinates of the marker point in the third coordinate system of camera 3 from the fifth imaging information. When the target object is at the second position, camera 3 images the marker point to obtain sixth imaging information and sends it to the processing device, which can determine the sixth positioning coordinates of the marker point in the third coordinate system from the sixth imaging information. The processing device then determines, based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates, the second coordinate system transformation parameter between the first coordinate system and the third coordinate system and the third coordinate system transformation parameter between the second coordinate system and the third coordinate system. Furthermore, the processing device unifies the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter, and unifies the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter.
[Amended in accordance with Rule 26 24.03.2023]
It should be noted that the second coordinate system transformation parameter and the third coordinate system transformation parameter likewise each include a rotation matrix and a translation vector. As an example, the second coordinate system transformation parameter and the third coordinate system transformation parameter can be calculated from the system of equations provided below. The system of equations includes:
[Amended in accordance with Rule 26 24.03.2023]
p^(1) = R_{3,1} · p^(3) + t_{3,1}
p^(2) = R_{3,2} · p^(3) + t_{3,2}
with R_{i,j} = [cos θ_{i,j}, -sin θ_{i,j}; sin θ_{i,j}, cos θ_{i,j}];
where the rotation matrix from camera i to camera j is denoted R_{i,j}, the rotation angle from camera i to camera j is denoted θ_{i,j}, and the translation vector from camera i to camera j is denoted t_{i,j}; p^(1) denotes a positioning coordinate obtained from camera 1 (such as the first positioning coordinates or the third positioning coordinates), p^(2) denotes a positioning coordinate obtained from camera 2 (such as the second positioning coordinates or the fourth positioning coordinates), and p^(3) denotes a positioning coordinate obtained from camera 3 (such as the fifth positioning coordinates or the sixth positioning coordinates). Substituting the above positioning coordinates obtained from camera 1, camera 2 and camera 3 into this system of equations yields the second coordinate system transformation parameter and the third coordinate system transformation parameter.
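The system of equations above determines each rotation angle and translation vector from the marker point's two positions. As a minimal illustrative sketch (not part of this publication; the function and variable names are assumptions), each pair of parameters could be recovered from two 2-D correspondences like this:

```python
import math

def rigid_transform_2d(src_pts, dst_pts):
    """Estimate (theta, t) such that dst = R(theta) @ src + t, from the two
    2-D correspondences that the two marker positions provide."""
    (ax1, ay1), (ax2, ay2) = src_pts
    (bx1, by1), (bx2, by2) = dst_pts
    # Displacement of the marker between the two positions, seen in each frame.
    da_x, da_y = ax1 - ax2, ay1 - ay2
    db_x, db_y = bx1 - bx2, by1 - by2
    # The rotation angle maps the source displacement onto the destination one.
    theta = math.atan2(db_y, db_x) - math.atan2(da_y, da_x)
    c, s = math.cos(theta), math.sin(theta)
    # The translation then follows from either single correspondence: t = b - R a.
    t = (bx1 - (c * ax1 - s * ay1), by1 - (s * ax1 + c * ay1))
    return theta, t

# Camera 3 -> camera 1 (second transformation parameter):
#   rigid_transform_2d([p3_at_pos1, p3_at_pos2], [p1_at_pos1, p1_at_pos2])
# Camera 3 -> camera 2 (third transformation parameter):
#   rigid_transform_2d([p3_at_pos1, p3_at_pos2], [p2_at_pos1, p2_at_pos2])
```

With noisy measurements or more than two groups of positioning information, a least-squares fit over all correspondences would be used instead of this exact two-point solution.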
It should be noted that the processing device can convert the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or convert the first coordinate system into the third coordinate system according to the same parameter, thereby completing the extrinsic parameter calibration between camera 1 and camera 3. Similarly, the processing device can convert the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or convert the second coordinate system into the third coordinate system according to the same parameter, thereby completing the extrinsic parameter calibration between camera 2 and camera 3. Since the extrinsic parameter calibration between camera 1 and camera 2 has already been completed, the extrinsic parameter calibration among camera 1, camera 2 and camera 3 can be completed in this way. In practical applications, the coordinate system of any one of these cameras can be used as the unified coordinate system; afterwards, regardless of which camera is used to position an object to be measured, the positioning coordinates of that object will be expressed in the unified coordinate system.
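As an illustrative sketch (not from this publication; names are assumptions), expressing any camera's measurement in the unified coordinate system is then a single application of that camera's calibrated parameters:

```python
import math

def to_unified(point, theta, t):
    """Express `point`, measured in one camera's own 2-D coordinate system,
    in the unified coordinate system via that camera's calibrated rotation
    angle `theta` and translation `t`."""
    x, y = point
    tx, ty = t
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

# If camera 1's coordinate system is chosen as the unified system, camera 1
# itself uses the identity transform (theta = 0, t = (0, 0)), while cameras 2
# and 3 use the parameters obtained from the extrinsic calibration.
```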
A more specific embodiment is described below, taking the positioning system shown in Figure 3 above as an example.
Figure 6 is a schematic diagram of another embodiment of the positioning method in an embodiment of the present application. In this example, the positioning method includes the following steps.
601. Determine the initial coordinate system.
As the processing device controls the target object to move, the target object first enters the shooting area of a certain camera. For example, if that camera is camera 1, the coordinate system of camera 1 can be determined as the initial coordinate system.
602. Determine whether the target object has entered the overlapping shooting area of multiple cameras whose extrinsic parameters have not yet been calibrated. If not, perform step 603; if so, perform step 604.
603. Control the target object to continue moving.
604. Control the target object to stop for a period of time after each segment of movement, so that each camera can image the target object.
It should be understood that the duration of each stop should be greater than the time the cameras need to capture a frame. For example, if the camera frame rate is 30 frames per second, each stop should last longer than 33 milliseconds. Each camera images the stopped target object and sends the imaging information it acquires to the processing device.
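The stop-duration requirement can be made concrete with a small helper (an illustrative sketch, not part of this publication):

```python
import math

def min_stop_duration_ms(frame_rate_hz):
    """Smallest whole-millisecond pause strictly longer than one frame
    period, so every camera is guaranteed at least one capture window."""
    frame_period_ms = 1000 / frame_rate_hz
    return math.floor(frame_period_ms) + 1

# At 30 frames per second the frame period is ~33.3 ms, so the target must
# pause for at least 34 ms, i.e. "greater than 33 milliseconds" as above.
```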
605. Determine whether the target object has left the overlapping shooting area of the multiple cameras, or whether the number of groups of positioning information has reached the upper limit. If not, continue with step 604; if so, perform step 606.
It should be understood that the positioning coordinates obtained from each camera while the target object stays at one position can serve as one group of positioning information. If the number of groups reaches the upper limit, the required positioning information can be considered sufficient, and the subsequent operations are performed. Of course, if the target object has already left the overlapping shooting area of the multiple cameras, no new group of positioning information can be collected, and the subsequent operations are likewise performed.
606. Determine whether the number of groups of positioning information is greater than or equal to 2. If not, perform step 607; if so, perform step 608.
It should be understood that, according to the description of the embodiment shown in Figure 4 above, at least two groups of positioning information are required to complete the extrinsic parameter calibration between multiple cameras. Therefore, before performing the extrinsic parameter calibration, the processing device needs to determine whether at least two groups of positioning information have been collected.
607. Stop this extrinsic parameter calibration.
608. Perform the extrinsic parameter calibration between the multiple cameras based on the collected groups of positioning information, and establish a unified coordinate system to facilitate subsequent positioning operations.
It should be understood that, for the specific implementation of step 608, reference can be made to the description of steps 403 and 404 in the embodiment shown in Figure 4 above, which will not be repeated here.
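Steps 601 to 608 can be summarized as a control loop. The sketch below is an assumption-laden illustration (the `controller`, `detector` and `calibrator` interfaces are hypothetical stand-ins for the control, detection and calculation modules), not this publication's implementation:

```python
def calibration_loop(controller, detector, calibrator, max_groups):
    """One pass of the automatic extrinsic-calibration flow of Figure 6."""
    groups = []                                       # groups of positioning info
    detector.set_initial_frame()                      # step 601
    while True:
        if not detector.in_uncalibrated_overlap():    # step 602
            controller.keep_moving()                  # step 603
            continue
        controller.pause_for_imaging()                # step 604
        groups.append(detector.read_group())
        # Step 605: stop collecting once the target leaves the overlap
        # region or enough groups have been gathered.
        if detector.left_overlap() or len(groups) >= max_groups:
            break
    if len(groups) < 2:                               # step 606
        return None                                   # step 607: abort calibration
    return calibrator.calibrate(groups)               # step 608
```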
From the embodiments described above, it can be seen that this application provides a specific implementation for the extrinsic parameter calibration of multiple cameras. The obtained coordinate system transformation parameters can be used to unify the coordinate systems of the multiple cameras, so that the positioning coordinates subsequently obtained from each camera can all be expressed in the unified coordinate system, which facilitates determining the positioning coordinates of any position within the camera coverage in a multi-camera scenario. In addition, the positioning system provided by this application is fully automated, requires no manual intervention, and is therefore more practical.
The processing device of the above positioning system is described below.
Figure 7 is a schematic structural diagram of a processing device in an embodiment of the present application. As shown in Figure 7, the processing device includes a detection module 701, a calculation module 702 and a control module 703. Specifically, the detection module 701 can be used to determine the positioning coordinates of the target object based on the imaging information fed back by each camera. The calculation module 702 is used to calculate the coordinate system transformation parameters between the cameras and to establish a unified coordinate system. The control module 703 is used to control the movement of the target object. In one possible implementation, the detection module 701 is used to perform steps 401 and 402 in the embodiment shown in Figure 4, and the calculation module 702 is used to perform steps 403 and 404 in that embodiment. In another possible implementation, the detection module 701 is used to perform steps 602, 605 and 606 in the embodiment shown in Figure 6, the calculation module 702 is used to perform steps 601, 607 and 608 in that embodiment, and the control module 703 is used to perform steps 603 and 604 in that embodiment. It should be understood that the detection module 701, the calculation module 702 and the control module 703 are modules divided only in terms of functional implementation; in practical applications, they may be mutually independent modules or may be integrated together.
Figure 8 is another schematic structural diagram of a processing device in an embodiment of the present application. As shown in Figure 8, the processing device includes a processor 801, a memory 802 and a transceiver 803, which are interconnected by lines. The transceiver 803 is used to exchange data or instructions with the cameras and the target object. The memory 802 is used to store program instructions and data. The processor 801 is used to perform the operation steps in the embodiments shown in Figure 4 and Figure 6. It should be noted that the processor shown in Figure 8 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or at least one integrated circuit for executing the relevant programs, so as to implement the technical solutions provided by the embodiments of this application. The memory shown in Figure 8 can store an operating system and other application programs. When the technical solutions provided by the embodiments of this application are implemented through software or firmware, the program code used to implement them is stored in the memory and executed by the processor. In one embodiment, the processor may include the memory internally; in another embodiment, the processor and the memory are two independent structures.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and will not be repeated here.
Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a random access memory, or the like. Specifically, for example, the above processing unit or processor may be a central processing unit, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
When implemented using software, the method steps described in the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Claims (20)

  1. A positioning method, characterized by comprising:
    when a target object is at a first position, obtaining, by a processing device, first positioning coordinates of a marker point on the target object in a first coordinate system of a first camera, and second positioning coordinates of the marker point on the target object in a second coordinate system of a second camera, wherein the first position is located in an overlapping shooting area of the first camera and the second camera;
    when the target object is at a second position, obtaining, by the processing device, third positioning coordinates of the marker point on the target object in the first coordinate system, and fourth positioning coordinates of the marker point on the target object in the second coordinate system, wherein the second position is located in the overlapping shooting area of the first camera and the second camera;
    determining, by the processing device, a first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates; and
    positioning, by the processing device, an object to be measured according to the first coordinate system transformation parameter.
  2. The method according to claim 1, characterized in that, after the processing device obtains the first positioning coordinates and the second positioning coordinates and before the processing device obtains the third positioning coordinates and the fourth positioning coordinates, the method further comprises:
    controlling, by the processing device, the target object to move from the first position to the second position.
  3. The method according to claim 1 or 2, characterized in that the obtaining, by the processing device, of the first positioning coordinates of the marker point on the target object in the first coordinate system and the second positioning coordinates of the marker point on the target object in the second coordinate system comprises:
    when the target object is at the first position, obtaining, by the processing device, first imaging information of the marker point on the target object from the first camera and second imaging information of the marker point on the target object from the second camera, determining the first positioning coordinates according to the first imaging information, and determining the second positioning coordinates according to the second imaging information;
    and the obtaining, by the processing device, of the third positioning coordinates of the marker point on the target object in the first coordinate system and the fourth positioning coordinates of the marker point on the target object in the second coordinate system comprises:
    when the target object is at the second position, obtaining, by the processing device, third imaging information of the marker point on the target object from the first camera and fourth imaging information of the marker point on the target object from the second camera, determining the third positioning coordinates according to the third imaging information, and determining the fourth positioning coordinates according to the fourth imaging information.
  4. The method according to any one of claims 1 to 3, characterized in that the first coordinate system transformation parameter comprises a rotation matrix and a translation vector.
  5. [Amended in accordance with Rule 26 24.03.2023]
    The method according to claim 4, characterized in that the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameter satisfy a formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameter satisfy the formula;
    the formula comprises: p^(1) = R · p^(2) + t, with R = [cos θ, -sin θ; sin θ, cos θ];
    wherein θ represents the rotation angle between the first coordinate system and the second coordinate system, R represents the rotation matrix, t represents the translation vector, p^(1) represents the first positioning coordinates or the third positioning coordinates, and p^(2) represents the second positioning coordinates or the fourth positioning coordinates.
  6. The method according to any one of claims 1 to 5, characterized in that the positioning, by the processing device, of the object to be measured according to the first coordinate system transformation parameter comprises:
    unifying, by the processing device, the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain a target coordinate system, obtaining target imaging information of the object to be measured from the first camera or the second camera, and determining target positioning coordinates of the object to be measured in the target coordinate system according to the target imaging information.
  7. The method according to claim 6, characterized in that the unifying, by the processing device, of the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain the target coordinate system comprises:
    converting, by the processing device, the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, the target coordinate system being the first coordinate system;
    or,
    converting, by the processing device, the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, the target coordinate system being the second coordinate system.
  8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    when the target object is at the first position, obtaining, by the processing device, fifth positioning coordinates of the marker point on the target object in a third coordinate system of a third camera, wherein the first position is located in an overlapping shooting area of the first camera, the second camera and the third camera;
    when the target object is at the second position, obtaining, by the processing device, sixth positioning coordinates of the marker point on the target object in the third coordinate system, wherein the second position is located in the overlapping shooting area of the first camera, the second camera and the third camera;
    determining, by the processing device, based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates, a second coordinate system transformation parameter between the first coordinate system and the third coordinate system and a third coordinate system transformation parameter between the second coordinate system and the third coordinate system; and
    unifying, by the processing device, the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter, and unifying the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter.
  9. The method according to claim 8, characterized in that the unifying, by the processing device, of the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter comprises:
    converting, by the processing device, the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or converting, by the processing device, the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter;
    and the unifying, by the processing device, of the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter comprises:
    converting, by the processing device, the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or converting, by the processing device, the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter.
  10. A processing device, characterized by comprising: a detection module and a calculation module;
    the detection module being configured to: when a target object is at a first position, obtain first positioning coordinates of a marker point on the target object in a first coordinate system of a first camera, and second positioning coordinates of the marker point on the target object in a second coordinate system of a second camera, wherein the first position is located in an overlapping shooting area of the first camera and the second camera;
    and, when the target object is at a second position, obtain third positioning coordinates of the marker point on the target object in the first coordinate system, and fourth positioning coordinates of the marker point on the target object in the second coordinate system, wherein the second position is located in the overlapping shooting area of the first camera and the second camera;
    the calculation module being configured to: determine a first coordinate system transformation parameter between the first coordinate system and the second coordinate system based on the first positioning coordinates, the second positioning coordinates, the third positioning coordinates and the fourth positioning coordinates;
    and position an object to be measured according to the first coordinate system transformation parameter.
  11. The processing device according to claim 10, characterized in that the processing device further comprises a control module; after the detection module obtains the first positioning coordinates and the second positioning coordinates and before the detection module obtains the third positioning coordinates and the fourth positioning coordinates, the control module is configured to: control the target object to move from the first position to the second position.
  12. The processing device according to claim 10 or 11, characterized in that the detection module is specifically configured to:
    when the target object is at the first position, obtain first imaging information of the marker point on the target object from the first camera and second imaging information of the marker point on the target object from the second camera, determine the first positioning coordinates according to the first imaging information, and determine the second positioning coordinates according to the second imaging information;
    and, when the target object is at the second position, obtain third imaging information of the marker point on the target object from the first camera and fourth imaging information of the marker point on the target object from the second camera, determine the third positioning coordinates according to the third imaging information, and determine the fourth positioning coordinates according to the fourth imaging information.
  13. The processing device according to any one of claims 10 to 12, characterized in that the first coordinate system transformation parameter comprises a rotation matrix and a translation vector.
  14. [Amended in accordance with Rule 26 24.03.2023]
    The processing device according to claim 13, characterized in that the first positioning coordinates, the second positioning coordinates and the first coordinate system transformation parameter satisfy a formula, and the third positioning coordinates, the fourth positioning coordinates and the first coordinate system transformation parameter satisfy the formula;
    the formula comprises: p^(1) = R · p^(2) + t, with R = [cos θ, -sin θ; sin θ, cos θ];
    wherein θ represents the rotation angle between the first coordinate system and the second coordinate system, R represents the rotation matrix, t represents the translation vector, p^(1) represents the first positioning coordinates or the third positioning coordinates, and p^(2) represents the second positioning coordinates or the fourth positioning coordinates.
  15. The processing device according to any one of claims 10 to 14, characterized in that the calculation module is specifically configured to:
    unify the first coordinate system and the second coordinate system according to the first coordinate system transformation parameter to obtain a target coordinate system, acquire target imaging information of the object to be measured captured by the first camera or the second camera, and determine target positioning coordinates of the object to be measured in the target coordinate system according to the target imaging information.
  16. The processing device according to claim 15, characterized in that the calculation module is specifically configured to:
    convert the second coordinate system into the first coordinate system according to the first coordinate system transformation parameter, the target coordinate system being the first coordinate system;
    or,
    convert the first coordinate system into the second coordinate system according to the first coordinate system transformation parameter, the target coordinate system being the second coordinate system.
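Once the transformation parameter (rotation matrix R and translation vector T) is known, the unification in claims 15 and 16 amounts to applying the transform in one direction or the other; because R is orthogonal, the reverse mapping needs only its transpose. A hedged sketch (names illustrative):

```python
import numpy as np

def to_first_frame(R, T, p_second):
    """Map a point from the second coordinate system into the first one."""
    return R @ np.asarray(p_second, float) + T

def to_second_frame(R, T, p_first):
    """Inverse mapping: R is orthogonal, so its inverse is its transpose."""
    return R.T @ (np.asarray(p_first, float) - T)
```

Whichever direction is chosen, every subsequent positioning result for the object to be measured is expressed in the single target coordinate system.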
  17. The processing device according to any one of claims 10 to 16, characterized in that the detection module is further configured to:
    when the target object is at the first position, acquire fifth positioning coordinates of the marker point on the target object in a third coordinate system of a third camera, wherein the first position is located in the overlapping shooting area of the first camera, the second camera and the third camera;
    when the target object is at the second position, acquire sixth positioning coordinates of the marker point on the target object in the third coordinate system, wherein the second position is located in the overlapping shooting area of the first camera, the second camera and the third camera;
    the calculation module is further configured to: determine, according to the first positioning coordinates, the second positioning coordinates, the third positioning coordinates, the fourth positioning coordinates, the fifth positioning coordinates and the sixth positioning coordinates, a second coordinate system transformation parameter between the first coordinate system and the third coordinate system and a third coordinate system transformation parameter between the second coordinate system and the third coordinate system;
    unify the first coordinate system and the third coordinate system according to the second coordinate system transformation parameter, and unify the second coordinate system and the third coordinate system according to the third coordinate system transformation parameter.
  18. The processing device according to claim 17, characterized in that the calculation module is specifically configured to:
    convert the third coordinate system into the first coordinate system according to the second coordinate system transformation parameter, or convert the first coordinate system into the third coordinate system according to the second coordinate system transformation parameter;
    convert the third coordinate system into the second coordinate system according to the third coordinate system transformation parameter, or convert the second coordinate system into the third coordinate system according to the third coordinate system transformation parameter.
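With three cameras as in claims 17 and 18, the pairwise parameters are not independent: chaining the transform from the third frame into the first with the transform from the first into the second yields the third-to-second transform directly, so any pair can be derived from the other two. A sketch of rigid-transform composition under that convention (names illustrative):

```python
import numpy as np

def compose(R_ab, T_ab, R_bc, T_bc):
    """Chain rigid transforms: (R_bc, T_bc) maps frame c -> b, and
    (R_ab, T_ab) maps frame b -> a; the result maps frame c -> a."""
    return R_ab @ R_bc, R_ab @ T_bc + T_ab
```

Composing in this way keeps all three cameras consistent with a single target coordinate system, rather than estimating each pairwise transform in isolation.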
  19. A positioning system, characterized in that the positioning system comprises: a processing device, a first camera, a second camera and a target object, wherein the processing device is configured to perform the method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, characterized in that it comprises computer instructions which, when run on a computer device, cause the computer device to perform the method according to any one of claims 1 to 9.
PCT/CN2023/077019 2022-03-10 2023-02-18 Positioning method and related device WO2023169186A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210240303.5A CN116772804A (en) 2022-03-10 2022-03-10 Positioning method and related equipment
CN202210240303.5 2022-03-10

Publications (1)

Publication Number Publication Date
WO2023169186A1

Family

ID=87937146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077019 WO2023169186A1 (en) 2022-03-10 2023-02-18 Positioning method and related device

Country Status (2)

Country Link
CN (1) CN116772804A (en)
WO (1) WO2023169186A1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104776832A (en) * 2015-04-16 2015-07-15 浪潮软件集团有限公司 Method, set top box and system for positioning objects in space
CN107449432A (en) * 2016-05-31 2017-12-08 华为终端(东莞)有限公司 One kind utilizes dual camera air navigation aid, device and terminal
CN107917666A (en) * 2016-10-09 2018-04-17 上海铼钠克数控科技股份有限公司 Binocular vision device and coordinate scaling method
CN109118545A (en) * 2018-07-26 2019-01-01 深圳市易尚展示股份有限公司 3-D imaging system scaling method and system based on rotary shaft and binocular camera
WO2019080229A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Chess piece positioning method and system based on machine vision, storage medium, and robot
CN109872372A (en) * 2019-03-07 2019-06-11 山东大学 A kind of small-sized quadruped robot overall Vision localization method and system
CN111429530A (en) * 2020-04-10 2020-07-17 浙江大华技术股份有限公司 Coordinate calibration method and related device
CN111524176A (en) * 2020-04-16 2020-08-11 深圳市沃特沃德股份有限公司 Method and device for measuring and positioning sight distance and computer equipment
CN112837391A (en) * 2021-03-04 2021-05-25 北京柏惠维康科技有限公司 Coordinate conversion relation obtaining method and device, electronic equipment and storage medium
CN113486797A (en) * 2018-09-07 2021-10-08 百度在线网络技术(北京)有限公司 Unmanned vehicle position detection method, device, equipment, storage medium and vehicle
CN114140536A (en) * 2021-11-30 2022-03-04 清华大学 Pose data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116772804A (en) 2023-09-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23765764

Country of ref document: EP

Kind code of ref document: A1