WO2020198963A1 - Data processing method and apparatus related to photographing device, and image processing device - Google Patents


Publication number
WO2020198963A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinate system
relationship
rotation relationship
reference coordinate
Prior art date
Application number
PCT/CN2019/080500
Other languages
French (fr)
Chinese (zh)
Inventor
黄胜
薛唐立
梁家斌
马东东
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201980005043.XA priority Critical patent/CN111247389B/en
Priority to PCT/CN2019/080500 priority patent/WO2020198963A1/en
Publication of WO2020198963A1 publication Critical patent/WO2020198963A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G01C11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C11/12 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken
    • G01C11/14 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken, with optical projection
    • G01C11/16 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken, with optical projection in a common plane
    • G01C11/18 - Interpretation of pictures by comparison of two or more pictures of the same area, the pictures being supported in the same relative position as when they were taken, with optical projection in a common plane, involving scanning means
    • G01C11/28 - Special adaptation for recording picture point data, e.g. for profiles
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/53 - Determining attitude
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The present invention relates to the field of image processing technology, and in particular to a data processing method and apparatus related to a photographing device, and an image processing device.
  • A camera can be mounted on a mobile platform through a gimbal or similar equipment, so that the camera captures images of the target environment while the platform moves; three-dimensional reconstruction can then be performed on these images to realize functions such as environmental surveying and mapping.
  • 3D reconstruction from images taken by a photographing device proceeds as follows: based on the images, Structure from Motion (SfM) technology is used to recover the correct spatial pose of the photographing device for each shot, that is, its pose in a certain reference coordinate system.
  • The pose of the photographing device includes its position and angle information at the time of shooting.
  • The angle information may include, for example, the pitch angle, roll angle, and yaw angle.
  • The reference coordinate system used in surveying and mapping is generally a geodetic coordinate system such as the Earth-Centered, Earth-Fixed (ECEF) coordinate system, or a geographic coordinate system established at a known point on the ground, such as the East-North-Up (ENU) or North-East-Down (NED) coordinate system.
  • Current mainstream 3D modeling software can, based on images taken in multiple directions, correctly recover the pose of the photographing device in a reference coordinate system such as a geodetic or geographic coordinate system, thereby determining the relationship between the device coordinate system and the reference coordinate system.
  • The embodiments of the present invention provide a data processing method, an apparatus, and an image processing device related to a photographing device, which can more accurately determine the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system when the mobile platform is in a linear motion state, and more accurately complete the coordinate-system position conversion of pixels.
  • In one aspect, an embodiment of the present invention provides a data processing method related to a photographing device. The method is applied to an image processing device that can obtain posture data of the photographing device; the photographing device is set on a mobile platform, and the motion state of the mobile platform is a linear motion state. The method includes:
  • acquiring first posture data of the photographing device when the first image is collected; determining, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected; and determining, according to the first rotation relationship, the position information of a pixel on the first image in the reference coordinate system.
  • an embodiment of the present invention also provides a data processing device related to a photographing device.
  • the device is applied to an image processing device that can obtain posture data of the photographing device, and the photographing device is set on a mobile platform.
  • the motion state of the mobile platform is a linear motion state, and the device includes:
  • an acquiring module, configured to acquire first posture data when the photographing device collects the first image;
  • a determining module configured to determine, according to the first posture data, a first rotation relationship between the device coordinate system of the shooting device and a reference coordinate system when the first image is collected;
  • the processing module is configured to determine the position information of the pixel on the first image in the reference coordinate system according to the first rotation relationship.
  • In another aspect, an embodiment of the present invention also provides an image processing device that can obtain posture data of the photographing device; the photographing device is set on a mobile platform, and the motion state of the mobile platform is a linear motion state. The image processing device includes: a communication interface unit and a processing unit.
  • The communication interface unit is used to communicate with an external device and obtain data from the external device.
  • The processing unit is configured to: obtain, through the communication interface unit, the first posture data of the photographing device when the first image is collected; determine, according to the first posture data, the first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected; and determine, according to the first rotation relationship, the position information of a pixel on the first image in the reference coordinate system.
  • In the embodiments of the present invention, when the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system is determined, the posture data of the photographing device at the time the image was collected is referred to.
  • In this way, the rotation relationship between the device coordinate system and the reference coordinate system can be determined more accurately, and the accuracy of converting pixel points to three-dimensional coordinates in the linear motion state is better ensured, so as to better realize tasks such as environmental monitoring and mapping.
  • FIG. 1 is a schematic diagram of a task scene of a drone equipped with a shooting device according to an embodiment of the present invention
  • Figure 2 is a schematic diagram of the North-East-Down (NED) coordinate system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the relationship between an image coordinate system and a device coordinate system according to an embodiment of the present invention
  • FIG. 4 is a schematic flowchart of a data processing method related to a photographing device according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a process for determining a rotation relationship according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of multi-strip movement and single-strip movement of a mobile platform in a linear motion state according to an embodiment of the present invention;
  • FIG. 7 is a schematic flowchart of obtaining an optimized rotation relationship according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of a flow of coordinate position conversion according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a flow of performing rotation relationship optimization and coordinate position conversion according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a data processing device for photographing equipment according to an embodiment of the present invention.
  • Fig. 11 is a schematic structural diagram of an image processing device according to an embodiment of the present invention.
  • The photographing device used to shoot images can be mounted on the mobile platform directly, or through a gimbal (pan-tilt).
  • The photographing device can capture multiple frames of images of the current environment, and for each frame the posture data at the time the image was taken can be read.
  • These posture data may include the pitch angle, roll angle, and yaw angle.
  • After the images are captured, the corresponding image processing device can be triggered to use the image posture data as intermediate data for conversion between the device coordinate system, the gimbal coordinate system, and the reference coordinate system, so that points on the images captured by the camera are converted to the reference coordinate system and their three-dimensional coordinates in the reference coordinate system are determined, thereby realizing tasks such as SLAM environmental mapping.
  • FIG. 1 shows a scene diagram in which an unmanned aerial vehicle is equipped with a shooting device to acquire images and then perform corresponding image processing to achieve tasks such as SLAM.
  • the scene diagram includes a drone 101 and a photographing device 102 provided at the bottom of the drone 101.
  • the drone 101 may be provided with a pan-tilt
  • the photographing device 102 may be provided on the pan-tilt.
  • the drone may be a rotary-wing drone as shown in FIG. 1, such as a quadrotor, a hexarotor, or an octorotor, etc. In some embodiments, it may also be a fixed-wing aircraft.
  • The gimbal mainly refers to a three-axis gimbal, which can rotate in the pitch, roll, and yaw directions so as to drive the camera to shoot images in different directions.
  • The mobile platform can also be an unmanned vehicle or an intelligent mobile robot running on land; a gimbal and a photographing device can likewise be mounted on the unmanned vehicle or intelligent robot to take pictures in different orientations as needed.
  • The image processing device 100 on the ground can perform the series of data processing in the embodiments of the present invention. In FIG. 1, the drone 101 and the image processing device 100 are designed separately.
  • The image processing device 100 can also be mounted on the drone to receive and read the images and attitude data produced by the photographing device 102; alternatively, the image processing device 100 itself, as a component of the UAV, can be directly or indirectly connected to the photographing device 102 and the gimbal to obtain the images and attitude data.
  • FIG. 2 and FIG. 3 are schematic diagrams of the coordinate systems involved in the embodiments of the present invention.
  • Figure 2 shows the North-East-Down (NED) coordinate system.
  • The NED coordinate system belongs to the geographic coordinate systems and is one of the reference coordinate systems in the embodiments of the present invention.
  • The positive direction of its X-axis points to geographic north, the positive direction of its Y-axis points to geographic east, and the positive direction of its Z-axis points down.
  • Figure 3 shows the relationship between the device coordinate system and the image coordinate system, where the XY plane in the device coordinate system is parallel to the image plane, the Z axis of the device coordinate system is the main axis of the camera, and the origin O is the projection center (optical center).
  • The device coordinate system is a three-dimensional coordinate system.
  • The origin O1 of the image coordinate system, in which the image plane lies, is the intersection of the camera's principal axis and the image plane, also called the principal point.
  • The distance between point O and point O1 is the focal length f.
  • The image coordinate system is a two-dimensional coordinate system. It should be understood that FIG. 3 is only for illustration; in practice, the principal point is usually not exactly at the center of the imaging plane. It can be seen from FIG. 3 that a point (x, y) in the image coordinate system corresponds to a point Q = (X, Y, Z) in the device coordinate system.
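The correspondence between an image point (x, y) and a device-coordinate point Q = (X, Y, Z) in FIG. 3 can be sketched with the standard pinhole model. The sketch below is illustrative only (the function names, and the assumption that the principal-point offset is known, are ours, not from the patent):

```python
import numpy as np

def project_to_image(Q, f, principal_point=(0.0, 0.0)):
    """Project a 3-D point Q = (X, Y, Z) in the device (camera) coordinate
    system onto the image plane using the pinhole model:
    x = f*X/Z + cx,  y = f*Y/Z + cy."""
    X, Y, Z = Q
    cx, cy = principal_point
    return (f * X / Z + cx, f * Y / Z + cy)

def back_project(xy, f, Z, principal_point=(0.0, 0.0)):
    """Lift an image point (x, y) back to a 3-D point in the device
    coordinate system, given an assumed depth Z along the principal axis."""
    x, y = xy
    cx, cy = principal_point
    return np.array([(x - cx) * Z / f, (y - cy) * Z / f, Z])
```

Note that back-projection needs the depth Z, which a single image does not provide; in the reconstruction pipeline described here it would come from multi-view geometry.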
  • FIG. 4 is a schematic flowchart of a data processing method related to a photographing device according to an embodiment of the present invention. The method is executed by an image processing device, which may be, for example, a smart mobile terminal, a tablet computer, a personal computer, or a notebook computer.
  • The image processing device can obtain the posture data of the photographing device, which is set on a mobile platform; the posture data may be obtained through the mobile platform.
  • In S401, the image processing device acquires the first posture data when the photographing device collects the first image.
  • During the movement of the mobile platform, the photographing device may capture multiple frames of images to form an image set, and the first image may be any one of these frames.
  • When the photographing device shoots the first image and other images, it can record the corresponding posture data of the photographing device.
  • These posture data are obtained by processing the sensing data of sensors such as gyroscopes, and may include the roll angle, pitch angle, and yaw angle of the photographing device when the image was taken.
  • In some embodiments, the gimbal angle information determined based on the sensor data of sensors set on the gimbal can be used as the first posture data and recorded for the first image.
  • The gimbal angle information includes: the roll angle, pitch angle, and yaw angle.
  • In other embodiments, the first posture data includes: the roll angle, pitch angle, and yaw angle of the mobile platform.
  • The conversion from the gimbal coordinate system, in which the sensors on the gimbal are located, to the reference coordinate system (a geographic coordinate system, geodetic coordinate system, etc.) is the same process as the conversion from the coordinate system in which the sensors of the mobile platform are located to the reference coordinate system.
  • The following uses the gimbal coordinate system as an example to illustrate the conversion between the device coordinate system of the photographing device and the reference coordinate system.
  • The gimbal angle information can be recorded in an extension field of the corresponding environment image, so that the image processing device can read it directly when needed.
  • The photographing device can automatically acquire the gimbal angle information from the gimbal, or the gimbal can actively send the angle information to the photographing device, so that the photographing device records the corresponding gimbal angle information for each image after it is captured.
  • In this case, the first posture data includes: the gimbal angle information of the gimbal when the first image is collected.
  • The gimbal angle information can be read directly from the extension field of the first image.
  • The embodiment of the present invention specifically triggers the execution of S401 when the motion state of the mobile platform is a linear motion state.
  • Whether the movement route planned for the mobile platform is a straight line can be determined; for example, while the drone flies along a set navigation trajectory, it is detected whether the trajectory in the current period of time is a straight line, and if so, the mobile platform is determined to be in linear motion.
  • Alternatively, an instruction can be manually triggered and sent to the image processing device, the instruction indicating that the mobile platform is in a linear motion state.
  • That the motion state of the mobile platform is a linear motion state mainly means that the mobile platform will at some point move linearly; it does not require that the mobile platform always moves along a straight line. In actual processing, if the mobile platform moves in a straight or approximately straight line, or even moves along a curve but an instruction from the user indicates that it is in a linear motion state, the motion state of the mobile platform can be considered a linear motion state.
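A route-straightness check of the kind described above could, for example, test whether the planned waypoints are collinear. The patent does not specify a detection algorithm, so the following is only a hypothetical sketch; the tolerance `tol` is our assumption:

```python
import numpy as np

def is_linear_route(waypoints, tol=1e-6):
    """Return True if all waypoints lie (within tol) on the straight line
    through the first and last waypoints. Works for 2-D or 3-D points."""
    pts = np.asarray(waypoints, dtype=float)
    if len(pts) < 3:
        return True  # two points always define a line
    d = pts[-1] - pts[0]
    norm = np.linalg.norm(d)
    if norm == 0:
        return True  # degenerate route: start and end coincide
    d = d / norm
    for p in pts[1:-1]:
        v = p - pts[0]
        # perpendicular distance from the line: |v - (v . d) d|
        if np.linalg.norm(v - v.dot(d) * d) > tol:
            return False
    return True
```

In practice the tolerance would be chosen relative to GPS accuracy and route length rather than as an absolute constant.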
  • The gimbal angle information of the first image included in the first posture data can be considered the clockwise rotation angles of the gimbal relative to the X-axis, Y-axis, and Z-axis of the NED coordinate system.
  • From the rotations about these three axes, the rotation relationship between the gimbal coordinate system and the NED coordinate system can be obtained; that is, in S402 the image processing device determines, according to the first posture data, the first rotation relationship between the device coordinate system of the photographing device and the reference coordinate system when the first image is collected.
  • The reference coordinate system includes a geographic coordinate system or a geodetic coordinate system. In the embodiments of the present invention, the NED coordinate system shown in FIG. 2 is taken as an example.
  • For some currently known coordinate systems, the relative relationships between them, such as rotation relationships and translation relationships, can be determined through technical means; based on these relative relationships, alignment and conversion between different coordinate systems can be achieved.
  • In determining the first rotation relationship, the first posture data, that is, the gimbal angle information of the gimbal, is referred to.
  • One specific implementation of S402 is shown in FIG. 5, and may include the following steps.
  • The second rotation relationship between the gimbal coordinate system of the gimbal and the reference coordinate system is acquired according to the first posture data, and the third rotation relationship between the device coordinate system of the photographing device and the gimbal coordinate system is acquired.
  • The third rotation relationship can be determined based on the assembly relationship between the camera and the gimbal.
  • The roll angle, pitch angle, and yaw angle in the gimbal angle information correspond to rotations about the X-axis, Y-axis, and Z-axis of the gimbal coordinate system, respectively.
  • The rotation matrix from the device coordinate system to the gimbal coordinate system, that is, the third rotation relationship, is described by formula 2 below.
  • The third rotation relationship can be configured directly according to the assembly relationship between the photographing device and the gimbal, and read and used directly when determining the first rotation relationship.
  • Based on the second rotation relationship and the third rotation relationship, the first rotation relationship between the device coordinate system and the reference coordinate system when the first image is collected is determined, that is, the rotation matrix from the device coordinate system to the NED coordinate system:
  • R_camera_to_ned = R_gimbal_to_ned * R_camera_to_gimbal (formula 3);
  • each device coordinate system can be correctly aligned to the NED coordinate system.
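The composition in formula 3 can be sketched as follows. Since formulas 1 and 2 are not reproduced in this text, the Euler-angle convention used here (Z-Y-X, i.e. yaw-pitch-roll, which is common with NED frames) is an assumption; the patent's exact axis order may differ:

```python
import numpy as np

def euler_to_rotation(roll, pitch, yaw):
    """Rotation matrix built from gimbal angles (radians), assuming a
    Z-Y-X (yaw-pitch-roll) rotation order. Stands in for the patent's
    formula 1 / formula 2, whose exact convention is not given here."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_to_ned(roll, pitch, yaw, R_camera_to_gimbal):
    """Formula 3: compose the gimbal-to-NED rotation (from the gimbal
    angles) with the fixed camera-to-gimbal rotation (the third rotation
    relationship, determined by the assembly of camera and gimbal)."""
    R_gimbal_to_ned = euler_to_rotation(roll, pitch, yaw)
    return R_gimbal_to_ned @ R_camera_to_gimbal
```

With zero gimbal angles and an identity camera-to-gimbal mounting, the result is the identity, i.e. the device axes coincide with the NED axes.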
  • In S403, the image processing device determines, according to the first rotation relationship, the position information of the pixels on the first image in the reference coordinate system.
  • According to R_camera_to_ned, a two-dimensional point can be converted from the image coordinate system of the first image to the device coordinate system, and then, based on R_camera_to_ned combined with the translation matrix t between the device coordinate system and the NED coordinate system and the scale s, transformed from the device coordinate system to the NED coordinate system.
  • The translation matrix t can be calculated based on the overlap between multiple frames of images, and the scale s represents the scaling relationship between the device coordinate system and the reference coordinate system; for example, the device coordinate system may need to be enlarged so that it can be aligned with a reference coordinate system such as the NED coordinate system.
  • After the two-dimensional point (x, y) on the first image is converted to the device coordinate system, it becomes the three-dimensional point Q = (X, Y, Z), and s*R_camera_to_ned*Q + t gives the three-dimensional coordinates of point Q in the NED coordinate system.
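The conversion s*R_camera_to_ned*Q + t can be sketched end to end as below. The depth of the point along the principal axis is assumed known here for illustration; in the pipeline described above it would come from SfM / multi-view triangulation:

```python
import numpy as np

def pixel_to_ned(x, y, f, depth, s, R_camera_to_ned, t):
    """Convert an image point (x, y) of the first image to NED coordinates:
    first lift it into the device coordinate system at the given depth
    (pinhole model), then apply s * R_camera_to_ned @ Q + t."""
    Q = np.array([x * depth / f, y * depth / f, depth])
    return s * R_camera_to_ned @ Q + t
```

With s = 1, t = 0, and an identity rotation, the result is simply the device-frame point Q, which makes the transform easy to sanity-check.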
  • After the first rotation relationship of the first image is obtained, the position information of points on other images in the reference coordinate system can also be obtained based on the first rotation relationship, especially for a second image adjacent to the first image in acquisition time, for example an image captured immediately after the first image.
  • Specifically, the relative rotation relationship between the first image and the second image can be calculated; then, based on the first rotation relationship and the relative rotation relationship, the rotation relationship between the device coordinate system and the reference coordinate system for the second image can be obtained, and based on that rotation relationship, the position information of pixels on the second image in the reference coordinate system can be determined.
  • The translation relationship of the second image may likewise be obtained based on the translation relationship corresponding to the first image and the relative translation relationship between the first image and the second image.
  • The scale s of the second image is the same as the scale of the first image.
  • The NED coordinate system is taken as an example for description; other types of reference coordinate systems are processed in the same way.
  • Other types of reference coordinate systems can have a known conversion relationship with the NED coordinate system.
  • Based on the ability to determine R_camera_to_ned, the rotation matrix between the device coordinate system and other coordinate systems, such as the ENU coordinate system (another geographic coordinate system), can further be obtained from the conversion relationship between that coordinate system and the NED coordinate system; the rotation matrix is used to characterize the rotation relationship between coordinate systems.
  • The rotation relationship corresponding to the first image and the relative rotation relationships of other images can be determined through existing image-based methods.
  • FIG. 6 shows the movement routes of the UAV in single-strip and multi-strip flights. The black dots indicate waypoints that are automatically planned when performing a certain task or manually set by the user; connecting these waypoints constitutes the flight route of the drone.
  • In multi-strip movement, the rotation relationship, translation relationship, and scale corresponding to an image can be directly calculated based on a vision algorithm.
  • Specifically, a correct similarity transformation matrix can be solved from the following minimization, obtaining the rotation matrix R, the translation matrix t, and the scale s: min over (s, R, t) of sum_{i=1}^{n} || y_i - (s*R*x_i + t) ||^2.
  • Here n is the number of image frames captured during multi-strip movement;
  • y_i refers to the actual center coordinates of the photographing device in the reference coordinate system;
  • x_i refers to the center coordinates of the photographing device in the device coordinate system.
  • The device coordinate system of the photographing device can be aligned to a reference coordinate system such as the NED coordinate system using this similarity transformation matrix. In the single-strip case, however, one degree of freedom is missing, and solving the similarity transformation problem becomes degenerate: if a given rotation matrix R satisfies the condition, then the new rotation matrix obtained by rotating R about the straight line along which the single strip lies also satisfies the condition, and each such R corresponds to its own t. There are therefore infinitely many solutions, and it is theoretically impossible to accurately calculate a correct similarity transformation matrix in the case of single-strip movement as shown in FIG. 6.
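The similarity transformation above has a well-known closed-form solution (Umeyama's method), sketched below as a standard reconstruction rather than the patent's own formula. Note that when the camera centers lie on a single straight strip, the point covariance becomes rank-deficient and the recovered rotation is not unique, which is exactly the degeneracy described in the text:

```python
import numpy as np

def umeyama(x, y):
    """Closed-form solution of min over (s, R, t) of
    sum_i || y_i - (s * R @ x_i + t) ||^2  (Umeyama alignment).
    x, y: (n, 3) arrays of corresponding points (device-frame camera
    centers x_i and reference-frame centers y_i). Requires non-degenerate
    (non-collinear) point sets."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mu_x, y - mu_y
    cov = yc.T @ xc / len(x)                 # cross-covariance of y and x
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                         # correct an improper rotation
    R = U @ S @ Vt
    var_x = (xc ** 2).sum() / len(x)
    s = np.trace(np.diag(D) @ S) / var_x
    t = mu_y - s * R @ mu_x
    return s, R, t
```

Applying a known similarity transform to a non-collinear point set and running the solver recovers s, R, and t, which is a convenient self-test of the alignment step.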
  • R_camera_to_ned is calculated based on accurate values of the roll angle, pitch angle, and yaw angle.
  • Generally, the roll and pitch angles in the gimbal angle information are relatively accurate and can be used directly, but the yaw angle may have a large deviation.
  • Therefore, the image processing device may specifically optimize the first rotation relationship according to a preset optimization algorithm, so as to determine the position information of the pixel points on the first image in the reference coordinate system based on the optimized first rotation relationship.
  • The optimization algorithm may be a minimization algorithm.
  • The above S403 may include: performing optimization processing on the first rotation relationship according to a preset first algorithm; and determining, according to the optimized first rotation relationship, the position information of the pixel on the first image in the reference coordinate system.
  • The idea of the first algorithm is to make the difference between the second coordinates in the reference coordinate system, corresponding to the center coordinates of each image determined based on the optimized first rotation relationship, and the first coordinates of the photographing device in the reference coordinate system, determined based on sensor data, satisfy a preset first minimization condition; the center coordinates refer to the center coordinates of the device coordinate system.
  • The first posture data includes the pitch angle, roll angle, and yaw angle. Since the roll and pitch angles in the first posture data are considered accurate, the optimization of the first rotation relationship includes optimizing the yaw angle.
  • The optimization processing of the first rotation relationship according to the preset first algorithm includes: S701: for each frame of image in the first image set collected by the photographing device, acquiring the center coordinates of the device coordinate system when the frame is collected, and the first coordinates of those center coordinates in the reference coordinate system determined using sensor data; S702: optimizing the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates in the reference coordinate system of each center coordinate, determined based on the optimized first rotation relationship, and the corresponding first coordinates meets a preset first minimization condition; the first image set includes the first image and at least one frame of second image.
  • The first image set includes multiple frames of images including the first image, for example three frames, five frames, or even more images taken continuously within a period of time.
  • Images other than the first image can be understood as second images.
  • The second image is an image adjacent to the first image in acquisition time.
  • For example, the second image may be the second frame captured immediately after the first image, or the third frame, and so on.
  • For each center coordinate, a position coordinate in the reference coordinate system, that is, the second coordinate, is obtained. Satisfying the first minimization condition means that the sum of the differences between all the second coordinates and the actual coordinates sensed by the sensors for each frame, that is, the first coordinates, is smallest.
  • the first coordinates are the three-dimensional coordinates sensed by the GPS sensor and the altimeter when the shooting device shoots the corresponding image.
  • the specific expression form of the first algorithm can refer to the following formula 4, where s·R_camera_to_ned·C_v + t is the converted position coordinate; C_v refers to the center coordinate of the device coordinate system of the photographing device, the converted result corresponds to the above-mentioned second coordinate, and C_w corresponds to the above-mentioned first coordinate.
  • R_camera_to_ned can be calculated from the pitch angle, the roll angle, and the yaw angle using the above formula 1, formula 2, and formula 3.
  • the value of n may be 3, 4, 5, or another value, referring to the number of frames (e.g. 3, 4, or 5) included in the first image set.
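The yaw-only optimization that formula 4 describes (minimize, over n frames, the summed difference between s·R_camera_to_ned·C_v + t and the sensed C_w) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: a simple Z-Y-X Euler convention stands in for formulas 1–3, all numeric values are made up, and a coarse 1-D grid search replaces whatever solver would be used in practice.

```python
import numpy as np

def rot_ned(yaw, pitch, roll):
    # Z-Y-X Euler composition; a stand-in for the patent's formulas 1-3
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Simulated data: n frames of camera-center coordinates C_v (device frame)
# and first coordinates C_w (reference frame, as sensed by GPS + altimeter)
true_yaw, pitch, roll = 0.7, 0.1, -0.05   # pitch/roll assumed accurate
s_true, t_true = 2.0, np.array([10.0, -3.0, 1.5])
rng = np.random.default_rng(0)
C_v = rng.normal(size=(5, 3))
C_w = (s_true * (rot_ned(true_yaw, pitch, roll) @ C_v.T)).T + t_true

def cost(yaw):
    # Formula-4-style residual: sum over frames of |s*R*C_v + t - C_w|
    R = rot_ned(yaw, pitch, roll)
    pred = (s_true * (R @ C_v.T)).T + t_true
    return np.sum(np.linalg.norm(pred - C_w, axis=1))

# Coarse 1-D search over the yaw angle (the only parameter optimized here)
yaws = np.linspace(-np.pi, np.pi, 3601)
best = yaws[int(np.argmin([cost(y) for y in yaws]))]
```

With noise-free synthetic data the search recovers the yaw used to generate C_w; in practice t and s could also be added as parameters to be optimized, as the text notes below.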
  • the first coordinate is based on the spatial position information of the shooting device when the image is taken, and is calculated by using sensor data.
  • the sensor data includes the data collected by the global positioning sensor and the data collected by the height sensor; that is, the first coordinate is mainly determined based on data sensed by GPS positioning devices and altimeters installed on the mobile platform (such as a drone) or on the photographing device. Using the GPS and altimeter readings, the position coordinates of the camera center (GPS coordinates plus altitude) can be obtained in a reference coordinate system such as the North-East-Down coordinate system.
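One common way to turn a GPS fix plus an altimeter reading into local North-East-Down coordinates is a flat-earth approximation around a reference fix. The patent does not specify the conversion it uses; the sketch below, including the sample coordinates, is purely illustrative.

```python
import math

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres

def gps_to_ned(lat, lon, alt, lat0, lon0, alt0):
    """Flat-earth approximation: GPS + altimeter reading -> local
    North-East-Down offsets (metres) relative to a reference fix."""
    d_north = math.radians(lat - lat0) * EARTH_R
    d_east = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    d_down = alt0 - alt  # NED: the positive axis points down
    return d_north, d_east, d_down

# Hypothetical fix 100 m above and slightly north-east of the reference point
n, e, d = gps_to_ned(22.5441, 113.9569, 100.0, 22.5440, 113.9568, 0.0)
```

This approximation is adequate over the short baselines of a continuous image sequence; over larger areas a proper geodetic-to-local-Cartesian conversion would be needed.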
  • C_v and C_w in formula 4 can be obtained directly; R_camera_to_ned can directly use the pitch angle and roll angle values from the pan/tilt angle information, while the yaw angle is treated as the parameter to be optimized. The yaw angle value in the pan/tilt angle information can be used as its initial value.
  • the translation relationship, that is, the translation matrix t, and the scaling relationship, that is, the scale s, can be calculated by other existing methods. Alternatively, t and s can each also be treated as a parameter to be optimized, in which case only an initial value needs to be provided for the calculation in formula 4.
  • optimizing the first rotation relationship based on the center coordinates and the first coordinates may include: optimizing, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
  • the R_camera_to_ned in the above formula can be calculated based on the pan/tilt angle information of the first image as described above. Therefore, for the first image, based on its pan/tilt angle information together with t, s, C_v, and C_w, the difference between the second coordinate corresponding to the first image and the first coordinate determined by the sensor data when the first image was taken can be calculated;
  • the relative rotation relationship of the second image with respect to the first image can be calculated from the pan/tilt angle information of the second image, i.e. based on the difference between the pan/tilt angle information of the two frames of images; therefore, the rotation relationship between the device coordinate system corresponding to the second image and the reference coordinate system can be determined from the R_camera_to_ned of the first image combined with the relative rotation relationship.
  • in this way, the difference between the second coordinate corresponding to the second image and the first coordinate determined by the sensor data when the second image was taken can be calculated; and so on, the differences between the second coordinates and first coordinates of all n frames of images can be obtained, and minimizing the sum of the n differences yields the R_camera_to_ned of the first image. That is to say, based on the rotation angles of the pan-tilt, the relative rotation relationship between each second image and the first image can be obtained.
  • the R_camera_to_ned of the first image, combined with these relative rotation relationships, can therefore stand in for the rotation relationships between the device coordinate systems corresponding to the second images and the reference coordinate system, so that in the optimization process only the R_camera_to_ned of the first image, or the yaw angle within it, needs to be solved.
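The chaining just described (the second image's device-to-reference rotation expressed through the first image's R_camera_to_ned and a gimbal-derived relative rotation) can be sketched as below. The Euler convention and the angle values are illustrative assumptions, not taken from the patent; in the real pipeline R1 would be the optimized rotation, so the chained result would differ from the raw gimbal value.

```python
import numpy as np

def euler_zyx(yaw, pitch, roll):
    # Z-Y-X Euler rotation; a stand-in for the patent's formulas 1-3
    cy, sy, cp, sp, cr, sr = (np.cos(yaw), np.sin(yaw), np.cos(pitch),
                              np.sin(pitch), np.cos(roll), np.sin(roll))
    return (np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            @ np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            @ np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]]))

# Hypothetical gimbal angles (yaw, pitch, roll) for the first / second image
angles_1 = (0.70, 0.10, -0.05)
angles_2 = (0.75, 0.12, -0.05)

R1 = euler_zyx(*angles_1)          # R_camera_to_ned of the first image
R2_gimbal = euler_zyx(*angles_2)   # rotation implied by the gimbal alone

# Relative rotation of image 2 w.r.t. image 1, from the angle difference
R_rel = R1.T @ R2_gimbal

# Chain: device frame of image 2 -> device frame of image 1 -> reference
R2 = R1 @ R_rel
```

With the raw gimbal R1 the chain trivially reproduces R2_gimbal (a sanity check on the composition order); substituting the optimized R1 propagates the yaw correction to every second image.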
  • similarly, the relative translation relationship can be calculated based on the difference between the pan/tilt angles of each second image and the first image, so that only the translation relationship of the first image needs to be optimized. For the scale s, the first image and all the second images correspond to the same s.
  • the position information in the reference coordinate system of the pixel points on the first image can be determined according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship. Specifically, a two-dimensional point (x, y) on the first image is first converted to a three-dimensional point Q(X, Y, Z) in the device coordinate system; then, based on (optimized s)·(optimized R_camera_to_ned)·Q + (optimized t), the three-dimensional coordinates of the point Q in the North-East-Down coordinate system can be calculated, thereby determining the position coordinates of the pixel points on the first image in the reference coordinate system.
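The two-step conversion just described (pixel to device-frame 3-D point, then s·R·Q + t into the reference frame) can be sketched with a pinhole back-projection. The intrinsic parameters, the known depth, and the identity pose below are all hypothetical; the patent does not specify how the device-frame 3-D point Q is obtained from (x, y).

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths and principal point (pixels)
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0

def pixel_to_ned(x, y, depth, s, R_cam_to_ned, t):
    """Lift pixel (x, y) at a known depth to the device (camera) frame,
    then map it into the reference frame via the optimised s, R, t."""
    # Q(X, Y, Z): the 3-D point in the device coordinate system
    Q = depth * np.array([(x - cx) / fx, (y - cy) / fy, 1.0])
    return s * (R_cam_to_ned @ Q) + t   # (optimized s)*(optimized R)*Q + t

# Identity rotation, unit scale, zero translation: the principal point at
# depth 10 m maps straight ahead to (0, 0, 10) in the reference frame
p = pixel_to_ned(640.0, 360.0, 10.0, 1.0, np.eye(3), np.zeros(3))
```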
  • when the first image is the first frame of image in the first image set, the optimization processing yields the optimized yaw angle (or first rotation relationship), t, and s; the yaw angle (or first rotation relationship), the translation matrix t, and the scale s are all associated with the first frame of image.
  • accordingly, the position information can be determined according to the optimized first rotation relationship, the optimized translation matrix, and the optimized scale.
  • the PnP (Perspective-n-Point) computer vision algorithm can be used to calculate the rotation matrix (relative rotation relationship) and translation matrix (relative translation relationship) of each second image relative to the first image.
  • the rotation matrix and the translation matrix can align the origins and coordinate axes of the two coordinate systems, and the scale can shrink or enlarge one of the coordinate systems so that the scales of the two coordinate systems are aligned.
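Aligning two coordinate systems with a rotation, a translation, and a scale is a similarity-transform estimation. The patent does not name a specific method; one standard closed-form choice is the Kabsch/Umeyama SVD solution, sketched here on synthetic point pairs as an illustration.

```python
import numpy as np

def align_similarity(A, B):
    """Estimate s, R, t such that B ~ s*R*A + t (Kabsch/Umeyama-style
    closed form): aligns axes and origins and matches the scales."""
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - muA, B - muB
    # Cross-covariance between the centred point sets, then its SVD
    U, S, Vt = np.linalg.svd(B0.T @ A0)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A0 ** 2)
    t = muB - s * (R @ muA)
    return s, R, t

# Synthetic check: recover a known rotation about Z, scale, and translation
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
B = 2.5 * (R_true @ A.T).T + np.array([1.0, -2.0, 0.5])
s, R, t = align_similarity(A, B)
```

On noise-free correspondences this recovers the generating s, R, t exactly, which is the alignment role the rotation matrix, translation matrix, and scale play in the text above.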
  • the first image may also be any frame of image in the image set.
  • the rotation relationship between the device coordinate system where the shooting device is located and the reference coordinate system is made more accurate, so that in the state of linear motion, the conversion of pixels to three-dimensional coordinates is more accurate.
  • the position of the pixel points on the images in the second image set may be converted based on the first rotation relationship.
  • the first rotation relationship in the embodiment of the present invention may refer to the rotation relationship obtained through the above formula 1, formula 2, and formula 3, or may be the rotation relationship obtained on that basis and then optimized through the first algorithm, such as formula 4.
  • the second image set may include the first image and an image captured again after obtaining the first rotation relationship, or the images included in the second image set are actually consistent with the images of the first image set mentioned above.
  • the position information in the reference coordinate system of the pixel on each frame of the second image set collected by the shooting device can be determined; and
  • the second image set includes the first image and at least one frame of the third image.
  • the first image may refer to the image captured by the photographing device as the first frame of the second image set.
  • the relative transformation relationship between the other images and the first image is mainly used.
  • FIG. 8 is a schematic diagram of a flow of position transformation according to an embodiment of the present invention.
  • the third image in the second image set is used for description.
  • determining, according to the first rotation relationship, the position information in the reference coordinate system of the pixel on each frame of image in the second image set collected by the photographing device may include: S801: using a visual algorithm to determine the relative transformation relationship between the third image and the first image; S802: based on the first rotation relationship and the relative transformation relationship, determining a fourth rotation relationship between the device coordinate system and the reference coordinate system when the third image is collected; S803: determining, according to the first rotation relationship, the position information of the pixel on the first image in the reference coordinate system, and determining, according to the fourth rotation relationship, the position information of the pixel on the third image in the reference coordinate system.
  • the PNP method mentioned above can be used to calculate the relative transformation relationship between the first image and the third image.
  • the relative transformation relationship includes the relative rotation relationship between the first image and the third image, and can also include the relative translation relationship between the first image and the third image.
  • the fourth rotation relationship from the device coordinate system to the reference coordinate system corresponding to the third image can be obtained.
  • the pixels on the third image can then be converted to the reference coordinate system to obtain their position coordinates in the reference coordinate system.
  • the translation relationship and scale from the device coordinate system corresponding to the third image to the reference coordinate system can be obtained directly through a visual algorithm; alternatively, they can be calculated from the relative translation relationship in the relative transformation relationship between the third image and the first image, combined with the optimized translation relationship and optimized scale of the first image.
  • the first rotation relationship and the fourth rotation relationship may be optimized first, and the position of the pixel point on the corresponding image in the reference coordinate system is determined according to the optimized first rotation relationship and the fourth rotation relationship.
  • for this process, please refer to FIG. 9.
  • S901: optimizing the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; S902: determining the position information of the pixel on the first image in the reference coordinate system according to the optimized first rotation relationship, and determining the position information of the pixel on the third image in the reference coordinate system according to the optimized fourth rotation relationship.
  • S901 may include: acquiring second posture data when the photographing device collects each frame of image in the second image set; determining, according to the second posture data, the fifth rotation relationship between the device coordinate system and the reference coordinate system when each frame of image in the second image set is collected; and optimizing the first rotation relationship and the fourth rotation relationship, so that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy the preset second minimization condition.
  • the second posture data includes the attitude angle information at the time of shooting for each frame of image in the second image set, and may specifically include the pitch angle, roll angle, and yaw angle of each frame of image at the time of shooting.
  • the second algorithm mainly uses the pan/tilt angle information of the images in the second image set at the time of shooting as a reference, and optimizes the fourth rotation relationship obtained based on the vision algorithm.
  • the pan/tilt angle information at the time of image shooting can be added as a reference for constraint, and the corresponding rotation relationship can be further optimized.
  • the basis of the optimization is that, although the pan/tilt angle information at the time of image shooting is not very accurate, it can still serve as a reference; that is, the rotation relationship optimized by the BA (bundle adjustment) visual algorithm and the rotation relationship calculated from the pan/tilt angle information cannot deviate from each other too much. This constraint pins the optimized rotation relationship near the rotation matrix provided by the pan/tilt angle information.
  • the specific optimization method refers to the following formula 5.
  • the rotation relationship deviation mainly refers to the following: the rotation relationship R_i to be optimized is constrained based on the pan/tilt angle information at the time of shooting of the i-th frame of image in the second image set; R_ref_i is calculated through the above formula 1, formula 2, and formula 3, entirely from the second posture data corresponding to the i-th frame of image.
  • the dimension-elimination coefficient in formula 5 is generally taken as the error variance of the image feature point extraction; for example, it may be equal to 4.
  • X_j represents the j-th three-dimensional point. R_1 is the first rotation relationship mentioned above, which can directly be the first rotation relationship calculated by the above formula 1, formula 2, and formula 3.
  • alternatively, R_1 may be the first rotation relationship obtained through the optimization of formula 4; in either case, the R_1 of the first image is not further optimized.
  • the R_i of each other image is obtained by optimizing formula 5 in combination with R_1; that is, with R_1 known, R_2 is optimized, and so on.
  • from the pan/tilt angle information, the fifth rotation relationship from the device coordinate system to the reference coordinate system corresponding to each frame of image in the second image set can be obtained; the fifth rotation relationship is R_ref_i in formula 5.
  • a corresponding R_ref_i is calculated for each frame, and the optimization is constrained by the posture data at the time these images were taken.
  • the manner of determining that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy the preset second minimization condition includes: determining that the sum of the first data and the second data satisfies a preset third minimization condition; wherein the first data is the sum of the reprojection errors calculated based on the pixel points on each frame of image in the second image set, and the second data is the sum of the rotation relationship deviations corresponding to each frame of image in the second image set.
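The first data term, the reprojection error, can be illustrated with a minimal pinhole sketch. The intrinsic matrix K and all coordinates below are hypothetical, and R, t here denote the pose mapping the reference frame into the camera frame (the inverse direction of R_camera_to_ned):

```python
import numpy as np

# Hypothetical pinhole intrinsic matrix
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def reproject(X, R, t):
    """Project a 3-D point X (reference frame) into the image of a camera
    with pose (R, t): x = K*(R*X + t), then dehomogenise."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

def reprojection_error(X, R, t, observed):
    # Pixel distance between the projection of X and the observation
    return np.linalg.norm(reproject(X, R, t) - observed)

X = np.array([1.0, -0.5, 10.0])
obs = reproject(X, np.eye(3), np.zeros(3))  # a perfect observation
# A 2-pixel horizontal offset produces a 2-pixel reprojection error
err = reprojection_error(X, np.eye(3), np.zeros(3), obs + np.array([2.0, 0.0]))
```

Summing such errors over all observed points in all frames gives the first data; the second data adds the rotation-deviation penalties described next.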
  • the rotation relationship deviation may be calculated by composing (dot-multiplying) the rotation relationship to be optimized with the corresponding pan/tilt rotation relationship, and then converting the result to the modulus of the angle vector.
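The deviation computation just described (compose the two rotations, then take the magnitude of the resulting rotation angle) corresponds to the geodesic distance between rotation matrices, and can be sketched as:

```python
import numpy as np

def rotation_deviation(R_opt, R_ref):
    """Angle (radians) of the residual rotation between R_opt and R_ref:
    compose one with the transpose of the other, then recover the
    rotation angle from the trace and take its magnitude."""
    R_delta = R_opt @ R_ref.T
    cos_theta = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

# A 0.3 rad rotation about Z deviates from the identity by exactly 0.3 rad
ang = 0.3
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
dev = rotation_deviation(Rz, np.eye(3))
```

Weighting this deviation (e.g. by the coefficient in formula 5) and summing it over the frames gives the second data term that keeps each optimized R_i close to its gimbal-derived R_ref_i.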
  • with this optimization, the subsequent images can also be handled more accurately: in S902, the position information in the reference coordinate system of the pixel on the third image in the second image set can be determined more accurately, and an insufficiently accurate first rotation relationship does not have a big impact.
  • the rotation relationship between the device coordinate system where the shooting device is located and the reference coordinate system is made more accurate, so that in the state of linear motion, the conversion of pixels to three-dimensional coordinates is more accurate.
  • the accuracy of the rotation relationship corresponding to the subsequent images captured by the photographing device is also better guaranteed, thereby ensuring that tasks such as environmental monitoring and mapping can be better completed.
  • FIG. 10 is a schematic structural diagram of a data processing device for a photographing device according to an embodiment of the present invention.
  • the device is set in an image processing device, and the image processing device can obtain posture data of the photographing device.
  • the motion state of the mobile platform is a linear motion state.
  • the image processing device may be a smart device capable of processing related content in each of the foregoing embodiments.
  • the device includes the following structure.
  • the acquiring module 1001 is used to acquire the first posture data when the shooting device collects the first image; the determining module 1002 is used to determine the equipment of the shooting device when the first image is collected according to the first posture data A first rotation relationship between a coordinate system and a reference coordinate system; the processing module 1003 is configured to determine the position information of a pixel on the first image in the reference coordinate system according to the first rotation relationship.
  • the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
  • the photographing device is provided on a pan-tilt of the mobile platform.
  • the determining module 1002 is specifically configured to obtain the second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system according to the first posture data, and obtain the The third rotation relationship between the device coordinate system of the photographing device and the pan-tilt coordinate system; according to the second rotation relationship and the third rotation relationship, it is determined that the device coordinate system is The first rotational relationship between reference coordinate systems.
  • the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
  • the processing module 1003 is specifically configured to perform optimization processing on the first rotation relationship according to a preset first algorithm; and determine the first image on the first image according to the optimized first rotation relationship The position information of the pixel in the reference coordinate system.
  • the first attitude data includes a pitch angle, a roll angle, and a yaw angle; the optimization processing of the first rotation relationship includes an optimization processing on the yaw angle.
  • the processing module 1003 is specifically configured to: obtain, for each frame of image in the first image set collected by the photographing device, the center coordinates of the device coordinate system at the time of collection and the first coordinates of the center coordinates in the reference coordinate system determined by using sensor data; and optimize the first rotation relationship based on the center coordinates and the first coordinates, so that the difference between the second coordinate of each center coordinate in the reference coordinate system, determined based on the optimized first rotation relationship, and the corresponding first coordinate meets a preset first minimization condition; wherein the first image set includes the first image and at least one frame of second image.
  • the second image is an image adjacent to the first image in acquisition time.
  • the processing module 1003 is specifically configured to optimize, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
  • the processing module 1003 is specifically configured to determine the position information of the pixels on the first image in the reference coordinate system according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship.
  • the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
  • the processing module 1003 is configured to determine, according to the first rotation relationship, the pixel point on each frame of the second image set collected by the shooting device in the reference coordinate system Location information; wherein the second image set includes the first image and at least one frame of the third image.
  • the first image is the first frame of image in the second image set.
  • the processing module 1003 is specifically configured to: use a visual algorithm to determine the relative transformation relationship between the third image and the first image; determine, based on the first rotation relationship and the relative transformation relationship, a fourth rotation relationship between the device coordinate system and the reference coordinate system when the third image is acquired; determine the position information of the pixel on the first image in the reference coordinate system according to the first rotation relationship; and determine the position information of the pixel on the third image in the reference coordinate system according to the fourth rotation relationship.
  • the processing module 1003 is specifically configured to perform optimization processing on the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; according to the optimized first rotation relationship, Determine the position information of the pixel on the first image in the reference coordinate system, and determine the position of the pixel on the third image in the reference coordinate system according to the optimized fourth rotation relationship information.
  • the processing module 1003 is specifically configured to obtain second posture data when the shooting device collects each frame of the second image set; according to the second posture data, determine the The fifth rotation relationship between the device coordinate system and the reference coordinate system when each frame of the image in the second image set is collected; the first rotation relationship and the fourth rotation relationship are optimized to The difference between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationship satisfy the preset second minimization condition.
  • the processing module 1003 is specifically configured to determine that the sum between the first data and the second data satisfies a preset third minimization condition; wherein, the first data is based on the second The sum of the reprojection errors calculated from the pixels on each frame of the image in the image set, the second data is the sum of the rotation relation deviations corresponding to each frame of the image in the second image set, the rotation relation deviation Refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
  • for the specific implementation of each module in the device, reference may be made to the description of the related content in the foregoing embodiments, which is not repeated here.
  • the rotation relationship between the device coordinate system where the photographing device is located and the reference coordinate system is determined, and the posture data of the photographing device when the image is collected is referred to.
  • in this way, the rotation relationship between the equipment coordinate system and the reference coordinate system can be determined more accurately, better ensuring the accuracy of the conversion from pixel points to three-dimensional coordinates in the state of linear motion, so as to better accomplish tasks such as environmental monitoring and mapping.
  • FIG. 11 is a schematic structural diagram of an image processing device according to an embodiment of the present invention.
  • the image processing device is a smart device.
  • the image processing device can obtain the posture data of the shooting device, and the shooting device is set on a mobile platform.
  • the motion state of the mobile platform is a linear motion state.
  • the image processing device is a device on the ground, which can receive images captured by a photographing device via an aircraft, and can also receive the posture data of the photographing device at the time each image was taken, such as the angle information of the pan/tilt on which the camera is mounted.
  • the photographing device may also be mounted on a mobile platform, and the data required for image processing can be obtained by connecting with the photographing device, pan-tilt and other devices of the mobile platform.
  • the image processing device itself can also be used as a component of the mobile platform for connecting with shooting devices, pan-tilt and other devices.
  • the image processing device includes: a communication interface unit 1101, a processing unit 1102, and other structures such as a power supply module and a housing; as needed, the image processing device may further include a user interface unit 1103 and a storage unit 1104.
  • the user interface unit 1103 may be, for example, a touch screen, which can obtain instructions from the user and can present to the user data such as the corresponding original data (e.g. received image data) and processed data (e.g. an environmental monitoring map produced after image processing).
  • the storage unit 1104 may include volatile memory, such as random-access memory (RAM); the storage unit 1104 may also include non-volatile memory, such as flash memory or a solid-state drive (SSD); the storage unit 1104 may also include a combination of the foregoing types of memories.
  • the processing unit 1102 may be composed of a central processing unit (CPU).
  • the processing unit 1102 may also include a hardware chip.
  • the above-mentioned hardware chip may be, for example, an application-specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the PLD may be, for example, a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), etc.
  • the storage unit 1104 may be used to store some data, such as the image data of the environment mentioned above, processed drawing data, etc., and the storage unit 1104 may also be used to store program instructions.
  • the processing unit 1102 can call the program instructions to implement the corresponding functions and steps in the foregoing embodiments.
  • the communication interface unit 1101 is used to communicate with an external device to obtain data of the external device; the processing unit 1102 is used to: obtain, through the communication interface unit 1101, the first posture data of the photographing device when the first image is collected; determine, according to the first posture data, the first rotation relationship between the device coordinate system of the photographing device and the reference coordinate system when the first image is collected; and determine, according to the first rotation relationship, the position information of the pixel on the first image in the reference coordinate system.
  • the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
  • the photographing device is provided on a pan-tilt of the mobile platform.
  • the processing unit 1102 is configured to obtain a second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system according to the first posture data, and obtain the photographing The third rotation relationship between the device coordinate system of the device and the pan-tilt coordinate system; according to the second rotation relationship and the third rotation relationship, it is determined that the device coordinate system and the reference when collecting the first image The first rotational relationship between coordinate systems.
  • the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
  • the processing unit 1102 is configured to perform optimization processing on the first rotation relationship according to a preset first algorithm, and determine the position information of the pixels on the first image in the reference coordinate system according to the optimized first rotation relationship.
  • the first attitude data includes a pitch angle, a roll angle, and a yaw angle; the optimization processing of the first rotation relationship includes an optimization processing on the yaw angle.
  • the processing unit 1102 is configured to: obtain, for each frame of image in the first image set collected by the photographing device, the center coordinates of the device coordinate system at the time of collection and the first coordinates of the center coordinates in the reference coordinate system determined by using sensor data; and optimize the first rotation relationship based on the center coordinates and the first coordinates, so that the difference between the second coordinate of each center coordinate in the reference coordinate system, determined based on the optimized first rotation relationship, and the first coordinate meets a preset first minimization condition; wherein the first image set includes the first image and at least one frame of second image.
  • the second image is an image adjacent to the first image in acquisition time.
  • the processing unit 1102 is configured to optimize, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
  • the processing unit 1102 is configured to determine the position information of the pixels on the first image in the reference coordinate system according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship.
  • the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
  • the processing unit 1102 is configured to determine, according to the first rotation relationship, the pixel point on each frame of the second image set collected by the shooting device in the reference coordinate system. Location information; wherein the second image set includes the first image and at least one frame of the third image.
  • the first image is the first frame of image in the second image set.
  • the processing unit 1102 is configured to: use a visual algorithm to determine the relative transformation relationship between the third image and the first image; determine, based on the first rotation relationship and the relative transformation relationship, the fourth rotation relationship between the device coordinate system and the reference coordinate system when the third image is collected; determine the position information of the pixel on the first image in the reference coordinate system according to the first rotation relationship; and determine the position information of the pixel on the third image in the reference coordinate system according to the fourth rotation relationship.
  • the processing unit 1102 is configured to perform optimization processing on the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; determine according to the optimized first rotation relationship The position information of the pixel on the first image in the reference coordinate system, and the position information of the pixel on the third image in the reference coordinate system is determined according to the optimized fourth rotation relationship .
  • the processing unit 1102 is configured to obtain second posture data when the shooting device collects each frame of the second image set; determine, according to the second posture data, a fifth rotation relationship between the device coordinate system and the reference coordinate system when each frame of image in the second image set is collected; and perform optimization processing on the first rotation relationship and the fourth rotation relationship so that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship, on the one hand, and the corresponding fifth rotation relationships, on the other, satisfy a preset second minimization condition.
  • the processing unit 1102 is configured to determine that the sum of first data and second data satisfies a preset third minimization condition; wherein the first data is the sum of the reprojection errors calculated from the pixel points on each frame of image in the second image set, the second data is the sum of the rotation-relationship deviations corresponding to each frame of image in the second image set, and a rotation-relationship deviation refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
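The third minimization condition above combines two terms: the sum of reprojection errors ("first data") and the sum of rotation-relationship deviations ("second data"). The sketch below illustrates such a combined cost; the squared Frobenius-norm deviation measure and the `weight` balance term are assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def combined_cost(reproj_errors, rot_estimates, rot_priors, weight=1.0):
    # First data: sum of squared reprojection errors over all frames.
    first = sum(float(e) ** 2 for e in reproj_errors)
    # Second data: deviation between each estimated rotation (the first or
    # fourth rotation relationship) and its prior from posture data (the
    # fifth rotation relationship), measured here by the Frobenius norm.
    second = sum(np.linalg.norm(Re - Rp, ord="fro") ** 2
                 for Re, Rp in zip(rot_estimates, rot_priors))
    return first + weight * second

# With a perfect rotation prior, only the reprojection term remains.
cost = combined_cost([1.0, 2.0], [np.eye(3)], [np.eye(3)])
```

An optimizer would adjust the rotation estimates (and the scene points behind the reprojection errors) to drive this combined cost toward its minimum.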
  • when determining the rotation relationship between the device coordinate system where the photographing device is located and the reference coordinate system, the posture data of the photographing device at the time the image was collected is taken into account; this allows the rotation relationship between the device coordinate system and the reference coordinate system to be determined more accurately in a linear motion state, better ensuring the accuracy of the conversion from pixel points to three-dimensional coordinates, so that tasks such as environmental monitoring and mapping can be better accomplished.
  • the program can be stored in a computer-readable storage medium; when the program is executed, the procedures of the above-mentioned method embodiments may be included.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

A data processing method related to a photographing device, the method being applied to an image processing device. The image processing device may obtain attitude data of the photographing device. The photographing device is disposed on a mobile platform, and the motion state of the mobile platform is a linear motion state. The method comprises: obtaining first attitude data when the photographing device collects a first image (S401); determining, according to the first attitude data, a first rotation relationship between a device coordinate system of the photographing device and a reference coordinate system when the first image is collected (S402); and determining, according to the first rotation relationship, location information of a pixel point in the first image in the reference coordinate system (S403). Embodiments of the present invention further provide a data processing apparatus related to the photographing device, and the image processing device. In the linear motion state, the accuracy of converting a pixel point to three-dimensional coordinates can be ensured, thereby better implementing tasks such as environment monitoring and mapping.

Description

Data processing method and apparatus related to a photographing device, and image processing device

Technical field
The present invention relates to the field of image processing technology, and in particular to a data processing method and apparatus related to a photographing device, and an image processing device.
Background
With the development of technology, mobile platforms such as unmanned aerial vehicles (UAVs) have come into wide use. A photographing device can be mounted on a mobile platform through a gimbal or other equipment, so that while the mobile platform moves, the photographing device captures images of the target environment; three-dimensional reconstruction is then performed based on the images to realize functions such as environmental surveying and mapping.
At present, three-dimensional reconstruction using images captured by photographing devices includes the following steps: based on the images, Structure from Motion (SFM) technology is used to recover the correct spatial pose of each photographing device, that is, the pose of the photographing device in a certain reference coordinate system, including the position and angle information of the shot; the angle information may be, for example, the pitch angle, roll angle, and yaw angle. Then, starting from the pose of the photographing device in the reference coordinate system, the coordinates of the three-dimensional points in real space corresponding to the pixel points on the images are computed, completing the three-dimensional reconstruction of the environment, and further completing tasks such as Simultaneous Localization and Mapping (SLAM).
The reference coordinate system related to surveying and mapping is generally the Earth-centered, Earth-fixed (ECEF) geodetic coordinate system, or a geographic coordinate system such as the East-North-Up (ENU) or North-East-Down (NED) coordinate system established from a known point on the ground. Current mainstream 3D modeling software can, based on images taken at positions in multiple directions, correctly recover the pose of the photographing device in a reference coordinate system such as a geodetic or geographic coordinate system, and thereby determine the relationship between the device coordinate system and the reference coordinate system.
Research has found that when the mobile platform moves in a straight line or an approximately straight line, for example when a UAV flies a single survey strip, the relationship between the device coordinate system and the reference coordinate system finally obtained from the images captured during the linear motion and SFM is not accurate enough, so the coordinate systems cannot be accurately aligned and the coordinate position conversion of pixel points has a large error.
Summary of the invention
The embodiments of the present invention provide a data processing method and apparatus related to a photographing device, and an image processing device, which can more accurately determine the rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the mobile platform is in a linear motion state, and thereby more accurately complete the coordinate-system position conversion of pixel points.
In one aspect, an embodiment of the present invention provides a data processing method related to a photographing device, the method being applied to an image processing device capable of obtaining posture data of the photographing device, wherein the photographing device is disposed on a mobile platform and the motion state of the mobile platform is a linear motion state; the method includes:
obtaining first posture data when the photographing device collects a first image;
determining, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected;
determining, according to the first rotation relationship, position information of pixel points on the first image in the reference coordinate system.
In another aspect, an embodiment of the present invention further provides a data processing apparatus related to a photographing device, the apparatus being applied to an image processing device capable of obtaining posture data of the photographing device, wherein the photographing device is disposed on a mobile platform and the motion state of the mobile platform is a linear motion state; the apparatus includes:
an obtaining module, configured to obtain first posture data when the photographing device collects a first image;
a determining module, configured to determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected;
a processing module, configured to determine, according to the first rotation relationship, position information of pixel points on the first image in the reference coordinate system.
In yet another aspect, an embodiment of the present invention further provides an image processing device capable of obtaining posture data of a photographing device, wherein the photographing device is disposed on a mobile platform and the motion state of the mobile platform is a linear motion state; the image processing device includes a communication interface unit and a processing unit;
the communication interface unit is configured to communicate with external devices and obtain data from the external devices;
the processing unit is configured to obtain, through the communication interface unit, first posture data when the photographing device collects a first image; determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected; and determine, according to the first rotation relationship, position information of pixel points on the first image in the reference coordinate system.
In the embodiments of the present invention, when the mobile platform is in a linear motion state, the rotation relationship between the device coordinate system where the photographing device is located and the reference coordinate system is determined with reference to the posture data of the photographing device at the time the image was collected. This allows the rotation relationship between the device coordinate system and the reference coordinate system to be determined more accurately in the linear motion state, better ensuring the accuracy of the conversion from pixel points to three-dimensional coordinates, so as to better realize tasks such as environmental monitoring and mapping.
Description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a task scene in which a UAV carries a photographing device according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the North-East-Down coordinate system according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the relationship between the image coordinate system and the device coordinate system according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of a data processing method related to a photographing device according to an embodiment of the present invention;

FIG. 5 is a schematic flowchart of determining a rotation relationship according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of multi-strip motion of a mobile platform and single-strip motion in a linear state according to an embodiment of the present invention;

FIG. 7 is a schematic flowchart of obtaining a rotation relationship through optimization according to an embodiment of the present invention;

FIG. 8 is a schematic flowchart of coordinate position conversion according to an embodiment of the present invention;

FIG. 9 is a schematic flowchart of performing rotation relationship optimization and coordinate position conversion according to an embodiment of the present invention;

FIG. 10 is a schematic structural diagram of a data processing apparatus related to a photographing device according to an embodiment of the present invention;

FIG. 11 is a schematic structural diagram of an image processing device according to an embodiment of the present invention.
Detailed description
In the embodiments of the present invention, the photographing device used to capture images may be mounted directly on the mobile platform, or mounted on the mobile platform through a gimbal. While the mobile platform moves, the photographing device can capture multiple frames of images of the current environment, and for each frame of image, the posture data at the time the image was captured can be read; the posture data may include the pitch angle, roll angle, and yaw angle of the photographing device at the time of shooting. When it is determined that the mobile platform is currently in a linear motion state, the corresponding image processing device can be triggered to use the posture data of the images as intermediate data to convert among the device coordinate system, the gimbal coordinate system, and the reference coordinate system, converting points on the images captured by the photographing device into the reference coordinate system and determining the three-dimensional points corresponding to those points in the reference coordinate system, thereby realizing tasks such as SLAM-based environment mapping.
Please refer to FIG. 1, which shows a scene in which a UAV carries a photographing device to acquire images and then performs corresponding image processing to accomplish tasks such as SLAM. The scene includes a UAV 101 and a photographing device 102 provided at the bottom of the UAV 101; in one embodiment, the UAV 101 may be provided with a gimbal, and the photographing device 102 may be mounted on the gimbal. The UAV may be a rotary-wing UAV as shown in FIG. 1, such as a quadrotor, hexarotor, or octorotor; in some embodiments, it may also be a fixed-wing aircraft. The gimbal mainly refers to a three-axis gimbal, which can rotate in the pitch, roll, and yaw directions, thereby driving the photographing device to capture images in different orientations. Besides UAVs, in other embodiments, the mobile platform may also be a self-driving car traveling on land, an intelligent mobile robot, or the like; through the gimbal and photographing device on the self-driving car or intelligent robot, images in different orientations can likewise be captured as needed. Based on the images captured by the photographing device on the UAV and the posture data at the time the images were captured, the image processing device 100 on the ground can perform a series of data processing according to the embodiments of the present invention. In FIG. 1, the UAV 101 and the image processing device 100 are designed as separate devices; in other embodiments, the image processing device 100 may also be mounted on the UAV, receiving the images captured by the photographing device 102 and reading the posture data; or the image processing device 100 itself, as a component of the UAV, may be directly or indirectly connected to the photographing device 102 and the gimbal to obtain the images and posture data.
Please refer to FIGS. 2 to 4, which relate to the coordinate systems involved in the embodiments of the present invention. FIG. 2 shows the North-East-Down coordinate system, which belongs to the geographic coordinate systems and is one kind of reference coordinate system in the embodiments of the present invention. The positive X-axis of the North-East-Down coordinate system points to geographic North, the positive Y-axis points to geographic East, and the positive Z-axis points Down toward the ground.
FIG. 3 shows the relationship between the device coordinate system and the image coordinate system. The XY plane of the device coordinate system is parallel to the image plane, the Z-axis of the device coordinate system is the principal axis of the camera, and the origin O is the projection center (optical center); the device coordinate system is three-dimensional. Meanwhile, in FIG. 3, the origin O1 of the image coordinate system in which the image plane lies is the intersection of the camera's principal axis and the image plane, also called the principal point; the distance between point O and point O1 is the focal length f, and the imaging-plane coordinate system is two-dimensional. It should be understood that FIG. 3 is only illustrative; for example, in some embodiments, due to camera manufacturing, the principal point is usually not at the exact center of the imaging plane. As can be seen from FIG. 3, a point (x, y) in the image coordinate system corresponds to a point Q = (X, Y, Z) in the device coordinate system.
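The relation between a point (x, y) in the image coordinate system and the corresponding point Q = (X, Y, Z) in the device coordinate system can be sketched with the ideal pinhole model below. This assumes the principal point sits exactly at O1, which, as noted above, real cameras only approximate.

```python
def project(X, Y, Z, f):
    # Device-frame point Q = (X, Y, Z) -> image-plane point (x, y),
    # by similar triangles with focal length f (principal point at O1).
    return (f * X / Z, f * Y / Z)

def back_project(x, y, f, depth):
    # An image point (x, y) only fixes the ray through (x, y, f);
    # a known depth Z is needed to recover the device-frame point.
    return (x * depth / f, y * depth / f, depth)

x, y = project(2.0, 4.0, 10.0, 5.0)   # -> (1.0, 2.0)
Q = back_project(x, y, 5.0, 10.0)     # -> back to (2.0, 4.0, 10.0)
```

Note that projection loses the depth, which is why the later steps need additional information (image overlap, posture data) to place pixels in three-dimensional space.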
Based on the above illustrations and descriptions of the coordinate systems, please refer to FIG. 4, which is a schematic flowchart of a data processing method related to a photographing device according to an embodiment of the present invention. The method of the embodiment of the present invention may be executed by the aforementioned image processing device, which may be, for example, a smart mobile terminal, a tablet computer, a personal computer, a notebook computer, or the like. The image processing device can obtain the posture data of the photographing device; the photographing device is disposed on a mobile platform, and the image processing device can obtain the posture data of the photographing device through the mobile platform. In one embodiment, if the motion state of the mobile platform is a linear motion state (that is, the successive displacement points of the mobile platform lie essentially on a straight line), the image processing device obtains, in S401, first posture data when the photographing device collects a first image. While the mobile platform moves, the photographing device may capture multiple frames of images to form an image set, and the first image may be any one frame among the multiple frames.
When the photographing device captures the first image and other images, the corresponding posture data of the photographing device can be recorded. The posture data is obtained by processing the sensing data of sensors such as gyroscopes, and may include the roll angle, pitch angle, and yaw angle of the photographing device at the time the image was captured. In one embodiment, if the photographing device is mounted on the mobile platform through a gimbal, then during the capture of the first image, the gimbal angle information determined based on the sensing data of sensors provided on the gimbal may be used as the first posture data and recorded for the first image; the gimbal angle information includes the roll angle, pitch angle, and yaw angle. If the photographing device is instead fixed directly on a mobile platform such as a UAV, the platform angle data determined based on the sensing data of sensors on the mobile platform may be used as the first posture data and recorded for the first image; in this case the first posture data includes the roll angle, pitch angle, and yaw angle of the mobile platform. The process of converting from the gimbal coordinate system, in which the sensors on the gimbal are located, to the reference coordinate system (a geographic or geodetic coordinate system, etc.) is the same as the process of converting from the coordinate system of the mobile platform's sensors to the reference coordinate system. On this basis, the gimbal coordinate system is used as an example below to describe the conversion between the device coordinate system of the photographing device and the reference coordinate system.
The gimbal angle information may be recorded in an extension field of the corresponding environment image, so that the image processing device can obtain it directly when needed. The photographing device may automatically obtain the gimbal angle information from the gimbal, or the gimbal may actively send the gimbal angle information to the photographing device, so that after capturing an image the photographing device records the corresponding gimbal angle information for that image. In the embodiment of the present invention, the first posture data includes the gimbal angle information corresponding to the gimbal when the first image is collected. The gimbal angle information can be read directly from the extension field of the first image.
The embodiment of the present invention specifically triggers the execution of S401 when the motion state of the mobile platform is a linear motion state. In the embodiments of the present invention, there are multiple ways to detect whether the motion state of the mobile platform is a linear motion state. In one embodiment, it can be determined whether the movement route planned for the mobile platform is a straight line; for example, when a UAV flies based on a set flight trajectory, it is detected whether the trajectory over the current period of time is a straight line, and if so, the mobile platform is determined to be in a linear motion state. As another example, when the user plans to manually control the mobile platform to move along a straight line or an approximately straight line, an instruction can be manually triggered and sent to the image processing device, the instruction indicating that the mobile platform is in a linear motion state. That is, saying that the motion state of the mobile platform is a linear motion state mainly means that the mobile platform has a linear motion state; it does not require that the mobile platform move exactly along a straight line. In actual processing, when the mobile platform moves in a straight line or an approximately straight line, or even moves along a curve but an instruction from the user indicating that the mobile platform is in a linear motion state is received, the motion state of the mobile platform can be considered to be a linear motion state.
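As one concrete way to test whether the recorded displacement points lie essentially on a straight line, the trajectory can be fitted by principal component analysis and the residual spread compared against the dominant direction. The function and the `tol` threshold below are illustrative assumptions, not part of the patent.

```python
import numpy as np

def is_linear_trajectory(points, tol=0.05):
    # points: (N, 2) or (N, 3) array of recorded platform positions.
    # The trajectory is treated as linear when almost all positional
    # variance lies along a single principal direction.
    P = np.asarray(points, dtype=float)
    P = P - P.mean(axis=0)
    sv = np.linalg.svd(P, compute_uv=False)  # singular values, descending
    return bool(sv[1] <= tol * sv[0])

straight = is_linear_trajectory([[i, 2.0 * i, 0.0] for i in range(10)])
square = is_linear_trajectory([[0, 0], [1, 0], [1, 1], [0, 1]])
```

A check like this could run alongside the planned-route and manual-instruction methods described above.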
The gimbal angle information of the first image included in the first posture data can be regarded as the clockwise rotation angles of the gimbal relative to the X-axis, Y-axis, and Z-axis of the North-East-Down coordinate system; from the rotations about these three axes, the rotation relationship from the gimbal coordinate system to the North-East-Down (NED) coordinate system can be obtained. That is, in S402 the image processing device determines, according to the first posture data, the first rotation relationship between the device coordinate system of the photographing device and the reference coordinate system when the first image is collected. The reference coordinate system includes a geographic coordinate system or a geodetic coordinate system; in the embodiments of the present invention, the North-East-Down coordinate system shown in FIG. 2 is taken as the reference coordinate system for description. It should be understood that the relative relationships between some currently known coordinate systems, such as rotation relationships and translation relationships, can be determined through technical means, and that based on these relative relationships, alignment between different coordinate systems and conversion of position coordinates can be achieved. In the embodiment of the present invention, when aligning the device coordinate system of the photographing device with the reference coordinate system, the first posture data, that is, the gimbal angle information of the gimbal, is referred to. One specific implementation of S402 is shown in FIG. 5 and may include the following steps.
In S501, the second rotation relationship between the gimbal coordinate system of the gimbal and the reference coordinate system is obtained according to the first posture data, and the third rotation relationship from the device coordinate system of the photographing device to the gimbal coordinate system is obtained.
Let the angles corresponding to the roll angle, pitch angle, and yaw angle in the gimbal angle information be θr, θp, and θy, respectively. Then the rotation matrix from the gimbal coordinate system to the North-East-Down coordinate system (the reference coordinate system), that is, the second rotation relationship, is:
[Formula 1: rotation matrix R_gimbal_to_ned, built from the rotations by θr, θp, and θy about the three axes]
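Since the matrix image for Formula 1 appears only as a figure in the published text, the sketch below builds R_gimbal_to_ned from the three gimbal angles using the common Z-Y-X (yaw-pitch-roll) composition; the exact axis order and sign convention are assumptions and should be checked against the original formula.

```python
import numpy as np

def rot_x(a):  # rotation by angle a about the X (North / roll) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):  # rotation about the Y (East / pitch) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):  # rotation about the Z (Down / yaw) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def gimbal_to_ned(theta_r, theta_p, theta_y):
    # Assumed Z-Y-X order: yaw, then pitch, then roll.
    return rot_z(theta_y) @ rot_y(theta_p) @ rot_x(theta_r)

R = gimbal_to_ned(0.0, 0.0, np.pi / 2)  # pure 90-degree yaw
```

Whatever the convention, the result is an orthonormal rotation matrix, which is what Formula 3 below composes with R_camera_to_gimbal.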
The third rotation relationship can be determined based on the assembly relationship between the photographing device and the gimbal. The roll angle, pitch angle, and yaw angle in the gimbal angle information correspond to the X-axis, Y-axis, and Z-axis of the gimbal coordinate system, respectively. From the orientations of the axes of the device coordinate system and the gimbal coordinate system, the rotation matrix from the device coordinate system to the gimbal coordinate system, that is, the third rotation relationship, is given by Formula 2 below. In one embodiment, the third rotation relationship is configured directly according to the assembly relationship between the photographing device and the gimbal, and is read and used directly when determining the first rotation relationship.
[Formula 2: rotation matrix R_camera_to_gimbal, a fixed matrix determined by the axis orientations of the device coordinate system and the gimbal coordinate system]
According to Formula 1 and Formula 2 above, in S502 the first rotation relationship between the device coordinate system and the reference coordinate system when the first image is collected is determined according to the second rotation relationship and the third rotation relationship; that is, the rotation matrix from the device coordinate system to the North-East-Down coordinate system, i.e., the first rotation relationship, is:
R_camera_to_ned = R_gimbal_to_ned * R_camera_to_gimbal    Formula 3;
If the gimbal angle information recorded with the image is perfectly accurate, each device coordinate system can be correctly aligned to the NED coordinate system. After the accurate rotation matrix R_camera_to_ned is obtained, the image processing device determines, in S403, the position information of the pixels of the first image in the reference coordinate system according to the first rotation relationship. For a given two-dimensional point on the first image, the point can be converted from the image coordinate system of the first image to the device coordinate system based on the relationship between the image coordinate system and the device coordinate system described above, and then converted from the device coordinate system to the NED coordinate system based on R_camera_to_ned together with the translation matrix t and the scaling relationship, namely the scale s, between the device coordinate system and the NED coordinate system. Here, the translation matrix t can be calculated from the overlapping portions of multiple frames of images, and the scale s represents the scaling relationship between the device coordinate system and the reference coordinate system: the device coordinate system needs to be scaled by s in order to be aligned with a reference coordinate system such as NED. For example, after the two-dimensional point (x, y) on the first image is converted into the device coordinate system as a three-dimensional point Q(X, Y, Z), the three-dimensional coordinates of Q in the NED coordinate system are obtained as s*R_camera_to_ned*Q + t.
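As a minimal numeric sketch of Formula 3 and the point conversion just described (the Euler-angle convention, the identity camera-to-gimbal rotation, and the values of s and t are illustrative assumptions, not values from the patent):

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

# Assumed example inputs: gimbal-to-NED rotation (from the gimbal angle
# information, Formulas 1-2) and a camera rigidly aligned with the gimbal.
R_gimbal_to_ned = rot_z(np.deg2rad(30))
R_camera_to_gimbal = np.eye(3)

# Formula 3: compose the two rotations.
R_camera_to_ned = R_gimbal_to_ned @ R_camera_to_gimbal

# Convert a 3D point Q (a pixel back-projected into the device frame) to NED.
s = 2.0                                # scale between device and NED frames
t = np.array([100.0, 50.0, -20.0])     # translation, e.g. from image overlap
Q = np.array([1.0, 0.0, 0.0])
Q_ned = s * R_camera_to_ned @ Q + t    # s * R_camera_to_ned * Q + t
```

The composition order matters: R_camera_to_gimbal is applied first, then R_gimbal_to_ned, matching the right-to-left matrix product in Formula 3.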
After the first rotation relationship of the first image is obtained, the position information in the reference coordinate system of points on other images (in particular, a second image adjacent to the first image in acquisition time, for example the image captured immediately after the first image) can also be obtained based on the first rotation relationship. Specifically, the relative rotation relationship between the first image and the second image can be calculated based on the overlapping image region of the two images and/or the gimbal angle information corresponding to the two images. From the first rotation relationship and the relative rotation relationship, the rotation relationship between the device coordinate system and the reference coordinate system for the second image can be obtained, and from that rotation relationship the position information of the pixels of the second image in the reference coordinate system can be determined. In one embodiment, the translation relationship between the device coordinate system and the reference coordinate system for the second image can likewise be obtained from the translation relationship corresponding to the first image and the relative translation relationship between the first image and the second image, while the scaling relationship s of the second image is the same as that of the first image.
The embodiments of the present invention are described taking the NED coordinate system as an example; other types of reference coordinate systems are processed in the same way. In other words, other types of reference coordinate systems have known conversion relationships with the NED coordinate system. Once R_camera_to_ned can be determined, the rotation matrix from the device coordinate system to another coordinate system, such as the East-North-Up coordinate system (another geographic coordinate system), can further be obtained based on the conversion relationship between that coordinate system and the NED coordinate system; the rotation matrix is used to characterize the rotation relationship between coordinate systems.
In one embodiment, if the motion state of the mobile platform is not a linear motion state, for example when it is determined that the UAV is moving along multiple flight strips, the first rotation relationship corresponding to the first image and the related rotation relationships of the other images can be determined by existing image-based methods. In one embodiment, as shown in Fig. 6, the movement routes of a UAV in single-strip and multi-strip motion are illustrated; the black dots indicate waypoints planned automatically when performing a task or set manually by the user, and connecting these waypoints constitutes the flight route of the UAV.
In the multi-strip case, the rotation relationship, translation relationship, and scale corresponding to the images can be computed directly with a vision algorithm. Specifically, SFM or SLAM is performed over the multiple strips. Since the computed visual coordinate centers and the GPS centers of the photographing device at the moment each frame was captured lie on multiple straight lines (see Fig. 6), the formula below can be used to solve for a correct similarity transformation matrix, i.e., to obtain the rotation matrix R, the translation matrix t, and the scale s. In the formula below, n is the number of image frames captured during the multi-strip motion, y_i denotes the actual center coordinates of the photographing device in the reference coordinate system, and x_i denotes the center coordinates of the device coordinate system of the photographing device.
min over s, R, t of: Σ_{i=1}^{n} || y_i − (s * R * x_i + t) ||²
After this similarity transformation matrix is obtained, the device coordinate system of the photographing device can be aligned to a reference coordinate system such as NED using it. For a single flight strip, one degree of freedom is missing, so solving for the similarity transformation matrix becomes degenerate: if a given rotation matrix R satisfies the condition, then any new rotation matrix obtained by further rotating about the straight line on which the single strip lies also satisfies the condition, and each such R corresponds to its own t. There are therefore infinitely many solutions, i.e., a correct similarity transformation matrix cannot be determined theoretically. Consequently, in the single-strip case shown in Fig. 6, the first rotation relationship must be calculated in the manner described above with the help of the attitude data of the photographing device, i.e., the gimbal angle information, mentioned in the above embodiments of the present invention, so that the device coordinate system can be correctly aligned to a reference coordinate system such as NED.
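The least-squares similarity transform above has a standard closed-form solution (the Umeyama method, one common way to solve this minimization; the patent does not prescribe a specific solver). A sketch, assuming noise-free corresponding centers:

```python
import numpy as np

def solve_similarity(x, y):
    """Solve min over s, R, t of sum_i ||y_i - (s*R*x_i + t)||^2 (Umeyama).

    x, y: (n, 3) arrays of corresponding center coordinates
    (device-frame centers and reference-frame centers, respectively)."""
    n = len(x)
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mu_x, y - mu_y
    cov = yc.T @ xc / n                      # cross-covariance of the two sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # enforce a proper rotation, det(R) = +1
    R = U @ S @ Vt
    var_x = (xc ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_x
    t = mu_y - s * R @ mu_x
    return s, R, t
```

Note the degeneracy discussed above: if all x_i are collinear (a single flight strip), the cross-covariance is rank-1 and R is not uniquely determined; the sketch assumes centers spread over multiple strips (or otherwise in general position).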
In the embodiments of the present invention, when the mobile platform is in a linear motion state, the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system is determined with reference to the attitude data of the photographing device at the time the images were captured. This makes it possible to determine the rotation relationship between the device coordinate system and the reference coordinate system relatively accurately in the linear motion state, better ensuring the accuracy of the conversion from pixels to three-dimensional coordinates in that state, and thus better supporting tasks such as environmental monitoring and mapping.
As described above, R_camera_to_ned is calculated on the basis that the Roll, Pitch, and Yaw angles are accurate values. In some scenarios, the Roll and Pitch angles in the gimbal angle information are fairly accurate and can be used directly, but the Yaw angle may still have a large deviation. In that case, when performing S403, the image processing device may specifically optimize the first rotation relationship according to a preset optimization algorithm, and determine the position information of the pixels of the first image in the reference coordinate system based on the optimized first rotation relationship.
The optimization algorithm may be a minimization algorithm. In one embodiment, S403 above may include: optimizing the first rotation relationship according to a preset first algorithm; and determining, according to the optimized first rotation relationship, the position information of the pixels of the first image in the reference coordinate system. The idea of the first algorithm is to make the sum of the differences between the second coordinates, in the reference coordinate system, of the center coordinates corresponding to each image determined based on the optimized first rotation relationship, and the first coordinates of the photographing device in the reference coordinate system determined based on sensor data, satisfy a preset first minimization condition, where the center coordinates refer to the center coordinates of the device coordinate system. As described above, the first attitude data includes the pitch angle, roll angle, and yaw angle; since the Roll and Pitch angles in the first attitude data are considered accurate, optimizing the first rotation relationship consists in optimizing the yaw angle.
In one embodiment, as shown in Fig. 7, optimizing the first rotation relationship according to the preset first algorithm includes: S701: for each frame of a first image set collected by the photographing device, acquiring the center coordinates of the device coordinate system at the time of capture and the first coordinates, determined from sensor data, of those center coordinates in the reference coordinate system; S702: optimizing the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates, in the reference coordinate system, of each of the center coordinates determined based on the optimized first rotation relationship and the corresponding first coordinates satisfies a preset first minimization condition; where the first image set includes the first image and at least one frame of second image. The first image set contains multiple frames including the first image, for example three, five, or even more frames captured consecutively over a period of time. The images other than the first image can be understood as second images, a second image being an image adjacent to the first image in acquisition time; for example, a second image is the second image captured immediately after the first image, or the third image, and so on. After the center coordinates of the device coordinate system of each frame are converted through the rotation relationship, translation relationship, and scaling relationship corresponding to that frame, a position coordinate in the reference coordinate system, namely the second coordinate, is obtained. Satisfying the first minimization condition means that the sum of the differences between all the second coordinates and the actual coordinates of each frame as sensed by the sensors, namely the first coordinates, is minimized. The first coordinates are the three-dimensional coordinates sensed by the GPS sensor and the altimeter when the photographing device captures the corresponding image.
In one embodiment, the specific form of the first algorithm may refer to Formula 4 below, where s*R_camera_to_ned*C_v + t is the converted position coordinate, namely the second coordinate above; C_v denotes the center coordinate of the device coordinate system of the photographing device, and C_w corresponds to the first coordinate above.
min over Yaw, t, s of: Σ_{i=1}^{n} || s * R_camera_to_ned * C_v,i + t − C_w,i ||²     (Formula 4)
Here, R_camera_to_ned can be calculated from the Pitch, Roll, and Yaw angles using Formula 1, Formula 2, and Formula 3 above. The value of n may be 3, 4, 5, or another number, referring to the 3, 4, or 5 frames included in the first image set. The first coordinates are calculated from sensor data according to the spatial position of the photographing device at the time each image was captured; the sensor data includes data collected by a global positioning sensor and data collected by a height sensor. That is, the first coordinates are mainly determined from data sensed by positioning devices such as GPS and devices such as an altimeter mounted on the mobile platform (e.g., a UAV) or on the photographing device. Using GPS and the altimeter, the position coordinates of the camera center in a reference coordinate system such as NED (GPS coordinates plus altitude) can be obtained.
In one embodiment, C_v and C_w in Formula 4 can be obtained directly; R_camera_to_ned can use the Pitch and Roll values from the gimbal angle information directly, while the Yaw angle is treated as a parameter to be optimized, with the Yaw value from the gimbal angle information serving as its initial value. The translation relationship, namely the translation matrix t, and the scaling relationship, namely the scale s, can be calculated by other existing means. In one embodiment, t and s can also be regarded as parameters to be optimized, so that only initial values are supplied to the calculation in Formula 4. During the minimization of Formula 4, the value of the Yaw angle is continually updated, and the values of t and s are likewise updated and optimized; the final updated values of Yaw, t, and s minimize the result of Formula 4. That is, optimizing the first rotation relationship based on the center coordinates and the first coordinates may include: optimizing, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
It is further noted that the R_camera_to_ned in the above formula can, as described above, be calculated from the gimbal angle information of the first image. Therefore, for the first image, the difference between the second coordinate corresponding to the first image and the first coordinate determined from the sensor data at the time the first image was captured can be calculated from the gimbal angle information of the first image together with t, s, C_v, and C_w.
For a second image, the relative rotation relationship of the second image with respect to the first image can be calculated from the gimbal angle information of the second image; this relative rotation relationship can be computed from the difference between the gimbal angle information corresponding to the two frames. Therefore, the rotation relationship from the device coordinate system corresponding to the second image to the reference coordinate system can be determined from the R_camera_to_ned of the first image together with the relative rotation relationship, and by the same calculation as for the first image, the difference between the second coordinate corresponding to the second image and the first coordinate determined from the sensor data at the time the second image was captured can likewise be calculated. By analogy, the respective differences between the second and first coordinates of the n frames can be obtained, and minimizing the sum of the n differences yields the R_camera_to_ned of the first image. That is, based on the rotation angles of the gimbal, the relative rotation relationship between each second image and the first image can be derived, so Formula 4 above only needs the R_camera_to_ned of the first image rather than the rotation relationship from the device coordinate system to the reference coordinate system of each second image directly; the optimization thus yields R_camera_to_ned, or more precisely the value of its Yaw angle. Similarly, when t is also a parameter to be optimized, the relative translation relationship can be calculated from the difference between the gimbal angles of each second image and the first image, so that only the translation relationship of the first image needs to be optimized. As for the scale s, the first image and all second images correspond to the same s.
After the first rotation relationship, the translation relationship, and the scaling relationship are optimized based on the above formula, the position information of the pixels of the first image in the reference coordinate system can be determined according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship. Specifically, the two-dimensional point (x, y) on the first image may first be converted into the three-dimensional point Q(X, Y, Z) in the device coordinate system, and the three-dimensional coordinates of Q in the NED coordinate system are then computed as (optimized s)*(optimized R_camera_to_ned)*Q + (optimized t), thereby determining the position coordinates of the pixels of the first image in the reference coordinate system.
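A sketch of how the Formula 4 minimization could be set up with an off-the-shelf solver. The yaw-only rotation parameterization (Roll/Pitch taken as accurate and folded into the relative rotations), the toy frame data, and the choice of `scipy.optimize.least_squares` are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def residuals(params, C_v, C_w, R_rel):
    """Formula-4 residuals: s*R_i*C_v,i + t - C_w,i for each frame i.

    params = [yaw, tx, ty, tz, s]. R_i chains the first frame's yaw-dependent
    rotation with each frame's relative rotation (from gimbal-angle differences),
    so only the first image's Yaw, t, and s are optimized."""
    yaw, t, s = params[0], params[1:4], params[4]
    R1 = rot_z(yaw)
    res = [s * (R1 @ R_rel[i]) @ C_v[i] + t - C_w[i] for i in range(len(C_v))]
    return np.concatenate(res)

# Assumed toy data: 4 frames, ground-truth yaw 25 deg, s = 2, t = (5, -3, 1).
rng = np.random.default_rng(1)
C_v = rng.normal(size=(4, 3))
R_rel = [rot_z(np.deg2rad(5.0 * i)) for i in range(4)]
yaw_gt, s_gt, t_gt = np.deg2rad(25.0), 2.0, np.array([5.0, -3.0, 1.0])
C_w = np.array([s_gt * (rot_z(yaw_gt) @ R_rel[i]) @ C_v[i] + t_gt
                for i in range(4)])

# Initialize yaw from a (biased) gimbal reading; t and s get rough initial values.
x0 = [np.deg2rad(15.0), 0.0, 0.0, 0.0, 1.0]
sol = least_squares(residuals, x0, args=(C_v, C_w, R_rel))
```

With noise-free data the solver drives the residuals to zero and recovers the ground-truth yaw, translation, and scale; with real GPS/altimeter data the minimum is the best-fit compromise described by the first minimization condition.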
In one embodiment, the first image is the first frame in the first image set. After the optimization yields the optimized Yaw angle (or first rotation relationship), t, and s, these are all associated with the first frame. In this way, the position coordinates in the reference coordinate system of the pixels of each image in the first image set can be determined from the first rotation relationship, the optimized first translation matrix, and the optimized scale; the foregoing described the position coordinates of the pixels of the first image in the reference coordinate system. When processing a subsequent second image, it is only necessary to calculate the relative rotation relationship and relative translation relationship between the second image and the first image to obtain the rotation and translation relationships from the device coordinate system corresponding to the second image to the reference coordinate system, from which the position coordinates in the reference coordinate system of the pixels of each second image can be calculated. In one embodiment, the PnP (Perspective-n-Point) method (a computer vision algorithm) can be used to solve for the rotation matrix (relative rotation relationship) and translation matrix (relative translation relationship) of each second image relative to the first image.
Simply put, the rotation matrix and the translation matrix align the origins and coordinate axes of the two coordinate systems, and the scale shrinks or enlarges one of the coordinate systems so that the scales of the two coordinate systems are aligned. In one embodiment, the first image may also be any frame in the image set.
Through the optimization described above, the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system becomes more accurate, making the conversion of pixels to three-dimensional coordinates more accurate in the linear motion state.
In one embodiment, after the first rotation relationship is obtained, position conversion may also be performed, based on that first rotation relationship, on pixels of the images in a second image set. In the embodiments of the present invention, the first rotation relationship may refer to the rotation relationship obtained through Formula 1, Formula 2, and Formula 3 above, or to that rotation relationship further optimized by the first algorithm, for example Formula 4. The second image set may include the first image together with images captured again after the first rotation relationship is obtained, or the images in the second image set may in fact coincide with those of the first image set described above. In the embodiments of the present invention, according to the first rotation relationship, the position information in the reference coordinate system of the pixels of each frame in the second image set collected by the photographing device can be determined; the second image set includes the first image and at least one frame of third image. In one embodiment, the first image may refer to the first frame captured by the photographing device in the second image set.
When the first rotation relationship is used to calculate the position information in the reference coordinate system of the pixels of the other images in the second image set, the relative transformation relationships between those images and the first image are mainly used. Referring to Fig. 8, a schematic flowchart of a position transformation according to an embodiment of the present invention, and taking the third image in the second image set as an example, determining, according to the first rotation relationship, the position information in the reference coordinate system of the pixels of each frame in the second image set collected by the photographing device may include: S801: determining the relative transformation relationship between the third image and the first image using a vision algorithm; S802: determining, based on the first rotation relationship and the relative transformation relationship, a fourth rotation relationship between the device coordinate system and the reference coordinate system at the time the third image was captured; S803: determining, according to the first rotation relationship, the position information of the pixels of the first image in the reference coordinate system, and determining, according to the fourth rotation relationship, the position information of the pixels of the third image in the reference coordinate system.
The PnP method mentioned above can be used to calculate the relative transformation relationship between the first image and the third image; this relative transformation relationship includes the relative rotation relationship between the first image and the third image, and may also include the relative translation relationship between them. Adding the relative transformation relationship on top of the first rotation relationship yields the fourth rotation relationship from the device coordinate system corresponding to the third image to the reference coordinate system. After the fourth rotation relationship is obtained, combining it with the translation relationship and the scale from the device coordinate system corresponding to the third image to the reference coordinate system allows the pixels of the third image to be converted into the reference coordinate system, giving their position coordinates in the reference coordinate system. The translation relationship and scale from the device coordinate system corresponding to the third image to the reference coordinate system may be obtained directly through a vision algorithm; alternatively, they may be calculated from the relative translation relationship in the relative transformation relationship between the third image and the first image, combined with the translation relationship of the first image optimized in the foregoing manner and the optimized scale.
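A small sketch of S801–S803: chaining the first rotation relationship with a relative pose (e.g. from PnP) to get the fourth rotation relationship, then mapping a device-frame point of the third image into NED. All numeric values are illustrative, and the composition used for the third image's translation is one plausible form, since the patent leaves its derivation to the vision algorithm:

```python
import numpy as np

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

# Assumed inputs.
R_cam1_to_ned = rot_z(np.deg2rad(30))      # first rotation relationship
R_cam3_to_cam1 = rot_z(np.deg2rad(-10))    # relative rotation from S801
t1 = np.array([10.0, 0.0, -5.0])           # first image's translation
t_rel = np.array([1.0, 2.0, 0.0])          # relative translation of image 3
s = 1.5                                    # shared scale

# S802: fourth rotation relationship by composing the two rotations.
R_cam3_to_ned = R_cam1_to_ned @ R_cam3_to_cam1

# Third image's translation derived from t1 and the relative translation.
t3 = t1 + s * R_cam1_to_ned @ t_rel

# S803: a point Q3 in the third image's device frame mapped to NED.
Q3 = np.array([0.0, 1.0, 0.0])
Q3_ned = s * R_cam3_to_ned @ Q3 + t3
```

Composing a 30° rotation with a −10° relative rotation about the same axis gives a net 20° rotation, which is what the chained matrix represents.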
Further, in the process of determining the position information of the pixels of the first image in the reference coordinate system based on the first rotation relationship and determining the position information of the pixels of the third image in the reference coordinate system based on the fourth rotation relationship, the first rotation relationship and the fourth rotation relationship may first be optimized, and the positions in the reference coordinate system of the pixels of the corresponding images are then determined according to the optimized first and fourth rotation relationships. In one embodiment, referring to Fig. 9, executing S803 by the image processing device may specifically include: S901: optimizing the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; S902: determining, according to the optimized first rotation relationship, the position information of the pixels of the first image in the reference coordinate system, and determining, according to the optimized fourth rotation relationship, the position information of the pixels of the third image in the reference coordinate system.
In one embodiment, S901 may include: acquiring second attitude data for the moment the photographing device captured each frame in the second image set; determining, according to the second attitude data, a fifth rotation relationship between the device coordinate system and the reference coordinate system at the time each frame in the second image set was captured; and optimizing the first rotation relationship and the fourth rotation relationship so that the differences between the optimized first and fourth rotation relationships and the corresponding fifth rotation relationships satisfy a preset second minimization condition. The second attitude data includes the gimbal attitude angle information at the time each frame in the second image set was captured, which may specifically include the Pitch, Roll, and Yaw angles at the time each frame was captured.
The second algorithm mainly uses the gimbal angle information at the time the images in the second image set were captured as a reference, to optimize the fourth rotation relationship obtained by the vision algorithm. When performing vision-algorithm processing such as bundle adjustment (BA) on the images in the image set, the gimbal angle information at the time of capture can be added as a reference constraint to further optimize the corresponding rotation relationships. The rationale is that although the gimbal angle information at the time of capture is not highly precise, it can still serve as a reference: the rotation relationship optimized by the BA vision algorithm must not deviate too far from the rotation relationship calculated from the gimbal angle information. This has the effect of anchoring the optimized rotation relationship near the rotation matrix provided by the gimbal angle information.
在一个实施例中,具体的优化方式参考下述公式5。In an embodiment, the specific optimization method refers to the following formula 5.
min ∑_{i,j} σ_ij·||u_ij-v_ij|| + ∑_i δ_i·diff(R_i-R_ref_i)　　（公式5 / Formula 5）
其中，||u_ij-v_ij||是指在BA算法下，基于所述第二图像集合中的图像上的像素点计算的重投影误差，δ_i·diff(R_i-R_ref_i)是指旋转关系偏差，该旋转关系偏差主要是指：基于第二图像集合中第i帧图像在拍摄时的云台角信息对待优化的旋转关系R_i进行约束，R_ref_i是通过上述的公式1、公式2、公式3计算得到，R_ref_i完全基于第i帧图像的对应第二姿态数据计算得到。δ_i是消除量纲系数，为了和重投影误差相适应，一般取值是图像特征点提取的误差方差大小，在一个实施例中，δ_i可以等于4。X_j代表第j个三维点，P_i代表第二图像集合中第i帧图像上与X_j对应的像素点，当三维点X_j在第二图像集合的图像中有投影P_i时σ_ij=1，否则σ_ij=0。 Here, ||u_ij-v_ij|| is the reprojection error computed under the BA algorithm from the pixel points on the images in the second image set, and δ_i·diff(R_i-R_ref_i) is the rotation-relationship deviation, which mainly means constraining the rotation relationship R_i to be optimized using the pan/tilt angle information at the time the i-th frame image in the second image set was shot. R_ref_i is computed from Formula 1, Formula 2, and Formula 3 above, entirely from the second posture data corresponding to the i-th frame image. δ_i is a coefficient that removes the difference in dimension; to be commensurate with the reprojection error, it generally takes the value of the error variance of image feature-point extraction, and in one embodiment δ_i may be equal to 4. X_j denotes the j-th three-dimensional point, and P_i denotes the pixel point on the i-th frame image in the second image set corresponding to X_j; σ_ij=1 when the three-dimensional point X_j has a projection P_i in an image of the second image set, and σ_ij=0 otherwise.
当i=1时，为第一帧图像即上述的第一图像，R_1为上述提及的第一旋转关系，可以直接将由上述公式1、公式2以及公式3计算得到的第一旋转关系或者公式4优化得到的第一旋转关系作为第一图像的旋转关系，第一图像的R_1可以不用优化。对后续的i-1帧图像，再通过公式5结合R_1优化得到各个其他图像的R_i。例如，针对i=2时的第三图像，是基于该两帧图像中每一图像的BA算法计算的重投影误差之和、与该两帧图像中每一图像的旋转关系偏差之和，在R_1已知的情况下，来优化得到R_2。When i=1, the image is the first frame, i.e. the first image described above, and R_1 is the first rotation relationship mentioned above. The first rotation relationship computed directly from Formula 1, Formula 2, and Formula 3, or the first rotation relationship optimized by Formula 4, can be used directly as the rotation relationship of the first image, and R_1 of the first image need not be optimized. For the subsequent i-1 frame images, the R_i of each of the other images is then obtained by optimization via Formula 5 in combination with R_1. For example, for the third image when i=2, R_2 is optimized, with R_1 known, based on the sum of the BA reprojection errors of the two frame images and the sum of the rotation-relationship deviations of the two frame images.
具体的，基于第二姿态数据参考前述的公式1、公式2以及公式3，可以得到第二图像集合中的每一帧图像对应的从所述设备坐标系到所述参考坐标系之间的第五旋转关系，该第五旋转关系即为公式5中的R_ref_i。对第二图像集合中的每一帧图像而言，都会对应计算得到一个R_ref_i，通过在拍摄这些图像时的姿态数据来进行优化约束。Specifically, by referring to the aforementioned Formula 1, Formula 2, and Formula 3 based on the second posture data, the fifth rotation relationship from the device coordinate system to the reference coordinate system corresponding to each frame of image in the second image set can be obtained; this fifth rotation relationship is R_ref_i in Formula 5. For each frame of image in the second image set, a corresponding R_ref_i is computed, and the optimization is constrained by the posture data at the time these images were shot.
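As an illustrative sketch of the Formula-5-style cost described above (not the patent's own implementation; all function and variable names here are assumptions), the BA reprojection term and the pan/tilt-prior term can be evaluated as follows:

```python
import numpy as np

def rotation_deviation(R_i, R_ref_i):
    """Angle (radians) of the relative rotation R_i · R_ref_i^T, one common
    way to measure the deviation between two rotation matrices."""
    R_rel = R_i @ R_ref_i.T
    # The rotation angle follows from the trace of the relative rotation.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

def objective(rotations, ref_rotations, reproj_errors, sigma, delta):
    """Formula-5-style cost: reprojection term plus pan/tilt-prior term.

    reproj_errors[i][j] = ||u_ij - v_ij|| for 3D point j in frame i;
    sigma[i][j] = 1 if point j projects into frame i, else 0;
    delta[i] is the dimension-balancing coefficient (e.g. 4);
    ref_rotations[i] = R_ref_i computed from the pan/tilt angles.
    """
    reproj_term = sum(
        sigma[i][j] * reproj_errors[i][j]
        for i in range(len(rotations))
        for j in range(len(reproj_errors[i]))
    )
    prior_term = sum(
        delta[i] * rotation_deviation(rotations[i], ref_rotations[i])
        for i in range(len(rotations))
    )
    return reproj_term + prior_term
```

An optimizer would minimize this cost over the rotations R_i (and the 3D points behind the reprojection errors); the prior term is what keeps each R_i near the rotation derived from the pan/tilt angle information.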
进一步地，在一个实施例中，所述优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件的确定方式，包括：确定第一数据与第二数据之间的和满足预设的第三最小化条件；其中，所述第一数据为基于所述第二图像集合中每一帧图像上的像素点计算得到的重投影误差之和，即公式5中∑_{i,j} σ_ij·||u_ij-v_ij||求和部分，所述第二数据为所述第二图像集合中每一帧图像对应的旋转关系偏差之和，即公式5中的∑_i δ_i·diff(R_i-R_ref_i)求和部分，所述旋转关系偏差是指所述第一旋转关系或所述第四旋转关系与对应的第五旋转关系之间的差异。在一个实施例中，所述旋转关系偏差的计算方法可以是通过对待优化的旋转关系与对应的云台旋转关系进行点乘即
Figure PCTCN2019080500-appb-000006
再转换成角向量后的模长值。如上述，R_1可通过上述的公式1到公式3直接计算得到，或者通过公式4优化得到。也可以在通过公式4优化得到后再通过公式5优化计算i=1时的R_1。在得到第一旋转关系R_1后，可以优化计算得到i=2时的第三图像的第四旋转关系即R_2，在R_1和R_2为已知时，可以进一步优化得到i=3时的图像的旋转关系，以此类推。这样一来，即使优化得到的第一旋转关系R_1不是特别准确，后面的图像由于加入了关于第五旋转关系R_ref_i的约束，在公式5的优化下，后续的图像也能够得到较为准确的旋转关系，从而在S902中，能够更为准确地确定第二图像集合中第三图像上像素点在所述参考坐标系中的位置信息，而不会因为第一旋转关系不是足够准确而受到太大的影响。
Further, in an embodiment, determining that the differences between the optimized first rotation relationship as well as the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy the preset second minimization condition includes: determining that the sum of first data and second data satisfies a preset third minimization condition, where the first data is the sum of the reprojection errors computed from the pixel points on each frame of image in the second image set, i.e. the summation term ∑_{i,j} σ_ij·||u_ij-v_ij|| in Formula 5, and the second data is the sum of the rotation-relationship deviations corresponding to each frame of image in the second image set, i.e. the summation term ∑_i δ_i·diff(R_i-R_ref_i) in Formula 5; the rotation-relationship deviation refers to the difference between the first rotation relationship, or the fourth rotation relationship, and the corresponding fifth rotation relationship. In an embodiment, the rotation-relationship deviation may be computed by point-multiplying the rotation relationship to be optimized with the corresponding pan/tilt rotation relationship, i.e.
Figure PCTCN2019080500-appb-000006
and then taking the modulus of the resulting angle vector. As described above, R_1 can be computed directly from Formula 1 to Formula 3, or obtained by optimization with Formula 4. It is also possible, after obtaining R_1 via Formula 4, to further optimize it with Formula 5 for i=1. After the first rotation relationship R_1 is obtained, the fourth rotation relationship R_2 of the third image when i=2 can be computed by optimization; with R_1 and R_2 known, the rotation relationship of the image when i=3 can be further optimized, and so on. In this way, even if the optimized first rotation relationship R_1 is not particularly accurate, the subsequent images, thanks to the added constraint on the fifth rotation relationship R_ref_i, can still obtain relatively accurate rotation relationships under the optimization of Formula 5. Thus, in S902, the position information in the reference coordinate system of the pixel points on the third images in the second image set can be determined more accurately, without being greatly affected by a first rotation relationship that is not sufficiently accurate.
通过上述提及的优化方式，使得拍摄设备所在的设备坐标系与参考坐标系之间的旋转关系更加准确，使得在直线运动状态下，像素点到三维坐标的转换更加准确。并且也较好地保证了拍摄设备后续拍摄到的图像对应的旋转关系的准确性，从而保证了环境监测、制图等任务能够较好地完成。The optimization described above makes the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system more accurate, so that in a linear motion state the conversion from pixel points to three-dimensional coordinates is more accurate. It also better guarantees the accuracy of the rotation relationships corresponding to images subsequently captured by the photographing device, thereby ensuring that tasks such as environmental monitoring and mapping can be better completed.
再请参见图10，是本发明实施例的关于拍摄设备的数据处理装置的结构示意图，所述装置设置于图像处理设备，该图像处理设备能够获取拍摄设备的姿态数据，所述拍摄设备设于移动平台上，且所述移动平台的运动状态为直线运动状态。该图像处理设备可以是能够处理上述各个实施例中的相关内容的智能设备。所述装置包括如下结构。Referring again to FIG. 10, it is a schematic structural diagram of a data processing apparatus related to a photographing device according to an embodiment of the present invention. The apparatus is provided in an image processing device capable of obtaining posture data of the photographing device; the photographing device is provided on a mobile platform, and the motion state of the mobile platform is a linear motion state. The image processing device may be a smart device capable of processing the relevant content of each of the foregoing embodiments. The apparatus includes the following structure.
获取模块1001，用于获取所述拍摄设备采集第一图像时的第一姿态数据；确定模块1002，用于根据所述第一姿态数据，确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系；处理模块1003，用于根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息。The acquiring module 1001 is configured to acquire first posture data when the photographing device captures a first image; the determining module 1002 is configured to determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is captured; the processing module 1003 is configured to determine, according to the first rotation relationship, position information of pixel points on the first image in the reference coordinate system.
在一个实施例中,所述参考坐标系是指:地理坐标系、或大地坐标系。In one embodiment, the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
在一个实施例中,所述拍摄设备设于所述移动平台的云台上。In one embodiment, the photographing device is provided on a pan-tilt of the mobile platform.
在一个实施例中，所述确定模块1002，具体用于根据所述第一姿态数据获取所述云台的云台坐标系与所述参考坐标系之间的第二旋转关系，并获取所述拍摄设备的设备坐标系到所述云台坐标系之间的第三旋转关系；根据所述第二旋转关系和所述第三旋转关系，确定采集所述第一图像时所述设备坐标系与参考坐标系之间的第一旋转关系。In an embodiment, the determining module 1002 is specifically configured to obtain, according to the first posture data, a second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system, and to obtain a third rotation relationship from the device coordinate system of the photographing device to the pan/tilt coordinate system; and to determine, according to the second rotation relationship and the third rotation relationship, the first rotation relationship between the device coordinate system and the reference coordinate system when the first image is captured.
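The chain described above (device coordinate system → pan/tilt coordinate system → reference coordinate system) amounts to composing two rotation matrices. The sketch below is only an illustration under an assumed yaw-pitch-roll convention; the function names and the Euler convention are assumptions, not the patent's specified formulas:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Rotation matrix from yaw (Z), pitch (Y), roll (X) angles in radians,
    one common convention for pan/tilt attitude angles."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def first_rotation(second_rotation, third_rotation):
    # First rotation (device -> reference) as the composition of the
    # second rotation (pan/tilt -> reference, from the posture data) and
    # the third rotation (device -> pan/tilt, fixed by the assembly
    # relationship between the photographing device and the pan/tilt).
    return second_rotation @ third_rotation
```

When the photographing device is mounted so that its axes coincide with the pan/tilt axes, the third rotation is the identity and the first rotation reduces to the second rotation.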
在一个实施例中,所述第三旋转关系是根据所述拍摄设备与所述云台的装配关系配置的。In an embodiment, the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
在一个实施例中,所述处理模块1003,具体用于按照预设的第一算法,对所述第一旋转关系进行优化处理;根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。In one embodiment, the processing module 1003 is specifically configured to perform optimization processing on the first rotation relationship according to a preset first algorithm; and determine the first image on the first image according to the optimized first rotation relationship The position information of the pixel in the reference coordinate system.
在一个实施例中,所述第一姿态数据包括俯仰角、横滚角、偏航角;所述第一旋转关系的优化处理包括对所述偏航角进行优化处理。In an embodiment, the first attitude data includes a pitch angle, a roll angle, and a yaw angle; the optimization processing of the first rotation relationship includes an optimization processing on the yaw angle.
在一个实施例中，所述处理模块1003，具体用于获取所述拍摄设备采集的第一图像集合中每一帧图像采集时，所述设备坐标系的中心坐标以及利用传感器数据确定的所述中心坐标在所述参考坐标系中的第一坐标；基于所述中心坐标以及所述第一坐标，对所述第一旋转关系进行优化处理，以使得基于优化后的第一旋转关系确定的各个所述中心坐标在所述参考坐标系中的第二坐标与所述第一坐标之间的差异和满足预设的第一最小化条件；其中，所述第一图像集合包括所述第一图像以及至少一帧第二图像。In an embodiment, the processing module 1003 is specifically configured to obtain, for each frame of image in a first image set captured by the photographing device, the center coordinate of the device coordinate system at the time of capture and a first coordinate, determined using sensor data, of that center coordinate in the reference coordinate system; and to optimize the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates of the center coordinates in the reference coordinate system, determined based on the optimized first rotation relationship, and the first coordinates satisfies a preset first minimization condition; the first image set includes the first image and at least one frame of second image.
在一个实施例中,所述第二图像为在采集时间上邻近所述第一图像的图像。In one embodiment, the second image is an image adjacent to the first image in acquisition time.
在一个实施例中，所述处理模块1003，具体用于基于所述中心坐标以及所述第一坐标，对所述第一旋转关系、所述设备坐标系与所述参考坐标系之间的平移关系、所述设备坐标系与所述参考坐标系之间的缩放关系进行优化处理。In an embodiment, the processing module 1003 is specifically configured to optimize, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
在一个实施例中，所述处理模块1003，具体用于根据优化后的第一旋转关系、优化后的平移关系以及优化后的缩放关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the processing module 1003 is specifically configured to determine, according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship, the position information of the pixel points on the first image in the reference coordinate system.
在一个实施例中,所述传感数据包括全球定位传感器的采集数据和高度传感器的采集数据。In one embodiment, the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
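One standard closed-form way to jointly solve a rotation, translation, and scale that minimize the sum of squared differences between two corresponding point sets, as in the first-minimization condition above, is the Umeyama algorithm. The sketch below illustrates that general technique only; it is not the patent's specific procedure, and all names are illustrative:

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform (scale s, rotation R, translation t) minimizing
    sum_k ||dst_k - (s * R @ src_k + t)||^2, for (N, 3) point arrays
    (e.g. src = device-center coordinates, dst = sensor-derived coordinates)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # guard against a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Feeding the per-frame device-center coordinates and their GPS/altitude-derived first coordinates into such a solver yields the optimized rotation, translation, and scaling relationships in one step.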
在一个实施例中，所述处理模块1003，用于根据所述第一旋转关系，确定所述拍摄设备采集的第二图像集合中每一帧图像上的像素点在所述参考坐标系中的位置信息；其中，所述第二图像集合包括所述第一图像以及至少一帧第三图像。In an embodiment, the processing module 1003 is configured to determine, according to the first rotation relationship, the position information in the reference coordinate system of the pixel points on each frame of image in a second image set captured by the photographing device, where the second image set includes the first image and at least one frame of third image.
在一个实施例中,所述第一图像是所述第二图像集合中的第一帧图像。In an embodiment, the first image is the first frame of image in the second image set.
在一个实施例中，所述处理模块1003，具体用于利用视觉算法确定所述第三图像与所述第一图像之间的相对变换关系；基于所述第一旋转关系以及所述相对变换关系，确定采集所述第三图像时所述设备坐标系与所述参考坐标系之间的第四旋转关系；根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据所述第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the processing module 1003 is specifically configured to determine a relative transformation relationship between the third image and the first image using a visual algorithm; determine, based on the first rotation relationship and the relative transformation relationship, a fourth rotation relationship between the device coordinate system and the reference coordinate system when the third image is captured; and determine, according to the first rotation relationship, the position information of the pixel points on the first image in the reference coordinate system, and determine, according to the fourth rotation relationship, the position information of the pixel points on the third image in the reference coordinate system.
在一个实施例中,所述处理模块1003,具体用于按照预设的第二算法,对所述第一旋转关系以及所述第四旋转关系进行优化处理;根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息,并根据优化后的第四旋转关系,确定所述第三图像上的像素点在所述参考坐标系中的位置信息。In one embodiment, the processing module 1003 is specifically configured to perform optimization processing on the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; according to the optimized first rotation relationship, Determine the position information of the pixel on the first image in the reference coordinate system, and determine the position of the pixel on the third image in the reference coordinate system according to the optimized fourth rotation relationship information.
在一个实施例中，所述处理模块1003，具体用于获取所述拍摄设备采集所述第二图像集合中每一帧图像时的第二姿态数据；根据所述第二姿态数据，确定所述第二图像集合中每一帧图像在采集时所述设备坐标系与所述参考坐标系之间的第五旋转关系；对所述第一旋转关系以及所述第四旋转关系进行优化处理，以使得优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件。In an embodiment, the processing module 1003 is specifically configured to obtain second posture data of the photographing device at the time each frame of image in the second image set is captured; determine, according to the second posture data, a fifth rotation relationship between the device coordinate system and the reference coordinate system at the time each frame of image in the second image set is captured; and optimize the first rotation relationship and the fourth rotation relationship, so that the differences between the optimized first rotation relationship as well as the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy a preset second minimization condition.
在一个实施例中,所述处理模块1003,具体用于确定第一数据与第二数据之间的和满足预设的第三最小化条件;其中,所述第一数据为基于所述第二图像集合中每一帧图像上的像素点计算得到的重投影误差之和,所述第二数据为所述第二图像集合中每一帧图像对应的旋转关系偏差之和,所述旋转关系偏差是指所述第一旋转关系或所述第四旋转关系与对应的第五旋转关系之间的差异。In one embodiment, the processing module 1003 is specifically configured to determine that the sum between the first data and the second data satisfies a preset third minimization condition; wherein, the first data is based on the second The sum of the reprojection errors calculated from the pixels on each frame of the image in the image set, the second data is the sum of the rotation relation deviations corresponding to each frame of the image in the second image set, the rotation relation deviation Refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
本发明实施例中，所述装置中各个模块的具体实现可参考前述实施例中相关内容的描述，在此不赘述。本发明实施例在移动平台处于直线运动状态时，确定拍摄设备所在的设备坐标系到参考坐标系之间的旋转关系，参考了拍摄设备在采集图像时的姿态数据，这使得在直线运动状态下，可以较为准确地来确定设备坐标系到参考坐标系之间的旋转关系，较好地确保了在直线运动状态下，像素点到三维坐标的转换的准确性，从而更好地实现环境监测、制图等任务。In the embodiment of the present invention, for the specific implementation of each module in the apparatus, reference may be made to the description of the related content in the foregoing embodiments, which is not repeated here. In the embodiment of the present invention, when the mobile platform is in a linear motion state, the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system is determined with reference to the posture data of the photographing device at the time the images are captured. This makes it possible, in the linear motion state, to determine the rotation relationship between the device coordinate system and the reference coordinate system relatively accurately, which better ensures the accuracy of the conversion from pixel points to three-dimensional coordinates and thus better supports tasks such as environmental monitoring and mapping.
再请参见图11，是本发明实施例的一种图像处理设备的结构示意图，该图像处理设备为一个智能设备，该图像处理设备能够获取拍摄设备的姿态数据，所述拍摄设备设于移动平台上，且所述移动平台的运动状态为直线运动状态，例如，在图1的场景中，所述图像处理设备为地面端的设备，其能够通过飞行器接收拍摄设备拍摄到的图像，也能够接收拍摄设备在拍摄图像时的姿态数据，例如挂载拍摄设备的云台的云台角信息等数据。在其他实施例中，所述拍摄设备也可以搭载在移动平台上，通过与移动平台的拍摄设备、云台等设备相连，来获取图像处理所需的数据。当然，所述图像处理设备本身也可以作为移动平台的一个部件，用于与拍摄设备、云台等设备相连。Referring again to FIG. 11, it is a schematic structural diagram of an image processing device according to an embodiment of the present invention. The image processing device is a smart device capable of obtaining posture data of the photographing device; the photographing device is provided on a mobile platform, and the motion state of the mobile platform is a linear motion state. For example, in the scene of FIG. 1, the image processing device is a ground-side device, which can receive, through the aircraft, the images captured by the photographing device, and can also receive the posture data of the photographing device at the time the images were shot, such as the pan/tilt angle information of the pan/tilt on which the photographing device is mounted. In other embodiments, the image processing device may also be carried on the mobile platform and obtain the data required for image processing by connecting to the photographing device, the pan/tilt, and other devices of the mobile platform. Of course, the image processing device itself may also serve as a component of the mobile platform for connecting to the photographing device, the pan/tilt, and other devices.
所述图像处理设备包括：通信接口单元1101、处理单元1102，并且包括其他的一些诸如供电模块、壳体等结构；所述图像处理设备可以根据需要包括：用户接口单元1103、存储单元1104。该用户接口单元1103例如可以是触摸显示屏，能够获取用户的指令并且能够向用户呈现相应的原始数据（例如接收到的图像数据）、处理后的数据（例如基于图像处理后制作的环境监测地图）等数据。The image processing device includes: a communication interface unit 1101 and a processing unit 1102, as well as other structures such as a power supply module and a housing; the image processing device may further include, as needed, a user interface unit 1103 and a storage unit 1104. The user interface unit 1103 may be, for example, a touch display screen, which can obtain instructions from the user and present to the user corresponding raw data (such as received image data) and processed data (such as an environmental monitoring map produced after image processing).
所述存储单元1104可以包括易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM);存储单元1104也可以包括非易失性存储器(non-volatile memory),例如快闪存储器(flash memory),固态硬盘(solid-state drive,SSD)等;存储单元1104还可以包括上述种类的存储器的组合。The storage unit 1104 may include volatile memory (volatile memory), such as random-access memory (RAM); the storage unit 1104 may also include non-volatile memory (non-volatile memory), such as fast Flash memory (flash memory), solid-state drive (SSD), etc.; the storage unit 1104 may also include a combination of the foregoing types of memories.
所述处理单元1102可以是由中央处理器(central processing unit,CPU)构成。所述处理单元1102还可以包括硬件芯片。上述硬件芯片例如可以是专用集成电路(application-specific integrated circuit,ASIC),亦或是可编程逻辑器件(programmable logic device,PLD)等。该PLD可以是诸如现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)等。The processing unit 1102 may be composed of a central processing unit (CPU). The processing unit 1102 may also include a hardware chip. The above-mentioned hardware chip may be, for example, an application-specific integrated circuit (ASIC) or a programmable logic device (PLD). The PLD may be, for example, a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), etc.
在一个实施例中,所述存储单元1104可用于存储一些数据,例如上述提及的环境的图像数据、处理后的制图数据等等,所述存储单元1104还可以用于存储程序指令。所述处理单元1102可以调用所述程序指令,实现前述实施例中相应的功能以及步骤。In one embodiment, the storage unit 1104 may be used to store some data, such as the image data of the environment mentioned above, processed drawing data, etc., and the storage unit 1104 may also be used to store program instructions. The processing unit 1102 can call the program instructions to implement the corresponding functions and steps in the foregoing embodiments.
在一个实施例中，所述通信接口单元1101，用于与外部设备通信，获取外部设备的数据；所述处理单元1102，用于通过所述通信接口单元1101获取所述拍摄设备采集第一图像时的第一姿态数据；根据所述第一姿态数据，确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系；根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the communication interface unit 1101 is configured to communicate with an external device and obtain data of the external device; the processing unit 1102 is configured to obtain, through the communication interface unit 1101, first posture data of the photographing device at the time the first image is captured; determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is captured; and determine, according to the first rotation relationship, the position information of the pixel points on the first image in the reference coordinate system.
在一个实施例中,所述参考坐标系是指:地理坐标系、或大地坐标系。In one embodiment, the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
在一个实施例中,所述拍摄设备设于所述移动平台的云台上。In one embodiment, the photographing device is provided on a pan-tilt of the mobile platform.
在一个实施例中，所述处理单元1102，用于根据所述第一姿态数据获取所述云台的云台坐标系与所述参考坐标系之间的第二旋转关系，并获取所述拍摄设备的设备坐标系到所述云台坐标系之间的第三旋转关系；根据所述第二旋转关系和所述第三旋转关系，确定采集所述第一图像时所述设备坐标系与参考坐标系之间的第一旋转关系。In an embodiment, the processing unit 1102 is configured to obtain, according to the first posture data, a second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system, and to obtain a third rotation relationship from the device coordinate system of the photographing device to the pan/tilt coordinate system; and to determine, according to the second rotation relationship and the third rotation relationship, the first rotation relationship between the device coordinate system and the reference coordinate system when the first image is captured.
在一个实施例中,所述第三旋转关系是根据所述拍摄设备与所述云台的装配关系配置的。In an embodiment, the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
在一个实施例中,所述处理单元1102,用于按照预设的第一算法,对所述第一旋转关系进行优化处理;根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the processing unit 1102 is configured to perform optimization processing on the first rotation relationship according to a preset first algorithm; determine the image on the first image according to the optimized first rotation relationship The position information of the pixel in the reference coordinate system.
在一个实施例中,所述第一姿态数据包括俯仰角、横滚角、偏航角;所述第一旋转关系的优化处理包括对所述偏航角进行优化处理。In an embodiment, the first attitude data includes a pitch angle, a roll angle, and a yaw angle; the optimization processing of the first rotation relationship includes an optimization processing on the yaw angle.
在一个实施例中，所述处理单元1102，用于获取所述拍摄设备采集的第一图像集合中每一帧图像采集时，所述设备坐标系的中心坐标以及利用传感器数据确定的所述中心坐标在所述参考坐标系中的第一坐标；基于所述中心坐标以及所述第一坐标，对所述第一旋转关系进行优化处理，以使得基于优化后的第一旋转关系确定的各个所述中心坐标在所述参考坐标系中的第二坐标与所述第一坐标之间的差异和满足预设的第一最小化条件；其中，所述第一图像集合包括所述第一图像以及至少一帧第二图像。In an embodiment, the processing unit 1102 is configured to obtain, for each frame of image in a first image set captured by the photographing device, the center coordinate of the device coordinate system at the time of capture and a first coordinate, determined using sensor data, of that center coordinate in the reference coordinate system; and to optimize the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates of the center coordinates in the reference coordinate system, determined based on the optimized first rotation relationship, and the first coordinates satisfies a preset first minimization condition; the first image set includes the first image and at least one frame of second image.
在一个实施例中,所述第二图像为在采集时间上邻近所述第一图像的图像。In one embodiment, the second image is an image adjacent to the first image in acquisition time.
在一个实施例中，所述处理单元1102，用于基于所述中心坐标以及所述第一坐标，对所述第一旋转关系、所述设备坐标系与所述参考坐标系之间的平移关系、所述设备坐标系与所述参考坐标系之间的缩放关系进行优化处理。In an embodiment, the processing unit 1102 is configured to optimize, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
在一个实施例中，所述处理单元1102，用于根据优化后的第一旋转关系、优化后的平移关系以及优化后的缩放关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the processing unit 1102 is configured to determine, according to the optimized first rotation relationship, the optimized translation relationship, and the optimized scaling relationship, the position information of the pixel points on the first image in the reference coordinate system.
在一个实施例中,所述传感数据包括全球定位传感器的采集数据和高度传感器的采集数据。In one embodiment, the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
在一个实施例中，所述处理单元1102，用于根据所述第一旋转关系，确定所述拍摄设备采集的第二图像集合中每一帧图像上的像素点在所述参考坐标系中的位置信息；其中，所述第二图像集合包括所述第一图像以及至少一帧第三图像。In an embodiment, the processing unit 1102 is configured to determine, according to the first rotation relationship, the position information in the reference coordinate system of the pixel points on each frame of image in a second image set captured by the photographing device, where the second image set includes the first image and at least one frame of third image.
在一个实施例中,所述第一图像是所述第二图像集合中的第一帧图像。In an embodiment, the first image is the first frame of image in the second image set.
在一个实施例中，所述处理单元1102，用于利用视觉算法确定所述第三图像与所述第一图像之间的相对变换关系；基于所述第一旋转关系以及所述相对变换关系，确定采集所述第三图像时所述设备坐标系与所述参考坐标系之间的第四旋转关系；根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据所述第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。In an embodiment, the processing unit 1102 is configured to determine a relative transformation relationship between the third image and the first image using a visual algorithm; determine, based on the first rotation relationship and the relative transformation relationship, a fourth rotation relationship between the device coordinate system and the reference coordinate system when the third image is captured; and determine, according to the first rotation relationship, the position information of the pixel points on the first image in the reference coordinate system, and determine, according to the fourth rotation relationship, the position information of the pixel points on the third image in the reference coordinate system.
在一个实施例中,所述处理单元1102,用于按照预设的第二算法,对所述第一旋转关系以及所述第四旋转关系进行优化处理;根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息,并根据优化后的第四旋转关系,确定所述第三图像上的像素点在所述参考坐标系中的位置信息。In one embodiment, the processing unit 1102 is configured to perform optimization processing on the first rotation relationship and the fourth rotation relationship according to a preset second algorithm; determine according to the optimized first rotation relationship The position information of the pixel on the first image in the reference coordinate system, and the position information of the pixel on the third image in the reference coordinate system is determined according to the optimized fourth rotation relationship .
在一个实施例中，所述处理单元1102，用于获取所述拍摄设备采集所述第二图像集合中每一帧图像时的第二姿态数据；根据所述第二姿态数据，确定所述第二图像集合中每一帧图像在采集时所述设备坐标系与所述参考坐标系之间的第五旋转关系；对所述第一旋转关系以及所述第四旋转关系进行优化处理，以使得优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件。In an embodiment, the processing unit 1102 is configured to obtain second posture data of the photographing device at the time each frame of image in the second image set is captured; determine, according to the second posture data, a fifth rotation relationship between the device coordinate system and the reference coordinate system at the time each frame of image in the second image set is captured; and optimize the first rotation relationship and the fourth rotation relationship, so that the differences between the optimized first rotation relationship as well as the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy a preset second minimization condition.
在一个实施例中,所述处理单元1102,用于确定第一数据与第二数据之间的和满足预设的第三最小化条件;其中,所述第一数据为基于所述第二图像集合中每一帧图像上的像素点计算得到的重投影误差之和,所述第二数据为所述第二图像集合中每一帧图像对应的旋转关系偏差之和,所述旋转关系偏差是指所述第一旋转关系或所述第四旋转关系与对应的第五旋转关系之间的差异。In one embodiment, the processing unit 1102 is configured to determine that the sum between the first data and the second data satisfies a preset third minimization condition; wherein, the first data is based on the second image The sum of the re-projection errors calculated from the pixels on each frame of the image in the set, the second data is the sum of the rotation relation deviations corresponding to each frame of the image in the second image collection, and the rotation relation deviation is Refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
本发明实施例中，所述处理单元的具体实现可参考前述实施例中相关内容的描述，在此不赘述。本发明实施例在移动平台处于直线运动状态时，确定拍摄设备所在的设备坐标系到参考坐标系之间的旋转关系，参考了拍摄设备在采集图像时的姿态数据，这使得在直线运动状态下，可以较为准确地来确定设备坐标系到参考坐标系之间的旋转关系，较好地确保了在直线运动状态下，像素点到三维坐标的转换的准确性，从而更好地实现环境监测、制图等任务。In the embodiment of the present invention, for the specific implementation of the processing unit, reference may be made to the description of the related content in the foregoing embodiments, which is not repeated here. In the embodiment of the present invention, when the mobile platform is in a linear motion state, the rotation relationship between the device coordinate system of the photographing device and the reference coordinate system is determined with reference to the posture data of the photographing device at the time the images are captured. This makes it possible, in the linear motion state, to determine the rotation relationship between the device coordinate system and the reference coordinate system relatively accurately, which better ensures the accuracy of the conversion from pixel points to three-dimensional coordinates and thus better supports tasks such as environmental monitoring and mapping.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。A person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned embodiment methods can be implemented by instructing relevant hardware through a computer program. The program can be stored in a computer readable storage medium. During execution, it may include the procedures of the above-mentioned method embodiments. Wherein, the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
以上所揭露的仅为本发明部分实施例而已,当然不能以此来限定本发明之权利范围,因此依本发明权利要求所作的等同变化,仍属本发明所涵盖的范围。The above-disclosed are only some embodiments of the present invention, which of course cannot be used to limit the scope of rights of the present invention. Therefore, equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.

Claims (37)

  1. 一种关于拍摄设备的数据处理方法，其特征在于，所述方法应用于图像处理设备，该图像处理设备能够获取拍摄设备的姿态数据，所述拍摄设备设于移动平台上，且所述移动平台的运动状态为直线运动状态，所述方法包括：A data processing method related to a photographing device, characterized in that the method is applied to an image processing device capable of acquiring posture data of the photographing device, the photographing device is arranged on a mobile platform, and the motion state of the mobile platform is a linear motion state, the method comprising:
    获取所述拍摄设备采集第一图像时的第一姿态数据;Acquiring first posture data when the photographing device collects the first image;
    根据所述第一姿态数据,确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系;Determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected;
    根据所述第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。According to the first rotation relationship, the position information of the pixel on the first image in the reference coordinate system is determined.
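The geometry recited in claim 1 — using the first rotation relationship to place a pixel in the reference coordinate system — can be sketched as follows. This is an illustrative pinhole-camera model, not the patent's implementation; the intrinsic matrix `K`, the known `depth`, and all names are assumptions:

```python
import numpy as np

def pixel_to_reference(pixel_uv, depth, K, R_dev_to_ref, t_dev_in_ref):
    """Map a pixel on the first image to a 3D point in the reference coordinate system.

    pixel_uv:      (u, v) pixel coordinates
    depth:         distance along the camera ray (assumed known here)
    K:             3x3 camera intrinsic matrix
    R_dev_to_ref:  3x3 first rotation relationship (device -> reference)
    t_dev_in_ref:  camera center position expressed in the reference coordinate system
    """
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    ray_dev = np.linalg.inv(K) @ uv1            # back-project into the device frame
    point_dev = depth * ray_dev                 # scale the ray by the known depth
    return R_dev_to_ref @ point_dev + t_dev_in_ref  # rotate and translate into the reference frame
```

With identity intrinsics and pose, a pixel at the principal point and depth 5 maps to the point 5 units along the optical axis.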
  2. 如权利要求1所述的方法，其特征在于，所述参考坐标系是指：地理坐标系、或大地坐标系。The method according to claim 1, wherein the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
  3. 如权利要求1所述的方法,其特征在于,所述拍摄设备设于所述移动平台的云台上。The method of claim 1, wherein the photographing device is set on a pan-tilt of the mobile platform.
  4. 如权利要求3所述的方法，其特征在于，所述根据所述第一姿态数据，确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系，包括：The method of claim 3, wherein the determining, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected comprises:
    根据所述第一姿态数据获取所述云台的云台坐标系与所述参考坐标系之间的第二旋转关系，并获取所述拍摄设备的设备坐标系到所述云台坐标系之间的第三旋转关系；acquiring, according to the first posture data, a second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system, and acquiring a third rotation relationship from the device coordinate system of the photographing device to the pan/tilt coordinate system;
    根据所述第二旋转关系和所述第三旋转关系,确定采集所述第一图像时所述设备坐标系与参考坐标系之间的第一旋转关系。According to the second rotation relationship and the third rotation relationship, a first rotation relationship between the device coordinate system and a reference coordinate system when the first image is acquired is determined.
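The composition in claim 4 amounts to chaining the two rotations: device frame to pan/tilt frame, then pan/tilt frame to reference frame. A minimal sketch, assuming all rotations are given as 3x3 matrices (names are illustrative):

```python
import numpy as np

def compose_first_rotation(R_gimbal_to_ref, R_dev_to_gimbal):
    """First rotation relationship (device -> reference) from the
    second (gimbal -> reference) and third (device -> gimbal) rotations.
    Matrix products chain right-to-left: device -> gimbal -> reference."""
    return R_gimbal_to_ref @ R_dev_to_gimbal
```

For example, composing with the identity leaves a rotation unchanged, and two 90-degree yaw rotations compose to a 180-degree yaw rotation.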
  5. 如权利要求4所述的方法,其特征在于,所述第三旋转关系是根据所述拍摄设备与所述云台的装配关系配置的。The method of claim 4, wherein the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
  6. 根据权利要求2所述的方法，其特征在于，所述根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，包括：The method according to claim 2, wherein the determining the position information of the pixels on the first image in the reference coordinate system according to the first rotation relationship comprises:
    按照预设的第一算法,对所述第一旋转关系进行优化处理;Optimizing the first rotation relationship according to the preset first algorithm;
    根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。According to the optimized first rotation relationship, the position information of the pixel on the first image in the reference coordinate system is determined.
  7. 根据权利要求6所述的方法,其特征在于,所述第一姿态数据包括俯仰角、横滚角、偏航角;The method according to claim 6, wherein the first attitude data includes a pitch angle, a roll angle, and a yaw angle;
    所述第一旋转关系的优化处理包括对所述偏航角进行优化处理。The optimization processing of the first rotation relationship includes optimizing the yaw angle.
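Claim 7 states that the posture data comprises pitch, roll, and yaw, and that only the yaw is re-optimized. A sketch of both pieces follows: building a rotation matrix from the three angles (a Z-Y-X yaw-pitch-roll convention is assumed here — the patent does not fix one) and refining the yaw alone by a simple grid search over a hypothetical residual function:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Rotation matrix from yaw (Z), pitch (Y), roll (X); angles in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def refine_yaw(pitch, roll, yaw0, residual_fn, span=0.1, steps=201):
    """Yaw-only refinement: pitch and roll stay fixed, yaw is searched
    over a small window around the measured value yaw0."""
    yaws = yaw0 + np.linspace(-span, span, steps)
    costs = [residual_fn(euler_zyx_to_matrix(y, pitch, roll)) for y in yaws]
    return yaws[int(np.argmin(costs))]
```

The grid search stands in for whatever preset first algorithm the patent intends; the point is that only the yaw parameter varies.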
  8. 如权利要求7所述的方法，其特征在于，所述按照预设的第一算法，对所述第一旋转关系进行优化处理，包括：The method according to claim 7, wherein the optimizing the first rotation relationship according to the preset first algorithm comprises:
    获取所述拍摄设备采集的第一图像集合中每一帧图像采集时,所述设备坐标系的中心坐标以及利用传感器数据确定的所述中心坐标在所述参考坐标系中的第一坐标;Acquiring the center coordinates of the device coordinate system and the first coordinates in the reference coordinate system of the center coordinates determined by using sensor data for each frame of image acquisition in the first image set collected by the shooting device;
    基于所述中心坐标以及所述第一坐标，对所述第一旋转关系进行优化处理，以使得基于优化后的第一旋转关系确定的各个所述中心坐标在所述参考坐标系中的第二坐标与对应的第一坐标之间的差异和满足预设的第一最小化条件；optimizing the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates, in the reference coordinate system, of each of the center coordinates determined based on the optimized first rotation relationship and the corresponding first coordinates satisfies a preset first minimization condition;
    其中,所述第一图像集合包括所述第一图像以及至少一帧第二图像。Wherein, the first image set includes the first image and at least one frame of second image.
  9. 如权利要求8所述的方法,其特征在于,所述第二图像为在采集时间上邻近所述第一图像的图像。8. The method of claim 8, wherein the second image is an image adjacent to the first image in acquisition time.
  10. 如权利要求8所述的方法,其特征在于,所述基于所述中心坐标以及所述第一坐标,对所述第一旋转关系进行优化处理,包括:The method according to claim 8, wherein the optimizing the first rotation relationship based on the center coordinates and the first coordinates comprises:
    基于所述中心坐标以及所述第一坐标，对所述第一旋转关系、所述设备坐标系与所述参考坐标系之间的平移关系、所述设备坐标系与所述参考坐标系之间的缩放关系进行优化处理。optimizing, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
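Claim 10's joint optimization of rotation, translation, and scale between per-frame camera centers and their sensor-derived first coordinates can be realized, for illustration, by a closed-form Umeyama-style similarity alignment. The claim does not fix the algorithm, so this is a substitute sketch with hypothetical names:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares scale s, rotation R, translation t with dst_i ~ s * R @ src_i + t.

    src, dst: (N, 3) arrays of corresponding points (N >= 3, not collinear).
    Here src would hold per-frame camera centers and dst their
    sensor-derived coordinates in the reference coordinate system.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d                  # centered point sets
    cov = xd.T @ xs / len(src)                        # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:      # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)                # total variance of src
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying a known similarity transform to a point set and refitting recovers the same scale, rotation, and translation.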
  11. 如权利要求10所述的方法，其特征在于，所述根据优化后的第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，包括：The method according to claim 10, wherein the determining the position information of the pixels on the first image in the reference coordinate system according to the optimized first rotation relationship comprises:
    根据优化后的第一旋转关系、优化后的平移关系以及优化后的缩放关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。According to the optimized first rotation relationship, the optimized translation relationship, and the optimized zoom relationship, the position information of the pixel on the first image in the reference coordinate system is determined.
  12. 如权利要求8所述的方法,其特征在于,所述传感数据包括全球定位传感器的采集数据和高度传感器的采集数据。The method according to claim 8, wherein the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
  13. 如权利要求1至12中任一项所述的方法，其特征在于，所述根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，包括：The method according to any one of claims 1 to 12, wherein the determining the position information of the pixels on the first image in the reference coordinate system according to the first rotation relationship comprises:
    根据所述第一旋转关系,确定所述拍摄设备采集的第二图像集合中每一帧图像上的像素点在所述参考坐标系中的位置信息;Determine, according to the first rotation relationship, position information in the reference coordinate system of pixels on each frame of images in the second image set collected by the photographing device;
    其中,所述第二图像集合包括所述第一图像以及至少一帧第三图像。Wherein, the second image set includes the first image and at least one frame of third image.
  14. 如权利要求13所述的方法,其特征在于,所述第一图像是所述第二图像集合中的第一帧图像。The method of claim 13, wherein the first image is the first frame of the image in the second image set.
  15. 如权利要求13所述的方法，其特征在于，所述根据所述第一旋转关系，确定所述拍摄设备采集的第二图像集合中每一帧图像上的像素点在所述参考坐标系中的位置信息，包括：The method according to claim 13, wherein the determining, according to the first rotation relationship, the position information in the reference coordinate system of the pixels on each frame of image in the second image set collected by the photographing device comprises:
    利用视觉算法确定所述第三图像与所述第一图像之间的相对变换关系;Using a visual algorithm to determine the relative transformation relationship between the third image and the first image;
    基于所述第一旋转关系以及所述相对变换关系,确定采集所述第三图像时所述设备坐标系与所述参考坐标系之间的第四旋转关系;Based on the first rotation relationship and the relative transformation relationship, determining a fourth rotation relationship between the device coordinate system and the reference coordinate system when acquiring the third image;
    根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据所述第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。determining, according to the first rotation relationship, the position information of the pixels on the first image in the reference coordinate system, and determining, according to the fourth rotation relationship, the position information of the pixels on the third image in the reference coordinate system.
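In claim 15 the fourth rotation relationship follows by chaining the visually estimated relative transform with the first rotation relationship. A sketch, assuming `R_rel` is the rotation taking the device frame at the third image into the device frame at the first image (e.g. recovered from feature matching; the direction convention is an assumption):

```python
import numpy as np

def fourth_rotation(R_first, R_rel):
    """Fourth rotation relationship (device frame at the third image -> reference).

    R_first: first rotation relationship, device frame at the first image -> reference
    R_rel:   relative rotation, device frame at the third image -> device frame
             at the first image (from a visual algorithm)
    """
    # device@third -> device@first -> reference
    return R_first @ R_rel
```

If the relative rotation is the identity (the camera did not rotate between the two frames), the fourth rotation relationship equals the first.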
  16. 如权利要求15所述的方法，其特征在于，所述根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据所述第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息，包括：The method according to claim 15, wherein the determining, according to the first rotation relationship, the position information of the pixels on the first image in the reference coordinate system, and determining, according to the fourth rotation relationship, the position information of the pixels on the third image in the reference coordinate system comprises:
    按照预设的第二算法,对所述第一旋转关系以及所述第四旋转关系进行优化处理;Performing optimization processing on the first rotation relationship and the fourth rotation relationship according to a preset second algorithm;
    根据优化后的第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据优化后的第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。determining, according to the optimized first rotation relationship, the position information of the pixels on the first image in the reference coordinate system, and determining, according to the optimized fourth rotation relationship, the position information of the pixels on the third image in the reference coordinate system.
  17. 如权利要求16所述的方法,其特征在于,所述按照预设的第二算法,对所述第一旋转关系以及所述第四旋转关系进行优化处理,包括:The method according to claim 16, wherein the optimizing the first rotation relationship and the fourth rotation relationship according to a preset second algorithm comprises:
    获取所述拍摄设备采集所述第二图像集合中每一帧图像时的第二姿态数据;Acquiring second posture data when the shooting device collects each frame of images in the second image set;
    根据所述第二姿态数据,确定所述第二图像集合中每一帧图像在采集时所述设备坐标系与所述参考坐标系之间的第五旋转关系;Determine, according to the second posture data, a fifth rotation relationship between the device coordinate system and the reference coordinate system when each frame of image in the second image set is collected;
    对所述第一旋转关系以及所述第四旋转关系进行优化处理，以使得优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件。optimizing the first rotation relationship and the fourth rotation relationship, so that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy a preset second minimization condition.
  18. 如权利要求15所述的方法，其特征在于，所述优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件的确定方式，包括：The method according to claim 15, wherein the manner of determining that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy the preset second minimization condition comprises:
    确定第一数据与第二数据之间的和满足预设的第三最小化条件;Determining that the sum between the first data and the second data satisfies a preset third minimization condition;
    其中，所述第一数据为基于所述第二图像集合中每一帧图像上的像素点计算得到的重投影误差之和，所述第二数据为所述第二图像集合中每一帧图像对应的旋转关系偏差之和，所述旋转关系偏差是指所述第一旋转关系或所述第四旋转关系与对应的第五旋转关系之间的差异。wherein the first data is the sum of reprojection errors calculated from the pixels on each frame of image in the second image set, the second data is the sum of the rotation relationship deviations corresponding to each frame of image in the second image set, and a rotation relationship deviation refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
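The third minimization condition of claim 18 combines a reprojection term with a rotation-prior term. A sketch of evaluating such a cost; the uniform weighting, the pinhole projection, and the chordal (Frobenius) distance used for the rotation deviation are all illustrative assumptions, as the claim only requires that the sum be minimized:

```python
import numpy as np

def project(K, R_ref_to_dev, t, X):
    """Project a 3D point X (reference frame) into pixel coordinates (pinhole model)."""
    x_cam = R_ref_to_dev @ X + t
    uv1 = K @ x_cam
    return uv1[:2] / uv1[2]

def total_cost(observations, priors, weight=1.0):
    """observations: list of (K, R_ref_to_dev, t, X, uv_measured) tuples (first data).
    priors: list of (R_optimized, R_fifth_from_posture) pairs (second data).
    Returns the reprojection-error sum plus the weighted rotation-deviation sum."""
    reproj = sum(np.sum((project(K, R, t, X) - uv) ** 2)
                 for K, R, t, X, uv in observations)
    rot_dev = sum(np.sum((Ro - Rp) ** 2) for Ro, Rp in priors)
    return reproj + weight * rot_dev
```

A bundle-adjustment-style optimizer would then vary the rotation parameters to drive this sum toward its minimum; perfect observations and priors give a cost of zero.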
  19. 一种关于拍摄设备的数据处理装置，其特征在于，所述装置应用于图像处理设备，该图像处理设备能够获取拍摄设备的姿态数据，所述拍摄设备设于移动平台上，且所述移动平台的运动状态为直线运动状态，所述装置包括：A data processing apparatus related to a photographing device, characterized in that the apparatus is applied to an image processing device capable of acquiring posture data of the photographing device, the photographing device is arranged on a mobile platform, and the motion state of the mobile platform is a linear motion state, the apparatus comprising:
    获取模块,用于获取所述拍摄设备采集第一图像时的第一姿态数据;An acquiring module, configured to acquire first posture data when the photographing device acquires the first image;
    确定模块,用于根据所述第一姿态数据,确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系;A determining module, configured to determine, according to the first posture data, a first rotation relationship between the device coordinate system of the shooting device and a reference coordinate system when the first image is collected;
    处理模块,用于根据所述第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。The processing module is configured to determine the position information of the pixel on the first image in the reference coordinate system according to the first rotation relationship.
  20. 一种图像处理设备，其特征在于，该图像处理设备能够获取拍摄设备的姿态数据，所述拍摄设备设于移动平台上，且所述移动平台的运动状态为直线运动状态，所述图像处理设备包括：通信接口单元、处理单元；An image processing device, characterized in that the image processing device is capable of acquiring posture data of a photographing device, the photographing device is arranged on a mobile platform, and the motion state of the mobile platform is a linear motion state, the image processing device comprising a communication interface unit and a processing unit;
    所述通信接口单元,用于与外部设备通信,获取外部设备的数据;The communication interface unit is used to communicate with an external device and obtain data of the external device;
    所述处理单元，用于通过所述通信接口单元获取所述拍摄设备采集第一图像时的第一姿态数据；根据所述第一姿态数据，确定采集所述第一图像时所述拍摄设备的设备坐标系与参考坐标系之间的第一旋转关系；根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息。the processing unit is configured to: acquire, through the communication interface unit, first posture data of the photographing device when the first image is collected; determine, according to the first posture data, a first rotation relationship between the device coordinate system of the photographing device and a reference coordinate system when the first image is collected; and determine, according to the first rotation relationship, the position information of the pixels on the first image in the reference coordinate system.
  21. 如权利要求20所述的图像处理设备，其特征在于，所述参考坐标系是指：地理坐标系、或大地坐标系。The image processing device according to claim 20, wherein the reference coordinate system refers to a geographic coordinate system or a geodetic coordinate system.
  22. 如权利要求20所述的图像处理设备,其特征在于,所述拍摄设备设于所述移动平台的云台上。22. The image processing device of claim 20, wherein the photographing device is provided on a pan-tilt of the mobile platform.
  23. 如权利要求22所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 22, wherein the processing unit is configured to
    根据所述第一姿态数据获取所述云台的云台坐标系与所述参考坐标系之间的第二旋转关系，并获取所述拍摄设备的设备坐标系到所述云台坐标系之间的第三旋转关系；acquire, according to the first posture data, a second rotation relationship between the pan/tilt coordinate system of the pan/tilt and the reference coordinate system, and acquire a third rotation relationship from the device coordinate system of the photographing device to the pan/tilt coordinate system;
    根据所述第二旋转关系和所述第三旋转关系,确定采集所述第一图像时所述设备坐标系与参考坐标系之间的第一旋转关系。According to the second rotation relationship and the third rotation relationship, a first rotation relationship between the device coordinate system and a reference coordinate system when the first image is acquired is determined.
  24. 如权利要求23所述的图像处理设备,其特征在于,所述第三旋转关系是根据所述拍摄设备与所述云台的装配关系配置的。23. The image processing device according to claim 23, wherein the third rotation relationship is configured according to the assembly relationship between the photographing device and the pan/tilt.
  25. 根据权利要求21所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 21, wherein the processing unit is configured to
    按照预设的第一算法,对所述第一旋转关系进行优化处理;Optimizing the first rotation relationship according to the preset first algorithm;
    根据优化后的第一旋转关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。According to the optimized first rotation relationship, the position information of the pixel on the first image in the reference coordinate system is determined.
  26. 根据权利要求25所述的图像处理设备,其特征在于,所述第一姿态数据包括俯仰角、横滚角、偏航角;The image processing device according to claim 25, wherein the first attitude data includes a pitch angle, a roll angle, and a yaw angle;
    所述第一旋转关系的优化处理包括对所述偏航角进行优化处理。The optimization processing of the first rotation relationship includes optimizing the yaw angle.
  27. 如权利要求26所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 26, wherein the processing unit is configured to
    获取所述拍摄设备采集的第一图像集合中每一帧图像采集时,所述设备坐标系的中心坐标以及利用传感器数据确定的所述中心坐标在所述参考坐标系中的第一坐标;Acquiring the center coordinates of the device coordinate system and the first coordinates in the reference coordinate system of the center coordinates determined by using sensor data for each frame of image acquisition in the first image set collected by the shooting device;
    基于所述中心坐标以及所述第一坐标，对所述第一旋转关系进行优化处理，以使得基于优化后的第一旋转关系确定的各个所述中心坐标在所述参考坐标系中的第二坐标与对应的第一坐标之间的差异和满足预设的第一最小化条件；optimize the first rotation relationship based on the center coordinates and the first coordinates, so that the sum of the differences between the second coordinates, in the reference coordinate system, of each of the center coordinates determined based on the optimized first rotation relationship and the corresponding first coordinates satisfies a preset first minimization condition;
    其中,所述第一图像集合包括所述第一图像以及至少一帧第二图像。Wherein, the first image set includes the first image and at least one frame of second image.
  28. 如权利要求27所述的图像处理设备,其特征在于,所述第二图像为在采集时间上邻近所述第一图像的图像。The image processing device according to claim 27, wherein the second image is an image adjacent to the first image in acquisition time.
  29. 如权利要求27所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 27, wherein the processing unit is configured to
    基于所述中心坐标以及所述第一坐标，对所述第一旋转关系、所述设备坐标系与所述参考坐标系之间的平移关系、所述设备坐标系与所述参考坐标系之间的缩放关系进行优化处理。optimize, based on the center coordinates and the first coordinates, the first rotation relationship, the translation relationship between the device coordinate system and the reference coordinate system, and the scaling relationship between the device coordinate system and the reference coordinate system.
  30. 如权利要求29所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 29, wherein the processing unit is configured to
    根据优化后的第一旋转关系、优化后的平移关系以及优化后的缩放关系,确定所述第一图像上的像素点在所述参考坐标系中的位置信息。According to the optimized first rotation relationship, the optimized translation relationship, and the optimized zoom relationship, the position information of the pixel on the first image in the reference coordinate system is determined.
  31. 如权利要求27所述的图像处理设备,其特征在于,所述传感数据包括全球定位传感器的采集数据和高度传感器的采集数据。28. The image processing device of claim 27, wherein the sensor data includes data collected by a global positioning sensor and data collected by a height sensor.
  32. 如权利要求20至31中任一项所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to any one of claims 20 to 31, wherein the processing unit is configured to
    根据所述第一旋转关系,确定所述拍摄设备采集的第二图像集合中每一帧图像上的像素点在所述参考坐标系中的位置信息;Determine, according to the first rotation relationship, position information in the reference coordinate system of pixels on each frame of images in the second image set collected by the photographing device;
    其中,所述第二图像集合包括所述第一图像以及至少一帧第三图像。Wherein, the second image set includes the first image and at least one frame of third image.
  33. 如权利要求32所述的图像处理设备,其特征在于,所述第一图像是所述第二图像集合中的第一帧图像。The image processing device according to claim 32, wherein the first image is a first frame image in the second image set.
  34. 如权利要求32所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 32, wherein the processing unit is configured to
    利用视觉算法确定所述第三图像与所述第一图像之间的相对变换关系;Using a visual algorithm to determine the relative transformation relationship between the third image and the first image;
    基于所述第一旋转关系以及所述相对变换关系,确定采集所述第三图像时所述设备坐标系与所述参考坐标系之间的第四旋转关系;Based on the first rotation relationship and the relative transformation relationship, determining a fourth rotation relationship between the device coordinate system and the reference coordinate system when acquiring the third image;
    根据所述第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据所述第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。determine, according to the first rotation relationship, the position information of the pixels on the first image in the reference coordinate system, and determine, according to the fourth rotation relationship, the position information of the pixels on the third image in the reference coordinate system.
  35. 如权利要求34所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 34, wherein the processing unit is configured to
    按照预设的第二算法，对所述第一旋转关系以及所述第四旋转关系进行优化处理；optimize the first rotation relationship and the fourth rotation relationship according to a preset second algorithm;
    根据优化后的第一旋转关系，确定所述第一图像上的像素点在所述参考坐标系中的位置信息，并根据优化后的第四旋转关系，确定所述第三图像上的像素点在所述参考坐标系中的位置信息。determine, according to the optimized first rotation relationship, the position information of the pixels on the first image in the reference coordinate system, and determine, according to the optimized fourth rotation relationship, the position information of the pixels on the third image in the reference coordinate system.
  36. 如权利要求35所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 35, wherein the processing unit is configured to
    获取所述拍摄设备采集所述第二图像集合中每一帧图像时的第二姿态数据;Acquiring second posture data when the shooting device collects each frame of images in the second image set;
    根据所述第二姿态数据,确定所述第二图像集合中每一帧图像在采集时所述设备坐标系与所述参考坐标系之间的第五旋转关系;Determine, according to the second posture data, a fifth rotation relationship between the device coordinate system and the reference coordinate system when each frame of image in the second image set is collected;
    对所述第一旋转关系以及所述第四旋转关系进行优化处理，以使得优化后的第一旋转关系以及优化后的第四旋转关系与对应的第五旋转关系之间的差异满足预设的第二最小化条件。optimize the first rotation relationship and the fourth rotation relationship, so that the differences between the optimized first rotation relationship and the optimized fourth rotation relationship and the corresponding fifth rotation relationships satisfy a preset second minimization condition.
  37. 如权利要求34所述的图像处理设备,其特征在于,所述处理单元,用于The image processing device according to claim 34, wherein the processing unit is configured to
    确定第一数据与第二数据之间的和满足预设的第三最小化条件;Determining that the sum between the first data and the second data satisfies a preset third minimization condition;
    其中，所述第一数据为基于所述第二图像集合中每一帧图像上的像素点计算得到的重投影误差之和，所述第二数据为所述第二图像集合中每一帧图像对应的旋转关系偏差之和，所述旋转关系偏差是指所述第一旋转关系或所述第四旋转关系与对应的第五旋转关系之间的差异。wherein the first data is the sum of reprojection errors calculated from the pixels on each frame of image in the second image set, the second data is the sum of the rotation relationship deviations corresponding to each frame of image in the second image set, and a rotation relationship deviation refers to the difference between the first rotation relationship or the fourth rotation relationship and the corresponding fifth rotation relationship.
PCT/CN2019/080500 2019-03-29 2019-03-29 Data processing method and apparatus related to photographing device, and image processing device WO2020198963A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980005043.XA CN111247389B (en) 2019-03-29 2019-03-29 Data processing method and device for shooting equipment and image processing equipment
PCT/CN2019/080500 WO2020198963A1 (en) 2019-03-29 2019-03-29 Data processing method and apparatus related to photographing device, and image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/080500 WO2020198963A1 (en) 2019-03-29 2019-03-29 Data processing method and apparatus related to photographing device, and image processing device

Publications (1)

Publication Number Publication Date
WO2020198963A1 true WO2020198963A1 (en) 2020-10-08

Family

ID=70879115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/080500 WO2020198963A1 (en) 2019-03-29 2019-03-29 Data processing method and apparatus related to photographing device, and image processing device

Country Status (2)

Country Link
CN (1) CN111247389B (en)
WO (1) WO2020198963A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827730A (en) * 2022-04-19 2022-07-29 咪咕文化科技有限公司 Video cover selecting method, device, equipment and storage medium
CN114812513A (en) * 2022-05-10 2022-07-29 北京理工大学 Unmanned aerial vehicle positioning system and method based on infrared beacon

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100208057A1 (en) * 2009-02-13 2010-08-19 Peter Meier Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN103822615A (en) * 2014-02-25 2014-05-28 北京航空航天大学 Unmanned aerial vehicle ground target real-time positioning method with automatic extraction and gathering of multiple control points
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105825518A (en) * 2016-03-31 2016-08-03 西安电子科技大学 Sequence image rapid three-dimensional reconstruction method based on mobile platform shooting
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106500669A (en) * 2016-09-22 2017-03-15 浙江工业大学 A kind of Aerial Images antidote based on four rotor IMU parameters
CN108765298A (en) * 2018-06-15 2018-11-06 中国科学院遥感与数字地球研究所 Unmanned plane image split-joint method based on three-dimensional reconstruction and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101128913B1 (en) * 2009-05-07 2012-03-27 에스엔유 프리시젼 주식회사 Vision inspection system and method for converting coordinates using the same
CN101922930B (en) * 2010-07-08 2013-11-06 西北工业大学 Aviation polarization multi-spectrum image registration method
US9160980B2 (en) * 2011-01-11 2015-10-13 Qualcomm Incorporated Camera-based inertial sensor alignment for PND
CN102359780B (en) * 2011-10-26 2014-04-23 中国科学技术大学 Ground target positioning method applied into video monitoring system
CN105698762B (en) * 2016-01-15 2018-02-23 中国人民解放军国防科学技术大学 Target method for rapidly positioning based on observation station at different moments on a kind of unit flight path
CN105758426B (en) * 2016-02-19 2019-07-26 深圳市杉川机器人有限公司 The combined calibrating method of the multisensor of mobile robot
CN107314771B (en) * 2017-07-04 2020-04-21 合肥工业大学 Unmanned aerial vehicle positioning and attitude angle measuring method based on coding mark points
CN108645398A (en) * 2018-02-09 2018-10-12 深圳积木易搭科技技术有限公司 A kind of instant positioning and map constructing method and system based on structured environment
CN108845335A (en) * 2018-05-07 2018-11-20 中国人民解放军国防科技大学 Unmanned aerial vehicle ground target positioning method based on image and navigation information
CN108933896B (en) * 2018-07-30 2020-10-02 长沙全度影像科技有限公司 Panoramic video image stabilization method and system based on inertial measurement unit


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222814A (en) * 2022-06-02 2022-10-21 珠海云洲智能科技股份有限公司 Rescue equipment guiding method and device, terminal equipment and storage medium
CN115222814B (en) * 2022-06-02 2023-09-01 珠海云洲智能科技股份有限公司 Rescue equipment guiding method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN111247389B (en) 2022-03-25
CN111247389A (en) 2020-06-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19923124

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19923124

Country of ref document: EP

Kind code of ref document: A1