WO2020207108A1 - Image processing method, device and system, and robot - Google Patents


Info

Publication number
WO2020207108A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, angle, camera, image processing, processing method
Prior art date
Application number
PCT/CN2020/074636
Other languages
English (en)
French (fr)
Inventor
陈志强 (Chen Zhiqiang)
崔锦 (Cui Jin)
赵延平 (Zhao Yanping)
林东 (Lin Dong)
胡斌 (Hu Bin)
彭志 (Peng Zhi)
Original Assignee
清华大学 (Tsinghua University)
同方威视技术股份有限公司 (Nuctech Company Limited)
Application filed by 清华大学 (Tsinghua University) and 同方威视技术股份有限公司 (Nuctech Company Limited)
Publication of WO2020207108A1 publication Critical patent/WO2020207108A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 Navigation by using measurements of speed or acceleration
    • G01C 21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/18 Stabilised platforms, e.g. by gyroscope
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/80 Camera processing pipelines; Components thereof

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular to an image processing method, device and system, a robot, and a computer-readable storage medium.
  • the driving route may bend, and the direction of movement may change many times.
  • the image of the target captured by the camera fixed on the mobile device during this period is transmitted back to the display device in the user's hand.
  • the direction of the image will change with the direction of the mobile device, causing the target image observed by the user to rotate constantly, which is very inconvenient.
  • the user cannot determine the direction of the target in the scene where the camera is located, and may even experience dizziness.
  • the related technology uses an image rotation method to correct the tilt of the image. For example, the rotation angle of the image is determined according to the user's instruction, and the image rotation at any angle is realized. There is even a method that uses hardware to keep the direction of the camera always constant when the direction of the mobile device changes, so that images of a stationary target relative to the user's direction can be obtained.
  • embodiments of the present disclosure propose a technical solution that can automatically rotate an image according to the pose of the mobile device, so that the displayed image matches the position of the observer, thereby improving the viewing experience.
  • an image processing method including: acquiring an image taken by a camera mounted on a mobile device, and determining the attitude angle of the camera when the image is taken; calculating, according to the attitude angle, the rotation angle of the image relative to the reference direction; and using a transformation matrix based on the rotation angle to perform rotation processing on the image.
  • the attitude angle of the camera when shooting the image is determined according to the attitude angle output by the attitude sensor and at least one of the angular velocity information of the three axes and the acceleration information of the three axes.
  • determining the attitude angle of the camera when shooting the image includes: using a quaternion method to calculate the attitude angle of the camera when shooting the image, where the attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle.
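As an illustrative sketch of the quaternion method mentioned above (a generic Z-Y-X Euler conversion; the function name and convention are ours, not necessarily the patent's exact formulation):

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion to (roll, pitch, yaw) in radians,
    using the aerospace Z-Y-X rotation convention."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Clamp guards against floating-point drift outside [-1, 1].
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```

For example, the quaternion (√2/2, 0, 0, √2/2) corresponds to a pure 90-degree yaw with zero roll and pitch.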
  • calculating the rotation angle of the image according to the attitude angle includes: using the correspondence between the ground coordinate system and the coordinate system of the mobile device, and calculating, according to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera, the rotation angle corresponding to that attitude angle.
  • the transformation matrix is, for example, the standard two-dimensional rotation matrix T = [[cos θ, −sin θ], [sin θ, cos θ]], where θ represents the rotation angle.
  • the image processing method further includes: outputting the rotated image to the user's display device, so that the direction of the coordinate system of the scene where the target captured by the camera is located remains relatively stationary with respect to the reference direction of the display device; likewise, within the camera field of view, the direction of that coordinate system remains relatively stationary with respect to the reference direction of the camera field of view.
  • the reference direction of the display device is stationary relative to the display device, and can be any fixed straight line on the display surface of the display device.
  • the image processing method further includes: cropping the image, where the position of the center of the cropped image on the display device remains substantially the same as the position of the center of the image before cropping.
  • when the image includes a plurality of images,
  • performing cropping processing on the image includes: cropping each image to a size smaller than the inscribed circle of the overlapping part of the multiple images; or cropping each image to a size larger than that inscribed circle and filling the data-free areas that appear in partially rotated images with a single color.
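As an illustrative calculation (the function name is ours, not from the patent): for an image of width W and height H rotated about its center by an arbitrary angle, the overlap of all rotated copies always contains the centered circle of radius min(W, H)/2, and the largest axis-aligned square inside a circle of radius r has side r·√2:

```python
import math

def safe_crop_side(width, height):
    """Side length of the largest centered square guaranteed to stay
    inside the inscribed circle of the overlap of arbitrarily rotated
    copies of a width x height image (radius = half the short side)."""
    radius = min(width, height) / 2.0
    return radius * math.sqrt(2.0)
```

Cropping to at most this side length corresponds to the first strategy above, in which no data-free corners can ever appear.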
  • the image processing method further includes: performing filtering processing on the image before performing rotation processing on the image.
  • an image processing device including: an acquiring unit configured to acquire an image captured by a camera installed on a mobile device; and a determining unit configured to determine that the camera is shooting The posture angle of the image; the calculation unit is configured to calculate the rotation angle of the image relative to the reference direction according to the posture angle; the rotation unit is configured to use a transformation matrix based on the rotation angle to The image is rotated.
  • the image processing device further includes at least one of a filtering unit and a cropping unit, wherein: the filtering unit is configured to filter the image; the cropping unit is configured to The image is cropped, and the position of the image center after cropping on the display device and the position of the image center before cropping on the display device remain basically unchanged.
  • an image processing device including: a memory; and a processor coupled to the memory, the processor being configured to be based on instructions stored in the memory, Perform the image processing method as described in any of the foregoing embodiments.
  • an image processing method including: acquiring an image taken by a camera mounted on a mobile device; determining the attitude angle of the camera when the image is taken; calculating correction parameters of the image according to the attitude angle; and performing correction processing on the image using the correction parameters.
  • the correction parameter includes at least one of a rotation angle and a distortion correction parameter;
  • the correction process includes at least one of a rotation process, a cropping process, and a distortion correction process.
  • the attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle
  • calculating the correction parameters of the image according to the attitude angle includes: calculating the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera; and calculating the distortion correction parameters according to the attitude angles whose angle planes are not perpendicular to the shooting direction of the camera.
  • a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the image processing method as described in any of the foregoing embodiments is implemented.
  • an image processing system including the image processing device of any one of the foregoing embodiments.
  • the image processing system further includes at least one of an attitude sensor, a camera, and a display device, wherein: the attitude sensor is installed on a mobile device and is used to obtain the attitude angle of the camera; The camera is installed on the mobile device and used for shooting images; the display device is used for displaying processed images.
  • a robot for inspecting the bottom surface or top surface of a vehicle including the image processing system of any one of the foregoing embodiments.
  • the image taken by the mobile device is rotated correspondingly according to the posture of the mobile device to obtain an image in a fixed direction. Displaying such an image can reduce user operations and improve work efficiency and user experience.
  • FIG. 1A is a flowchart showing an image processing method according to some embodiments of the present disclosure
  • FIG. 1B is a flowchart showing an image processing method according to other embodiments of the present disclosure.
  • FIG. 1C is a flowchart showing an image processing method according to still other embodiments of the present disclosure.
  • FIG. 2 is a schematic diagram showing an image processing method according to some embodiments of the present disclosure
  • FIG. 3 is a schematic diagram showing image processing methods according to other embodiments of the present disclosure.
  • FIG. 4 is a block diagram showing an image processing apparatus according to some embodiments of the present disclosure.
  • FIG. 5 is a block diagram showing image processing apparatuses according to other embodiments of the present disclosure.
  • FIG. 6 is a block diagram showing an image processing system according to some embodiments of the present disclosure.
  • Figure 7 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
  • FIG. 1A is a flowchart illustrating an image processing method according to some embodiments of the present disclosure. As shown in FIG. 1A, the image processing method includes steps S1, S3, S5, and S7.
  • step S1 an image taken by a camera mounted on the mobile device is acquired.
  • the mobile device is, for example, a quadcopter, a self-balancing vehicle, or a four-wheel differential-steering intelligent mobile chassis.
  • the camera is, for example, a video camera.
  • the camera can be installed on the body of the mobile device or on the robotic arm of the mobile device.
  • the captured images may be multiple images, continuous video, or discrete images at a very low frame rate.
  • step S3 the attitude angle of the camera when the image is taken is determined.
  • the attitude angle of the camera when capturing an image is determined according to the attitude angle output by the attitude sensor and at least one of the angular velocity information of the three axes and the acceleration information of the three axes.
  • the attitude sensor can use an inertial measurement unit (IMU).
  • the inertial measurement unit is, for example, a gyroscope sensor, etc., and can be fixedly installed on a part of the mobile device that is fixedly connected to the camera body (for example, a robotic arm).
  • the camera body here may refer to a housing that contains the camera lens, etc.
  • the angular velocity and acceleration of the camera body obviously do not include the acceleration of the camera lens during zooming and stretching.
  • the attitude angle of the camera represents the attitude angle of the camera body.
  • the attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle.
  • the attitude angle of the mobile device is calculated using, for example, a quaternion method. It should be understood that other attitude fusion algorithms, such as a first-order complementary algorithm or a Kalman filter algorithm, can also be used to obtain the attitude angle of the camera from the three-axis acceleration and angular velocity information. Of course, the attitude angle of the camera can also be determined directly from the attitude angle output by the attitude sensor.
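The first-order complementary algorithm mentioned above can be sketched for a single axis as follows (a simplified illustration with our own names, not the patent's implementation): the gyroscope rate is integrated for short-term accuracy, while the accelerometer-derived angle corrects long-term drift:

```python
def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a first-order complementary filter.

    angle:       previous fused angle estimate (radians)
    gyro_rate:   angular velocity from the gyroscope (rad/s)
    accel_angle: angle derived from the accelerometer (radians)
    dt:          time step (seconds)
    alpha:       weight trusting the integrated gyro over the accelerometer
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Repeated over time, the estimate follows fast gyro motion while slowly converging toward the accelerometer angle.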
  • when the attitude sensor outputs both the attitude angle and the three-axis acceleration and angular velocity information, either option can be chosen according to actual needs: the output attitude angle can be used directly as the attitude angle of the camera, or the attitude angle of the camera can be calculated from the three-axis acceleration and angular velocity information.
  • since the attitude sensor is mounted on a part of the mobile device that is rigidly connected to the camera, the attitude angle of the camera can be taken as equal to the attitude angle of that part of the mobile device. On this basis, the tilt angle of the image taken by the camera can be calculated from the attitude angle of the mobile device, and thus the image rotation angle required to keep the image untilted can be obtained.
  • step S5 based on the attitude angle, the rotation angle of the image relative to the reference direction is calculated.
  • the correspondence between the ground coordinate system and the coordinate system of the mobile device is used to calculate, according to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera, the rotation angle corresponding to that attitude angle.
  • the correspondence between the ground coordinate system and the coordinate system of the mobile device is described by three Euler angles, namely the pitch angle, roll angle, and yaw angle, which reflect the attitude of the mobile device relative to the ground. From this attitude, the tilt angle of the image taken by the camera in a predetermined plane, for example the tilt angle relative to the reference direction, can be calculated. The required rotation angle of the image can then be calculated so that the target in the image maintains a fixed direction relative to the user, that is, stays consistent with the reference direction.
  • the shooting direction refers to the direction of the camera's optical axis, that is, the direction in which light leaving the last optical element (such as a lens, mirror, or prism) points toward the subject.
  • when the shooting direction of the camera is up or down, if the mobile device carrying the camera rotates in the horizontal plane, the angle plane of the yaw angle (that is, the plane in which the projection of the body axis onto the horizontal plane lies, i.e. the horizontal plane) is perpendicular to the shooting direction, so the rotation angle of the image can be calculated according to the yaw angle of the mobile device.
  • when the shooting direction of the camera is forward, if the mobile device or its mechanical arm rotates in the vertical plane, the rotation angle of the image can be calculated according to the roll angle of the mobile device.
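The selection rule in the two cases above can be sketched as follows (function and parameter names are illustrative, not from the patent): the image is counter-rotated by whichever attitude angle has its angle plane perpendicular to the shooting direction:

```python
def image_rotation_angle(pitch, roll, yaw, shooting_direction):
    """Rotation to apply to the image so a stationary target keeps the
    reference direction; the sign convention assumes the image is
    counter-rotated against the device's rotation."""
    if shooting_direction in ("up", "down"):
        return -yaw   # device rotates in the horizontal plane
    if shooting_direction == "forward":
        return -roll  # device or arm rotates in the vertical plane
    raise ValueError("unsupported shooting direction: " + shooting_direction)
```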
  • step S7 the image is rotated using the transformation matrix based on the rotation angle.
  • the transformation matrix can be any suitable rotation transformation matrix, for example the two-dimensional rotation matrix T = [[cos θ, −sin θ], [sin θ, cos θ]], where θ represents the rotation angle.
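A minimal NumPy sketch of the rotation in step S7 (nearest-neighbor inverse mapping; the function names are ours, and a production pipeline would use an optimized library routine with interpolation):

```python
import numpy as np

def rotation_matrix(theta):
    """Standard 2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rotate_image(img, theta, fill=0):
    """Rotate a 2D image about its center by theta via inverse mapping:
    each output pixel is pulled from the inverse-rotated source location."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    inv = rotation_matrix(-theta)
    out = np.full_like(img, fill)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = inv @ np.stack([xs.ravel() - cx, ys.ravel() - cy])
    sx = np.rint(coords[0] + cx).astype(int)
    sy = np.rint(coords[1] + cy).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

Output pixels whose source falls outside the image form the data-free area, here filled with a single value as described for the cropping step.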
  • FIG. 1B is a flowchart showing an image processing method according to other embodiments of the present disclosure.
  • the difference between Fig. 1B and Fig. 1A is that Fig. 1B further includes steps S6, S8 and S9. Only the differences between FIG. 1B and FIG. 1A will be described below, and the similarities will not be repeated.
  • step S6 the image is filtered.
  • the filtering processing includes image enhancement filtering processing. Filtering is performed before the image is rotated, which can avoid jagged edges of the image during the rotation process, thereby ensuring the clarity of the output image and improving the user's viewing experience.
  • step S8 the image is cropped.
  • step S9 the image is output.
  • the image that has undergone rotation processing and cropping processing is output to the user's display device.
  • the reference direction may be a direction stationary relative to the camera field-of-view boundary (the frame of the camera field of view); for example, it may be the direction parallel to the long side of the camera field-of-view boundary in FIG. 2 and pointing to the right. When the camera rotates, this reference direction also rotates with the camera frame (shown by the thick arrow in Figure 2C).
  • the image in the camera field of view is always adaptively displayed on the display device, so the reference direction can be any fixed direction on the display device in FIG. 2, for example, as shown by the thick arrows in FIG. 2B and FIG. 2D.
  • there can be two reference directions, namely the reference direction in the camera's field of view and the reference direction on the display device.
  • the angle between the reference direction in the camera's field of view and a boundary of the camera's field of view is equal to the angle between the reference direction on the display device and the corresponding boundary of the display.
  • FIG. 2 is a schematic diagram illustrating an image processing method according to some embodiments of the present disclosure.
  • Figure 2 includes diagrams A-F.
  • the large rectangle represents the boundary of the camera's field of view, inside which is the area the camera can shoot; the image data area initially overlaps with the camera's field of view; the small square in the middle represents the area about to be transmitted to the user's display device, which is obtained by cropping from the image data area.
  • Diagrams B, D, and F show images displayed on the user's display device.
  • the image data shown in FIG. 2 includes a car, a lock, and a key; some diagrams show only part of the lock and the key.
  • FIG. 2D is the traditional display result
  • FIG. 2F is the display result of the method of the present disclosure.
  • in Fig. 2A, the extension direction of the photographed car coincides with the reference direction, and the camera field-of-view area coincides with the image data area.
  • in diagram B, the image transmitted from the camera is displayed on the display device, and the car in it maintains the reference direction and does not tilt.
  • the user holding the display device cannot determine the true direction of the target in the scene captured by the camera. If a virtual coordinate system is set for the shooting scene (if there is a stationary target in the scene, the virtual coordinate system and that stationary target remain relatively static), then as the camera rotates, the position coordinate system of the shooting scene moves relative to the reference direction.
  • the image data area is rotated and transformed so that the position coordinate system of the camera shooting scene will be consistent with the reference direction of the camera. At this time, the image data area will no longer overlap with the camera's field of view.
  • the dotted rectangle in Figure 2E represents the image data area rotated 45 degrees clockwise, and the solid rectangle represents the camera field of view. It can be seen that the image data area is rotated 45 degrees clockwise relative to the camera's field of view; for example, the car in the figure has rotated 45 degrees relative to the reference direction of the camera's field of view.
  • the rotated image needs to be cropped according to the display size of the display device, so as to be adaptively displayed on the display device.
  • the triangular area in FIG. 2E is a data-free area in the image with a rotation angle of 45 degrees, and is filled with black.
  • the image data subjected to the above-mentioned rotation processing is displayed as a non-tilted image on the display device after being cropped. That is, the user can always determine the real coordinate system direction of the shooting scene on the display device without rotating the display device.
  • the direction of the stationary object in the rotated image on the display device is consistent with the reference direction on the display device and will not be affected by camera rotation.
  • Fig. 3 shows a schematic diagram of an image processing method according to other embodiments of the present disclosure.
  • in FIG. 3, the horizontal direction is taken as the reference direction by way of example.
  • diagram A shows multiple images with different rotation angles, where a represents the captured image in the reference direction, b represents the captured image rotated 45 degrees clockwise, and c represents the captured image rotated 90 degrees clockwise.
  • d represents a captured image rotated clockwise by 135 degrees,
  • e represents the inscribed circle of the overlapping portion of the multiple images a–d, and
  • f represents a square smaller in size than the inscribed circle e.
  • the smiling face in the image is a static target.
  • Diagram B shows the image data area of the captured image d after the rotation process.
  • the data-free area in the rotated image is filled with a single color (for example, black), and then output to the display device for display to the user.
  • other distinguishable colors or patterns may also be used for filling, all of which fall within the protection scope of the present disclosure.
  • Diagrams C and D respectively show the images displayed on the display device after the image in diagram B is cropped in different ways.
  • Diagram C shows that the size of the image is cropped to a square f which is smaller than the size of the inscribed circle e. That is, the cropped image is directly displayed on the display device.
  • Diagram D shows the image cropped to a square f′ larger than the inscribed circle e; that is, the cropped image, with its data-free areas filled with a single color, is displayed directly on the display device.
  • the captured image is rotated correspondingly according to the posture of the camera to obtain an image in the reference direction. Displaying such an image can reduce user operations, improve work efficiency and user experience. It should be understood that the video can be equivalent to multiple images, so similar processing can be applied to the video taken by the mobile device to obtain a video in a fixed direction. Displaying such a video can also reduce user operations and improve work efficiency and user experience.
  • the above-mentioned image processing is real-time, that is, real-time processing is performed on the captured image or video, so that the real-time displayed image or video is in the reference direction. This can further improve the user's viewing experience.
  • FIG. 1C is a flowchart showing an image processing method according to other embodiments of the present disclosure.
  • the difference between FIG. 1C and FIG. 1A is that steps S5′ and S7′ in FIG. 1C differ from steps S5 and S7 in FIG. 1A. Only the differences between FIG. 1C and FIG. 1A will be described below, and the similarities will not be repeated.
  • when the camera not only rotates in the horizontal plane with the mobile device (that is, the yaw angle changes) but also pitches or rolls, the captured image may undergo distortion such as deformation, and distortion correction is then required.
  • step S5' the correction parameters of the image are calculated according to the attitude angle.
  • the correction parameter includes at least one of a rotation angle and a distortion correction parameter.
  • the attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle.
  • the rotation angle is similar to the rotation angle of step S5 in FIG. 1A.
  • step S7' correction processing is performed on the image using the correction parameters of the image.
  • the correction processing includes at least one of rotation processing, cropping processing, and distortion correction processing.
  • the rotation processing is similar to the rotation processing of step S7 in FIG. 1A.
  • the cropping processing is similar to the processing shown from diagram b to diagram b′ in FIG. 2.
  • the distortion correction processing includes, for example, a process of performing reverse trapezoidal transformation on an image that has undergone normal trapezoidal distortion.
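As a sketch of the idea behind such a correction (a generic direct-linear-transform estimate, not the patent's specific method), a trapezoidal distortion can be undone by a homography fitted from four point correspondences:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping four src points to four dst
    points via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the 8x9 system.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Mapping the corners of the distorted trapezoid back to the rectangle they should form, and then resampling the image through H, implements a reverse trapezoidal transformation.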
  • FIG. 4 is a block diagram showing an image processing apparatus according to some embodiments of the present disclosure.
  • the image processing device 40 includes: an acquisition unit 410, a determination unit 430, a calculation unit 450, and a rotation unit 470.
  • the acquiring unit 410 is configured to acquire an image captured by a camera installed on a mobile device, for example, step S1 as shown in FIG. 1A may be executed.
  • the determining unit 430 is configured to determine the posture angle of the camera when shooting the image, for example, step S3 as shown in FIG. 1A may be performed.
  • the calculation unit 450 is configured to calculate the rotation angle of the image with respect to the reference direction according to the attitude angle. For example, step S5 as shown in FIG. 1A may be executed.
  • the rotation unit 470 is configured to perform rotation processing on the image by using a transformation matrix based on the rotation angle. For example, step S7 as shown in FIG. 1A may be performed.
  • the image processing device 40 further includes a filtering unit 460.
  • the filtering unit 460 is configured to perform filtering processing on the image, for example, step S6 as shown in FIG. 1B may be performed.
  • the image processing device 40 further includes a cropping unit 480.
  • the cropping unit 480 is configured to perform cropping processing on the image; for example, step S8 as shown in FIG. 1B may be performed.
  • FIG. 5 is a block diagram showing image processing apparatuses according to other embodiments of the present disclosure.
  • the image processing device 50 includes a memory 510 and a processor 520 coupled to the memory 510.
  • the memory 510 is used to store instructions for executing corresponding embodiments of the image processing method.
  • the processor 520 is configured to execute the image processing method in any of the embodiments of the present disclosure based on instructions stored in the memory 510.
  • each step in the foregoing image processing method can be implemented by a processor, and can be implemented in any manner of software, hardware, firmware, or a combination thereof.
  • compared with solutions that keep the camera direction constant by means of hardware, hardware costs can be further reduced, and the size of the product can be effectively decreased because no hardware installation space is occupied.
  • embodiments of the present disclosure may also adopt the form of a computer program product implemented on one or more non-volatile storage media containing computer program instructions. Therefore, embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored, and when the instructions are executed by a processor, the image processing method in any of the foregoing embodiments is implemented.
  • An embodiment of the present disclosure also provides an image processing system, including the image processing device described in any of the foregoing embodiments.
  • Figure 6 is a block diagram illustrating an image processing system according to some embodiments of the present disclosure.
  • the image processing system 6 includes an image processing device 60.
  • the image processing device 60 is configured to execute the image processing method described in any of the foregoing embodiments.
  • the structure of the image processing device 60 may be similar to the aforementioned image processing device 40 or 50.
  • the image processing system 6 further includes a camera 611 and a posture sensor 612.
  • the camera 611 is used to capture images. As mentioned above, the camera 611 can be fixedly installed on a mobile device.
  • the attitude sensor 612 is used to obtain the attitude angle of the camera.
  • the attitude angle, and at least one of three-axis angular velocity information and three-axis acceleration information may be output by an attitude sensor such as an inertial measurement unit.
  • the inertial measurement unit can be a gyroscope sensor, which can be fixedly installed on a mobile device.
  • the image processing system 6 further includes a display device 621.
  • the display device 621 is used to display the processed image.
  • the image captured by the camera can maintain a fixed direction (that is, consistent with the reference direction) after processing, so the image in a fixed direction can be displayed on the display device.
  • the display device can be any product or component with a display function such as a mobile phone, a computer, a TV, a navigator, etc.
  • An embodiment of the present disclosure also provides a mobile device, including the image processing system described in any of the foregoing embodiments.
  • the mobile device is, for example, a robot, which is used to inspect the bottom or top surface of the vehicle.
  • Figure 7 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
  • the computer system can be represented in the form of a general-purpose computing device.
  • the computer system includes a memory 710, a processor 720, and a bus 700 connecting different system components.
  • the memory 710 may include, for example, system memory and a non-volatile storage medium.
  • the system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
  • the system memory may include volatile storage media, such as random access memory (RAM) and/or cache memory.
  • the non-volatile storage medium stores, for example, instructions for executing corresponding embodiments of the display method.
  • non-volatile storage media include, but are not limited to, disk storage, optical storage, flash memory, etc.
  • the processor 720 may be implemented as discrete hardware components such as a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, or discrete gates or transistors.
  • accordingly, each module, such as the judgment module and the determination module, may be implemented by a central processing unit (CPU) running instructions in the memory that execute the corresponding steps, or by a dedicated circuit that executes the corresponding steps.
  • the bus 700 may use any of a variety of bus structures, including, but not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, and a Peripheral Component Interconnect (PCI) bus.
  • the computer system may also include an input/output interface 730, a network interface 740, a storage interface 750, and so on. These interfaces 730, 740, 750, as well as the memory 710 and the processor 720, may be connected through the bus 700.
  • the input/output interface 730 provides a connection interface for input/output devices such as a display device, a mouse, and a keyboard.
  • the network interface 740 provides a connection interface for various networked devices.
  • the storage interface 750 provides a connection interface for external storage devices such as floppy disks, USB flash drives, and SD cards.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method, apparatus and system, and a robot. The image processing method comprises: acquiring an image captured by a camera mounted on a mobile device; determining an attitude angle of the camera at the time the image was captured; calculating, according to the attitude angle, a rotation angle of the image relative to a reference direction; and rotating the image using a transformation matrix based on the rotation angle.

Description

Image processing method, apparatus and system, and robot
Cross-reference to related application
This application is based on and claims priority to the CN application No. 201910275326.8 filed on April 8, 2019, the disclosure of which is hereby incorporated into this application in its entirety.
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method, apparatus and system, a robot, and a computer-readable storage medium.
Background
When a mobile device moves between multiple positions, its route may bend and its direction of motion may change many times. When images of a target, captured during this period by a camera fixed on the mobile device, are transmitted back to a display device in the user's hands, the orientation of the images changes with the direction of the mobile device, so that the target image observed by the user keeps rotating. This is very inconvenient: the user cannot judge the orientation of the target at the scene where the camera is located, and may even become dizzy.
The related art uses image rotation methods to correct image tilt. For example, the rotation angle of an image is determined according to a user instruction, enabling image rotation by an arbitrary angle. Some methods even use hardware to keep the orientation of the camera unchanged when the direction of the mobile device changes, so that images in which a stationary target keeps a constant orientation relative to the user can be obtained.
Summary
The inventors consider that none of the related art can automatically rotate images by an algorithm according to the attitude of the mobile device. In view of this, embodiments of the present disclosure propose a technical solution that can automatically rotate images according to the pose of the mobile device, so that the displayed image matches the position of the observer, thereby improving the viewing experience.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, comprising: acquiring an image captured by a camera mounted on a mobile device, and determining an attitude angle of the camera at the time the image was captured; calculating, according to the attitude angle, a rotation angle of the image relative to a reference direction; and rotating the image using a transformation matrix based on the rotation angle.
In some embodiments, the attitude angle of the camera at the time the image was captured is determined according to at least one of an attitude angle output by an attitude sensor, and three-axis angular velocity information and three-axis acceleration information.
In some embodiments, determining the attitude angle of the camera at the time the image was captured according to the three-axis angular velocity information and three-axis acceleration information comprises: calculating, using a quaternion method, the attitude angle of the camera at the time the image was captured, the attitude angle comprising at least one of a pitch angle, a roll angle, and a yaw angle.
In some embodiments, calculating the rotation angle of the image according to the attitude angle comprises: calculating, using the correspondence between a ground coordinate system and a coordinate system of the mobile device, the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera.
In some embodiments, the transformation matrix is the standard two-dimensional rotation transform
x′ = x·cos θ − y·sin θ
y′ = x·sin θ + y·cos θ
where (x, y) denotes the original position of a pixel of the image, (x′, y′) denotes the position of the pixel after rotation, and θ denotes the rotation angle.
In some embodiments, the image processing method further comprises: outputting the rotated image to the user's display device, so that the orientation, on the display device, of the position coordinate system of the scene where the target captured by the camera is located remains stationary relative to the reference direction of the display device; that is, the orientation of that coordinate system in the camera's field-of-view region remains stationary relative to the reference direction of the field-of-view region. In this case, if a stationary target exists in the captured image, the angle between the orientation of the stationary target on the display device and the reference direction of the display device remains unchanged. The reference direction of the display device is stationary relative to the display device and may be any fixed straight line on the display surface of the display device.
In some embodiments, the image processing method further comprises: cropping the image, wherein the position of the center of the cropped image on the display device remains substantially unchanged from the position of the center of the image before cropping.
In some embodiments, the image comprises multiple images, and cropping the images comprises: cropping each image to a size smaller than that of the inscribed circle of the overlapping portion of the multiple images; or cropping each image to a size larger than that of the inscribed circle of the overlapping portion of the multiple images, and filling the no-data regions that appear in images at some rotation angles with a single color.
In some embodiments, the image processing method further comprises: filtering the image before rotating it.
According to a second aspect of the embodiments of the present disclosure, an image processing device is provided, comprising: an acquisition unit configured to acquire an image captured by a camera mounted on a mobile device; a determination unit configured to determine an attitude angle of the camera at the time the image was captured; a calculation unit configured to calculate, according to the attitude angle, a rotation angle of the image relative to a reference direction; and a rotation unit configured to rotate the image using a transformation matrix based on the rotation angle.
In some embodiments, the image processing device further comprises at least one of a filtering unit and a cropping unit, wherein: the filtering unit is configured to filter the image; and the cropping unit is configured to crop the image, wherein the position of the center of the cropped image on the display device remains substantially unchanged from the position of the center of the image before cropping.
According to a third aspect of the embodiments of the present disclosure, an image processing device is provided, comprising: a memory; and a processor coupled to the memory, the processor being configured to execute the image processing method of any of the foregoing embodiments based on instructions stored in the memory.
According to a fourth aspect of the embodiments of the present disclosure, an image processing method is provided, comprising: acquiring an image captured by a camera mounted on a mobile device; determining an attitude angle of the camera at the time the image was captured; calculating correction parameters of the image according to the attitude angle; and correcting the image using the correction parameters of the image.
In some embodiments, the correction parameters comprise at least one of a rotation angle and distortion correction parameters; and the correction comprises at least one of rotation, cropping, and distortion correction.
In some embodiments, the attitude angle comprises at least one of a pitch angle, a roll angle, and a yaw angle, and calculating the correction parameters of the image according to the attitude angle comprises: calculating the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera; and calculating the distortion correction parameters according to attitude angles other than the attitude angle whose angle plane is perpendicular to the shooting direction of the camera.
According to a fifth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, the program implementing the image processing method of any of the foregoing embodiments when executed by a processor.
According to a sixth aspect of the embodiments of the present disclosure, an image processing system is provided, comprising the image processing device of any of the foregoing embodiments.
In some embodiments, the image processing system further comprises at least one of an attitude sensor, a camera, and a display device, wherein: the attitude sensor is mounted on a mobile device and is used to obtain the attitude angle of the camera; the camera is mounted on the mobile device and is used to capture images; and the display device is used to display the processed image.
According to a seventh aspect of the embodiments of the present disclosure, a robot for inspecting the bottom or top surface of a vehicle is provided, comprising the image processing system of any of the foregoing embodiments.
In the above embodiments, by rotating the image captured by the mobile device according to the attitude of the mobile device, an image in a fixed direction can be obtained. Displaying such an image can reduce user operations and improve work efficiency and user experience.
Other features of the present disclosure and their advantages will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Figure 1A is a flowchart illustrating an image processing method according to some embodiments of the present disclosure;
Figure 1B is a flowchart illustrating an image processing method according to other embodiments of the present disclosure;
Figure 1C is a flowchart illustrating an image processing method according to still other embodiments of the present disclosure;
Figure 2 is a schematic diagram illustrating an image processing method according to some embodiments of the present disclosure;
Figure 3 is a schematic diagram illustrating an image processing method according to other embodiments of the present disclosure;
Figure 4 is a block diagram illustrating an image processing device according to some embodiments of the present disclosure;
Figure 5 is a block diagram illustrating an image processing device according to other embodiments of the present disclosure;
Figure 6 is a block diagram illustrating an image processing system according to some embodiments of the present disclosure;
Figure 7 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
It should be understood that the dimensions of the various parts shown in the drawings are not drawn to scale. In addition, the same or similar reference numerals denote the same or similar components.
Detailed description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. The description of the exemplary embodiments is merely illustrative and in no way limits the present disclosure, its application, or its use. The present disclosure can be implemented in many different forms and is not limited to the embodiments described herein. These embodiments are provided so that the present disclosure will be thorough and complete and will fully convey the scope of the present disclosure to those skilled in the art. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps set forth in these embodiments should be interpreted as merely illustrative and not limiting.
All terms (including technical and scientific terms) used in the present disclosure have the same meanings as commonly understood by one of ordinary skill in the art to which the present disclosure belongs, unless specifically defined otherwise. It should also be understood that terms such as those defined in general dictionaries should be interpreted as having meanings consistent with their meanings in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be considered part of the specification.
Figure 1A is a flowchart illustrating an image processing method according to some embodiments of the present disclosure. As shown in Figure 1A, the image processing method includes steps S1, S3, S5, and S7.
In step S1, an image captured by a camera mounted on a mobile device is acquired.
The mobile device is, for example, a quadcopter, a self-balancing vehicle, or an intelligent mobile chassis with four-wheel differential steering. The camera is, for example, a camera module. The camera may be mounted on the body of the mobile device or on a robotic arm of the mobile device. The captured images may be multiple images; they may be continuous video or several discontinuous images with a very low frame rate.
In step S3, the attitude angle of the camera at the time the image was captured is determined.
In some embodiments, the attitude angle of the camera at the time the image was captured is determined according to at least one of an attitude angle output by an attitude sensor, and three-axis angular velocity information and three-axis acceleration information.
The attitude sensor may be an inertial measurement unit (IMU). The inertial measurement unit is, for example, a gyroscope sensor, and may be fixedly mounted on a part of the mobile device (for example, a robotic arm) that is fixedly connected to the camera body. The camera body here may refer to the housing that accommodates the camera lens; the angular velocity and acceleration of the camera body obviously do not include the acceleration of the lens as it extends and retracts during zooming. The attitude angle of the camera denotes the attitude angle of the camera body.
The attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle. In some embodiments, the attitude angle of the mobile device is calculated from the three-axis acceleration information and three-axis angular velocity information output by the attitude sensor, using, for example, a quaternion method. It should be understood that other attitude fusion algorithms, such as a first-order complementary filter or a Kalman filter, may also be used to obtain the attitude angle of the camera from the three-axis acceleration and angular velocity information. Of course, the attitude angle of the camera may also be determined directly from the attitude angle output by the attitude sensor. In the case where the attitude sensor outputs both the attitude angle and the three-axis acceleration and angular velocity information, depending on actual needs, either the output attitude angle may be taken as the attitude angle of the camera, or the attitude angle of the camera may be calculated from the three-axis acceleration and angular velocity information.
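The attitude-fusion step described above can be illustrated with a minimal sketch. This is not part of the patent: the first-order complementary filter is only one of the fusion options the text mentions, and the axis conventions (gravity along +z when level) are assumptions for illustration.

```python
import math

def accel_to_pitch_roll(ax, ay, az):
    """Estimate pitch and roll (radians) from the gravity vector seen by a
    three-axis accelerometer; yaw is unobservable from gravity alone."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """First-order complementary filter: trust the integrated gyro rate on
    short time scales and the drift-free accelerometer angle on long ones."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

A quaternion method or a Kalman filter, as mentioned in the text, would replace `complementary_filter` while consuming the same sensor inputs.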
Since the camera is fixedly mounted on a part of the mobile device, the attitude angle of the camera can be equated with the attitude angle of that part of the mobile device. On this basis, the tilt angle of the image captured by the camera can be calculated from the attitude angle of the mobile device, and thus the image rotation angle required to make the image untilted can be calculated.
In step S5, the rotation angle of the image relative to a reference direction is calculated according to the attitude angle.
In some embodiments, using the correspondence between the ground coordinate system and the coordinate system of the mobile device, the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera is calculated.
The correspondence between the ground coordinate system and the coordinate system of the mobile device consists of the three Euler angles, i.e., the pitch, roll, and yaw angles, which reflect the attitude of the mobile device relative to the ground. Then, from the attitude of the mobile device relative to the ground, the tilt angle of the image captured by the camera in a predetermined plane, for example the tilt angle relative to the reference direction, can be calculated. From this, the rotation angle required for the image can be calculated so that the target in the image keeps a fixed direction relative to the user, i.e., stays consistent with the reference direction.
The shooting direction refers to the direction of the optical axis of the light exiting the camera (i.e., the direction in which light exiting the last optical element, such as a lens, mirror, or prism, points toward the subject). For example, when the shooting direction of the camera is upward or downward and the mobile device carrying the camera rotates in the horizontal plane, the angle plane of the yaw angle (i.e., the plane containing the projection of the body axis on the horizontal plane and the ground axis, namely the horizontal plane) is perpendicular to the shooting direction, so the rotation angle of the image can be calculated from the yaw angle of the mobile device. When the camera shoots forward and the mobile device or its robotic arm rotates in the vertical plane, the rotation angle of the image can be calculated from the roll angle of the mobile device.
In step S7, the image is rotated using a transformation matrix based on the rotation angle.
In some embodiments, the transformation matrix may be the standard two-dimensional rotation transform
x′ = x·cos θ − y·sin θ
y′ = x·sin θ + y·cos θ
where (x, y) denotes the original position of a pixel of the image, (x′, y′) denotes the position of the pixel after rotation, and θ denotes the rotation angle.
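The rotation step above can be sketched as follows. This is an illustrative implementation, not the patent's: it applies the transform per pixel by inverse mapping with nearest-neighbor sampling; a production system would typically use an optimized library routine.

```python
import math

def rotate_point(x, y, theta, cx=0.0, cy=0.0):
    """Apply the 2-D rotation transform about center (cx, cy):
    x' = dx*cos(theta) - dy*sin(theta), y' = dx*sin(theta) + dy*cos(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = x - cx, y - cy
    return c * dx - s * dy + cx, s * dx + c * dy + cy

def rotate_image(img, theta):
    """Rotate a small grayscale image (list of rows) about its center.
    Inverse mapping: each output pixel samples the source at the
    back-rotated position (nearest neighbor); samples that fall outside
    the frame become the no-data value 0."""
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for yo in range(h):
        for xo in range(w):
            xs, ys = rotate_point(xo, yo, -theta, cx, cy)
            xi, yi = round(xs), round(ys)
            if 0 <= xi < w and 0 <= yi < h:
                out[yo][xo] = img[yi][xi]
    return out
```

Inverse mapping is chosen over forward mapping so that every output pixel receives exactly one value and no holes appear in the rotated image.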
Figure 1B is a flowchart illustrating an image processing method according to other embodiments of the present disclosure. Figure 1B differs from Figure 1A in that Figure 1B further includes steps S6, S8, and S9. Only the differences between Figure 1B and Figure 1A are described below; the common points are not repeated.
In step S6, the image is filtered.
In some embodiments, the filtering includes image enhancement filtering. Filtering the image before rotating it can avoid jagged edges appearing in the image during rotation, thereby ensuring the clarity of the output image and improving the user's viewing experience.
In step S8, the image is cropped.
In step S9, the image is output.
For example, the rotated and cropped image is output to the user's display device. As described above, without rotating the display device, the orientation of a stationary target in the rotated and cropped image on the display device is consistent with the reference direction, and the position of the center of the cropped image on the display device remains substantially unchanged from the position of the center of the image before cropping. The reference direction may be a direction that is stationary relative to the boundary of the camera's field of view (the frame of the field-of-view region). For example, the reference direction may be the direction parallel to the long side of the field-of-view boundary in Figure 2 and pointing to the right (as shown by the thick arrow in view A of Figure 2); when the camera rotates, the reference direction rotates along with the camera frame (as shown by the thick arrow in view C of Figure 2). Since the image in the camera's field of view is always displayed adaptively on the display device, the reference direction may also be any fixed direction on the display device in Figure 2, for example as shown by the thick arrows in views B and D of Figure 2. In summary, there can be two reference directions, namely the reference direction in the camera's field-of-view region and the reference direction on the display device; the angle of the former relative to a boundary of the field of view equals the angle of the latter relative to the corresponding boundary of the display device.
The image processing method is described below with reference to Figure 2, taking the horizontal direction as the reference direction.
Figure 2 is a schematic diagram illustrating an image processing method according to some embodiments of the present disclosure. Figure 2 includes views A-F. In views A, C, and E, the large rectangle denotes the boundary of the camera's field of view, and its interior is the field-of-view region that the camera can capture; normally, the image data region coincides with the field-of-view region. The small square in the middle denotes the region that will be transmitted to the user's display device for display, i.e., the region obtained by cropping the image data region. Views B, D, and F show the images displayed on the user's display device. The image data shown in Figure 2 includes a car, a lock, and a key, with some views showing only part of the lock and the key. In Figure 2, it is assumed that the captured car, key, and lock are all stationary targets, while the camera gradually rotates. View D of Figure 2 is the conventional display result, and view F of Figure 2 is the display result of the method of the present disclosure.
In view A of Figure 2, the direction in which the captured car extends is consistent with the reference direction, and the field-of-view region coincides with the image data region. In view B, the image transmitted from the camera is displayed on the display device, in which the car keeps the reference direction and is not tilted.
In view C of Figure 2, since the camera has rotated 45 degrees clockwise, the stationary targets (e.g., the car) have rotated 45 degrees counterclockwise relative to the reference direction in the field-of-view region. The field-of-view region still coincides with the image data region, but some targets in view A are no longer fully visible in view C; for example, only part of the key in view A is shown in the field-of-view region of view C. Since the display device conforms to the field of view captured by the camera, when the captured image data is transmitted directly to the display device, the image displayed on the display device is tilted relative to the reference direction, and some objects in the image have fallen outside the display region, as shown in view D of Figure 2. A user holding the display device then cannot judge the true orientation of the target at the scene captured by the camera. If a virtual coordinate system of the shooting scene is defined (if there is a stationary target at the scene, this virtual coordinate system remains stationary relative to it), then as the camera rotates, the position coordinate system of the shooting scene moves relative to the reference direction.
To avoid the situation of view D in Figure 2, in the embodiments of the present disclosure, the image data region is rotated so that the position coordinate system of the scene captured by the camera remains consistent with the reference direction of the camera. The image data region then no longer coincides with the field-of-view region. In view E of Figure 2, the dashed rectangle denotes the image data region after a 45-degree clockwise rotation, and the solid rectangle denotes the field-of-view region. It can be seen that the image data region has rotated 45 degrees clockwise relative to the field-of-view region; for example, the car in the figure has rotated 45 degrees relative to the reference direction of the field-of-view region.
Next, the rotated image needs to be cropped according to the display size of the display device so that it can be displayed adaptively. The triangular regions in view E of Figure 2 are the no-data regions of the image rotated by 45 degrees and are filled with black. After cropping, the rotated image data is displayed on the display device as an untilted image. That is, without rotating the display device, the user can always judge the true orientation of the coordinate system of the shooting scene on the display device: the orientation of a stationary target in the rotated image on the display device is consistent with the reference direction on the display device, and the user is not affected by the rotation of the camera.
Figure 3 is a schematic diagram illustrating an image processing method according to other embodiments of the present disclosure. The description below again takes the horizontal direction as the reference direction.
In Figure 3, view A shows multiple images at different rotation angles, where a denotes a captured image in the reference direction, b denotes a captured image rotated 45 degrees clockwise, c denotes a captured image rotated 90 degrees clockwise, d denotes a captured image rotated 135 degrees clockwise, e denotes the inscribed circle of the overlapping portion of the multiple images a-d, and f denotes a square smaller than the inscribed circle e. The smiley face in the images is a stationary target.
In view B, d and e have meanings similar to those in view A, and f′ denotes a square larger than the inscribed circle e. View B shows the image data region of captured image d after rotation. The no-data regions that appear in the rotated image are filled with a single color (e.g., black) before being output to the display device for display to the user. Of course, the no-data regions may also be filled with other distinguishable colors or patterns instead of a single color, all of which fall within the protection scope of the present disclosure.
Views C and D respectively show the image of view B displayed on the display device after being cropped in different ways. View C shows the image cropped to the square f, smaller than the inscribed circle e; that is, the cropped image is displayed directly on the display device. View D shows the image cropped to the square f′, larger than the inscribed circle e; that is, the cropped image, with its no-data regions filled with a single color, is displayed directly on the display device.
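The two cropping strategies above can be sketched as follows. This is an illustrative sketch only: the frame size in the usage and the particular choice of the "outer" square side are assumptions, not values from the patent.

```python
import math

def crop_square_sides(w, h):
    """For a w-by-h frame rotated about its center by arbitrary angles, the
    region common to all rotations always contains the frame's inscribed
    circle, of diameter min(w, h). Return the side lengths of two square
    crops:
    - inner: the largest square inside that circle, so no-data regions
      never appear at any rotation angle;
    - outer: a square larger than the circle (here its diameter, an
      illustrative choice), so no-data corners appear at some angles and
      must be filled with a single color."""
    d = min(w, h)
    inner = int(d / math.sqrt(2))  # square inscribed in the circle
    return inner, d
```

The "inner" crop trades displayed area for a guarantee of no fill; the "outer" crop shows more of the image at the cost of visible filled corners at some rotation angles.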
In the above embodiments, by rotating the captured image according to the attitude of the camera, an image in the reference direction can be obtained; displaying such an image can reduce user operations and improve work efficiency and user experience. It should be understood that a video is equivalent to multiple images, so similar processing can also be applied to video captured by the mobile device to obtain video in a fixed direction; displaying such video likewise reduces user operations and improves work efficiency and user experience.
In some embodiments, the above image processing is performed in real time, i.e., the captured images or video are processed in real time so that the images or video displayed in real time are all in the reference direction. This can further improve the user's viewing experience.
Figure 1C is a flowchart illustrating an image processing method according to other embodiments of the present disclosure. Figure 1C differs from Figure 1A in that steps S5′ and S7′ in Figure 1C differ from steps S5 and S7 in Figure 1A. Only the differences between Figure 1C and Figure 1A are described below; the common points are not repeated.
In this embodiment, a camera mounted on a mobile device and shooting upward is taken as an example. If the camera not only rotates with the mobile device in the horizontal plane (i.e., the yaw angle changes) but also pitches or rolls, the captured image may undergo deformation and other distortion, and distortion correction is then required.
In step S5′, correction parameters of the image are calculated according to the attitude angle. In some embodiments, the correction parameters include at least one of a rotation angle and distortion correction parameters.
As mentioned above, the attitude angle includes at least one of a pitch angle, a roll angle, and a yaw angle. The rotation angle is similar to the rotation angle of step S5 in Figure 1A.
In step S7′, the image is corrected using its correction parameters. In some embodiments, the correction includes at least one of rotation, cropping, and distortion correction.
The rotation is similar to the rotation of step S7 in Figure 1A. The cropping is similar to the processing shown from view b to view b′ in Figure 2. The distortion correction includes, for example, applying an inverse trapezoidal transformation to an image that has undergone trapezoidal distortion.
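As a deliberately simplified illustration of the inverse trapezoidal idea above (not the patent's method: a real correction would use the full homography determined by the pitch/roll angles and the camera intrinsics), the sketch below inverts a pure vertical keystone in which each row is horizontally scaled about the column center.

```python
def keystone_source_x(x, y, w, h, k):
    """Inverse mapping for a simple vertical keystone. Assume that in the
    distorted source image, row y is horizontally scaled about the column
    center by s(y) = 1 - k * (1 - y / (h - 1)), so the top row (y = 0) is
    the narrowest; k in [0, 1) is a hypothetical distortion strength.
    Given an output pixel (x, y) of the corrected image, return the
    x-coordinate to sample in the distorted source."""
    cx = (w - 1) / 2.0
    s = 1.0 - k * (1.0 - y / (h - 1))
    return cx + (x - cx) * s
```

With k = 0 the mapping is the identity; as k grows, output pixels near the top edge are pulled toward the (narrower) center of the distorted source rows.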
Figure 4 is a block diagram illustrating an image processing device according to some embodiments of the present disclosure. As shown in Figure 4, the image processing device 40 includes an acquisition unit 410, a determination unit 430, a calculation unit 450, and a rotation unit 470.
The acquisition unit 410 is configured to acquire an image captured by a camera mounted on a mobile device, and may, for example, perform step S1 shown in Figure 1A.
The determination unit 430 is configured to determine the attitude angle of the camera at the time the image was captured, and may, for example, perform step S3 shown in Figure 1A.
The calculation unit 450 is configured to calculate, according to the attitude angle, the rotation angle of the image relative to the reference direction, and may, for example, perform step S5 shown in Figure 1A.
The rotation unit 470 is configured to rotate the image using a transformation matrix based on the rotation angle, and may, for example, perform step S7 shown in Figure 1A.
In some embodiments, the image processing device 40 further includes a filtering unit 460. The filtering unit 460 is configured to filter the image, and may, for example, perform step S6 shown in Figure 1B.
In other embodiments, the image processing device 40 further includes a cropping unit 480. The cropping unit 480 is configured to crop the image, and may, for example, perform step S8 shown in Figure 1B.
Figure 5 is a block diagram illustrating an image processing device according to other embodiments of the present disclosure.
As shown in Figure 5, the image processing device 50 includes a memory 510 and a processor 520 coupled to the memory 510. The memory 510 stores instructions for executing the corresponding embodiments of the image processing method. The processor 520 is configured to execute the image processing method in any of the embodiments of the present disclosure based on the instructions stored in the memory 510.
It should be understood that each step of the foregoing image processing method can be implemented by a processor, and can be implemented in software, hardware, firmware, or any combination thereof. When the foregoing image processing method is implemented in software, hardware costs can be further saved, and since no hardware installation space is occupied, the size of the product can also be effectively reduced.
In addition to the image processing method and device, embodiments of the present disclosure may also take the form of a computer program product implemented on one or more non-volatile storage media containing computer program instructions. Therefore, embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored, the instructions implementing the image processing method in any of the foregoing embodiments when executed by a processor.
Embodiments of the present disclosure also provide an image processing system including the image processing device of any of the foregoing embodiments.
Figure 6 is a block diagram illustrating an image processing system according to some embodiments of the present disclosure.
As shown in Figure 6, the image processing system 6 includes an image processing device 60. The image processing device 60 is configured to execute the image processing method described in any of the foregoing embodiments. The structure of the image processing device 60 may be similar to that of the aforementioned image processing device 40 or 50.
In some embodiments, the image processing system 6 further includes a camera 611 and an attitude sensor 612.
The camera 611 is used to capture images. As described above, the camera 611 may be fixedly mounted on a mobile device.
The attitude sensor 612 is used to obtain the attitude angle of the camera. For example, an attitude sensor such as an inertial measurement unit may output the attitude angle and at least one of three-axis angular velocity information and three-axis acceleration information. As described above, the inertial measurement unit may be a gyroscope sensor and may be fixedly mounted on the mobile device.
本公开实施例还提供一种移动装置,包括前述任一实施例所述的图像处理系统。移动装置例如为机器人,用于检查车辆的底面或顶面。
图7是示出用于实现本公开一些实施例的计算机系统的框图。
如图7所示,计算机系统可以通用计算设备的形式表现。计算机系统包括存储器7、处理器7和连接不同系统组件的总线700。
存储器710例如可以包括系统存储器、非易失性存储介质等。系统存储器例如存储有操作系统、应用程序、引导装载程序(Boot Loader)以及其他程序等。系统存储器可以包括易失性存储介质,例如随机存取存储器(RAM)和/或高速缓存存储器。非易失性存储介质例如存储有执行显示方法的对应实施例的指令。非易失性存储介质包括但不限于磁盘存储器、光学存储器、闪存等。
处理器720可以用中央处理器(CPU)、数字信号处理器(DSP)、应用专用集成电路(ASIC)、现场可编程门阵列(FPGA)或其它可编程逻辑设备、分立门或晶体管等分立硬件组件方式来实现。相应地,诸如判断模块和确定模块的每个模块,可以通过中央处理器(CPU)运行存储器中执行相应步骤的指令来实现,也可以通过执行相应步骤的专用电路来实现。
总线700可以使用多种总线结构中的任意总线结构。例如,总线结构包括但不限于工业标准体系结构(ISA)总线、微通道体系结构(MCA)总线、外围组件互连(PCI)总线。
计算机系统还可以包括输入输出接口730、网络接口740、存储接口750等。这些接口730、740、750以及存储器710和处理器720之间可以通过总线700连接。输入输出接口730可以为显示设备、鼠标、键盘等输入输出设备提供连接接口。网络接口740为各种联网设备提供连接接口。存储接口750为软盘、U盘、SD卡等外部存储设备提供连接接口。
Various embodiments of the present disclosure have been described in detail above. To avoid obscuring the concept of the present disclosure, some details known in the art are not described. Based on the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments can be modified, or some technical features can be equivalently replaced, without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (19)

  1. An image processing method, comprising:
    acquiring an image captured by a camera mounted on a mobile device, and determining an attitude angle of the camera at the time the image was captured;
    calculating, according to the attitude angle, a rotation angle of the image relative to a reference direction; and
    rotating the image using a transformation matrix based on the rotation angle.
  2. The image processing method according to claim 1, wherein the attitude angle of the camera at the time the image was captured is determined according to at least one of an attitude angle output by an attitude sensor, and three-axis angular velocity information and three-axis acceleration information.
  3. The image processing method according to claim 2, wherein determining the attitude angle of the camera at the time the image was captured according to the three-axis angular velocity information and three-axis acceleration information comprises:
    calculating, using a quaternion method, the attitude angle of the camera at the time the image was captured, the attitude angle comprising at least one of a pitch angle, a roll angle, and a yaw angle.
  4. The image processing method according to claim 3, wherein calculating the rotation angle of the image according to the attitude angle comprises:
    calculating, using the correspondence between a ground coordinate system and a coordinate system of the mobile device, the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera.
  5. The image processing method according to claim 4, wherein the transformation matrix is the two-dimensional rotation transform
    x′ = x·cos θ − y·sin θ
    y′ = x·sin θ + y·cos θ
    where (x, y) denotes the original position of a pixel of the image, (x′, y′) denotes the position of the pixel after rotation, and θ denotes the rotation angle.
  6. The image processing method according to any one of claims 1 to 5, further comprising: outputting the rotated image to a display device of a user, so that the orientation, on the display device, of the position coordinate system of the scene where the target captured by the camera is located remains stationary relative to the reference direction of the display device.
  7. The image processing method according to claim 6, further comprising: cropping the image, wherein the position of the center of the cropped image on the display device remains substantially unchanged from the position of the center of the image before cropping.
  8. The image processing method according to claim 7, wherein the image comprises multiple images, and cropping the images comprises:
    cropping each image to a size smaller than that of the inscribed circle of the overlapping portion of the multiple images; or
    cropping each image to a size larger than that of the inscribed circle of the overlapping portion of the multiple images, and filling the no-data regions appearing in images at some rotation angles with a single color.
  9. The image processing method according to claim 1, further comprising: filtering the image before rotating the image.
  10. An image processing device, comprising:
    an acquisition unit configured to acquire an image captured by a camera mounted on a mobile device;
    a determination unit configured to determine an attitude angle of the camera at the time the image was captured;
    a calculation unit configured to calculate, according to the attitude angle, a rotation angle of the image relative to a reference direction; and
    a rotation unit configured to rotate the image using a transformation matrix based on the rotation angle.
  11. The image processing device according to claim 10, further comprising at least one of a filtering unit and a cropping unit, wherein:
    the filtering unit is configured to filter the image; and
    the cropping unit is configured to crop the image, wherein the position of the center of the cropped image on the display device remains substantially unchanged from the position of the center of the image before cropping.
  12. An image processing method, comprising:
    acquiring an image captured by a camera mounted on a mobile device;
    determining an attitude angle of the camera at the time the image was captured;
    calculating correction parameters of the image according to the attitude angle; and
    correcting the image using the correction parameters of the image.
  13. The image processing method according to claim 12, wherein:
    the correction parameters comprise at least one of a rotation angle and distortion correction parameters; and
    the correction comprises at least one of rotation, cropping, and distortion correction.
  14. The image processing method according to claim 13, wherein
    the attitude angle comprises at least one of a pitch angle, a roll angle, and a yaw angle, and
    calculating the correction parameters of the image according to the attitude angle comprises:
    calculating the rotation angle corresponding to the attitude angle whose angle plane is perpendicular to the shooting direction of the camera; and
    calculating the distortion correction parameters according to attitude angles other than the attitude angle whose angle plane is perpendicular to the shooting direction of the camera.
  15. An image processing device, comprising:
    a memory; and
    a processor coupled to the memory, the processor being configured to execute the image processing method according to any one of claims 1 to 9 and 12 to 14 based on instructions stored in the memory.
  16. An image processing system, comprising the image processing device according to any one of claims 10, 11, and 15.
  17. The image processing system according to claim 16, further comprising at least one of an attitude sensor, a camera, and a display device, wherein:
    the attitude sensor is mounted on a mobile device and is used to obtain the attitude angle of the camera;
    the camera is mounted on the mobile device and is used to capture images; and
    the display device is used to display the processed image.
  18. A robot for inspecting the bottom or top surface of a vehicle, comprising the image processing system according to any one of claims 16 to 17.
  19. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image processing method according to any one of claims 1 to 9 and 12 to 14.
PCT/CN2020/074636 2019-04-08 2020-02-10 图像处理方法、装置和系统,以及机器人 WO2020207108A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910275326.8 2019-04-08
CN201910275326.8A CN111800589B (zh) 2019-04-08 2019-04-08 图像处理方法、装置和系统,以及机器人

Publications (1)

Publication Number Publication Date
WO2020207108A1 true WO2020207108A1 (zh) 2020-10-15

Family

ID=72751480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074636 WO2020207108A1 (zh) 2019-04-08 2020-02-10 图像处理方法、装置和系统,以及机器人

Country Status (2)

Country Link
CN (1) CN111800589B (zh)
WO (1) WO2020207108A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697570A (zh) * 2020-12-30 2022-07-01 华为技术有限公司 用于显示图像的方法、电子设备及芯片
CN114742749A (zh) * 2022-02-27 2022-07-12 扬州盛强薄膜材料有限公司 基于图像处理的pvc薄膜质量检测方法
CN114872048A (zh) * 2022-05-27 2022-08-09 河南职业技术学院 一种机器人舵机角度校准方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113206951B (zh) * 2021-04-13 2022-07-05 武汉科技大学 一种基于扑翼飞行系统的实时电子稳像方法
CN113379850B (zh) * 2021-06-30 2024-01-30 深圳银星智能集团股份有限公司 移动机器人控制方法、装置、移动机器人及存储介质
CN113239918B (zh) * 2021-07-13 2021-10-01 北京金博星指纹识别科技有限公司 一种图像分辨率归一化处理方法及装置
CN116934833A (zh) * 2023-07-18 2023-10-24 广州大学 基于双目视觉水下结构病害检测方法、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2075631A1 (en) * 2007-12-26 2009-07-01 Fujinon Corporation Image rotating adapter and camera having the same
CN105635450A (zh) * 2015-12-25 2016-06-01 努比亚技术有限公司 移动终端解锁方法及装置
CN106257911A (zh) * 2016-05-20 2016-12-28 上海九鹰电子科技有限公司 用于视频图像的图像稳定方法和装置
CN106708089A (zh) * 2016-12-20 2017-05-24 北京小米移动软件有限公司 跟随式的飞行控制方法及装置、无人机
CN107809594A (zh) * 2017-11-10 2018-03-16 维沃移动通信有限公司 一种拍摄方法及移动终端

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5577931B2 (ja) * 2010-08-06 2014-08-27 ソニー株式会社 画像処理装置、画像処理方法およびプログラム
US10038850B2 (en) * 2014-09-23 2018-07-31 Texas Instruments Incorporated Optical image stabilization (OIS) with compensation for component misalignment
CN107592446B (zh) * 2016-07-06 2020-06-05 腾讯科技(深圳)有限公司 一种视频图像处理方法、装置及系统
CN106973228B (zh) * 2017-03-31 2020-02-21 联想(北京)有限公司 一种拍摄方法及电子设备
CN107300973A (zh) * 2017-06-21 2017-10-27 深圳传音通讯有限公司 屏幕旋转控制方法、系统以及装置
CN108733066B (zh) * 2018-05-07 2021-05-07 中国人民解放军国防科技大学 一种基于吊舱姿态反馈的目标跟踪控制方法
CN109528315B (zh) * 2018-11-12 2021-12-17 南京迈瑞生物医疗电子有限公司 术野图像控制系统、方法、计算机设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2075631A1 (en) * 2007-12-26 2009-07-01 Fujinon Corporation Image rotating adapter and camera having the same
CN105635450A (zh) * 2015-12-25 2016-06-01 努比亚技术有限公司 移动终端解锁方法及装置
CN106257911A (zh) * 2016-05-20 2016-12-28 上海九鹰电子科技有限公司 用于视频图像的图像稳定方法和装置
CN106708089A (zh) * 2016-12-20 2017-05-24 北京小米移动软件有限公司 跟随式的飞行控制方法及装置、无人机
CN107809594A (zh) * 2017-11-10 2018-03-16 维沃移动通信有限公司 一种拍摄方法及移动终端

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697570A (zh) * 2020-12-30 2022-07-01 华为技术有限公司 用于显示图像的方法、电子设备及芯片
CN114697570B (zh) * 2020-12-30 2024-04-26 华为技术有限公司 用于显示图像的方法、电子设备及芯片
CN114742749A (zh) * 2022-02-27 2022-07-12 扬州盛强薄膜材料有限公司 基于图像处理的pvc薄膜质量检测方法
CN114872048A (zh) * 2022-05-27 2022-08-09 河南职业技术学院 一种机器人舵机角度校准方法
CN114872048B (zh) * 2022-05-27 2024-01-05 河南职业技术学院 一种机器人舵机角度校准方法

Also Published As

Publication number Publication date
CN111800589A (zh) 2020-10-20
CN111800589B (zh) 2022-04-19

Similar Documents

Publication Publication Date Title
WO2020207108A1 (zh) 图像处理方法、装置和系统,以及机器人
US10506154B2 (en) Method and device for generating a panoramic image
US10594941B2 (en) Method and device of image processing and camera
WO2021227359A1 (zh) 一种无人机投影方法、装置、设备及存储介质
CN106846409B (zh) 鱼眼相机的标定方法及装置
CN107660337A (zh) 用于从鱼眼摄像机产生组合视图的系统及方法
EP3016065B1 (en) Coordinate computation device and method, and image processing device and method
CN113556464B (zh) 拍摄方法、装置及电子设备
CN107749069B (zh) 图像处理方法、电子设备和图像处理系统
WO2021168804A1 (zh) 图像处理方法、图像处理装置和图像处理系统
WO2021004416A1 (zh) 一种基于视觉信标建立信标地图的方法、装置
US10104286B1 (en) Motion de-blurring for panoramic frames
CN112204946A (zh) 数据处理方法、装置、可移动平台及计算机可读存储介质
US20090059018A1 (en) Navigation assisted mosaic photography
CN113436267B (zh) 视觉惯导标定方法、装置、计算机设备和存储介质
CN113034347A (zh) 倾斜摄影图像处理方法、装置、处理设备及存储介质
CN111353945B (zh) 鱼眼图像校正方法、装置及存储介质
US10785470B2 (en) Image processing apparatus, image processing method, and image processing system
JP4548228B2 (ja) 画像データ作成方法
JP2005275789A (ja) 三次元構造抽出方法
JP5882153B2 (ja) 三次元座標算出装置
JP2009077022A (ja) 運転支援システム及び車両
JPH1118007A (ja) 全方向性画像表示システム
GB2557212A (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses
TWI672950B (zh) 可補償影像變化的影像裝置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787985

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20787985

Country of ref document: EP

Kind code of ref document: A1