WO2022047701A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2022047701A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
edge
camera
pixel area
line segment
Prior art date
Application number
PCT/CN2020/113251
Other languages
French (fr)
Chinese (zh)
Inventor
高文良 (Gao Wenliang)
周游 (Zhou You)
刘洁 (Liu Jie)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN202080009631.3A (published as CN113348489A)
Priority to PCT/CN2020/113251 (published as WO2022047701A1)
Publication of WO2022047701A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Definitions

  • the present application relates to the field of image processing, and more particularly, to an image processing method and apparatus.
  • the camera can capture image information of the object, and transmit the captured image information to the user, and the user can operate the electronic device according to the image information.
  • the field of view of the camera on the current electronic device is not very large, which restricts the user's field of view and affects the user's visual experience.
  • the present application provides an image processing method and device, which can generate an image with a large field of view and improve the user's visual experience.
  • an image processing method includes: acquiring a first image of a scene captured by a first camera of an electronic device; acquiring a second image of the scene captured by a second camera of the electronic device; wherein the observation range of the first camera is larger than the observation range of the second camera, and the resolution of the first camera is lower than the resolution of the second camera, and/or the imaging of the first camera is an achromatic image and the imaging of the second camera is a color image;
  • image processing is performed on the first pixel area in the first image to obtain a processed first image.
  • an image processing apparatus, comprising: a memory for storing a computer program; and a processor for invoking the computer program, which, when executed by the processor, causes the apparatus to perform the following steps: acquiring a first image of the scene captured by a first camera of an electronic device; acquiring a second image of the scene captured by a second camera of the electronic device; wherein the observation range of the first camera is larger than the observation range of the second camera, and the resolution of the first camera is lower than the resolution of the second camera, and/or the imaging of the first camera is an achromatic image and the imaging of the second camera is a color image;
  • image processing is performed on the first pixel area in the first image to obtain a processed first image.
  • a third aspect provides a computer-readable storage medium on which a computer program is stored, which, when executed, implements the method provided in the first aspect.
  • a computer program product comprising instructions that, when executed by a computer, cause the computer to perform the method provided by the first aspect.
  • the second image is a color image and/or the second image is a high-resolution image
  • the second image has a better visual effect
  • the observation range of the first camera is larger than that of the second camera.
  • the field of view corresponding to the first image is larger than the field of view corresponding to the second image.
  • the present application can fuse the second image, which has the better visual effect, with the first image, which has the large field of view, to obtain a processed first image with both a large field of view and good imaging quality, so that users have a better visual experience.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 3 and FIG. 4 are schematic diagrams of different overlapping situations of the first image and the second image provided by the embodiments of the present application.
  • FIG. 5 is a schematic diagram of changes in attitude angles before and after stabilization of the first image provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a processed first image provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an image segmented using superpixels according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an image mapping process provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an image edge extraction provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of segmenting an edge segment according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of finding a line segment in a second image that matches a line in the first image according to an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application.
  • the electronic devices in the embodiments of the present application may be various electronic devices with cameras: movable platforms such as drones, unmanned ships, unmanned vehicles, and self-driving vehicles; smart wearable devices such as VR/AR glasses; or smart terminal devices such as mobile phones and tablet computers.
  • the electronic device in this embodiment of the present application may be the above-mentioned movable platform or other electronic device that is communicatively connected to the wearable device.
  • the electronic device may be an unmanned aerial vehicle, or a control terminal that communicates with the unmanned aerial vehicle, such as a remote control used in conjunction with the unmanned aerial vehicle, or a mobile phone that communicates with the unmanned aerial vehicle through wired or wireless communication.
  • the camera in the embodiments of the present application may also be referred to as a photographing device, an image sensor, or the like.
  • Examples may be RGB color cameras, infrared cameras, binocular cameras, multi-eye cameras, multi-spectral cameras, and the like.
  • a main camera is usually set on the movable platform, and the main camera can photograph the surrounding scene of the movable platform and obtain image information of the scene during the movement of the movable platform.
  • the movable platform can transmit the image information to the user device, and the image is displayed on the user device.
  • This image can be a first-person view (FPV) image, and the user can perform corresponding control operations based on the image information. For example, the movement direction, movement speed, stop position, etc. of the movable platform are controlled, or the shooting angle and exposure parameters of the main camera of the movable platform are adjusted. Specifically, this operation can be performed by the user touching a control button of the user equipment or touching the screen of the user equipment.
  • FPV: first-person view
  • the main camera can be a color camera, and the color camera can capture a color image, and the color image can make the user have a more intuitive perception of the surrounding environment.
  • the main camera can be a high-resolution camera, so that high-resolution high-definition images can be captured, so that the user has a better viewing experience of the images in the scene. Clear images help users identify objects of interest.
  • the main purpose of the main camera is to obtain high-resolution, vivid-color imaging.
  • a camera with high imaging quality has a limited field of view (FOV).
  • FOV: field of view
  • the design and manufacturing cost of the camera is very high.
  • the observation range of the main camera's imaging is limited. Therefore, operating on the basis of the image captured by the main camera restricts the user's field of view and hinders the user's observation of target objects across a wider range of the scene.
  • the embodiments of the present application provide an image processing method, aiming at increasing the field of view of the image in the limited observation range of the camera, ensuring the quality of the processed image to a certain extent, and improving the user's visual experience.
  • a perception camera for perceiving the surrounding environment can also be set on the movable platform, and this camera is mainly used to obtain sensory data around the drone.
  • the movable platform can obtain the depth information and temperature information of the objects in the scene with the help of these perception cameras, and then assist in the execution of obstacle avoidance, tracking, positioning and other functions during the movement process.
  • the data collected by these perception cameras will be directly input into the data processing link to extract the target objects in the scene, or generate scene depth information.
  • Perceptual cameras only need to obtain accurate depth information or temperature information.
  • the image data collected by these perception cameras does not need to be rendered into images for display to users. Therefore, the data acquisition and data processing pipelines of these perception cameras are not designed, within the electronic device, for better visual effects of the images.
  • these perception cameras often have as large an observation range as possible, so that a wider range of the scene can be photographed and the corresponding imaging data obtained. That is, the imaging effect of the perception camera is not as good as that of the main camera, but its field of view is larger than the field of view of the main camera used for shooting.
  • the drone includes a main camera 110 for capturing images and a camera 120 for sensing the surrounding environment.
  • the resolution of the image captured by the main camera 110 is higher than the resolution of the image captured by the camera 120 , but the field of view of the camera 120 is larger than that of the main camera 110 .
  • Camera 120 may also be referred to as a vision sensor.
  • the present application proposes to fuse the image captured by the camera 120 with the image captured by the main camera 110, combining the respective advantages of the two cameras to generate an image with a large field of view and high resolution in some areas.
  • FIG. 2 is an image processing method 200 provided by an embodiment of the present application.
  • the method can be applied to an electronic device, and the method shown in FIG. 2 can be executed by a processor in the electronic device.
  • the electronic device may include a first camera and a second camera, and the first camera and the second camera are two different types of cameras with different imaging effects.
  • the observation range of the first camera is larger than the observation range of the second camera.
  • the resolution of the first camera is lower than the resolution of the second camera.
  • the image of the first camera is an achromatic image
  • the image of the second camera is a color image.
  • the observation range in the embodiment of the present application can be understood as the scene area that can be photographed by the camera.
  • the method 200 includes steps S210-S240.
  • S210: Acquire a first image of the scene captured by the first camera.
  • S220: Acquire a second image of the scene captured by the second camera.
  • the first pixel area is the imaging area for the target object in the first image
  • the second pixel area is the imaging area for the target object in the second image.
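As a minimal sketch of how the steps above combine, the fused output can be formed by overlaying the (already aligned) color second image onto the achromatic first image at the first pixel area. The array shapes, the region coordinates, and the `fuse_images` helper are invented for illustration; the patent does not fix a specific fusion operator:

```python
import numpy as np

def fuse_images(first_img_gray, second_img_color, top_left):
    """Overlay the (aligned) color second image onto the grayscale
    first image at the first pixel area, yielding a wide-FOV image
    whose overlapping region is color/high-detail."""
    # Promote the grayscale wide image to 3 channels.
    fused = np.repeat(first_img_gray[:, :, None], 3, axis=2)
    r, c = top_left
    h, w = second_img_color.shape[:2]
    # Replace the first pixel area with the second image's pixels.
    fused[r:r + h, c:c + w] = second_img_color
    return fused

# Toy example: 8x8 gray wide image, 4x4 color narrow image.
wide = np.full((8, 8), 100, dtype=np.uint8)
narrow = np.zeros((4, 4, 3), dtype=np.uint8)
narrow[..., 0] = 255  # red block
out = fuse_images(wide, narrow, top_left=(2, 2))
```

In a real pipeline the `top_left` offset would come from the image matching described later, not from a constant.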
  • the first camera and the second camera in the embodiment of the present application may shoot for the same scene, so as to obtain the first image and the second image respectively.
  • the objects in the scene captured by the first camera are not exactly the same as the objects in the scene captured by the second camera.
  • since the observation range of the first camera is larger than the observation range of the second camera, there are more objects in the scene captured by the first camera than in the scene captured by the second camera.
  • both the first camera and the second camera shoot toward the forest, and the first image and the second image contain imaging areas for the same trees, but the number of trees imaged in the first image is greater than the number of trees imaged in the second image.
  • the difference between the first image and the second image caused by the different observation ranges of the first camera and the second camera will be referred to as the difference in the field of view of the first image and the second image.
  • the field of view of the first image is larger than the field of view of the second image.
  • the first pixel area and the second pixel area are imaging areas for the same target object in the first image and the second image, respectively.
  • the target object is a partial scene object in the scene captured by the first camera, that is, the first pixel area is a partial area in the first image.
  • the target objects are all or part of the scene objects in the scene captured by the second camera, that is, the second pixel area may be the entire area in the second image, or may be a partial area of the second image.
  • if the second pixel area is the entire area of the second image, it means that the objects in the scene captured by the first camera include all the objects captured by the second camera.
  • the first pixel area and the second pixel area described above are imaging areas that objectively target the same target object.
  • this embodiment of the present application may also determine the first pixel area and the second pixel area according to the user's selection, that is, the first pixel area and/or the second pixel area corresponding to the target object are determined based on user operations. For example, the region where the user wishes to perform image processing may be determined according to the user's click operation, and then the first pixel region and the second pixel region may be determined according to the region where the user wishes to perform image processing.
  • the user only cares about the characters in the scene, he can click on the characters, and the first pixel area and the second pixel area are imaging areas corresponding to the same character in the first image and the second image, respectively.
  • the second image is a color image and/or the second image is a high-resolution image
  • the second image has a better visual effect
  • the observation range of the first camera is larger than that of the second camera. Therefore, the field of view corresponding to the first image is larger than the field of view corresponding to the second image.
  • the present application can fuse the second image, which has the better visual effect, with the first image, which has the large field of view, to obtain a processed first image with both a large field of view and good imaging quality, so that users have a better visual experience.
  • the user can observe the imaging of objects in more of the scene around the electronic device by observing the first image with its larger field of view, thereby making decisions that aid subsequent operations.
  • the display information supplemented by the second image can also be used to assist in improving the visual effect of some areas in the first image, such as stronger or more realistic color, clearer imaging, etc.
  • the first pixel area is hereinafter understood as an area in the first image that overlaps with the second image
  • the second pixel area is understood as an area in the second image that overlaps with the first image
  • FIG. 3 and FIG. 4 show different degrees of overlapping of the first image and the second image.
  • Fig. 3 shows a situation where the entire area of the second image completely overlaps with a partial area of the first image
  • Fig. 4 shows a situation where a partial area of the second image overlaps with a partial area of the first image.
  • the entire area of the second image 320 overlaps with a partial area of the first image 310 .
  • the second pixel area is the entire area of the second image, and the first pixel area 311 overlaps with the second image.
  • a partial area of the second image 320 overlaps with a partial area of the first image 310; in this case, the first pixel area 312 overlaps with a part of the second image 320.
  • the area of the second image that overlaps with the first image is larger than the area that does not overlap, that is, the size of the second pixel area is only slightly smaller than the size of the second image 320.
  • the camera in this embodiment of the present application may be a camera that shoots photos, or may be a camera that shoots videos.
  • the first image and the second image may be a certain frame of images in the video stream.
  • the first image may be an image directly captured by a camera, or the first image may be an image obtained by performing image processing on an image captured by a camera.
  • the present application may acquire a first initial image, and perform stabilization processing on the first initial image to obtain the first image.
  • shaking cannot be avoided.
  • the image captured by the camera can be stabilized, so that the user will be more comfortable watching the stabilized image without dizziness.
  • during stabilization, the image needs to be cropped; that is, the image size after stabilization is smaller than the image size before stabilization. Therefore, in this embodiment of the present application, the first initial image can be stabilized first and then fused with the second image, so as to ensure that the second image, with its high resolution and/or color information, will not be cropped, improving the utilization of the second image.
  • the embodiment of the present application does not specifically limit the manner of stabilization processing, for example, a traditional video stabilization algorithm may be used for stabilization.
  • the gyroscope in the inertial measurement unit measures changes in the rotation angle of the camera's sensor; a filter then smooths these changes to generate a new attitude angle, and the image is finally transformed to the stabilized attitude angle.
  • Figure 5 shows the change of the attitude angle of the sensor before and after stabilization.
  • the dotted line represents the change of the attitude angle with time before stabilization
  • the solid line represents the change of the attitude angle with time after stabilization.
  • the attitude angle in this embodiment of the present application may include the pitch angle of the sensor.
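A rough sketch of this stabilization idea, using a simple exponential low-pass filter over the measured attitude angles. The filter type and the smoothing factor `alpha` are assumptions; the patent only says a filter smooths the angle changes:

```python
import numpy as np

def stabilize_angles(raw_angles, alpha=0.9):
    """Smooth a sequence of measured attitude angles (e.g. pitch, from
    the IMU gyroscope) with an exponential low-pass filter; the image
    is then re-rendered at the smoothed angle.  `alpha` is an assumed
    smoothing factor, not a value from the patent."""
    smoothed = [raw_angles[0]]
    for a in raw_angles[1:]:
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * a)
    return np.array(smoothed)

# Jittery pitch trace around 5 degrees, as in the dotted curve of FIG. 5.
rng = np.random.default_rng(0)
raw = 5.0 + rng.normal(0.0, 1.0, size=200)
smooth = stabilize_angles(raw)
```

The solid curve of FIG. 5 corresponds to `smooth`: same trend, much smaller jitter.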
  • the first camera may be the camera 120
  • the second camera may be the main camera 110 .
  • the camera 120 and the camera 110 have their respective functions in the drone.
  • the camera 120 is used to perceive the surrounding environment, and the images it takes will not be presented to the user;
  • the camera 110 is used to take pictures, and the images it takes are mainly used for presentation to users.
  • the present application cleverly combines the images captured by the camera 110 and the camera 120 without changing the hardware mechanism of the electronic device, and can present images with both a large field of view and a better visual effect to the user without significantly increasing the cost.
  • the first camera in this embodiment of the present application may be used to perceive depth information of surrounding objects, that is, the first image may be an image including depth information of the object.
  • the embodiments of the present application do not specifically limit the manner in which the first camera acquires the depth information.
  • the first camera can acquire depth information according to the principle of binocular vision, as shown in FIG. 1 .
  • the first camera may acquire depth information according to the time of flight (TOF) principle.
  • TOF time of flight
  • the first camera 120 in FIG. 1 may include a first visual sensor 121 and a second visual sensor 122 . Further, the first visual image captured by the first visual sensor 121 may be acquired, the second visual image captured by the second visual sensor 122 may be acquired, and an image with depth information may be generated according to the first visual image and the second visual image.
  • a first initial image may be generated according to the first visual image and the second visual image, and then stabilization processing is performed on the first initial image to obtain the first image.
  • stabilization processing may also be performed on the first visual image and the second visual image respectively, and the first image is then generated based on the stabilized first visual image and second visual image.
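For the binocular principle mentioned above, depth follows from the disparity between the two visual images as depth = f·B/d. A hedged sketch; the focal length and baseline values below are illustrative, not the drone's actual calibration:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Binocular depth: depth = f * B / d, where f is the focal length
    in pixels, B the baseline between the two vision sensors in meters,
    and d the per-pixel disparity.  Zero disparity maps to infinity."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# A 20 px disparity with f = 400 px and a 10 cm baseline gives 2 m.
depth = depth_from_disparity([20.0, 40.0, 0.0], focal_px=400.0, baseline_m=0.1)
```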
  • the first image and the second image are generated by different cameras, there will be differences in the internal parameters, shooting positions and shooting angles of different cameras. If the images captured by the cameras are directly fused and matched, the complexity will be relatively high.
  • the second image in this embodiment of the present application may be an image after pose transformation.
  • the present application may acquire a second initial image, where the second initial image may be an image directly captured by the second camera, and then convert the second initial image to the camera pose of the first image to generate the second image.
  • this embodiment of the present application may first perform pose transformation on the second initial image, so that the transformed second image has the same camera pose as the first image, and then fuse the pose-aligned second image with the first image. Fusing two images with the same camera pose can greatly reduce the complexity of the fusion processing.
  • the second initial image can be converted to the camera pose of the first image to form the second image according to at least one of the following parameters: the internal parameters of the first camera, the internal parameters of the second camera, and the rotation relationship between the first image and the second initial image.
  • the second image may be formed by converting the second initial image to the camera pose of the first image according to the internal parameters of the first camera and the internal parameters of the second camera.
  • the camera internal parameters corresponding to the converted second image and the first image are the same. Since the rotation relationship is not adjusted, the converted first image and the second image may still have a relative rotation relationship.
  • the second initial image can be converted to the camera pose of the first image to form the second image.
  • the converted second image and the first image not only have the same internal reference, but also eliminate the relative rotation relationship.
  • the baselines of the two images are already parallel, and the matching relationship between the two images only has a positional deviation in the horizontal direction. In this way, during the image matching process, it is only necessary to search for matching pixels in the horizontal direction, which greatly reduces the complexity of image processing.
  • [u, v, 1] T represents a two-dimensional (2D) pixel point in a homogeneous image coordinate system (homogeneous image coordinates).
  • [x w , y w , z w ] T represents a three-dimensional (world coordinates, 3D) pixel point in the world coordinate system.
  • the matrix K is called the camera calibration matrix, that is, the internal parameters of each camera, referred to as internal parameters.
  • the internal parameter matrix K contains 5 parameters, arranged as K = [[α_x, s, x_0], [0, α_y, y_0], [0, 0, 1]], where α_x = f·m_x and α_y = f·m_y.
  • f represents the focal length.
  • m_x and m_y represent the number of pixels per unit distance on the x and y axes.
  • s represents the skew parameter between the x and y axes; e.g., for a charge-coupled device (CCD) camera, the pixels may not be square.
  • (x_0, y_0) is the position of the optical center (principal point).
  • the matrix R is the rotation matrix.
  • the matrix T is the translation matrix.
  • R and T are the external parameters (extrinsic matrix) of the camera, describing its rotation and displacement transformations; a 3D point projects to the image as [u, v, 1]^T ∼ K [R | T] [x_w, y_w, z_w, 1]^T.
  • the original posture of the first image in the world coordinate system is denoted as R 0
  • the posture of the filtered first image is denoted as R 1
  • the internal parameter of the second camera is recorded as K c
  • the second initial image captured by the second camera is recorded as I c
  • the internal parameter of the first camera is recorded as K g
  • the first image is recorded as I g
  • the rotation relationship between the first image and the second initial image is denoted as R gc .
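Under a pure-rotation assumption, converting the second initial image I_c to the first image's camera pose reduces to a homography built from exactly the parameters named above: p_g ∼ K_g · R_gc · K_c⁻¹ · p_c. A sketch; the direction of R_gc and the sample intrinsics are assumptions, since the patent only names the symbols:

```python
import numpy as np

def rotation_homography(K_g, K_c, R_gc):
    """Homography mapping a pixel of the second initial image I_c into
    the first image's camera pose, assuming a pure rotation R_gc between
    the two views: p_g ~ K_g @ R_gc @ inv(K_c) @ p_c."""
    return K_g @ R_gc @ np.linalg.inv(K_c)

def warp_pixel(H, u, v):
    """Apply the homography to one pixel and dehomogenize."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Sanity check: identical intrinsics and identity rotation leave
# every pixel where it was.
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, K, np.eye(3))
u, v = warp_pixel(H, 100.0, 50.0)
```

With the real K_g, K_c, and R_gc, warping every pixel of I_c through H produces the pose-aligned second image whose matching deviation is horizontal only, as described above.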
  • the electronic device in the embodiment of the present application may be a device such as a movable platform.
  • the user can operate the electronic device according to the processed first image.
  • the user can enter instructions.
  • the electronic device can adjust the orientations of the first camera and the second camera according to the instruction input by the user, so that the first camera and the second camera shoot towards the same scene.
  • after obtaining the processed first image, the user can determine the current position information of the electronic device according to the information in the processed first image, determine the moving direction of the electronic device according to the current position information, and input the control command corresponding to the moving direction.
  • the processed first image in the embodiment of the present application can be used for FPV flight.
  • the user can control the flight of the drone according to the processed first image, and since the field of view of the processed first image is larger, the user's field of view is wider, and the user experience will be better.
  • the first pixel area in the embodiment of the present application is preferably located in the middle area of the first image, that is, the overlapping area of the second image and the first image is the middle area of the first image.
  • the generated processed first image may be an image in which the middle area is a high-resolution image and the edge area is a low-resolution image, and/or the middle area is a color image and the edge area is an achromatic image. Since the user mainly focuses on the middle area when viewing the processed first image, the high resolution and/or color of the middle area can improve the user's visual experience.
  • the edge area is mainly used to assist the user's perception of the surrounding environment. Therefore, the user does not have high requirements for the resolution and/or color of the edge area, and the low resolution and/or missing color information of the edge area will not affect the user's visual experience.
  • the user may also adjust the shooting direction of the first camera and/or the second camera according to the position of the first pixel region in the first image. For example, as shown in FIG. 4, if the first pixel area 312 is located in the edge area of the first image 310 instead of the middle area, the user can adjust the shooting direction of the first camera and/or the second camera so that the first pixel area is located in the middle area of the first image.
  • the first image is an achromatic image
  • the second image is a color image
  • the first image may be a low-resolution achromatic image
  • the second image may be a high-resolution color image
  • the first image is an achromatic image and the second image is a color image
  • the color information of the second image can be retained. In this way, the generated processed first image is an image that is partially colored and partially black and white.
  • the embodiment of the present application may further perform color processing on the processed first image to generate an aesthetic artistic photo.
  • the color processing may include at least one of the following: color filling, color removal, and color retention.
  • the middle area 420 in FIG. 6 is a color area
  • the edge area 410 is a black and white area.
  • the tall buildings 402 are half color and the other half are black and white
  • the ship 404 is half color and the other half is black and white
  • the road 406 is half color and the other half is black and white
  • the Ferris wheel 408 is half color and the other half is black and white.
  • the embodiments of the present application may perform color filling (or color expansion) on the processed first image. Specifically, the colorless areas of an object may be color-filled based on the colored areas of the same object in the processed first image.
  • the processed first image can be segmented, and the same object can be segmented out of the processed first image; if one part of the same object has color information and the other part has none, the other part can be filled with color according to the colored part of that object.
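A minimal sketch of this color-filling step, assuming a label map from some segmentation and a boolean mask of pixels that already carry color. The `fill_segment_colors` helper and the mean-color rule are illustrative choices, not the patent's exact method:

```python
import numpy as np

def fill_segment_colors(labels, color_img, has_color):
    """Within each segmented object (same label), paint the colorless
    pixels with the mean color of that object's colored pixels.
    `has_color` marks pixels that already carry color information."""
    out = color_img.copy()
    for k in np.unique(labels):
        seg = labels == k
        colored = seg & has_color
        uncolored = seg & ~has_color
        if colored.any() and uncolored.any():
            out[uncolored] = out[colored].reshape(-1, 3).mean(axis=0)
    return out

# One object (label 1) half colored red, half blank; label 0 stays blank.
labels = np.array([[1, 1], [1, 0]])
img = np.zeros((2, 2, 3), dtype=float)
img[0, 0] = [255, 0, 0]
mask = np.array([[True, False], [False, False]])
filled = fill_segment_colors(labels, img, mask)
```

Objects with no colored pixels at all are left untouched, matching the rule that filling only happens when part of the object has color.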
  • This embodiment of the present application does not specifically limit the manner of dividing the processed first image.
  • a superpixel segmentation method can be used to segment the processed first image to generate superpixels, so that adjacent pixels with similar texture, color, brightness and other characteristics form irregular pixel blocks with certain visual significance.
  • FIG. 7 shows an image segmented by a superpixel segmentation method, wherein the number of superpixels after the segmentation of the region 504 is smaller than the number of superpixels after the segmentation of the region 502 . Therefore, the processing complexity of region 504 is lower than that of region 502 , however, the image features retained by region 502 are greater than those retained by region 504 .
  • the embodiments of the present application do not specifically limit the method used for superpixel segmentation.
  • the superpixel segmentation method may include at least one of the following algorithms: simple linear iterative clustering (SLIC), Graph-based, NCut, Turbopixel, Quick-shift, Graph-cut a, Graph-cut b.
  • the SLIC algorithm can significantly reduce the computational complexity and at the same time can improve the control over the size and compactness of the superpixels. Therefore, the SLIC algorithm is preferably used in this embodiment of the present application.
  • the Ferris wheel 408 is located at the boundary between the color area and the black-and-white area, and a part of the Ferris wheel 408 has color information while the other part has no color information; the color can then be extended to the entire Ferris wheel according to the part that has color information.
  • This embodiment of the present application may further perform, based on the depth information, color processing such as color filling, color removal, and/or color retention on the processed first image.
  • the generated processed first image also has depth information.
  • the present application may perform color processing on the processed first image based on the depth information. For example, the same color processing can be performed on regions of the same depth in the processed first image.
  • a user's click operation on the processed first image may be acquired, first depth information of the object at the click location may be determined, and color processing may be performed on the processed first image according to the first depth information.
  • the target area can be determined according to the first depth information, and the target area is filled with color or only the color of the target area is retained.
  • the target area is connected to the object at the point selected, and the difference between the depth of the target area and the first depth information is within a first preset range.
  • the user can click on the object that needs to be colored, and the processor can determine, according to the depth information of the clicked location, the pixel points that are connected in position and similar in depth as the target area, and display only the color of the target area, thus forming an aesthetically pleasing artistic photo.
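The "connected in position and similar in depth" selection above amounts to region growing from the clicked pixel. The following is a minimal sketch under assumed names (a 4-connected flood fill over a depth map, with `max_diff` standing in for the "first preset range"); it is not the patent's claimed implementation.

```python
from collections import deque
import numpy as np

def target_region_from_click(depth, click, max_diff):
    """Grow a region from the clicked pixel over 4-connected pixels whose
    depth differs from the clicked depth by at most max_diff."""
    h, w = depth.shape
    d0 = depth[click]
    region = np.zeros((h, w), dtype=bool)
    region[click] = True
    queue = deque([click])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(depth[ny, nx] - d0) <= max_diff):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region
```

The returned boolean mask can then be used to retain color inside the target area and render the rest in black and white.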
  • distance information input by the user may be acquired, and color processing is performed on the processed first image according to the distance information.
  • the target area can be determined according to the distance information, and the target area is filled with color or only the color of the target area is retained, wherein the difference between the depth of the target area and the distance is within a second preset range.
  • the user can input the distance at which the color is to be displayed, and the processor can convert the distance into depth, and then highlight the object corresponding to the depth with color, leaving the rest in black and white, thus forming an aesthetically pleasing artistic photo.
  • the embodiment of the present application can highlight objects, buildings, etc. at a certain distance according to the depth information, or perform auxiliary measurement, detection, etc. based on the depth information.
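A minimal NumPy sketch of the distance-based color retention just described (function and parameter names are illustrative assumptions; `tol` stands in for the "second preset range", and channel averaging is used as a simple luminance proxy):

```python
import numpy as np

def retain_color_at_depth(color_img, depth, target_depth, tol):
    """Keep color only where |depth - target_depth| <= tol; render
    everything else in black and white."""
    gray = color_img.mean(axis=2, keepdims=True)  # simple grayscale proxy
    keep = (np.abs(depth - target_depth) <= tol)[..., None]
    return np.where(keep, color_img, gray)
```

Objects at the requested depth stay highlighted in color while the rest of the frame falls back to grayscale, matching the artistic-photo effect described above.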
  • the embodiments of the present application do not specifically limit the manner of performing image processing on the first pixel region in the first image based on the second pixel region in the second image.
  • the second pixel area in the second image can be used to directly replace the first pixel area in the first image.
  • This method is simple to implement and has low processing complexity, but image dislocation may appear in the processed first image.
  • the second pixel area may be mapped to the first pixel area according to the mapping relationship between the second pixel area and the first pixel area.
  • the processed first image generated in this way is relatively beautiful, and there is no obvious image dislocation.
  • The following description takes the mapping method as an example.
  • the present application can map the second pixel area to the first pixel area according to the mapping relationship between the pixel points in the second pixel area and the pixel points in the first pixel area.
  • a pixel point matching the pixel point in the second pixel area is found in the first pixel area, and the pixel point in the second pixel area is mapped to the position of the matching pixel point in the first pixel area.
  • the present application only maps the edges of the first pixel area and the second pixel area, and the non-edge area can be interpolated by using an interpolation method.
  • the edge of the second pixel area can be mapped to the edge of the first pixel area, and the non-edge area can be mapped by interpolation using the edges on both sides.
  • 315 , 316 , and 317 are edge areas of the first image 310
  • 314 is a non-edge area of the first image 310
  • 325 , 326 , and 327 are edge areas of the second image
  • 324 is a non-edge area of the second image 320.
  • edge 315 matches edge 325
  • edge 316 matches edge 326
  • edge 317 matches edge 327. Therefore, edge 325 can be mapped to where edge 315 is located, edge 326 can be mapped to where edge 316 is located, and edge 327 can be mapped to where edge 317 is located.
  • the non-edge region 314 can be obtained by interpolation based on the information of the edges 315 and 316 on both sides thereof.
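A one-dimensional sketch of filling a non-edge region from the edges on both sides (a stand-in for obtaining region 314 from edges 315 and 316; the linear weighting and function name are illustrative assumptions, not the patent's prescribed interpolation):

```python
import numpy as np

def interpolate_between_edges(row, left_idx, right_idx):
    """Fill the values strictly between two edge positions by linear
    interpolation of the edge values."""
    out = row.copy()
    x0, x1 = left_idx, right_idx
    for x in range(x0 + 1, x1):
        t = (x - x0) / (x1 - x0)
        out[x] = (1 - t) * row[x0] + t * row[x1]
    return out
```

In two dimensions the same idea applies per scanline or with a smoother 2-D interpolant; only the edge pixels are mapped exactly, and interior pixels are blended from them.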
  • the present application can determine the edge of the second pixel area and the edge of the first pixel area, and map the edge of the second pixel area to the first pixel area according to the mapping relationship between the edge of the second pixel area and the edge of the first pixel area the edge of the pixel area to generate image information of the edge area of the processed first image.
  • the embodiments of the present application do not specifically limit the manner of determining the edge of the second pixel region and the edge of the first pixel region.
  • edge detection operators can be used to extract the edge of the second pixel region and the edge of the first pixel region.
  • the edge detection operator may be, for example, the detection operator in the Canny operator algorithm; that is, the Canny operator algorithm may be used to extract the edges of the second pixel area and the first pixel area.
  • the parameters in the edge detection operator can be threshold parameters.
  • If the edge densities of the two images are not balanced, the complexity of edge mapping increases. For example, if the number of edges extracted from the second image is greater than the number of edges extracted from the first pixel region, the complexity of determining the mapping relationship between the edges increases.
  • the present application can adjust the parameters in the edge detection operator to balance the edge densities of the second pixel area and the first pixel area.
  • Edge density may refer to the ratio between the number of pixels on the edges in the image and the total number of pixels in the entire image.
  • the edge density d g of the first pixel area is:
  • d g = (number of pixels on the edges in the first pixel area) / (total number of pixels in the first pixel area)
  • the edge density d c of the second pixel area is:
  • d c = (number of pixels on the edges in the second pixel area) / (total number of pixels in the second pixel area)
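The two density formulas above reduce to one small function; as a sketch (the boolean-edge-map representation is an assumption):

```python
import numpy as np

def edge_density(edge_mask):
    """Edge density = number of edge pixels / total number of pixels,
    matching d_g and d_c above. edge_mask is a boolean edge map."""
    return edge_mask.sum() / edge_mask.size
```

The same function serves for both the first and the second pixel area, so their densities can be compared directly.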
  • the present application can extract the edge of the second pixel area and the initial edge of the first pixel area; adjust the parameters in the edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area; and determine the edge of the first pixel area according to the parameters in the adjusted edge detection operator.
  • if the difference between the edge density of the second pixel area and the initial edge density of the first pixel area is within the third preset range, the initial edge of the first pixel area may be used as the final edge of the first pixel area
  • otherwise, the threshold parameter in the edge detection operator can be adjusted and the edge of the first pixel area re-extracted according to the adjusted parameter, so that the difference between the edge density of the second pixel area and the edge density of the first pixel area falls within the third preset range; that is, the edge detection operator can be adjusted continually and the edge of the first pixel area re-extracted until the difference between the re-extracted edge density of the first pixel area and the edge density of the second pixel area is within the third preset range.
  • the above description is to re-extract the edge of the first pixel area so that the difference between the edge density of the first pixel area and the edge density of the second pixel area is within the third preset range. It is not limited to this, as long as it is ensured that the difference between the edge densities between the two images is within the third preset range. For example, it is also possible to keep the edge of the first pixel area unchanged, and re-extract the edge of the second pixel area, so that the difference between the edge density of the second pixel area and the edge density of the first pixel area is within the third preset range Inside.
  • the default threshold parameters [t l , t n ] can be used first to extract the edge of the second pixel area and the edge of the first pixel area.
  • the left side is the second pixel area
  • the right side is the extracted edge of the second pixel area.
  • If the edge densities of the second pixel area and the first pixel area are unbalanced, the following process can be used to adjust them.
  • a tolerance parameter represents the tolerance of the edge density difference between the two images; its size is an empirical engineering value and can be adjusted as needed.
  • the reciprocal of this parameter represents the similarity of the edge densities of the two images.
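The adjust-and-re-extract loop described above can be sketched as follows. Everything here is an illustrative assumption: the edge extractor is a caller-supplied function standing in for a Canny-style operator, and the multiplicative threshold update is one simple choice, not the patent's prescribed rule.

```python
def balance_edge_density(extract_edges, image, target_density, t_low, t_high,
                         tol=0.05, step=0.9, max_iter=20):
    """Repeatedly adjust the threshold pair and re-extract edges until the
    edge density is within `tol` of the target density (the 'third preset
    range'). `extract_edges(image, t_low, t_high)` must return a boolean
    edge map (NumPy-style, with .sum() and .size)."""
    for _ in range(max_iter):
        edges = extract_edges(image, t_low, t_high)
        density = edges.sum() / edges.size
        if abs(density - target_density) <= tol:
            break
        if density < target_density:
            # Too few edges: lower the thresholds to admit more edges.
            t_low, t_high = t_low * step, t_high * step
        else:
            # Too many edges: raise the thresholds.
            t_low, t_high = t_low / step, t_high / step
    return edges, (t_low, t_high)
```

Here `target_density` would be the edge density of the other image, so the loop drives the two densities toward each other within the tolerance.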
  • generating the image information of the edge area of the processed first image may include: determining a first edge segment and a second edge segment, where the first edge segment is an edge segment on the edge of the first pixel area and the second edge segment is an edge segment on the edge of the second pixel area; and, according to the mapping relationship between the first edge segment and the second edge segment, mapping the second edge segment to the region where the first edge segment is located, so as to generate the image information of the edge region of the processed first image.
  • since the extracted edge may consist of multiple edge segments rather than one continuous edge, the present application can perform edge matching according to the mapping relationship between the multiple edge segments in the two images.
  • the present application may segment the extracted edge, for example, into multiple line segments, and then perform matching according to the segmented line segments.
  • Figure 10 shows schematic views before and after edge segmentation.
  • the left image is the edge extracted according to the edge extraction algorithm, and the right image is the edge after segmentation.
  • there may be bifurcated edge segments in the extracted edges of the image, for example, an edge segment similar to the one in the left image where three line segments meet at one point. Performing edge matching directly with such forked edge segments increases the difficulty of matching.
  • the extracted edge can be divided into multiple line segments, that is, into multiple segments each having two endpoints and a connecting line. Because the structure of a line segment is simple, and a line segment contains fewer pixels than the edge segment before division, matching with line segments can improve matching efficiency and reduce matching difficulty.
  • the line segment in the embodiment of the present application may be a straight line segment, or may also be a curve segment, which is not specifically limited.
  • the present application can segment the first edge segment to obtain a first line segment set; segment the second edge segment to obtain a second line segment set; determine the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set; and, according to that mapping relationship, map the line segments in the second line segment set to the positions of the line segments in the first line segment set, to generate the image information of the edge region of the processed first image.
  • the segmented line segments can be screened, removing line segments that do not meet the requirements, before line segment matching is performed. For example, a line segment whose length is less than a first preset value may be eliminated from the line segment set, because matching a line segment that is too short is not meaningful and wastes processing resources.
  • the present application may segment the first edge segment to obtain a first initial line segment set, and then remove line segments whose length is less than the first preset value from the first initial line segment set to obtain the first line segment set.
  • the present application may segment the second edge segment to obtain a second initial line segment set, and then remove line segments whose length is less than the second preset value from the second initial line segment set to obtain the second line segment set.
  • the first preset value and the second preset value may be equal.
  • the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set may be determined according to the matching degree between the pixels on the line segments in the first line segment set and the pixels on the line segments in the second line segment set.
  • This embodiment of the present application does not specifically limit the algorithm used in the matching process between the first line segment set and the second line segment set.
  • the matching between the first set of line segments and the second set of line segments may be performed using a normalized cross-correlation (NCC) algorithm.
  • for the points on the line segment l i, the corresponding matching points q i1, q i2, ..., q in in the set E c are searched for by NCC matching. If a matching point q belongs to a certain line segment m j, the matching degree of the line segment l i and the line segment m j is increased by 1. The matching points corresponding to each point on the line segment l i are computed in turn, so as to find the line segment m j that best matches the line segment l i.
  • the embodiment of the present application does not specifically limit the manner of determining the line segment m j that matches the line segment l i.
  • if the ratio between the number of pixels on the line segment m j that match the line segment l i and the total number of pixels on the line segment m j is greater than a preset threshold, it can be determined that the line segment m j matches the line segment l i.
  • alternatively, the line segment in the set E c with the largest number of pixel points matching the line segment l i may be determined as the line segment matching the line segment l i; or the line segment in the set E c with the highest proportion of pixel points matching the line segment l i may be determined as the matching line segment; or a line segment in the set E c whose proportion of pixel points matching the line segment l i is greater than a preset threshold may be determined as the matching line segment.
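The voting principle above can be sketched as follows. As an assumption for illustration, the per-pixel NCC search is replaced by a caller-supplied `match_point` function that returns the matching point for a given pixel; the candidate segments are given as named pixel sets.

```python
def best_matching_segment(seg_pixels, candidate_segments, match_point):
    """For each pixel on segment l_i, find its matching point and vote for
    the candidate segment m_j containing that point; the segment with the
    most votes is returned as the best match.

    seg_pixels:         iterable of (y, x) pixels on l_i
    candidate_segments: dict mapping segment name -> set of (y, x) pixels
    match_point:        function (y, x) -> matched (y, x) in the other image
    """
    votes = {name: 0 for name in candidate_segments}
    for p in seg_pixels:
        q = match_point(p)
        for name, pixels in candidate_segments.items():
            if q in pixels:
                votes[name] += 1
                break
    return max(votes, key=votes.get), votes
```

The returned vote counts also make it easy to apply the threshold- or proportion-based acceptance rules described above instead of simply taking the maximum.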
  • the left side is the first image
  • the right side is the second image.
  • for each line segment in the first image, compare it with the line segments in the same row in the second image, and find the line segment in the second image that matches it.
  • FIG. 11 is a schematic diagram of finding a matching line segment for the line segment 602 in the first image. According to the voting principle described above, since the line segment 604 in the second image has the largest number of matching pixels, that is, the highest matching score, the line segment 604 is the line segment that matches the line segment 602.
  • FIG. 12 is an image processing apparatus 700 provided by an embodiment of the present application.
  • the apparatus 700 may include a memory 710 and a processor 720 .
  • the memory 710 is used to store computer programs.
  • the processor 720 is configured to invoke the computer program, and when the computer program is executed by the processor, cause the apparatus to perform the following steps:
  • the observation range of the first camera is larger than the observation range of the second camera; and the resolution of the first camera is lower than the resolution of the second camera, and/or
  • the imaging of the first camera is an achromatic image and the imaging of the second camera is a color image;
  • image processing is performed on the first pixel area in the first image to obtain a processed first image.
  • the target object is all or part of the scene objects in the scene captured by the second camera.
  • the first pixel area and/or the second pixel area corresponding to the target object is determined based on a user operation.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: acquiring a first initial image captured by the first camera of the scene; and performing stabilization processing on the first initial image to obtain the first image.
  • the first camera includes a first visual sensor and a second visual sensor
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: acquiring a first visual image captured by the first visual sensor of the scene; acquiring a second visual image captured by the second visual sensor of the scene; and generating, according to the first visual image and the second visual image, the first initial image with depth information.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: acquiring a second initial image captured by the second camera of the scene; and converting the second initial image to the camera pose of the first image to generate the second image.
  • the computer program, when executed by the processor, causes the apparatus to perform the following step: converting, according to the internal parameters of the first camera, the internal parameters of the second camera, and the rotation relationship between the first image and the second initial image, the second initial image to the camera pose of the first image to generate the second image.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: receiving an instruction input by a user; and adjusting, according to the instruction input by the user, the shooting directions of the first camera and the second camera so that the first camera and the second camera shoot toward the same scene.
  • the electronic device is an unmanned aerial vehicle, and the processed first image is used for first-person view (FPV) flight.
  • the first pixel area is located in a middle area of the first image.
  • the first image is an achromatic image
  • the second image is a color image
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: using a superpixel segmentation method to segment the processed first image; and, if a part of the same segmented object has color information and the other part has no color information, filling the other part of the object with color according to the part that has color information.
  • the achromatic image includes depth information of an object
  • the computer program when executed by the processor, causes the apparatus to perform the following steps: according to the depth information of the object, Color processing is performed on the processed first image, and the color processing includes at least one of the following: color filling, color removal, and color retention.
  • the computer program, when executed by the processor, causes the apparatus to perform the following step: performing the same color processing on regions of the same depth in the processed first image.
  • the computer program when executed by the processor, it causes the apparatus to perform the following steps: acquiring a user's clicking operation on the processed first image; determining a point first depth information of the selected object; color processing is performed on the processed first image according to the first depth information.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: determining a target area according to the first depth information, wherein the target area is connected to the object at the clicked point, and/or the difference between the depth of the target area and the first depth information is within a first preset range; and performing color processing on the target area.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: acquiring distance information input by a user; and performing color processing on the processed first image according to the distance information.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: determining a target area according to the distance information, wherein the difference between the depth of the target area and the distance is within a second preset range; and performing color processing on the target area.
  • the computer program when executed by the processor, causes the apparatus to perform the steps of: determining the edge of the second pixel area and the edge of the first pixel area; According to the mapping relationship between the edge of the second pixel region and the edge of the first pixel region, the edge of the second pixel region is mapped to the edge of the first pixel region to generate the processed The image information of the edge region of the first image.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: extracting the edge of the second pixel area and the initial edge of the first pixel area; adjusting the parameters in the edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area; and determining the edge of the first pixel area according to the parameters in the adjusted edge detection operator.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: in the case that the difference between the edge density of the second pixel area and the initial edge density of the first pixel area is not within a third preset range, adjusting the parameters in the edge detection operator; and determining the edge of the first pixel area according to the parameters in the adjusted edge detection operator, so that the difference between the edge density of the first pixel area and the edge density of the second pixel area is within the third preset range.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: determining a first edge segment and a second edge segment, the first edge segment being an edge segment on the edge of the first pixel region, and the second edge segment being an edge segment on the edge of the second pixel region; and mapping, according to the mapping relationship between the first edge segment and the second edge segment, the second edge segment to the region where the first edge segment is located, so as to generate the image information of the edge region of the processed first image.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: segmenting the first edge segment to obtain a first line segment set; segmenting the second edge segment to obtain a second line segment set; determining the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set; and mapping, according to that mapping relationship, the line segments in the second line segment set to the positions of the line segments in the first line segment set, to generate the image information of the edge area of the processed first image.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: segmenting the first edge segment to obtain a first initial line segment set; and removing line segments whose length is less than the first preset value from the first initial line segment set to obtain the first line segment set.
  • the computer program, when executed by the processor, causes the apparatus to perform the following steps: segmenting the second edge segment to obtain a second initial line segment set; and removing line segments whose length is less than the second preset value from the second initial line segment set to obtain the second line segment set.
  • the computer program, when executed by the processor, causes the apparatus to perform the following step: determining the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set according to the matching degree between the pixels on the line segments in the first line segment set and the pixels on the line segments in the second line segment set.
  • the computer program when executed by the processor, it causes the apparatus to perform the following steps: according to the image information of the edge region of the processed first image, use an interpolation method Image information of the non-edge region of the processed first image is generated.
  • the present application also provides an electronic device or system, where the electronic device or system may include the image processing apparatuses of the various embodiments of the present application.
  • the present application also provides a computer storage medium on which a computer program is stored; when the computer program is executed by a computer, the computer is caused to execute the method of the above method embodiment.
  • the present application also provides a computer program product comprising instructions, the instructions, when executed by the computer, cause the computer to execute the method of the above method embodiment.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), etc.

Abstract

Provided are an image processing method and apparatus, capable of generating an image having a large field of view and improving the viewing experience of a user. The method comprises: obtaining a first image of a scene captured by a first camera of an electronic device; obtaining a second image of the scene captured by a second camera of the electronic device, wherein the observation range of the first camera is larger than the observation range of the second camera, and the resolution of the first camera is lower than that of the second camera and/or the imaging of the first camera is a non-color image while the imaging of the second camera is a color image; determining in the first image a first pixel region corresponding to a target object in the scene, and determining in the second image a second pixel region corresponding to the target object in the scene; and, on the basis of the second pixel region in the second image, performing image processing on the first pixel region in the first image to obtain a processed first image.

Description

Image processing method and apparatus
Copyright notice

The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it exists in the official records and archives of the Patent and Trademark Office.
Technical field

The present application relates to the field of image processing, and more particularly, to an image processing method and apparatus.
Background

More and more electronic devices are equipped with cameras, such as unmanned aerial vehicles, self-driving cars, and virtual reality (VR)/augmented reality (AR) glasses. These electronic devices have greatly enriched people's lives.

The camera can capture image information of objects and transmit the captured image information to a user, and the user can operate the electronic device according to the image information. However, the field of view of the camera on current electronic devices is not very large, which restricts the user's field of view and affects the user's visual experience.
SUMMARY OF THE INVENTION

The present application provides an image processing method and apparatus, which can generate an image with a large field of view and improve the user's visual experience.

In a first aspect, an image processing method is provided, including: acquiring a first image of a scene captured by a first camera of an electronic device; and acquiring a second image of the scene captured by a second camera of the electronic device, wherein the observation range of the first camera is larger than the observation range of the second camera, and the resolution of the first camera is lower than the resolution of the second camera and/or the imaging of the first camera is an achromatic image while the imaging of the second camera is a color image;

determining a first pixel area corresponding to a target object in the scene in the first image, and determining a second pixel area corresponding to the target object in the scene in the second image;

and performing, based on the second pixel area in the second image, image processing on the first pixel area in the first image to obtain a processed first image.
In a second aspect, an image processing apparatus is provided, including: a memory for storing a computer program; and a processor for invoking the computer program, where the computer program, when executed by the processor, causes the apparatus to perform the following steps: acquiring a first image of a scene captured by a first camera of an electronic device; and acquiring a second image of the scene captured by a second camera of the electronic device, wherein the observation range of the first camera is larger than the observation range of the second camera, and the resolution of the first camera is lower than the resolution of the second camera and/or the imaging of the first camera is an achromatic image while the imaging of the second camera is a color image;

determining a first pixel area corresponding to a target object in the scene in the first image, and determining a second pixel area corresponding to the target object in the scene in the second image;

and performing, based on the second pixel area in the second image, image processing on the first pixel area in the first image to obtain a processed first image.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored; when executed, the computer program implements the method provided in the first aspect.
In a fourth aspect, a computer program product containing instructions is provided; when the instructions are executed by a computer, they cause the computer to perform the method provided in the first aspect.
Based on the above technical solutions, since the second image is a color image and/or a high-resolution image, it has a better visual effect, while the observation range of the first camera is larger than that of the second camera, so the field of view of the first image is larger than that of the second image. The present application fuses the second image, which has the better visual effect, with the first image, which has the large field of view, to obtain a processed first image that combines a large field of view with better imaging quality, giving the user a better visual experience.
Description of the Drawings
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
FIG. 3 and FIG. 4 are schematic diagrams of different overlap situations between the first image and the second image provided by embodiments of the present application.
FIG. 5 is a schematic diagram of the change in attitude angle before and after stabilization of the first image provided by an embodiment of the present application.
FIG. 6 is a schematic diagram of a processed first image provided by an embodiment of the present application.
FIG. 7 is a schematic diagram of an image after superpixel segmentation provided by an embodiment of the present application.
FIG. 8 is a schematic diagram of an image mapping process provided by an embodiment of the present application.
FIG. 9 is a schematic diagram of image edge extraction provided by an embodiment of the present application.
FIG. 10 is a schematic diagram of dividing an edge segment into sub-segments provided by an embodiment of the present application.
FIG. 11 is a schematic diagram of finding, in the second image, a line segment matching a line in the first image provided by an embodiment of the present application.
FIG. 12 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
The electronic device in the embodiments of the present application may be any of various camera-equipped electronic devices, for example a movable platform such as an unmanned aerial vehicle (UAV), an unmanned ship, an unmanned vehicle, or a self-driving car; a smart wearable device such as VR/AR glasses; or a smart terminal device such as a mobile phone or tablet computer.
The electronic device in the embodiments of the present application may be the above-mentioned movable platform, or another electronic device communicatively connected to the movable platform. For example, the electronic device may be a UAV, or a control terminal communicatively connected to the UAV, such as a remote controller used with the UAV, or a mobile phone communicating with the UAV over a wired or wireless link.
The camera in the embodiments of the present application may also be referred to as an image capture device, an image sensor, or the like. Examples include RGB color cameras, infrared cameras, binocular cameras, multi-camera arrays, and multispectral cameras.
For convenience of description, a movable platform is taken as an example below.
A movable platform is usually equipped with a main camera, which, while the platform moves, photographs the surrounding scene and acquires image information of it. The movable platform can transmit this image information to a user device, which displays the image. The image may be a first-person view (FPV) image, based on which the user can perform corresponding control operations: for example, controlling the movement direction, speed, or hover position of the movable platform, or adjusting the shooting angle, exposure parameters, and so on of its main camera. Specifically, the user may do so by pressing control buttons on the user device or by touching its screen.
The main camera may be a color camera that captures color images, which give the user a more intuitive perception of the surrounding environment.
In addition, to further improve the user experience, the main camera may be a high-resolution camera capable of capturing high-definition images, giving the user a better viewing experience of the scene. Clear images also help the user identify target objects of interest.
The main purpose of the main camera is to produce high-resolution, vividly colored images. Given imaging costs, a camera with high imaging quality has a limited angle of view (FOV). A camera that combines high imaging quality with a large angle of view, producing large-format, sharp, vividly colored images, is very expensive to design and manufacture. To balance cost against the user's viewing experience, the observation range of the main camera is therefore limited; operating based on images from the main camera alone restricts the user's field of view and hinders observation of target objects over a wider portion of the scene.
Based on this, the embodiments of the present application provide an image processing method that aims to enlarge the field of view of the displayed image given a limited camera observation range, while preserving image quality to a certain extent, thereby improving the user's visual experience.
Research has found that a movable platform may also be equipped with perception cameras for sensing the surrounding environment; such cameras are mainly used to acquire sensing data around the platform, for example binocular cameras, multi-camera arrays, or infrared sensors. With these perception cameras, the movable platform can obtain depth information, temperature information, and so on of objects in the scene, assisting functions such as obstacle avoidance, tracking, and positioning during movement.
The data collected by these perception cameras is fed directly into data processing, to extract target objects in the scene or to generate scene depth information. Perception cameras only need to acquire accurate depth or temperature information; the image data they collect does not need to be rendered into images shown to the user. Accordingly, the data acquisition and processing pipelines of these perception cameras are not designed to produce visually pleasing images.
It is worth noting that, in order to capture as much scene information as possible, these perception cameras typically have as large an observation range as possible, allowing them to photograph a wider portion of the scene and acquire the corresponding imaging data. In other words, the images captured by a perception camera look worse than those captured by the main camera, but the perception camera's field of view is larger than that of the main camera used for shooting.
As shown in FIG. 1, a UAV includes a main camera 110 for capturing images and a camera 120 for sensing the surrounding environment.
The resolution of images captured by the main camera 110 is higher than that of images captured by the camera 120, but the field of view of the camera 120 is larger than that of the main camera 110.
The camera 120 may also be referred to as a vision sensor.
Based on the above considerations, the present application proposes fusing images captured by the camera 120 with images captured by the main camera 110, combining the respective advantages of the two cameras to generate an image that has both a large field of view and high resolution in part of its area.
FIG. 2 shows an image processing method 200 provided by an embodiment of the present application. The method can be applied to an electronic device, and may be executed by a processor in the electronic device. The electronic device may include a first camera and a second camera of two different types with different imaging characteristics.
For example, the observation range of the first camera is larger than that of the second camera. As another example, the resolution of the first camera is lower than that of the second camera. As yet another example, the first camera produces non-color images while the second camera produces color images.
The observation range in the embodiments of the present application can be understood as the scene area that a camera is able to photograph.
The method 200 includes steps S210 to S240.
S210: Acquire a first image of a scene captured by the first camera.
S220: Acquire a second image of the scene captured by the second camera.
S230: Determine, in the first image, a first pixel area corresponding to a target object in the scene, and determine, in the second image, a second pixel area corresponding to the target object in the scene.
The first pixel area is the imaging area of the target object in the first image, and the second pixel area is the imaging area of the target object in the second image.
S240: Perform image processing on the first pixel area in the first image based on the second pixel area of the second image, to obtain a processed first image.
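Steps S210 to S240 can be sketched as a minimal fusion pipeline. The helper below is illustrative only: it assumes the corresponding pixel areas have already been determined (here simply given as array slices), which in practice is the matching step S230 described later.

```python
import numpy as np

def fuse_regions(first_img, second_img, first_area, second_area):
    """Sketch of S240: overwrite the first image's pixel area with the
    matching, higher-quality pixel area from the second image.
    `first_area` / `second_area` are (row-slice, col-slice) tuples,
    standing in for the regions determined in step S230."""
    out = first_img.copy()
    out[first_area] = second_img[second_area]
    return out

# Toy example: a 4x4 wide-FOV image and a 2x2 high-quality patch.
wide = np.zeros((4, 4), dtype=np.uint8)
patch = np.full((2, 2), 255, dtype=np.uint8)
fused = fuse_regions(wide, patch,
                     (slice(1, 3), slice(1, 3)),
                     (slice(0, 2), slice(0, 2)))
```

The result keeps the wide image's extent while its central region carries the patch's pixels, mirroring the large-FOV-plus-high-quality fusion described above.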
The first camera and the second camera in the embodiments of the present application may photograph the same scene, obtaining the first image and the second image respectively.
It should be understood that, because the observation ranges of the two cameras differ, the objects in the scene captured by the first camera are not exactly the same as those captured by the second camera. For example, since the observation range of the first camera is larger than that of the second camera, the first camera captures more objects in the scene than the second camera does.
For example, if both cameras photograph a forest, the first and second images both contain imaging areas for the same trees, but the first image contains more trees than the second image.
For convenience of description, the difference between the first and second images caused by the cameras' different observation ranges is referred to below as a difference in field of view: the field of view of the first image is larger than that of the second image.
The first pixel area and the second pixel area are the imaging areas of the same target object in the first image and the second image, respectively.
The target object is part of the scene captured by the first camera; that is, the first pixel area is a partial area of the first image.
The target object may be all or part of the scene captured by the second camera; that is, the second pixel area may be the whole of the second image or a partial area of it.
If the second pixel area is the whole of the second image, the scene captured by the first camera includes all objects captured by the second camera.
The first and second pixel areas described above are imaging areas that objectively correspond to the same target object. In addition, the embodiments of the present application may also determine the first and second pixel areas according to a user's selection; that is, the first pixel area and/or the second pixel area corresponding to the target object are determined based on a user operation. For example, the region the user wishes to process may be determined from the user's tap operation, and the first and second pixel areas determined from that region.
If the user only cares about a person in the scene, the user can tap on the person, and the first and second pixel areas are then the imaging areas of that same person in the first and second images, respectively.
Since the second image is a color image and/or a high-resolution image, it has a better visual effect, while the observation range of the first camera is larger than that of the second camera, so the field of view of the first image is larger than that of the second image. The present application fuses the second image, which has the better visual effect, with the first image, which has the large field of view, to obtain a processed first image that combines a large field of view with better imaging quality, giving the user a better visual experience.
That is, by observing the first image with its larger field of view, the user can see the imaging of objects in more of the scene around the electronic device and thus make better-informed decisions. Furthermore, for image regions of interest to the user, display information supplemented from the second image can help improve the visual effect of parts of the first image, for example with stronger or more realistic colors and sharper imaging.
For convenience of description, the first pixel area is understood below as the area of the first image that overlaps the second image, and the second pixel area as the area of the second image that overlaps the first image.
FIG. 3 and FIG. 4 show different degrees of overlap between the first and second images. FIG. 3 shows the case where the entire second image overlaps a partial area of the first image; FIG. 4 shows the case where a partial area of the second image overlaps a partial area of the first image.
Taking FIG. 3 as an example, the entire second image 320 overlaps a partial area of the first image 310; in this case the second pixel area is the whole of the second image, and the first pixel area 311 overlaps the second image.
As shown in FIG. 4, a partial area of the second image 320 overlaps a partial area of the first image 310; in this case the first pixel area 312 overlaps part of the second image 320. Preferably, the area of the overlapping region of the second image is larger than that of the non-overlapping region, i.e., the size of the second pixel area is slightly smaller than the size of the second image 320.
The camera in the embodiments of the present application may be a still camera or a video camera. For a video camera, the first image and the second image may each be a frame of a video stream.
The first image may be an image captured directly by a camera, or an image obtained by processing an image captured by a camera.
For example, the present application may acquire a first initial image and apply stabilization processing to it to obtain the first image. Shake is unavoidable when shooting images or video; the embodiments of the present application can stabilize the images captured by the camera, so that the user can watch the stabilized images more comfortably, without dizziness.
In addition, stabilization requires cropping: the stabilized image is smaller than the image before stabilization. The embodiments of the present application can therefore stabilize the first initial image first and only then fuse it with the second image, ensuring that the second image, with its high resolution and/or color information, is not cropped, improving its utilization.
The embodiments of the present application do not specifically limit the stabilization method; for example, a conventional video stabilization algorithm may be used.
Typically, the gyroscope of an inertial measurement unit (IMU) is used to measure the change in the rotation angle of the camera's sensor; a filter then smooths this change to generate a new attitude angle, and finally the image is reprojected onto the stabilized attitude angle.
FIG. 5 shows the change in the sensor's attitude angle before and after stabilization: the dotted line shows the attitude angle over time before stabilization, and the solid line after stabilization.
As can be seen from FIG. 5, the attitude angle changes more smoothly after stabilization.
The attitude angle in the embodiments of the present application may include the pitch angle of the sensor.
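The gyroscope-plus-filter smoothing can be illustrated with a simple first-order low-pass filter over a per-frame attitude angle (e.g. pitch). This is a generic sketch of the idea behind FIG. 5, not the patent's particular filter; the smoothing factor `alpha` is an assumed example value.

```python
import numpy as np

def smooth_attitude(angles, alpha=0.9):
    """First-order low-pass filter: each output angle blends the
    previous smoothed angle with the current measured angle."""
    angles = np.asarray(angles, dtype=float)
    out = np.empty_like(angles)
    out[0] = angles[0]
    for i in range(1, len(angles)):
        out[i] = alpha * out[i - 1] + (1.0 - alpha) * angles[i]
    return out

# A shaky pitch trace (degrees) and its stabilized counterpart.
shaky = np.array([0.0, 2.0, -2.0, 2.0, -2.0, 2.0])
stable = smooth_attitude(shaky)
```

The stabilized trace varies far less than the raw one, matching the dotted-versus-solid contrast in FIG. 5; the image would then be reprojected onto the smoothed attitude.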
Taking the UAV in FIG. 1 as an example, the first camera may be the camera 120 and the second camera the main camera 110. The two cameras have their respective roles on the UAV: the camera 120 senses the surrounding environment and its images are not presented to the user, while the camera 110 takes photographs mainly for presentation to the user.
The present application combines the images captured by the cameras 110 and 120 without changing the hardware of the electronic device, and can present the user with an image that has both a large field of view and a good visual effect, without significantly increasing cost.
The first camera in the embodiments of the present application may be used to perceive depth information of surrounding objects; that is, the first image may be an image containing object depth information.
The embodiments of the present application do not specifically limit how the first camera acquires depth information. For example, the first camera may acquire depth information based on the binocular vision principle, as shown in FIG. 1. As another example, the first camera may acquire depth information based on the time-of-flight (TOF) principle.
Taking binocular (or multi-view) vision as an example, the first camera 120 in FIG. 1 may include a first vision sensor 121 and a second vision sensor 122. A first vision image captured by the first vision sensor 121 and a second vision image captured by the second vision sensor 122 may be acquired, and an image with depth information generated from the two vision images.
The embodiments of the present application may first generate a first initial image from the first and second vision images and then stabilize the first initial image to obtain the first image. Alternatively, the first and second vision images may each be stabilized first, and the first image then generated from the stabilized first and second vision images.
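The binocular principle — depth from the horizontal offset (disparity) between the two vision images — can be illustrated with a deliberately tiny single-scanline matcher. This is a toy stand-in for intuition only, not the platform's actual depth pipeline:

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp):
    """For each pixel of the left scanline, find the horizontal shift d
    that best matches a pixel of the right scanline (minimum absolute
    difference). Depth is then proportional to baseline * focal / d."""
    disp = np.zeros(len(left_row), dtype=int)
    for x in range(len(left_row)):
        best_cost, best_d = float("inf"), 0
        for d in range(min(x, max_disp) + 1):
            cost = abs(int(left_row[x]) - int(right_row[x - d]))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A bright feature at x=3 in the left view appears at x=1 in the
# right view, i.e. a disparity of 2 pixels.
d = scanline_disparity([0, 0, 0, 9], [0, 9, 0, 0], max_disp=3)
```

Real stereo pipelines match windows rather than single pixels and operate on rectified image pairs, but the disparity-to-depth relationship is the same.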
Since the first and second images are captured by different cameras, the cameras' intrinsic parameters, shooting positions, and shooting angles differ; directly fusing and matching the raw captured images would be relatively complex.
For this reason, the second image in the embodiments of the present application may be an image after pose transformation. For example, the present application may acquire a second initial image, which may be an image captured directly by the second camera, and then transform the second initial image to the camera pose of the first image to generate the second image.
The embodiments of the present application may first perform pose transformation on the second initial image so that the transformed second image shares the camera pose of the first image, and then fuse the pose-aligned second image with the first image. Fusing two images with the same camera pose greatly reduces the complexity of the fusion.
The above describes transforming the second initial image to the camera pose of the first image, but the embodiments of the present application are not limited to this. The first image may equally be transformed to the camera pose of the second image, as long as the two resulting images share the same camera pose.
The embodiments of the present application may transform the second initial image to the camera pose of the first image to form the second image according to at least one of the following parameters: the intrinsic parameters of the first camera, the intrinsic parameters of the second camera, and the rotation relationship between the first image and the second initial image.
For example, the second initial image may be transformed to the camera pose of the first image using the intrinsic parameters of the first and second cameras. The transformed second image then corresponds to the same camera intrinsics as the first image; however, since the rotation relationship has not been adjusted, a relative rotation may still remain between the two images.
As another example, the second initial image may be transformed to the camera pose of the first image using the intrinsic parameters of both cameras together with the rotation relationship between the first image and the second initial image. The transformed second image then not only shares intrinsics with the first image but also has the relative rotation removed: the baselines of the two images are parallel, and the matching relationship between them reduces to a positional offset in the horizontal direction. During image matching, matching pixels therefore only need to be searched for along the horizontal direction, greatly reducing processing complexity.
The camera model used in the embodiments of the present application may be as follows:
[u, v, 1]^T ∝ K · [R | T] · [x_w, y_w, z_w, 1]^T
where ∝ denotes equality up to a projective scale factor.
Here, [u, v, 1]^T is a two-dimensional (2D) pixel point in homogeneous image coordinates.
[x_w, y_w, z_w]^T is a three-dimensional (3D) point in the world coordinate system.
The matrix K is called the camera calibration matrix; it holds each camera's intrinsic parameters, or intrinsics for short.
For a finite projective camera, the intrinsics K contain five parameters, as follows:
    K = | α_x   γ    μ |
        |  0   α_y   ν |
        |  0    0    1 |
where α_x = f·m_x and α_y = f·m_y, f denotes the focal length, and m_x, m_y denote the number of pixels per unit distance along the x and y axes, respectively. γ denotes the skew parameter between the x and y axes; for example, the pixels of a charge-coupled device (CCD) camera may not be square. μ, ν give the position of the principal point.
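The five intrinsic parameters and the projection equation above can be checked numerically. The values below (focal length, pixel density, principal point) are made-up example numbers, not calibration data from this application:

```python
import numpy as np

def make_K(f, m_x, m_y, gamma, mu, nu):
    """Camera calibration matrix K with alpha_x = f*m_x,
    alpha_y = f*m_y, skew gamma, and principal point (mu, nu)."""
    return np.array([[f * m_x, gamma, mu],
                     [0.0, f * m_y, nu],
                     [0.0, 0.0, 1.0]])

def project(K, R, T, X_w):
    """Project a 3D world point X_w to pixel coordinates (u, v)."""
    X_c = R @ X_w + T      # world -> camera coordinates (extrinsics)
    u, v, w = K @ X_c      # camera -> homogeneous image coordinates
    return u / w, v / w    # dehomogenize by the projective scale

K = make_K(f=1.0, m_x=100.0, m_y=100.0, gamma=0.0, mu=320.0, nu=240.0)
u, v = project(K, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 2.0]))
```

As a sanity check, a point on the optical axis projects exactly to the principal point (μ, ν), regardless of its depth.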
The matrix R is the rotation matrix and the matrix T is the translation matrix; R and T are the camera's extrinsic parameters, or extrinsics for short, representing the rotation and translation from the world coordinate system to the camera coordinate system in three-dimensional space.
For the stabilization process, let the original attitude of the first image in the world coordinate system be R_0 and the attitude of the filtered first image be R_1; the rotation transformation from the original attitude to the filtered attitude is then R_10 = R_1 · R_0^{-1}.
For the pose transformation process, let the intrinsics of the second camera be K_c, the second initial image captured by the second camera be I_c, the intrinsics of the first camera be K_g, the first image be I_g, and the rotation relationship between the first image and the second initial image be R_gc.
From the stabilization process above, the attitude transformation of the first image after stabilization filtering is R_10; the second initial image I_c is then transformed to the camera pose of the first image I_g:
    [u_c', v_c', 1]^T ∝ K_g · R_10 · R_gc · K_c^(-1) · [u_c, v_c, 1]^T
where (u_c, v_c)^T is a point on the second initial image I_c and (u_c', v_c')^T is the corresponding point on the transformed second image I_c'.
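As a minimal sketch of the warp described above (the intrinsic values and the test point are illustrative assumptions, not values from this application), the combined transform can be composed into a single 3x3 homography H = K_g · R_10 · R_gc · K_c^(-1) and applied to homogeneous pixel coordinates:

```python
import numpy as np

def warp_point(K_g, R_10, R_gc, K_c, uv_c):
    """Map a pixel (u_c, v_c) of the second initial image I_c onto the
    stabilized camera position of the first image I_g."""
    H = K_g @ R_10 @ R_gc @ np.linalg.inv(K_c)  # combined 3x3 homography
    p = H @ np.array([uv_c[0], uv_c[1], 1.0])
    return p[:2] / p[2]  # normalize homogeneous coordinates

# Illustrative example: identical intrinsics and identity rotations,
# so the point maps onto itself.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
I3 = np.eye(3)
uv = warp_point(K, I3, I3, K, (100.0, 50.0))
```

In practice R_10, R_gc and the two intrinsic matrices come from the stabilization filter and the camera calibration, respectively.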
The electronic device in the embodiments of the present application may be a device such as a movable platform.
After the processed first image is obtained in the manner described above, the user can operate the electronic device according to the processed first image. For example, the user can input an instruction, and the electronic device can adjust the orientations of the first camera and the second camera according to the instruction input by the user, so that the first camera and the second camera shoot toward the same scene.
After the processed first image is obtained, the user can determine the current position information of the electronic device according to the information in the processed first image, determine the moving direction of the electronic device according to the current position information, and input a control instruction corresponding to that moving direction.
The processed first image in the embodiments of the present application can be used for first-person-view (FPV) flight. For example, on an unmanned aerial vehicle, the user can control the flight of the vehicle according to the processed first image; and because the processed first image has a larger field of view, the user's field of vision is wider and the user experience is better.
The first pixel area in the embodiments of the present application is preferably located in the middle area of the first image; that is, the overlapping area of the second image and the first image is the middle area of the first image. In this way, after the second image is fused with the first image, the generated processed first image can be an image whose middle area is high-resolution and whose edge area is low-resolution, and/or whose middle area is a color image and whose edge area is an achromatic image. Since the user mainly focuses on the middle area when viewing the processed first image, the high resolution and/or color of the middle area can improve the user's visual experience. The edge area mainly assists the user's perception of the surrounding environment, so the user's requirements on the resolution and/or color of the edge area are not high, and the low resolution and/or missing color information of the edge area does not affect the user's visual experience.
In addition, the user can adjust the shooting direction of the first camera and/or the second camera according to where the first pixel area is located in the first image. For example, as shown in FIG. 4, if the first pixel area 312 is located in an edge area of the first image 310 instead of the middle area, the user can adjust the shooting direction of the first camera and/or the second camera so that the first pixel area is located in the middle area of the first image.
Optionally, the first image is an achromatic image and the second image is a color image. For example, the first image can be a low-resolution achromatic image and the second image can be a high-resolution color image.
Since the first image is achromatic and the second image is in color, the color information of the second image can be retained during fusion; the generated processed first image is then an image that is partly in color and partly in black and white.
To improve the user's visual experience, after the processed first image is generated, the embodiments of the present application may further perform color processing on the processed first image to generate an aesthetically pleasing artistic photo.
The color processing may include at least one of the following: color filling, color removal, and color retention.
Since the field of view of the second image is smaller than that of the first image, in the fused processed first image the same object may appear half in color and half in black and white, which affects the user's visual experience.
As shown in FIG. 6, the middle area 420 in FIG. 6 is a color area and the edge area 410 is a black-and-white area. The tall building 402 is half in color and half in black and white; the ship 404 is half in color and half in black and white; the road 406 is half in color and half in black and white; and the Ferris wheel 408 is half in color and half in black and white.
Based on this, the embodiments of the present application can perform color filling (also called color expansion) on the processed first image. Specifically, a colorless area of an object can be filled based on the colored area of the same object in the processed first image. For example, the processed first image can be segmented so that each object is separated out of it; if one part of an object has color information and another part has none, the part without color information can be filled in according to the part that has color information.
The embodiments of the present application do not specifically limit the manner of segmenting the processed first image. For example, a superpixel segmentation method can be used to segment the processed first image into superpixels, so that adjacent pixels with similar texture, color, brightness and other characteristics form irregular pixel blocks with a certain visual significance.
Grouping pixels by the similarity of their features and expressing image features with a small number of superpixels instead of a large number of pixels significantly reduces the number of elements to process, which greatly reduces the complexity of image post-processing.
The fewer superpixels the image is segmented into, the lower the complexity of image processing, but also the more image features are lost. The embodiments of the present application do not specifically limit the number of superpixels, which can be set according to actual needs.
FIG. 7 shows an image segmented by a superpixel segmentation method, in which the number of superpixels in region 504 is smaller than the number of superpixels in region 502. Therefore, the processing complexity of region 504 is lower than that of region 502, but region 502 retains more image features than region 504.
The embodiments of the present application do not specifically limit the method used for superpixel segmentation. For example, the superpixel segmentation method may include at least one of the following algorithms: simple linear iterative clustering (SLIC), Graph-based, NCut, Turbopixel, Quick-shift, Graph-cut a, and Graph-cut b.
Compared with the other algorithms, the SLIC algorithm significantly reduces computational complexity while giving better control over the size and compactness of the superpixels; therefore, the SLIC algorithm is preferably used in the embodiments of the present application.
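As an illustration of the SLIC idea (not the implementation used in this application), the following sketch clusters pixels of a grayscale image by a combined intensity + spatial distance, starting from cluster centers placed on a regular grid; the image, segment count and compactness value are all assumptions for the example:

```python
import numpy as np

def slic_sketch(image, n_segments=4, compactness=10.0, n_iters=5):
    """Minimal SLIC-style clustering on a grayscale image: pixels are
    assigned to the nearest center by intensity + weighted spatial
    distance, and centers are re-estimated."""
    h, w = image.shape
    step = int(np.sqrt(h * w / n_segments))
    # Initialize cluster centers on a regular grid: (intensity, y, x).
    ys, xs = np.meshgrid(np.arange(step // 2, h, step),
                         np.arange(step // 2, w, step), indexing="ij")
    centers = np.stack([image[ys, xs].ravel(),
                        ys.ravel().astype(float),
                        xs.ravel().astype(float)], axis=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([image.ravel(), yy.ravel().astype(float),
                    xx.ravel().astype(float)], axis=1)
    for _ in range(n_iters):
        # Combined distance: intensity term plus weighted spatial term.
        d_int = (pix[:, None, 0] - centers[None, :, 0]) ** 2
        d_sp = ((pix[:, None, 1] - centers[None, :, 1]) ** 2 +
                (pix[:, None, 2] - centers[None, :, 2]) ** 2)
        labels = np.argmin(d_int + (compactness / step) ** 2 * d_sp, axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = pix[labels == k].mean(axis=0)
    return labels.reshape(h, w)

# Illustrative example: left half dark, right half bright; no superpixel
# should straddle the intensity boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = slic_sketch(img, n_segments=4)
```

A production implementation would operate in a perceptual color space and restrict each center's search to a local window, which is what gives SLIC its low computational complexity.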
To further reduce the complexity of image processing, the embodiments of the present application may segment out only objects located at the boundary between the color area and the black-and-white area; if part of such an object has color information, that color information is extended to the entire object.
As shown in FIG. 6, the Ferris wheel 408 is located at the boundary between the color area and the black-and-white area, and one part of the Ferris wheel 408 has color information while the other part has none; the color can then be extended to the entire Ferris wheel according to the part that has color information.
The embodiments of the present application can also perform color processing such as color filling, and/or color removal, and/or color retention on the processed first image based on depth information.
Specifically, since the first image has depth information, the generated processed first image also has depth information. The present application can perform color processing on the processed first image based on that depth information; for example, the same color processing can be applied to areas of the processed first image that have the same depth.
As an embodiment, a click operation performed by the user on the processed first image can be acquired, first depth information of the object at the clicked position can be determined, and color processing can be performed on the processed first image according to the first depth information.
For example, a target area can be determined according to the first depth information, and the target area can be filled with color, or only the color of the target area can be retained. The target area is connected to the object at the clicked position, and the difference between the depth of the target area and the first depth information is within a first preset range.
The user can click on the object to be colored, and the processor can, according to the depth information at the clicked position, determine the pixels that are spatially connected and similar in depth as the target area and display the color of that target area only, thereby forming an aesthetically pleasing artistic photo.
As another embodiment, distance information input by the user can be acquired, and color processing can be performed on the processed first image according to the distance information.
A target area can be determined according to the distance information, and the target area can be filled with color, or only the color of the target area can be retained, where the difference between the depth of the target area and the distance is within a second preset range.
The user can input the distance at which color is to be displayed; the processor can convert the distance into a depth and highlight the objects at the corresponding depth in color while keeping the rest in black and white, thereby forming an aesthetically pleasing artistic photo.
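A minimal sketch of depth-based color retention, assuming a per-pixel depth map is available alongside the fused image (the tiny image, depths and tolerance below are illustrative, and grayscale is approximated by the channel mean):

```python
import numpy as np

def retain_color_by_depth(rgb, depth, target_depth, tolerance):
    """Keep color only where |depth - target_depth| <= tolerance;
    render everything else as grayscale."""
    gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)
    mask = np.abs(depth - target_depth) <= tolerance
    return np.where(mask[..., None], rgb, gray)

# Illustrative example: a 2x2 color image in which only the pixels whose
# depth is within 0.5 of the target depth 1.0 keep their color.
rgb = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]])
depth = np.array([[1.0, 5.0],
                  [1.2, 6.0]])
out = retain_color_by_depth(rgb, depth, target_depth=1.0, tolerance=0.5)
```

The tolerance plays the role of the "second preset range" above; a connectivity check (e.g. a flood fill from the clicked pixel) would additionally restrict the mask to the clicked object.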
In addition, the embodiments of the present application can highlight objects, buildings and the like at a given distance according to the depth information, or perform auxiliary measurement, reconnaissance and the like based on the depth information.
The embodiments of the present application do not specifically limit the manner of performing image processing on the first pixel area of the first image based on the second pixel area of the second image.
For example, the second pixel area of the second image can directly replace the first pixel area of the first image. This approach is simple to implement and has low processing complexity, but image misalignment can appear in the processed first image.
As another example, the second pixel area can be mapped onto the first pixel area according to a mapping relationship between the second pixel area and the first pixel area. The processed first image generated in this way looks better and has no obvious image misalignment.
The mapping approach is described below as an example.
The present application can map the second pixel area onto the first pixel area according to the mapping relationship between the pixels of the second pixel area and the pixels of the first pixel area.
For example, for a pixel of the second pixel area, a matching pixel is found in the first pixel area, and the pixel of the second pixel area is mapped to the position of its matching pixel in the first pixel area.
If a matching pixel were searched for in the first pixel area for every pixel of the second pixel area, the complexity of image processing would undoubtedly increase. Based on this, to reduce image processing complexity, the present application maps only the edges of the first pixel area and the second pixel area, while non-edge areas are filled by interpolation.
Specifically, the edges of the second pixel area can be mapped onto the edges of the first pixel area according to the mapping relationship between them, while a non-edge area can be filled by interpolating from the edges on its two sides.
As shown in FIG. 8, 315, 316 and 317 are edge areas of the first image 310, and 314 is a non-edge area of the first image 310; 325, 326 and 327 are edge areas of the second image, and 324 is a non-edge area of the second image 320. Edge 315 matches edge 325, edge 316 matches edge 326, and edge 317 matches edge 327; therefore, edge 325 can be mapped to the position of edge 315, edge 326 to the position of edge 316, and edge 327 to the position of edge 317. The non-edge area 314 can be obtained by interpolation based on the information of the edges 315 and 316 on its two sides.
Optionally, if a non-edge area has edge information on only one side, it can be left unmapped.
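The interpolation step can be sketched as follows: once the mapping shift is known at the two edges bounding a non-edge area, the shift of each interior pixel is interpolated linearly between them (the positions and shift values below are illustrative assumptions):

```python
import numpy as np

def interpolate_between_edges(x_left, shift_left, x_right, shift_right, xs):
    """Linearly interpolate the mapping shift for non-edge pixels located
    between two mapped edges at x_left and x_right."""
    t = (np.asarray(xs, dtype=float) - x_left) / (x_right - x_left)
    return shift_left + t * (shift_right - shift_left)

# Illustrative example: the left edge moved by 2 px and the right edge by
# 6 px; a pixel halfway between them is shifted by the average, 4 px.
shifts = interpolate_between_edges(10, 2.0, 30, 6.0, [10, 20, 30])
```

This is why a non-edge area with edge information on only one side can be left unmapped: with a single boundary value there is nothing to interpolate between.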
The present application can determine the edges of the second pixel area and the edges of the first pixel area, and map the edges of the second pixel area onto the edges of the first pixel area according to the mapping relationship between them, so as to generate the image information of the edge areas of the processed first image.
The embodiments of the present application do not specifically limit the manner of determining the edges of the second pixel area and the edges of the first pixel area.
For example, an edge detection operator can be used to extract the edges of the second pixel area and the edges of the first pixel area. The edge detection operator can, for example, be the Canny operator; that is, the Canny algorithm can be used to extract the edges of the second pixel area and the first pixel area.
For the Canny algorithm, the parameters of the edge detection operator can be threshold parameters.
Since the exposure parameters of the first camera and the second camera may be inconsistent, the edges of the two images may differ. If the edge densities of the two images are unbalanced, the complexity of edge mapping increases. For example, if the number of edges extracted from the second pixel area is greater than the number of edges extracted from the first pixel area, determining the mapping relationship between the edges becomes more complex.
To improve the accuracy of edge matching and reduce its complexity, the present application can adjust the parameters of the edge detection operator to balance the edge densities of the second pixel area and the first pixel area.
The edge density refers to the ratio of the number of pixels lying on edges in an image to the total number of pixels in the image.
For the first pixel area, the edge density d_g is:
d_g = (number of pixels on edges in the first pixel area) / (total number of pixels in the first pixel area)
For the second pixel area, the edge density d_c is:
d_c = (number of pixels on edges in the second pixel area) / (total number of pixels in the second pixel area)
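Given a binary edge map (as produced by an edge detector), the edge density defined above is a one-line ratio; the 4x4 example map below is an illustrative assumption:

```python
import numpy as np

def edge_density(edge_map):
    """Edge density: number of edge pixels divided by total pixels."""
    edge_map = np.asarray(edge_map, dtype=bool)
    return edge_map.sum() / edge_map.size

# Illustrative example: a 4x4 edge map with one full edge row,
# i.e. 4 edge pixels out of 16.
edges = np.zeros((4, 4), dtype=bool)
edges[1, :] = True
d = edge_density(edges)
```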
The present application can extract the edges of the second pixel area and the initial edges of the first pixel area, adjust the parameters of the edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area, and determine the edges of the first pixel area according to the adjusted parameters of the edge detection operator.
Specifically, if the difference between the initial edge density of the first pixel area and the edge density of the second pixel area is within a third preset range, the initial edges of the first pixel area can be taken as the final edges of the first pixel area. If the difference is not within the third preset range, the threshold parameters of the edge detection operator are adjusted and the edges of the first pixel area are re-extracted according to the adjusted parameters, so that the difference between the edge density of the second pixel area and the edge density of the first pixel area falls within the third preset range. That is, the threshold parameters of the edge detection operator can be adjusted repeatedly and the edges of the first pixel area re-extracted, until the difference between the re-extracted edge density of the first pixel area and the edge density of the second pixel area is within the third preset range.
The above describes re-extracting the edges of the first pixel area so that the difference between the edge density of the first pixel area and the edge density of the second pixel area is within the third preset range; however, the embodiments of the present application are not limited to this, as long as the difference between the edge densities of the two images is within the third preset range. For example, the edges of the first pixel area can also be kept unchanged and the edges of the second pixel area re-extracted, so that the difference between the edge density of the second pixel area and the edge density of the first pixel area is within the third preset range.
Taking the Canny algorithm as an example, the default threshold parameters [t_l, t_n] can first be used to extract the edges of the second pixel area and the edges of the first pixel area. Taking FIG. 9 as an example, the left side shows the second pixel area and the right side shows the extracted edges of the second pixel area.
If the edge densities of the second pixel area and the first pixel area are unbalanced, the following process can be used to adjust them:
(1) If d_g < d_c - ε, decrease the threshold parameters [t_l, t_n], re-extract the edges of the first pixel area, and recompute the edge density d_g of the first pixel area.
(2) If d_g > d_c + ε, increase the threshold parameters [t_l, t_n], re-extract the edges of the first pixel area, and recompute the edge density d_g of the first pixel area.
(3) Stop the above process once d_c - ε < d_g < d_c + ε.
Here, ε represents the tolerance of the edge density difference between the two images; its value is an empirical engineering value and can be adjusted as needed. 1/ε represents the similarity of the edge densities of the two images.
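The adjustment loop above can be sketched as follows. The simple gradient-magnitude detector is an illustrative stand-in for the Canny operator (a single threshold t stands in for the pair [t_l, t_n]), and the random test images, step factor and tolerance are likewise assumptions for the example:

```python
import numpy as np

def detect_edges(image, t):
    """Illustrative stand-in edge detector: marks pixels whose horizontal
    gradient magnitude exceeds the threshold t."""
    grad = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    return grad > t

def balance_edge_density(img_g, img_c, t=0.5, eps=0.02, step=0.97,
                         max_iters=100):
    """Adjust the threshold for the first image until its edge density d_g
    is within eps of the second image's edge density d_c."""
    d_c = detect_edges(img_c, 0.5).mean()
    d_g = detect_edges(img_g, t).mean()
    for _ in range(max_iters):
        if d_g < d_c - eps:
            t *= step          # lower threshold -> more edges
        elif d_g > d_c + eps:
            t /= step          # raise threshold -> fewer edges
        else:
            break
        d_g = detect_edges(img_g, t).mean()
    return t, d_g, d_c

# Illustrative example on two random images drawn from the same distribution.
rng = np.random.default_rng(0)
img_g = rng.random((32, 32))
img_c = rng.random((32, 32))
t, d_g, d_c = balance_edge_density(img_g, img_c)
```

The multiplicative step keeps the threshold positive and makes each density adjustment small relative to the tolerance band, so the loop does not oscillate across it.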
Mapping the edges of the second pixel area onto the edges of the first pixel area according to the mapping relationship between them, to generate the image information of the edge areas of the processed first image, can include: determining a first edge segment and a second edge segment, where the first edge segment is an edge segment on an edge of the first pixel area and the second edge segment is an edge segment on an edge of the second pixel area; and mapping the second edge segment onto the area where the first edge segment is located according to the mapping relationship between the first edge segment and the second edge segment, to generate the image information of the edge areas of the processed first image.
Since the extracted edges can be multiple edge segments rather than one continuous edge, the present application can perform edge matching according to the mapping relationship between the multiple edge segments of the two images.
In addition, to further reduce the complexity of the edge matching process, the present application can split the extracted edges into segments, for example into multiple line segments, and then perform matching on the split line segments.
FIG. 10 is a schematic diagram before and after edge segmentation: the left side shows the edges extracted by the edge extraction algorithm, and the right side shows the edges after segmentation.
The extracted edges may contain forked edge segments, for example an edge segment like the one in the left image in which three line segments meet at one point. Matching directly on such forked edge segments increases the difficulty of matching.
The present application can split the extracted edges into multiple line segments, that is, into multiple segments each having two endpoints and one connecting line. Since a line segment has a simple structure and contains fewer pixels than the edge segment before splitting, matching on line segments improves matching efficiency and reduces matching difficulty.
A line segment in the embodiments of the present application can be a straight segment or a curved segment, which is not specifically limited.
Specifically, the present application can split the first edge segment to obtain a first line segment set, split the second edge segment to obtain a second line segment set, determine the mapping relationship between the line segments of the first line segment set and the line segments of the second line segment set, and map the line segments of the second line segment set onto the line segments of the first line segment set according to that mapping relationship, to generate the image information of the edge areas of the processed first image.
Further, the split line segments can be screened to remove segments that do not meet the requirements before matching. For example, line segments whose length is less than a first preset value can be removed from a line segment set, because matching very short segments is of little significance and wastes processing resources.
For the first edge segment, the present application can split it to obtain a first initial edge segment set, and then remove the line segments whose length is less than the first preset value from that set to obtain the first line segment set.
For the second edge segment, the present application can split it to obtain a second initial edge segment set, and then remove the line segments whose length is less than a second preset value from that set to obtain the second line segment set.
The first preset value and the second preset value can be equal.
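The screening step above amounts to a simple length filter; the segment representation (endpoint pairs) and the preset length value below are illustrative assumptions:

```python
import numpy as np

def filter_short_segments(segments, min_length):
    """Drop line segments shorter than min_length; each segment is a pair
    of endpoints ((x1, y1), (x2, y2))."""
    kept = []
    for (x1, y1), (x2, y2) in segments:
        if np.hypot(x2 - x1, y2 - y1) >= min_length:
            kept.append(((x1, y1), (x2, y2)))
    return kept

# Illustrative example: the 5-pixel segment is kept, the 1-pixel one removed.
segs = [((0, 0), (3, 4)), ((10, 10), (10, 11))]
kept = filter_short_segments(segs, min_length=2.0)
```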
When matching line segments, the mapping relationship between the line segments of the first line segment set and the line segments of the second line segment set can be determined according to the degree of matching between the pixels on the line segments of the first set and the pixels on the line segments of the second set.
The embodiments of the present application do not specifically limit the algorithm used for matching the first line segment set and the second line segment set.
For example, a normalized cross-correlation (NCC) algorithm can be used to perform the matching between the first line segment set and the second line segment set.
Assume the first line segment set is E_g = {l_1, l_2, l_3, ..., l_n} and the second line segment set is E_c = {m_1, m_2, m_3, ..., m_n}.
For each pixel p_i1, p_i2, ..., p_in on the i-th line segment l_i of the set E_g, the corresponding matching points q_i1, q_i2, ..., q_in in the set E_c are searched for by NCC matching. If a matching point q belongs to a line segment m_j, the matching score between line segment l_i and line segment m_j is increased by 1. The matching point of every point on the line segment l_i is computed in this way, so as to find the line segment m_j that best matches the line segment l_i.
本申请实施例对确定与线段l i匹配的线段m j的方式不做具体限定。 The embodiment of the present application does not specifically limit the manner of determining the line segment m j that matches the line segment li .
作为一个示例,若线段m j中与线段l i匹配的像素点的数量与线段m j上的像素点的总数量之间的比值大于预设阈值,则可以确定该线段m j与线段l i相匹配。 As an example, if the ratio between the number of pixels matching the line segment li in the line segment m j and the total number of pixels on the line segment m j is greater than a preset threshold, it can be determined that the line segment m j and the line segment li match.
作为另一个示例,可以根据投票原则,将集合E c中与线段l i匹配的像素点的数量最多的线段确定为与线段l i匹配的线段,或者将集合E c中与线段l i匹配的像素点的占比最高的线段确定为与线段l i匹配的线段,或者将集合E c中与线段l i匹配的像素点的占比大于预设阈值的线段确定为与线段l i匹配的线段。 As another example, according to the voting principle, the line segment with the largest number of pixel points matching the line segment li in the set E c may be determined as the line segment matching the line segment li, or the line segment matching the line segment li in the set E c may be determined as the line segment matching the line segment li . The line segment with the highest proportion of pixel points is determined as the line segment matching the line segment li, or the line segment with the proportion of the pixel points matching the line segment li in the set E c that is greater than the preset threshold is determined as the line segment matching the line segment li. .
举例说明,如果l 1和m 1有2个像素点匹配,与m 2有15个像素点匹配,与m 3有3个像素点匹配,则根据投票原则,m 2与l 1匹配的像素点最多,即m 2与l 1匹配度最高,所以m 2与l 1匹配。 For example, if l 1 and m 1 have 2 pixels matching, m 2 has 15 pixels matching, and m 3 has 3 pixels matching, then according to the voting principle, m 2 and l 1 matching pixels At most, that is, m 2 matches l 1 the most, so m 2 matches l 1 .
或者,假设预设阈值为50%,由于m 2与l 1匹配的像素点的占比为15/(2+15+3)=75%,75%>50%,则m 2与l 1匹配。 Or, assuming that the preset threshold is 50%, since the proportion of pixels matching m 2 and l 1 is 15/(2+15+3)=75%, 75%>50%, then m 2 matches l 1 .
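The voting rule described above can be sketched as follows; the function name and the dictionary-of-counts input are illustrative, not part of the patent's implementation:

```python
from collections import Counter

def best_matching_segment(matches, threshold=0.5):
    """Pick the segment of E_c that best matches a segment l_i of E_g.

    `matches` maps each candidate segment id m_j to the number of pixels
    of l_i whose NCC match fell on m_j.
    """
    total = sum(matches.values())
    if total == 0:
        return None
    # Voting principle: the candidate with the most matching pixels wins...
    winner, votes = Counter(matches).most_common(1)[0]
    # ...provided its share of the matches exceeds the preset threshold.
    if votes / total > threshold:
        return winner
    return None

# Example from the text: l_1 matches m_1 on 2 pixels, m_2 on 15, m_3 on 3,
# so m_2 wins with 15/20 = 75% > 50%.
result = best_matching_segment({"m1": 2, "m2": 15, "m3": 3})
```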
According to the above description, since the intrinsic parameters of the first image and the second image have already been aligned, during matching it is only necessary to search for the corresponding matching line segment in the horizontal direction.
Taking FIG. 11 as an example, the left side is the first image and the right side is the second image. A line segment in the first image is compared with the line segments in the same row of the second image, so as to find the line segment in the second image that matches it.
FIG. 11 is a schematic diagram of finding a matching line segment for the line segment 602 in the first image. According to the voting principle described above, the line segment 604 in the second image has the largest number of matching pixels, that is, the highest matching score; therefore, line segment 604 is the line segment that matches line segment 602.
The method embodiments of the present application are described in detail above with reference to FIG. 1 to FIG. 11, and the apparatus embodiments of the present application are described below with reference to FIG. 12. The apparatus embodiments correspond to the method embodiments; therefore, for parts that are not described in detail, reference may be made to the foregoing method embodiments.
FIG. 12 shows an image processing apparatus 700 provided by an embodiment of the present application. The apparatus 700 may include a memory 710 and a processor 720.
The memory 710 is configured to store a computer program.
The processor 720 is configured to invoke the computer program; when the computer program is executed by the processor, the apparatus is caused to perform the following steps:
acquiring a first image captured of a scene by a first camera of an electronic device;
acquiring a second image captured of the scene by a second camera of the electronic device;
wherein the observation range of the first camera is larger than that of the second camera; and the resolution of the first camera is lower than that of the second camera, and/or the image formed by the first camera is an achromatic image while the image formed by the second camera is a color image;
determining, in the first image, a first pixel area corresponding to a target object in the scene, and determining, in the second image, a second pixel area corresponding to the target object in the scene;
performing, based on the second pixel area in the second image, image processing on the first pixel area in the first image to obtain a processed first image.
Optionally, in some embodiments, the target object is all or some of the scene objects in the scene captured by the second camera.
Optionally, in some embodiments, the first pixel area and/or the second pixel area corresponding to the target object is determined based on a user operation.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: acquiring a first initial image captured of the scene by the first camera; and performing stabilization processing on the first initial image to obtain the first image.
Optionally, in some embodiments, the first camera includes a first visual sensor and a second visual sensor, and when the computer program is executed by the processor, the apparatus is caused to perform the following steps: acquiring a first visual image captured of the scene by the first visual sensor; acquiring a second visual image captured of the scene by the second visual sensor; and generating, according to the first visual image and the second visual image, the first initial image with depth information.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: acquiring a second initial image captured of the scene by the second camera; and converting the second initial image to the camera pose of the first image to generate the second image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following step: converting, according to the internal parameters of the first camera, the internal parameters of the second camera, and the rotation relationship between the first image and the second initial image, the second initial image to the camera pose of the first image to generate the second image.
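Under the common pure-rotation assumption, a conversion of this kind can be expressed as a homography H = K1·R·K2⁻¹ built from the two cameras' internal parameter matrices and the rotation between the views. The sketch below illustrates that relation; the variable names and the pinhole-model simplification are assumptions, and the patent's exact warp may differ:

```python
import numpy as np

def rotation_homography(K1, K2, R):
    """Homography mapping pixels of the second initial image into the
    first image's camera pose, assuming the two views differ only by the
    rotation R (pinhole model, no translation)."""
    return K1 @ R @ np.linalg.inv(K2)

def warp_pixel(H, u, v):
    # Apply the homography in homogeneous coordinates and dehomogenize.
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Sanity check: with identical intrinsics and no rotation the mapping is
# (numerically) the identity.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, K, np.eye(3))  # warp_pixel(H, 100, 50) ≈ (100, 50)
```

In practice the resulting H would be applied to the whole second initial image (e.g. with a perspective-warp routine) to produce the second image in the first camera's pose.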
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: receiving an instruction input by a user; and adjusting, according to the instruction input by the user, the shooting directions of the first camera and the second camera, so that the first camera and the second camera shoot toward the same scene.
Optionally, in some embodiments, the electronic device is an unmanned aerial vehicle, and the processed first image is used for first-person view (FPV) flight.
Optionally, in some embodiments, the first pixel area is located in the middle area of the first image.
Optionally, in some embodiments, the first image is an achromatic image and the second image is a color image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: segmenting the processed first image using superpixel segmentation; and if, after segmentation, one part of the same object has color information and another part has no color information, filling the other part of the same object with color according to the part of the same object that has color information.
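One possible realization of the color-fill step is to paint a superpixel's uncolored pixels with the mean color of its colored pixels. The rule below is a minimal sketch under that assumption; the patent does not prescribe a specific fill rule, and the superpixel labels are assumed to come from a prior segmentation step (e.g. SLIC):

```python
import numpy as np

def fill_segment_colors(labels, color, has_color):
    """For each superpixel (pixels sharing a label), fill the uncolored
    pixels with the mean color of that superpixel's colored pixels."""
    out = color.copy()
    for seg in np.unique(labels):
        mask = labels == seg
        known = mask & has_color        # pixels of this superpixel with color
        unknown = mask & ~has_color     # pixels of this superpixel without color
        if known.any() and unknown.any():
            out[unknown] = color[known].mean(axis=0)
    return out
```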
Optionally, in some embodiments, the achromatic image includes depth information of objects, and when the computer program is executed by the processor, the apparatus is caused to perform the following step: performing color processing on the processed first image according to the depth information of the objects, where the color processing includes at least one of the following: color filling, color removal, and color retention.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following step: performing the same color processing on regions of the same depth in the processed first image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: acquiring a click operation of a user on the processed first image; determining first depth information of the object at the clicked position; and performing color processing on the processed first image according to the first depth information.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: determining a target area according to the first depth information, where the target area is connected to the object at the clicked position and/or the difference between the depth of the target area and the first depth information is within a first preset range; and performing color processing on the target area.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: acquiring distance information input by a user; and performing color processing on the processed first image according to the distance information.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: determining a target area according to the distance information, where the difference between the depth of the target area and the distance is within a second preset range; and performing color processing on the target area.
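The depth-window color processing above can be sketched as follows: pixels whose depth falls within a preset range of the reference distance (from a user click or user-entered distance) retain their color, and the rest are reduced to grayscale (color removal). The mean-of-channels luma approximation and all names are illustrative:

```python
import numpy as np

def color_by_depth(rgb, depth, d, tol):
    """Retain color only where |depth - d| <= tol; elsewhere fall back
    to grayscale."""
    keep = np.abs(depth - d) <= tol                 # target area mask
    gray = rgb.mean(axis=-1, keepdims=True)         # crude grayscale stand-in
    return np.where(keep[..., None], rgb, gray)
```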
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: determining the edge of the second pixel area and the edge of the first pixel area; and mapping, according to the mapping relationship between the edge of the second pixel area and the edge of the first pixel area, the edge of the second pixel area onto the edge of the first pixel area to generate image information of the edge region of the processed first image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: extracting the edge of the second pixel area and an initial edge of the first pixel area; adjusting parameters of an edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area; and determining the edge of the first pixel area according to the adjusted parameters of the edge detection operator.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following step: adjusting the parameters of the edge detection operator when the difference between the edge density of the second pixel area and the initial edge density of the first pixel area is not within a third preset range; and the determining the edge of the first pixel area according to the adjusted parameters of the edge detection operator includes: re-extracting the edge of the first pixel area according to the adjusted parameters of the edge detection operator until the difference between the edge density of the second pixel area and the edge density of the first pixel area is within the third preset range.
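The density-driven parameter adjustment can be sketched as a feedback loop: extract edges, compare the resulting density against the second pixel area's, and nudge the operator's threshold until the difference falls within the preset range. The toy gradient-threshold operator below stands in for whatever edge detection operator (e.g. Canny) is actually used; the multiplicative update factors are assumptions:

```python
import numpy as np

def detect_edges(gray, thresh):
    """Toy edge operator: horizontal gradient magnitude above `thresh`."""
    return np.abs(np.diff(gray, axis=1)) > thresh

def match_edge_density(gray, target_density, tol=0.05, thresh=1.0, max_iter=50):
    """Re-extract edges of the first pixel area, adjusting the operator's
    threshold until its edge density approaches the target density taken
    from the second pixel area."""
    edges = detect_edges(gray, thresh)
    for _ in range(max_iter):
        density = edges.mean()
        if abs(density - target_density) <= tol:
            break
        # Too few edges -> lower the threshold; too many -> raise it.
        thresh *= 0.8 if density < target_density else 1.25
        edges = detect_edges(gray, thresh)
    return edges, thresh
```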
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: determining a first edge segment and a second edge segment, where the first edge segment is an edge segment on the edge of the first pixel area and the second edge segment is an edge segment on the edge of the second pixel area; and mapping, according to the mapping relationship between the first edge segment and the second edge segment, the second edge segment onto the region where the first edge segment is located to generate image information of the edge region of the processed first image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: segmenting the first edge segment to obtain a first line segment set; segmenting the second edge segment to obtain a second line segment set; determining the mapping relationship between line segments in the first line segment set and line segments in the second line segment set; and mapping, according to that mapping relationship, the line segments in the second line segment set to the positions of the corresponding line segments in the first line segment set to generate image information of the edge region of the processed first image.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: segmenting the first edge segment to obtain a first initial line segment set; and removing, from the first initial line segment set, line segments whose length is less than a first preset value to obtain the first line segment set.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following steps: segmenting the second edge segment to obtain a second initial line segment set; and removing, from the second initial line segment set, line segments whose length is less than a second preset value to obtain the second line segment set.
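The segmentation-and-pruning steps can be sketched as follows. Splitting the edge chain where its direction turns sharply is one plausible rule (the patent leaves the splitting criterion open), and the angle comparison here ignores ±π wraparound for brevity:

```python
import math

def split_into_segments(points, angle_tol=0.2):
    """Split an ordered chain of edge points into roughly straight line
    segments wherever the direction turns by more than `angle_tol` radians."""
    segments, current = [], [points[0]]
    prev_angle = None
    for a, b in zip(points, points[1:]):
        angle = math.atan2(b[1] - a[1], b[0] - a[0])
        if prev_angle is not None and abs(angle - prev_angle) > angle_tol:
            segments.append(current)    # direction changed: close the segment
            current = [a]
        current.append(b)
        prev_angle = angle
    segments.append(current)
    return segments

def drop_short(segments, min_len):
    """Discard segments whose endpoint-to-endpoint length is below the
    preset value, yielding the final line segment set."""
    return [s for s in segments if math.dist(s[0], s[-1]) >= min_len]
```

For example, an L-shaped chain splits into its two straight arms, after which `drop_short` removes any arm shorter than the preset value.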
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following step: determining the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set according to the matching degree between pixels on the line segments in the first line segment set and pixels on the line segments in the second line segment set.
Optionally, in some embodiments, when the computer program is executed by the processor, the apparatus is caused to perform the following step: generating image information of the non-edge region of the processed first image by interpolation according to the image information of the edge region of the processed first image.
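The interpolation step can be illustrated in one dimension: once the edge pixels of a row are known, the non-edge pixels between them can be filled by interpolating between the known values. A minimal sketch, assuming simple per-row linear interpolation (the patent does not fix the interpolation scheme):

```python
import numpy as np

def fill_between_edges(row_values, known_mask):
    """Fill a row's unknown (non-edge) pixels by linear interpolation
    between its known (edge) pixels."""
    x = np.arange(len(row_values))
    known = np.flatnonzero(known_mask)   # indices of edge pixels
    return np.interp(x, known, row_values[known])
```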
The present application further provides an electronic device or system, which may include the image processing apparatus of any of the foregoing embodiments of the present application.
The present application further provides a computer storage medium on which a computer program is stored; when the computer program is executed by a computer, the computer is caused to perform the method of the foregoing method embodiments.
The present application further provides a computer program product containing instructions which, when executed by a computer, cause the computer to perform the method of the foregoing method embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital video disc (DVD)), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can readily be conceived by any person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (55)

  1. An image processing method, comprising:
    acquiring a first image captured of a scene by a first camera of an electronic device;
    acquiring a second image captured of the scene by a second camera of the electronic device;
    wherein the observation range of the first camera is larger than that of the second camera; and the resolution of the first camera is lower than that of the second camera, and/or the image formed by the first camera is an achromatic image while the image formed by the second camera is a color image;
    determining, in the first image, a first pixel area corresponding to a target object in the scene, and determining, in the second image, a second pixel area corresponding to the target object in the scene;
    performing, based on the second pixel area in the second image, image processing on the first pixel area in the first image to obtain a processed first image.
  2. The method according to claim 1, wherein the target object is all or some of the scene objects in the scene captured by the second camera.
  3. The method according to claim 1 or 2, wherein the first pixel area and/or the second pixel area corresponding to the target object is determined based on a user operation.
  4. The method according to any one of claims 1-3, wherein the acquiring a first image captured of a scene by a first camera of an electronic device comprises:
    acquiring a first initial image captured of the scene by the first camera;
    performing stabilization processing on the first initial image to obtain the first image.
  5. The method according to claim 4, wherein the first camera comprises a first visual sensor and a second visual sensor, and the acquiring a first initial image captured of the scene by the first camera comprises:
    acquiring a first visual image captured of the scene by the first visual sensor;
    acquiring a second visual image captured of the scene by the second visual sensor;
    generating, according to the first visual image and the second visual image, the first initial image with depth information.
  6. The method according to any one of claims 1-5, wherein the acquiring a second image captured of the scene by a second camera of the electronic device comprises:
    acquiring a second initial image captured of the scene by the second camera;
    converting the second initial image to the camera pose of the first image to generate the second image.
  7. The method according to claim 6, wherein the converting the second initial image to the camera pose of the first image to form the second image comprises:
    converting, according to the internal parameters of the first camera, the internal parameters of the second camera, and the rotation relationship between the first image and the second initial image, the second initial image to the camera pose of the first image to generate the second image.
  8. The method according to any one of claims 1-7, further comprising:
    receiving an instruction input by a user;
    adjusting, according to the instruction input by the user, the shooting directions of the first camera and the second camera, so that the first camera and the second camera shoot toward the same scene.
  9. The method according to any one of claims 1-8, wherein the electronic device is an unmanned aerial vehicle, and the processed first image is used for first-person view (FPV) flight.
  10. The method according to any one of claims 1-9, wherein the first pixel area is located in the middle area of the first image.
  11. The method according to any one of claims 1-10, wherein the first image is an achromatic image and the second image is a color image.
  12. The method according to claim 11, wherein the method comprises:
    segmenting the processed first image using a superpixel segmentation method;
    if, after segmentation, one part of the same object has color information and another part has no color information, filling the other part of the same object with color according to the part of the same object that has color information.
  13. The method according to claim 11 or 12, wherein the achromatic image includes depth information of objects, and the method further comprises:
    performing color processing on the processed first image according to the depth information of the objects, the color processing comprising at least one of the following: color filling, color removal, and color retention.
  14. The method according to claim 13, wherein the performing color processing on the processed first image according to the depth information of the objects comprises:
    performing the same color processing on regions of the same depth in the processed first image.
  15. The method according to claim 13 or 14, further comprising:
    acquiring a click operation of a user on the processed first image;
    determining first depth information of the object at the clicked position;
    performing color processing on the processed first image according to the first depth information.
  16. The method according to claim 15, wherein the performing color processing on the processed first image according to the first depth information comprises:
    determining a target area according to the first depth information, wherein the target area is connected to the object at the clicked position, and/or the difference between the depth of the target area and the first depth information is within a first preset range;
    performing color processing on the target area.
  17. The method according to claim 13 or 14, further comprising:
    acquiring distance information input by a user;
    performing color processing on the processed first image according to the distance information.
  18. The method according to claim 17, wherein the performing color processing on the processed first image according to the distance information comprises:
    determining a target area according to the distance information, wherein the difference between the depth of the target area and the distance is within a second preset range;
    performing color processing on the target area.
  19. The method according to any one of claims 1-18, wherein the performing, based on the second pixel area of the second image, image processing on the first pixel area in the first image to obtain a processed first image comprises:
    determining the edge of the second pixel area and the edge of the first pixel area;
    mapping, according to the mapping relationship between the edge of the second pixel area and the edge of the first pixel area, the edge of the second pixel area onto the edge of the first pixel area to generate image information of the edge region of the processed first image.
  20. The method according to claim 19, further comprising:
    extracting the edge of the second pixel area and an initial edge of the first pixel area;
    adjusting parameters of an edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area;
    determining the edge of the first pixel area according to the adjusted parameters of the edge detection operator.
  21. The method according to claim 20, wherein the adjusting parameters of an edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area comprises:
    adjusting the parameters of the edge detection operator when the difference between the edge density of the second pixel area and the initial edge density of the first pixel area is not within a third preset range;
    and the determining the edge of the first pixel area according to the adjusted parameters of the edge detection operator comprises:
    re-extracting the edge of the first pixel area according to the adjusted parameters of the edge detection operator until the difference between the edge density of the second pixel area and the edge density of the first pixel area is within the third preset range.
  22. The method according to any one of claims 19-21, wherein the mapping, according to the mapping relationship between the edge of the second pixel area and the edge of the first pixel area, the edge of the second pixel area onto the edge of the first pixel area to generate image information of the edge region of the processed first image comprises:
    determining a first edge segment and a second edge segment, the first edge segment being an edge segment on the edge of the first pixel area, and the second edge segment being an edge segment on the edge of the second pixel area;
    mapping, according to the mapping relationship between the first edge segment and the second edge segment, the second edge segment onto the region where the first edge segment is located to generate image information of the edge region of the processed first image.
  23. The method according to claim 22, wherein the mapping of the second edge segment to the area where the first edge segment is located according to the mapping relationship between the first edge segment and the second edge segment, so as to generate the image information of the edge area of the processed first image, comprises:
    segmenting the first edge segment to obtain a first line segment set;
    segmenting the second edge segment to obtain a second line segment set;
    determining a mapping relationship between line segments in the first line segment set and line segments in the second line segment set; and
    mapping the line segments in the second line segment set to positions of the line segments in the first line segment set according to the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set, so as to generate the image information of the edge area of the processed first image.
  24. The method according to claim 23, wherein the segmenting of the first edge segment to obtain the first line segment set comprises:
    segmenting the first edge segment to obtain a first initial line segment set; and
    removing, from the first initial line segment set, line segments whose length is less than a first preset value, to obtain the first line segment set.
  25. The method according to claim 23, wherein the segmenting of the second edge segment to obtain the second line segment set comprises:
    segmenting the second edge segment to obtain a second initial line segment set; and
    removing, from the second initial line segment set, line segments whose length is less than a second preset value, to obtain the second line segment set.
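The pruning in claims 24 and 25 — discard line segments shorter than a preset value — reduces to a length filter. A minimal sketch; the function names and the endpoint-pair representation of a segment are assumptions:

```python
import math

def segment_length(seg):
    """Euclidean length of a segment given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def filter_short_segments(segments, min_len):
    """Keep only segments whose length reaches the preset value."""
    return [s for s in segments if segment_length(s) >= min_len]
```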
  26. The method according to any one of claims 23 to 25, wherein the determining of the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set comprises:
    determining the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set according to a matching degree between pixels on the line segments in the first line segment set and pixels on the line segments in the second line segment set.
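Claim 26 builds the segment-to-segment mapping from a pixel matching degree but does not specify the metric. One plausible (assumed) choice is an inverse mean absolute difference over intensity samples taken along each segment, with each first-set segment mapped to its best-scoring partner:

```python
def match_score(pix_a, pix_b):
    """Matching degree between two segments' pixel samples:
    inverse of the mean absolute intensity difference."""
    n = min(len(pix_a), len(pix_b))
    mad = sum(abs(a - b) for a, b in zip(pix_a, pix_b)) / n
    return 1.0 / (1.0 + mad)

def build_mapping(first_set, second_set):
    """Map each segment (index) in the first set to the index of the
    best-matching segment in the second set."""
    mapping = {}
    for i, pix_a in enumerate(first_set):
        best = max(range(len(second_set)),
                   key=lambda j: match_score(pix_a, second_set[j]))
        mapping[i] = best
    return mapping
```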
  27. The method according to any one of claims 19 to 26, wherein the performing of image processing on the first pixel area in the first image based on the second pixel area of the second image, to obtain the processed first image, comprises:
    generating image information of a non-edge area of the processed first image by interpolation according to the image information of the edge area of the processed first image.
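Claim 27 fills the non-edge area by interpolating from the edge-area information. A one-dimensional linear-interpolation sketch (the actual method may interpolate in 2-D; the claim says only "interpolation", and `None` standing for an unknown non-edge pixel is an assumption):

```python
def fill_row(row):
    """Linearly interpolate None gaps between known (edge) values
    in one row of pixels."""
    out = list(row)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)          # fractional position in the gap
            out[i] = out[a] * (1 - t) + out[b] * t
    return out
```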
  28. An image processing apparatus, comprising:
    a memory configured to store a computer program; and
    a processor configured to invoke the computer program, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring a first image of a scene captured by a first camera of an electronic device;
    acquiring a second image of the scene captured by a second camera of the electronic device;
    wherein an observation range of the first camera is larger than an observation range of the second camera; and a resolution of the first camera is lower than a resolution of the second camera, and/or the first camera produces achromatic images while the second camera produces color images;
    determining, in the first image, a first pixel area corresponding to a target object in the scene, and determining, in the second image, a second pixel area corresponding to the target object in the scene; and
    performing image processing on the first pixel area in the first image based on the second pixel area in the second image, to obtain a processed first image.
  29. The apparatus according to claim 28, wherein the target object is all or part of the scene objects in the scene captured by the second camera.
  30. The apparatus according to claim 28 or 29, wherein the first pixel area and/or the second pixel area corresponding to the target object is determined based on a user operation.
  31. The apparatus according to any one of claims 28 to 30, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring a first initial image of the scene captured by the first camera; and
    performing stabilization processing on the first initial image to obtain the first image.
  32. The apparatus according to claim 31, wherein the first camera comprises a first visual sensor and a second visual sensor, and the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring a first visual image of the scene captured by the first visual sensor;
    acquiring a second visual image of the scene captured by the second visual sensor; and
    generating the first initial image with depth information according to the first visual image and the second visual image.
  33. The apparatus according to any one of claims 28 to 32, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring a second initial image of the scene captured by the second camera; and
    converting the second initial image onto the camera pose of the first image to generate the second image.
  34. The apparatus according to claim 33, wherein the computer program, when executed by the processor, causes the apparatus to perform the following step:
    converting the second initial image onto the camera pose of the first image according to intrinsic parameters of the first camera, intrinsic parameters of the second camera, and a rotation relationship between the first image and the second initial image, to generate the second image.
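The conversion in claim 34 — re-rendering one image onto the other camera's pose from the two intrinsic matrices and the inter-image rotation — matches the standard rotation-only homography H = K_dst · R · K_src⁻¹, which is valid when the two viewpoints are (as assumed here) effectively co-located so that depth drops out. A numpy sketch; the matrix names are illustrative:

```python
import numpy as np

def rotation_homography(K_dst, K_src, R):
    """Homography mapping source-view pixels into the destination view
    when the two views differ only by a rotation R:
    H = K_dst @ R @ inv(K_src)."""
    return K_dst @ R @ np.linalg.inv(K_src)

def warp_point(H, x, y):
    """Apply H to a pixel (x, y) in homogeneous coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

As a sanity check, with identical intrinsics and R equal to the identity, H is the identity and every pixel maps to itself.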
  35. The apparatus according to any one of claims 28 to 34, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    receiving an instruction input by a user; and
    adjusting shooting directions of the first camera and the second camera according to the instruction input by the user, so that the first camera and the second camera shoot toward the same scene.
  36. The apparatus according to any one of claims 28 to 35, wherein the electronic device is an unmanned aerial vehicle, and the processed first image is used for first-person-view (FPV) flight.
  37. The apparatus according to any one of claims 28 to 36, wherein the first pixel area is located in a middle area of the first image.
  38. The apparatus according to any one of claims 28 to 37, wherein the first image is an achromatic image and the second image is a color image.
  39. The apparatus according to claim 38, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    segmenting the processed first image using superpixel segmentation; and
    if one part of a segmented object has color information and another part has no color information, filling the other part of the object with color according to the part of the object that has color information.
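Claim 39's repair step — after superpixel segmentation, color the uncolored part of an object from its colored part — can be sketched as per-segment mean-color propagation. The flat label/color lists, scalar colors, and the mean-fill rule are all assumptions; the claim only requires filling from the colored portion:

```python
from collections import defaultdict

def fill_segment_colors(labels, colors):
    """Per superpixel label, replace missing colors (None) with the mean
    of that label's known colors; labels/colors are parallel flat lists."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lab, col in zip(labels, colors):
        if col is not None:
            sums[lab] += col
            counts[lab] += 1
    out = []
    for lab, col in zip(labels, colors):
        if col is None and counts[lab]:
            col = sums[lab] / counts[lab]  # propagate from colored part
        out.append(col)
    return out
```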
  40. The apparatus according to claim 38 or 39, wherein the achromatic image includes depth information of an object, and the computer program, when executed by the processor, causes the apparatus to perform the following step:
    performing color processing on the processed first image according to the depth information of the object, the color processing including at least one of color filling, color removal, and color retention.
  41. The apparatus according to claim 40, wherein the computer program, when executed by the processor, causes the apparatus to perform the following step:
    performing the same color processing on areas of the same depth in the processed first image.
  42. The apparatus according to claim 40 or 41, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring a click operation performed by a user on the processed first image;
    determining first depth information of an object at the clicked position; and
    performing color processing on the processed first image according to the first depth information.
  43. The apparatus according to claim 42, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    determining a target area according to the first depth information, wherein the target area is connected to the object at the clicked position, and/or a difference between the depth of the target area and the first depth information is within a first preset range; and
    performing color processing on the target area.
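Claim 43 grows a target area that is connected to the clicked object and whose depth stays near the first depth information. A 4-connected flood fill over a depth map is one minimal realization; the representation (nested lists, a `(row, col)` seed) is an assumption:

```python
from collections import deque

def depth_flood_fill(depth, seed, tol):
    """Grow a region from the clicked pixel: 4-connected pixels whose
    depth differs from the seed pixel's depth by at most tol."""
    h, w = len(depth), len(depth[0])
    d0 = depth[seed[0]][seed[1]]
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and abs(depth[nr][nc] - d0) <= tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```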
  44. The apparatus according to claim 40 or 41, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    acquiring distance information input by a user; and
    performing color processing on the processed first image according to the distance information.
  45. The apparatus according to claim 44, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    determining a target area according to the distance information, wherein a difference between the depth of the target area and the distance is within a second preset range; and
    performing color processing on the target area.
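Claims 44 and 45 drive color processing from a user-supplied distance: pixels whose depth lies within the second preset range of that distance form the target area. A selective-color sketch in which the target area keeps its color and everything else is converted to grayscale via the Rec. 601 luma weights; the grayscale step is an assumed instance of "color processing", which the claims leave open:

```python
def selective_color(pixels, depths, target, tol):
    """Keep color for pixels whose depth is within tol of target;
    desaturate the rest (pixels are (r, g, b) tuples)."""
    out = []
    for (r, g, b), d in zip(pixels, depths):
        if abs(d - target) <= tol:
            out.append((r, g, b))          # inside the preset range: retain
        else:
            gray = round(0.299 * r + 0.587 * g + 0.114 * b)
            out.append((gray, gray, gray)) # outside: remove color
    return out
```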
  46. The apparatus according to any one of claims 28 to 45, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    determining an edge of the second pixel area and an edge of the first pixel area; and
    mapping the edge of the second pixel area to the edge of the first pixel area according to a mapping relationship between the edge of the second pixel area and the edge of the first pixel area, so as to generate image information of an edge area of the processed first image.
  47. The apparatus according to claim 46, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    extracting the edge of the second pixel area and an initial edge of the first pixel area;
    adjusting parameters in an edge detection operator according to the edge density of the second pixel area and the initial edge density of the first pixel area; and
    determining the edge of the first pixel area according to the adjusted parameters in the edge detection operator.
  48. The apparatus according to claim 47, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    adjusting the parameters in the edge detection operator when a difference between the edge density of the second pixel area and the initial edge density of the first pixel area is not within a third preset range; and
    re-extracting the edge of the first pixel area according to the adjusted parameters in the edge detection operator, until the difference between the edge density of the second pixel area and the edge density of the first pixel area is within the third preset range.
  49. The apparatus according to any one of claims 45 to 48, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    determining a first edge segment and a second edge segment, the first edge segment being an edge segment on the edge of the first pixel area, and the second edge segment being an edge segment on the edge of the second pixel area; and
    mapping the second edge segment to an area where the first edge segment is located according to a mapping relationship between the first edge segment and the second edge segment, so as to generate the image information of the edge area of the processed first image.
  50. The apparatus according to claim 49, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    segmenting the first edge segment to obtain a first line segment set;
    segmenting the second edge segment to obtain a second line segment set;
    determining a mapping relationship between line segments in the first line segment set and line segments in the second line segment set; and
    mapping the line segments in the second line segment set to positions of the line segments in the first line segment set according to the mapping relationship, so as to generate the image information of the edge area of the processed first image.
  51. The apparatus according to claim 50, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    segmenting the first edge segment to obtain a first initial line segment set; and
    removing, from the first initial line segment set, line segments whose length is less than a first preset value, to obtain the first line segment set.
  52. The apparatus according to claim 51, wherein the computer program, when executed by the processor, causes the apparatus to perform the following steps:
    segmenting the second edge segment to obtain a second initial line segment set; and
    removing, from the second initial line segment set, line segments whose length is less than a second preset value, to obtain the second line segment set.
  53. The apparatus according to any one of claims 50 to 52, wherein the computer program, when executed by the processor, causes the apparatus to perform the following step:
    determining the mapping relationship between the line segments in the first line segment set and the line segments in the second line segment set according to a matching degree between pixels on the line segments in the first line segment set and pixels on the line segments in the second line segment set.
  54. The apparatus according to any one of claims 46 to 53, wherein the computer program, when executed by the processor, causes the apparatus to perform the following step:
    generating image information of a non-edge area of the processed first image by interpolation according to the image information of the edge area of the processed first image.
  55. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the method according to any one of claims 1 to 27.
PCT/CN2020/113251 2020-09-03 2020-09-03 Image processing method and apparatus WO2022047701A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080009631.3A CN113348489A (en) 2020-09-03 2020-09-03 Image processing method and device
PCT/CN2020/113251 WO2022047701A1 (en) 2020-09-03 2020-09-03 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/113251 WO2022047701A1 (en) 2020-09-03 2020-09-03 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2022047701A1 2022-03-10

Family

ID=77468470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113251 WO2022047701A1 (en) 2020-09-03 2020-09-03 Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN113348489A (en)
WO (1) WO2022047701A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109643A (en) * 2023-04-13 2023-05-12 深圳市明源云科技有限公司 Market layout data acquisition method, device and computer readable storage medium
CN117232396A (en) * 2023-11-15 2023-12-15 湖南睿图智能科技有限公司 Visual detection system and method for product quality of high-speed production line
CN117528262A (en) * 2023-12-29 2024-02-06 江西赛新医疗科技有限公司 Control method and system for data transmission of medical equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616237A (en) * 2008-06-27 2009-12-30 索尼株式会社 Image processing apparatus, image processing method, program and recording medium
US20110069156A1 (en) * 2009-09-24 2011-03-24 Fujifilm Corporation Three-dimensional image pickup apparatus and method
CN106878605A (en) * 2015-12-10 2017-06-20 北京奇虎科技有限公司 The method and electronic equipment of a kind of image generation based on electronic equipment
CN107481200A (en) * 2017-07-31 2017-12-15 腾讯科技(深圳)有限公司 Image processing method and device
CN109937568A (en) * 2016-11-17 2019-06-25 索尼公司 Image processing apparatus and image processing method
CN110460783A (en) * 2018-05-08 2019-11-15 宁波舜宇光电信息有限公司 Array camera module and its image processing system, image processing method and electronic equipment
CN110855883A (en) * 2019-11-05 2020-02-28 浙江大华技术股份有限公司 Image processing system, method, device equipment and storage medium

Also Published As

Publication number Publication date
CN113348489A (en) 2021-09-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20951948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20951948

Country of ref document: EP

Kind code of ref document: A1