WO2023070387A1 - Image processing method, apparatus, photographing device, and movable platform - Google Patents

Image processing method, apparatus, photographing device, and movable platform

Info

Publication number
WO2023070387A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinates
pixel area
target
area
target pixel
Prior art date
Application number
PCT/CN2021/126803
Other languages
English (en)
French (fr)
Inventor
李广
陈奋
李馥
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/126803 priority Critical patent/WO2023070387A1/zh
Publication of WO2023070387A1 publication Critical patent/WO2023070387A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours

Definitions

  • the present application relates to the field of image processing, and in particular to an image processing method, apparatus, photographing device, and movable platform.
  • the original image collected by the image sensor needs to be cropped and then output.
  • the cropping process may cut off some effective pixel areas in the original image, resulting in a smaller field of view displayed by the cropped image.
  • the effective area in the original image is non-rectangular, while the output image is usually a rectangular image. Therefore, a rectangular area must be cut out of the effective area in the original image collected by the image sensor before output, and the effective data outside the rectangular area is lost, resulting in a smaller field of view displayed by the output image, which affects the user experience.
  • the present application provides an image processing method, device, photographing equipment and a movable platform.
  • an image processing method comprising the steps of:
  • the original image including a target pixel area and a reference pixel area other than the target pixel area;
  • an image processing device includes a processor, a memory, and a computer program stored in the memory that can be executed by the processor; when the processor executes the computer program, the following steps are implemented:
  • the original image including a target pixel area and a reference pixel area other than the target pixel area;
  • a photographing device including the image processing apparatus of the second aspect above.
  • a movable platform is provided, and the movable platform includes the photographing device of the third aspect above.
  • a computer-readable storage medium stores computer instructions, and when the computer instructions are executed, they are used to implement the image processing method of the first aspect of the present application.
  • a target image corresponding to the size of the target pixel area can be obtained based on the original image, wherein the target image includes at least part of the pixels of the reference pixel area, so that the content displayed by the pixels in the reference pixel area is utilized and the field of view displayed by the target image is larger than the field of view displayed by the target pixel area in the original image.
  • Fig. 1 is a schematic diagram of an original image according to an embodiment of the present application.
  • Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram showing that different pixel regions are mapped through different mapping relationships according to an embodiment of the present application.
  • Fig. 4(a) and Fig. 4(b) are schematic diagrams showing changes before and after pixel coordinate mapping according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of an original image to be processed according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of an image obtained after direct cropping in the related art.
  • FIG. 7 is a schematic diagram of changes in the field of view of a processed image according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a curve function used for mapping coordinates according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of a processed target image according to an embodiment of the present application.
  • Fig. 10 is a schematic diagram of a logical structure of an image processing device according to an embodiment of the present application.
  • the original image collected by the image sensor needs to be cropped and then output.
  • the cropping process may cut off some effective pixel areas in the original image, resulting in a smaller field of view displayed by the cropped image.
  • because the lens cannot be made too large, the amount of light entering the lens is limited and cannot completely cover the entire optical target surface (that is, the image sensor), so the data collected at the four corners of the optical target surface is partially invalid.
  • the optical target surface is rectangular, but the effective area is non-rectangular due to the existence of some invalid areas, and the output image is usually a rectangular image, so the effective area needs to be cropped to obtain a rectangular image for output. This will be described below in conjunction with FIG. 1, in which the area surrounded by the outermost dotted line is the original image.
  • in some special scenes, such as images captured by a camera in high-speed motion or underwater, part of the picture is often blurred, and in post-processing the blurred area usually needs to be cut away, leaving only the clear part of the image. Since the clear part may have an irregular shape while the output image needs to be rectangular, the image must be cropped; as a result, some effective pixels outside the rectangular area are not used, and the field of view displayed by the image is smaller.
  • the original image may need to be cropped, and some effective pixels will be lost during the cropping process.
  • the embodiment of the present application provides an image processing method.
  • in this method, the values of some pixel points in the reference pixel area outside the target pixel area can be assigned, by means of coordinate mapping, to part of the pixel points in the target pixel area, so that the pixels in the reference pixel area are used and the field of view displayed by the reassigned target pixel area becomes larger.
  • the image processing method provided in the embodiment of the present application may be executed by a photographing device, for example, by an ISP (Image Signal Processor) chip on the photographing device; it may also be executed by image processing devices other than the photographing device, such as mobile phones, notebook computers, desktop computers, and cloud servers.
  • the image processing device is equipped with image processing software, and after receiving the original image from the photographing device, the original image is processed by the image processing software.
  • the image processing method of the embodiment of the present application may include the following steps:
  • Step S202: acquiring an original image, where the original image includes a target pixel area and a reference pixel area outside the target pixel area;
  • an original image may be acquired first, where the original image may be an image that needs to be cropped before being output.
  • the original image may be an original image obtained by incomplete coverage of the optical target surface, or some non-rectangular images, from which a rectangular area needs to be cropped and output.
  • the original image includes a target pixel area and a reference pixel area other than the target pixel area; the target pixel area is the pixel area to be output, and the reference pixel area is the pixel area other than the target pixel area, that is, the area discarded after cropping. The content displayed in the reference pixel area is different from the content displayed in the target pixel area.
  • Step S204: outputting a target image corresponding to the size of the target pixel area, wherein the target image includes at least some pixels of the reference pixel area, so that the field of view shown by the target image is larger than the field of view displayed by the target pixel area in the original image.
  • a target image corresponding to the size of the target pixel area can be obtained based on the original image.
  • the original image can be processed to obtain the target image, so that the target image includes at least part of the pixels in the reference pixel area; the pixels in the reference pixel area are thus used, and the field of view displayed by the target image is larger than the field of view displayed by the target pixel area in the original image, so that the target image can display a larger field of view and more content with the same number of pixels as the target pixel area.
  • the target image can be directly displayed on the user interaction interface, or the target image can also be sent to other devices so that other devices can display the image.
  • after the original image is acquired, it can be mapped: at least part of the pixels in the reference pixel area are mapped into the range of the target pixel area, and the mapped target pixel area is used as the target image, so that the target image includes some pixels of the reference pixel area.
  • coordinate mapping can be performed on the pixels in the target pixel area and the pixels in the reference pixel area to obtain and output a target image corresponding to the size of the target pixel area.
  • the pixel values of some pixels in the reference pixel area can be assigned to some pixels in the target pixel area, so that the mapped target pixel area contains the content displayed by the pixels in the reference pixel area, and the field of view displayed by the mapped target pixel area is larger than the field of view displayed by the target pixel area before mapping.
  • the coordinate mapping method can be flexibly selected, for example, the coordinates of the pixel points in the target pixel area can be mapped so that the coordinates of some of the pixel points fall into the reference pixel area.
  • for example, after the coordinates of a pixel point A in the target pixel area are mapped, the mapped coordinates can be located in the reference pixel area (assume at the position of pixel point B), and the pixel value of pixel point B is then assigned to pixel point A.
  • the coordinates of some pixels in the reference pixel area can be mapped so that the mapped coordinates fall into the target pixel area.
  • for example, after the coordinates of a pixel point C in the reference pixel area are mapped, the mapped coordinates can be located in the target pixel area (assume at the position of pixel point D), and the pixel value of pixel point C is then assigned to pixel point D.
  • when mapping the pixel coordinates of the target pixel area, some or all of the pixels in the target pixel area may be mapped; when all the pixels in the target pixel area are mapped, the mapped coordinates of some pixels are located in the reference pixel area, while the coordinates of the other pixels remain in the target pixel area.
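As a concrete illustration of the inverse mapping described above, the sketch below builds the output by walking every pixel of the target pixel area, mapping its coordinate (possibly into the reference pixel area), and copying the source pixel's value back. The `remap` helper, the row-major list-of-lists image layout, and the nearest-neighbour sampling are illustrative assumptions, not the patent's concrete implementation.

```python
def remap(original, target_box, map_coord):
    """Build the output by inverse mapping: for every pixel (x, y) inside
    the target pixel area, look up where its mapped coordinate lands in
    the original image and copy that pixel's value back.

    original   : 2-D list, original[y][x] -> pixel value
    target_box : (x0, y0, x1, y1) bounds of the target pixel area
    map_coord  : function (x, y) -> (xm, ym); the mapped coordinate
                 (xm, ym) may fall in the reference pixel area
    """
    x0, y0, x1, y1 = target_box
    out = []
    for y in range(y0, y1):
        row = []
        for x in range(x0, x1):
            xm, ym = map_coord(x, y)
            # nearest-neighbour sampling of the (possibly fractional)
            # mapped coordinate; a real pipeline would interpolate
            row.append(original[int(round(ym))][int(round(xm))])
        out.append(row)
    return out
```

With the identity mapping this degenerates to a plain crop; a mapping that pushes coordinates outward pulls values in from the reference pixel area.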
  • in addition to mapping, other processing methods may also be used to make the output target image include some pixels of the reference pixel area.
  • the coordinates of the pixel points in the target pixel area may be mapped, so that, based on the mapped coordinates, the pixel values of at least some of the pixel points in the reference pixel area are assigned to at least some of the pixel points in the target pixel area, thereby mapping at least part of the reference pixel area into the range of the target pixel area.
  • the coordinates of the pixel points in the target pixel area can be mapped, so that the pixel values of at least some of the pixel points in the reference pixel area are assigned, based on the mapped coordinates, to at least some of the pixel points in the target pixel area, and finally the pixel-reassigned target pixel area is used as the target image.
  • the coordinates of the pixel points in the target pixel area 104 can be mapped. Taking pixel point 111 as an example, the mapped coordinates of pixel point 111 are the same as the coordinates of pixel point 112 in the reference pixel area 106, so the pixel value of pixel point 112 is assigned to pixel point 111. Taking pixel point 115 as another example, the mapped coordinates of pixel point 115 are the same as the coordinates of pixel point 116, so the pixel value of pixel point 116 is assigned to pixel point 115. In this way, the final target image can also display part of the content of the reference pixel area, expanding the field of view displayed by the target pixel area.
  • with this mapping method, a larger field of view can be displayed without enlarging the pixel size of the image (that is, using the same number of pixels).
  • the original image is an original image collected with incomplete coverage of the optical target surface, and four corners of the image are invalid areas.
  • Both the target pixel area and the reference pixel area may be located in an effective area of the original image, wherein the effective area is determined based on design parameters of an image acquisition device that acquires the original image.
  • the shape and size of the effective area are related to the design parameters of the optical target surface of the digital camera and the design parameters of the digital camera lens; such as the size of the optical target surface, the focal length, aperture, and type of the lens.
  • the reference pixel area can be located on one side of the target pixel area, or around the target pixel area. In some embodiments, the reference pixel area may surround the target pixel area; by assigning the values of pixels in the reference pixel area to some of the pixels in the target pixel area, the field of view displayed in all directions around the target pixel area can be enlarged after the pixels are reassigned. For example, as shown in FIG. 1, the reference pixel area 106 surrounds the target pixel area 104.
  • the effective area of the original image may be an approximately circular area
  • the target pixel area may be the largest inscribed rectangle of the effective area in the original image
  • the reference pixel area is an area in the effective area other than the target pixel area.
  • the inscribed rectangle with the largest area can be determined from the effective area as the target pixel area.
  • the reference pixel area may be an area in the effective area other than the largest inscribed rectangle.
  • the area surrounded by the outermost dotted line is the original image 100; the areas around the four corners of the original image are invalid, so the approximately circular closed area 102 is the effective area, and the grid-filled rectangular area 104 is the largest rectangle inscribed in the effective area 102, that is, the target pixel area.
  • the way of mapping the coordinates of the pixel points in the target pixel area can also be set in various ways based on the display effect of the final output image desired by the user.
  • the coordinates of the pixel points in the target pixel area may be mapped based on different mapping relationships for different pixel points; that is, different pixel points in the target pixel area may be mapped through different mapping relationships.
  • the target pixel area can be divided into multiple pixel grids, and each pixel grid can correspond to a mapping relationship; by using different mapping relationships to map different pixel grids, each pixel grid can achieve a different display effect after mapping.
  • for example, the target pixel area is divided into four pixel grid sub-areas, and each sub-area uses a different curve function to map the pixels in its grid: the pixels in sub-area 302 are mapped using the curve function characterized by curve 312; the pixels in sub-area 304 using the curve function characterized by curve 314; the pixels in sub-area 306 using the curve function characterized by curve 316; and the pixels in sub-area 308 using the curve function characterized by curve 318.
  • the coordinates of the pixel points in different directions may be mapped based on different mapping relationships.
  • one mapping relationship can be used to map the coordinates of the pixel points in one direction, and another mapping relationship can be used to map the coordinates in the other direction. Using different mapping relationships for different directions makes the coordinates in different directions change to different degrees after mapping, that is, the adjustment range of the displayed field of view differs, so as to achieve different display effects.
  • a pixel point includes coordinates in the horizontal direction and coordinates in the vertical direction, so the coordinates in the horizontal direction and the coordinates in the vertical direction of the pixel point can be mapped.
  • a mapping relationship for the abscissa can be constructed that keeps the abscissa of pixel point 111 unchanged, and a mapping relationship for the ordinate can be constructed that maps the ordinate of pixel point 111 to the ordinate of pixel point 112.
  • the mapping of the coordinates of the pixel points in the target pixel area may be mapping in different coordinate systems.
  • the coordinates of the pixel points in the target pixel area can be represented by different coordinate systems, and thus mapping relationships in different coordinate systems can be constructed for mapping the coordinates of the pixel points.
  • the coordinates of the pixel points in the target pixel area may be coordinates in a polar coordinate system or a Cartesian coordinate system. Of course, it may also be coordinates in other coordinate systems, which are not limited in this embodiment of the present application.
  • the coordinates of the pixel points in the target pixel area may be coordinates in the polar coordinate system. When the coordinates are mapped, the angular coordinates of the pixel points can be kept unchanged and only the radial coordinates mapped; or the radial coordinates kept unchanged and only the angular coordinates mapped; or both the angular and radial coordinates mapped. Specifically, this can be set according to the display effect the user desires in each direction of the mapped image. For example, if a larger field of view is desired in a certain direction (say, angular coordinates of 0-30°), the mapping relationship can be set so that the radial coordinates of the pixels in that direction become larger after mapping; for other directions, the mapping relationship can likewise be set individually based on actual needs.
  • for example, suppose the angular coordinate of pixel point 113 is a1 and its radial coordinate is b1. Both coordinates of pixel point 113 can be mapped, so that a1 becomes a2 and b1 becomes b2; or only the angular coordinate is mapped, so that a1 becomes a2 while b1 remains unchanged; or only the radial coordinate is mapped, so that b1 becomes b2 while a1 remains unchanged.
  • radial coordinates and angular coordinates may also be mapped separately based on different mapping relationships: one mapping relationship may be established for the angular coordinates and another for the radial coordinates. Taking the polar-coordinate mapping of pixel point 111 in Fig. 1 as an example, through the mapping relationship for the angular coordinate, the angular coordinate of pixel point 111 is mapped to the angular coordinate of pixel point 112; through the mapping relationship for the radial coordinate, the radial coordinate of pixel point 111 is mapped to the radial coordinate of pixel point 112.
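The polar-coordinate scheme above (keep the angular coordinate about the target-area centre, transform only the radial coordinate) can be sketched as follows. The function name and the pluggable `radial_map` argument are our own placeholders for whatever mapping relationship is chosen.

```python
import math

def map_polar(x, y, center, radial_map):
    """Map a pixel coordinate in polar form about `center`: the angular
    coordinate is kept unchanged and only the radial coordinate is
    passed through `radial_map`."""
    cx, cy = center
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)        # radial coordinate (b1)
    theta = math.atan2(dy, dx)    # angular coordinate (a1), unchanged
    r2 = radial_map(r)            # mapped radial coordinate (b2)
    return cx + r2 * math.cos(theta), cy + r2 * math.sin(theta)
```

For instance, `radial_map=lambda r: 2 * r` pushes every pixel twice as far from the centre along its original direction, while the identity leaves the pixel in place.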
  • the mapping relationship used when mapping the angular and radial coordinates can differ with the angular coordinate of the pixel; that is, pixel points with different angular coordinates in the target pixel area can be mapped based on different mapping relationships.
  • the mapping relationship used when mapping the angular and radial coordinates can also differ with the radial coordinate of the pixel; that is, pixel points with the same angular coordinate but different radial coordinates in the target pixel area can be mapped based on different mapping relationships.
  • the coordinates of the pixel points in the target pixel area can also be coordinates in the Cartesian coordinate system. When mapping, the abscissa can be kept unchanged and only the ordinate mapped; or the ordinate kept unchanged and only the abscissa mapped; or both the abscissa and the ordinate mapped at the same time.
  • for example, suppose the abscissa of pixel point 111 is x1 and its ordinate is y1. Both coordinates of pixel point 111 can be mapped, so that x1 becomes x2 and y1 becomes y2; or only the abscissa is mapped, so that x1 becomes x2 while y1 remains unchanged; or only the ordinate is mapped, so that y1 becomes y2 while x1 remains unchanged.
  • the abscissa and ordinate of the pixel can be mapped based on different mapping relationships.
  • two mapping relationships can be constructed to map the abscissa and ordinate respectively.
  • the mapping relationship corresponding to the abscissa maps the abscissa of pixel point 111 to the abscissa of pixel point 112, and the mapping relationship corresponding to the ordinate maps the ordinate of pixel point 111 to the ordinate of pixel point 112.
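A minimal sketch of the per-axis Cartesian mapping above: the abscissa and ordinate are transformed by two independent relationships, and passing an identity for one axis reproduces the "map x only" or "map y only" cases. The linear stretch about a centre line and the factor 1.25 are illustrative assumptions, not the patent's formula.

```python
def stretch(v, center, scale):
    """Stretch one coordinate away from `center`; scale > 1 pushes the
    coordinate outward (toward the reference pixel area), scale = 1
    leaves it unchanged."""
    return center + (v - center) * scale

def map_cartesian(x, y, cx=0.0, cy=0.0, sx=1.0, sy=1.25):
    """Map abscissa and ordinate through two independent relationships.
    With the defaults (sx = 1), the abscissa is unchanged and only the
    ordinate is mapped, as in the pixel 111 -> pixel 112 example; other
    (sx, sy) combinations cover the remaining cases described above."""
    return stretch(x, cx, sx), stretch(y, cy, sy)
```

A different relationship per axis lets the horizontal and vertical field of view be widened by different amounts.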
  • in some embodiments, the mapped coordinates of some pixels on the edge of the target pixel area may be located at the edge of the reference pixel area; that is, as far as possible, the mapped target image can display the entire field of view displayed by the reference pixel area.
  • for example, the mapped coordinates of pixel point 113 at the edge of the target pixel area are the coordinates of pixel point 114, and pixel point 114 is located at the edge of the reference pixel area 106.
  • in other embodiments, the mapped coordinates of some pixels on the edge of the target pixel area are located in the area between the edge of the target pixel area and the edge of the reference pixel area. Since the reference pixel area may be large in some scenes, if the coordinates of the edge of the target pixel area are mapped all the way to the edge of the reference pixel area, the overall field of view of the generated target image is stretched too far, the color change between the mapped pixels becomes discontinuous, and the image is seriously deformed. Therefore, when mapping the coordinates of the pixels in the target pixel area, it is necessary to ensure that the color changes between the mapped pixels are smooth.
  • in this case, the mapped coordinates are not located at the edge of the reference pixel area but between the edge of the target pixel area and the edge of the reference pixel area, to avoid an unsmooth mapping result.
  • the mapped coordinates of the pixel point 115 in the target pixel area are the coordinates of the pixel point 116, and the pixel point 116 is located in the area between the edge of the target pixel area and the edge of the reference pixel area.
  • a curve function for mapping coordinates may be constructed for the pixel points of the target pixel area, and the coordinates of the pixel points may be mapped using the curve function.
  • the curve function can be established in advance, or constructed in real time based on the characteristics of the image during processing; this is not limited in the embodiments of the present application.
  • the coordinates of the pixel points may be coordinates in the polar coordinate system.
  • the angular coordinates of the pixel points may be kept unchanged, and the constructed curve function is used to map the radial coordinates of the pixels in the target pixel area. The curve function keeps the radial coordinates of the vertex pixels of the largest inscribed rectangle unchanged before and after mapping, while the radial coordinate values of the non-vertex pixels become larger after mapping.
  • the angular coordinates can be kept unchanged while the radial coordinates of the pixels are mapped to larger values, so that the mapped pixel coordinates expand outward as a whole and the display field of view of the final target image expands in all directions. To this end, a curve function for coordinate mapping can be constructed that keeps the coordinates of the center of the target pixel area and its four vertices unchanged while enlarging the radial coordinates of the remaining pixels, so that the field of view displayed by the mapped image becomes larger.
  • suppose the target pixel area is the largest inscribed rectangle and the four pixel points located at its vertices are pixel points 121, 122, 123, and 124. The constructed curve function maps the radial coordinates of the pixels so that the radial coordinates of the four vertex pixel points 121, 122, 123, and 124 remain unchanged, while the radial coordinate values of non-vertex pixel points, such as pixel point 113, become larger after mapping.
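One simple curve with exactly the properties described above, f(0) = 0 and f(R) = R with f(r) > r in between (centre and rectangle vertices fixed, every other radius pushed outward), is a scaled sine. The sine form is our illustrative choice; the patent does not commit to a specific formula at this point.

```python
import math

def radial_curve(r, R):
    """Example curve function for mapping radial coordinates:
    f(0) = 0 and f(R) = R, so the centre and the vertex radius R of the
    largest inscribed rectangle are unchanged, while f(r) > r for
    0 < r < R, so every non-vertex pixel moves outward. The sine shape
    is an illustrative assumption."""
    return R * math.sin(math.pi * r / (2.0 * R))
```

Because sin(t) >= 2t/pi on [0, pi/2], the inequality f(r) >= r holds on the whole interval, with equality only at the two endpoints.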
  • the curve function used for coordinate mapping can be determined based on the coordinates of target pixel points in the target pixel area before and after mapping, where the target pixel points include one or more of the following: the central pixel point of the target pixel area, the vertex pixel points of the target pixel area, and the edge pixel points of the target pixel area. The coordinates of the central pixel point and the vertex pixel points remain unchanged before and after mapping, and the mapped coordinates of the edge pixel points are located at the edge of the reference pixel area.
  • the shape of the curve function can be roughly determined based on the requirements for the mapped coordinates, and the approximate expression of the curve function can be obtained, and then the coordinate changes of some target pixels before and after mapping can be determined based on the desired display effect of the mapped image.
  • the curve function can be adjusted based on the coordinates before and after mapping to obtain the final curve function.
  • a picture with a length and a width of 1024 pixels has 1,048,576 pixels.
  • Performing a coordinate mapping calculation for each pixel will consume a lot of computing resources.
  • the target pixel area can be divided into multiple pixel grids, the coordinates of the vertex pixel points of each pixel grid are mapped to obtain their mapped coordinates, and the mapped coordinates of the vertex pixels are then interpolated to determine the mapped coordinates of the non-vertex pixels in the pixel grid.
  • for example, if the mapped abscissas of the four vertices of a pixel grid are x1, x2, x3, and x4, the mapped abscissa of a non-vertex pixel in the grid can be the mean of x1, x2, x3, and x4; if the mapped ordinates of the four vertices are y1, y2, y3, and y4, the mapped ordinate of a non-vertex pixel in the grid can be the mean of y1, y2, y3, and y4. In actual processing, the median can also be used, or the coordinates can be weighted, as long as the final result reflects the approximate position of the mapped coordinates.
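The grid optimisation above can be sketched as follows: only the grid vertices go through the (expensive) coordinate mapping, and every non-vertex pixel takes a bilinear blend of the four surrounding vertices' mapped coordinates. Bilinear weighting is one of the weighted-combination options the text mentions, and the helper assumes the image dimensions are multiples of the cell size; both are our assumptions.

```python
def grid_interpolated_mapping(w, h, cell, map_coord):
    """Map only the pixel-grid vertices through `map_coord`, then
    bilinearly interpolate the mapped coordinates of the four
    surrounding vertices for every non-vertex pixel, avoiding one full
    mapping computation per pixel. Assumes w and h are multiples of
    `cell`. Returns {(x, y): (x_mapped, y_mapped)}."""
    # expensive mapping evaluated at grid vertices only
    vx = {(gx, gy): map_coord(gx * cell, gy * cell)
          for gy in range(h // cell + 1) for gx in range(w // cell + 1)}
    out = {}
    for y in range(h):
        gy, fy = divmod(y, cell)
        for x in range(w):
            gx, fx = divmod(x, cell)
            tx, ty = fx / cell, fy / cell
            (x00, y00), (x10, y10) = vx[(gx, gy)], vx[(gx + 1, gy)]
            (x01, y01), (x11, y11) = vx[(gx, gy + 1)], vx[(gx + 1, gy + 1)]
            out[(x, y)] = (
                (1 - ty) * ((1 - tx) * x00 + tx * x10) + ty * ((1 - tx) * x01 + tx * x11),
                (1 - ty) * ((1 - tx) * y00 + tx * y10) + ty * ((1 - tx) * y01 + tx * y11),
            )
    return out
```

For any affine mapping the interpolated coordinates are exact; for a curved mapping they are an approximation whose error shrinks with the cell size, which is precisely the accuracy/compute trade-off the text describes.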
  • as shown in FIG. 5, the original image to be processed contains an effective area 504, within which lies a rectangular area 502 (hereinafter referred to as the target pixel area).
  • in the related art, the target pixel area 502 is directly cropped and output; the target image obtained after cropping is shown in FIG. 6. It can be seen that in the effective area 504, there is still valid data in the area 503 outside the target pixel area 502 (hereinafter referred to as the reference pixel area), which results in a smaller field of view displayed by the output image.
  • the values of some pixels in the reference pixel area 503 can therefore be used to assign values to some pixels in the target pixel area 502, so that the field of view displayed by the target pixel area after the pixel points are reassigned becomes larger.
  • specifically, the coordinates of the pixel points in the target pixel area 502 can be mapped to obtain the mapped pixel coordinates, and the pixel values corresponding to the mapped coordinates are then assigned to the original pixel points. Since the mapped coordinates of some pixels fall into the reference pixel area 503, the pixel values of pixels in the reference pixel area 503 are assigned to some pixels in the target pixel area 502, so that the target pixel area (that is, the target image) can display the content of the reference pixel area and the field of view becomes larger.
  • this embodiment can establish the coordinate mapping relationship of the pixels in the polar coordinate system, with the center of the target pixel area 502 as the origin of the polar coordinates; while keeping the angular coordinates unchanged, the radial coordinates of the pixels in the target pixel area 502 are mapped.
  • a curve function can be constructed in advance, and by substituting the radial coordinates of the pixels in the target pixel area 502 into the constructed curve function, the mapped radial coordinates can be obtained.
  • the approximate shape of the curve function can be determined based on the desired effect of the mapped coordinates.
  • the curve function can use an arctangent function.
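A normalised arctangent with f(0) = 0 and f(R) = R is one concrete realisation of such a curve function: because atan is concave on the positive axis, every interior radius maps outward (f(r) > r) while the centre and the maximum radius stay fixed. The steepness parameter k is our assumption, to be tuned for the desired display effect.

```python
import math

def arctan_curve(r, R, k=0.5):
    """Arctangent-shaped mapping curve, normalised so that f(0) = 0 and
    f(R) = R. Concavity of atan gives f(r) > r for 0 < r < R, so
    interior radii expand outward while the endpoints are unchanged.
    The steepness k is an illustrative tuning parameter."""
    return R * math.atan(k * r) / math.atan(k * R)
```

Larger k bends the curve harder, pushing mid-range radii further into the reference pixel area; k near 0 approaches the identity mapping.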
  • the coordinates of several target pixels after mapping can be determined based on the desired display effect of the mapped target image, and the final curve function can be determined based on the coordinates of the target pixels before and after mapping.
  • for example, assume the length of the original image is 9.248 and the width is 6.944, the length of the target pixel area is 8 and the width is 6, and the radius of the effective area relative to the center point of the original image is 5. The center pixel of the target pixel area is the same point as the center point of the original image, so its coordinate position remains unchanged before and after mapping, and the radial coordinate is mapped from 0 to 0; the coordinate positions of the vertex pixels of the target pixel area are also unchanged before and after mapping, and the radial coordinate is mapped from 5 to 5.
  • to use the reference pixel area as fully as possible and enlarge the field of view of the mapped image, the mapped coordinates of the pixels on the short-side edge of the target pixel area can be located at the short-side edge of the original image, that is, the radial coordinate is mapped from 3 to 3.472, which maximizes the field of view of the mapped image in the vertical direction.
  • similarly, to maximize the field of view of the mapped image in the horizontal direction (for example, for panoramic images where the horizontal field of view should be as large as possible), the mapped coordinates of the pixels on the long-side edge of the target pixel area can be located at the long-side edge of the original image, that is, the radial coordinate is mapped from 4 to 4.627.
  • since the reference pixel area has a large horizontal extent, extending the full horizontal field of view of the reference pixel area would make the resulting image discontinuous in the horizontal direction. Therefore, in some embodiments, if the requirements on the horizontal field of view are less strict, the mapped coordinates of the pixels on the long-side edge of the target pixel area may not fall on the long-side edge of the original image, but between the long-side edge of the target pixel area and the long-side edge of the original image; for example, the radial coordinate is mapped from 4 to 4.377 (this value can be determined from the display effect of the image after several adjustments).
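Treating the dimensions quoted above as given data, the before/after radii follow from elementary geometry: the target's vertex sits at radius 5, and the edge-midpoint radii after mapping are half the original image's dimensions. Note 6.944/2 is exactly the quoted 3.472, while 9.248/2 = 4.624 is close to, but not exactly, the quoted 4.627 (presumably rounding in the text):

```python
import math

# Dimensions quoted in the example, taken as given data:
orig_l, orig_w = 9.248, 6.944   # original image length x width
tgt_l, tgt_w = 8.0, 6.0         # target pixel area length x width

# The vertex of the 8 x 6 target rectangle lies at radius 5, matching
# the stated radius of the effective area:
vertex_r = math.hypot(tgt_l / 2, tgt_w / 2)
print(vertex_r)        # 5.0

# Edge-midpoint radii are 3 and 4 before mapping, and land on half the
# original dimensions after mapping:
print(orig_w / 2)      # 3.472 (matches the quoted value exactly)
print(orig_l / 2)      # 4.624 (the text quotes 4.627)
```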
  • to maximize the vertical field of view of the mapped target image, some pixels on the long-side edge 5022 of the target pixel area 502 will fall on the edge of the reference area 503 after coordinate mapping; and to ensure that the target image spans as much of the horizontal direction as possible while remaining continuous, the pixels on the short-side edge 5021 of the target pixel area 502 will all fall within the edge of the reference area 503 after coordinate mapping.
  • the field of view finally displayed by the target pixel area is the pixel area 505 bounded by the four arcs in the figure; it can be seen that this field of view is much larger than that displayed by the original target area.
  • the field of view displayed by the original target image area is the rectangular area in the middle, while the field of view displayed by the target image obtained after coordinate mapping is the area bounded by the four arcs; it can be seen that the displayed field of view has become much larger.
  • a curve function can be constructed.
  • multiple anchor points can be determined based on the radial coordinates before and after the pixel mapping described above: for example, point 802 corresponds to the center pixel of the target pixel area, point 710 to the center pixel of the long side of the target pixel area, point 804 to the center pixel of the short side, and point 806 to the vertex of the target pixel area; a curve function can then be obtained by fitting through these anchor points.
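The anchor-based fit can be sketched as follows. A cubic polynomial through the four anchor radii is used here as a stand-in, since the text fixes only the anchor points and not the functional form; the 4 -> 4.377 variant that keeps the horizontal direction continuous is assumed:

```python
import numpy as np

# Radial coordinates of the anchor pixels before and after mapping
# (center, long-side midpoint, short-side midpoint, vertex):
r_before = np.array([0.0, 3.0, 4.0, 5.0])
r_after  = np.array([0.0, 3.472, 4.377, 5.0])

# Fit an exact cubic through the four anchors as the curve function:
curve = np.poly1d(np.polyfit(r_before, r_after, 3))

# Endpoints stay fixed while interior radii are pushed outward:
print(curve(2.0))   # a value between 2.0 and 3.0
```

A degree-3 polynomial through four points interpolates them exactly; in practice any monotone fit through the anchors would serve the same role.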
  • the final target image is shown in Figure 9. Compared with the image in Figure 6, obtained by directly cropping the target pixel area, Figure 9 shows more lights on the ceiling, a more complete cardboard box, and a wider field of view.
  • the embodiment of the present application also provides an image processing device. As shown in FIG. 10, the device includes a processor 1001, a memory 1002, and a computer program stored in the memory 1002 and executable by the processor 1001; when the processor 1001 executes the computer program, the following steps can be implemented:
  • the original image including a target pixel area and a reference pixel area other than the target pixel area;
  • the processor when configured to output the target image corresponding to the size of the target pixel area, it is specifically configured to: after acquiring the original image, map the original image, and convert the reference pixel area to At least part of the pixel points are mapped within the range of the target pixel area, and the mapped target pixel area is used as the target image.
  • the mapping of the original image includes mapping the coordinates of the pixel points in the target pixel area, so as to assign the pixel values of at least some of the pixel points in the reference pixel area to At least some of the pixels in the target pixel area, so that at least some of the pixels in the reference pixel area are mapped within the range of the target pixel area.
  • both the target pixel area and the reference pixel area are located in an effective area of the original image, and the effective area is determined based on design parameters of an image acquisition device that acquires the original image.
  • the processor when used for mapping the coordinates of the pixel points in the target pixel area, it is specifically used for:
  • the coordinates of the pixel points are mapped based on different mapping relationships.
  • the processor when used for mapping the coordinates of the pixel points in the target pixel area, it is specifically used for:
  • the coordinates of the pixel point in different directions are mapped based on different mapping relationships.
  • mapping the coordinates of the pixel points in different directions based on different mapping relationships includes:
  • the coordinates in the horizontal direction and the coordinates in the vertical direction of the pixel point are mapped.
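Direction-dependent mapping in Cartesian coordinates can be as simple as applying a separate relation per axis. In this illustrative sketch the horizontal coordinate is left untouched while the vertical coordinate is stretched so that the target half-height 3 lands on the original half-height 3.472; the ratio 6.944/6 and the function names are assumptions, not the patent's mapping:

```python
def map_xy(x, y, fx=lambda x: x, fy=lambda y: (6.944 / 6.0) * y):
    """Map the horizontal and vertical coordinates of a pixel with two
    independent relations, as in the embodiment where only one
    direction's field of view is extended."""
    return fx(x), fy(y)
```

For example, `map_xy(2.0, 3.0)` keeps the x coordinate at 2.0 and moves y from 3.0 to 3.472, so only the vertical field of view grows.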
  • mapping the coordinates of the pixel points in the target pixel area includes mapping in different coordinate systems.
  • the coordinate system includes at least one of a polar coordinate system and a Cartesian coordinate system.
  • the pixel points in the target pixel area are coordinates in a polar coordinate system, and when the processor is used to map the coordinates of the pixel points in the target pixel area, it is specifically used for: mapping the radial coordinates and/or angular coordinates of the pixel points in the target pixel area.
  • the processor when used to map the radial coordinates and/or angular coordinates of the pixel points in the target pixel area, it is specifically used to:
  • the radial coordinates and the angular coordinates are respectively mapped based on different mapping relationships.
  • when the processor respectively maps the radial coordinates and the angular coordinates based on different mapping relationships, it is specifically used for: mapping, based on different mapping relationships, pixels in the target pixel area that have different angular coordinates but the same radial coordinate, or pixels that have the same angular coordinate but different radial coordinates.
  • the pixel points in the target pixel area are coordinates in a Cartesian coordinate system, and when the processor is used to map the coordinates of the pixel points in the target pixel area, it is specifically used for: mapping the abscissa and/or ordinate of the pixel points in the target pixel area.
  • the processor when used to map the abscissa and/or ordinate of the pixel points in the target pixel area, it specifically includes:
  • the abscissa and the ordinate are respectively mapped based on different mapping relationships.
  • the reference pixel area surrounds the target pixel area.
  • the target pixel area is the largest inscribed rectangle of the effective area in the original image
  • the reference pixel area is an area in the effective area other than the target pixel area.
  • the mapped coordinates of some pixels on the edge of the target pixel area are located at the edge of the reference pixel area.
  • the mapped coordinates of some pixels on the edge of the target pixel area are located in an area between the edge of the target pixel area and the edge of the reference pixel area.
  • the coordinates of the pixel points are coordinates in a polar coordinate system, and when the processor is used to map the coordinates of the pixel points in the target pixel area, it is specifically used for:
  • the radial coordinates of the pixel points in the target pixel area are mapped based on the constructed curve function; the curve function is used to keep the radial coordinates of the vertex pixels of the maximum inscribed rectangle unchanged before and after mapping, while the radial coordinate values of non-vertex pixels become larger after mapping.
  • the curve function is determined based on coordinates before and after mapping of target pixel points in the target pixel area
  • the target pixel points include one or more of the following: the central pixel point of the target pixel area, the vertex pixels of the target pixel area, and the edge pixels of the target pixel area, wherein the coordinates of the center pixel and the vertex pixels remain unchanged before and after mapping, and the mapped coordinates of the edge pixels are located at the edge of the reference pixel area.
  • the processor, when used for mapping the coordinates of the pixel points in the target pixel area, is specifically used for: dividing the target pixel area into multiple pixel grids, mapping the coordinates of the vertex pixels of each pixel grid to obtain their mapped coordinates, and interpolating the mapped vertex coordinates to determine the mapped coordinates of the non-vertex pixels in each grid.
  • the processor when used for mapping the coordinates of the pixel points in the target pixel area, it is specifically used for:
  • the target pixel area is divided into multiple pixel grids, and pixel points in different pixel grids are mapped based on different mapping relationships.
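The grid-based scheme (map only the grid-vertex coordinates, then interpolate for everything else) can be sketched as below, assuming square cells and bilinear interpolation of the vertex results; the cell size, function names, and interpolation choice are illustrative. Per-grid mapping relationships can be realized by making `vertex_fn` piecewise in its arguments:

```python
import numpy as np

def grid_vertex_map(h, w, cell, vertex_fn):
    """Map only the grid-vertex coordinates and fill in the rest by
    bilinear interpolation, the resource-saving scheme described for
    large images.  Returns per-pixel mapped (u, v) arrays for an
    h x w target area."""
    ys = np.arange(0, h + cell, cell, dtype=float)
    xs = np.arange(0, w + cell, cell, dtype=float)
    # mapped coordinates computed at the grid vertices only
    vu = np.array([[vertex_fn(x, y) for x in xs] for y in ys])  # (ny, nx, 2)
    # bilinear interpolation of the vertex results down to every pixel
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = yy / cell, xx / cell
    i0, j0 = np.floor(gy).astype(int), np.floor(gx).astype(int)
    ty, tx = gy - i0, gx - j0
    def lerp2(c):
        a = vu[i0, j0, c] * (1 - tx) + vu[i0, j0 + 1, c] * tx
        b = vu[i0 + 1, j0, c] * (1 - tx) + vu[i0 + 1, j0 + 1, c] * tx
        return a * (1 - ty) + b * ty
    return lerp2(0), lerp2(1)
```

With the identity vertex mapping the interpolated result reproduces every pixel's own coordinates, since bilinear interpolation is exact on linear functions.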
  • an embodiment of the present application further provides a photographing device, where the photographing device includes the image acquisition device in any of the foregoing embodiments.
  • the photographing device may be various cameras such as a mobile phone, a handheld pan-tilt camera, a professional camera, and the like.
  • an embodiment of the present application further provides a movable platform, where the movable platform is equipped with the photographing device in any of the foregoing embodiments.
  • the movable platform can be a device such as a drone or an unmanned vehicle.
  • an embodiment of the present application further provides a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the image processing method in any of the foregoing embodiments is implemented.
  • Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage, etc.) having program code embodied therein.
  • Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of storage media for computers include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • the device embodiment since it basically corresponds to the method embodiment, for related parts, please refer to the part description of the method embodiment.
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

一种图像处理方法、装置、拍摄设备及可移动平台,所述方法包括:获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。通过这种方式,可以通过有限的像素点展示更多的场景内容。

Description

一种图像处理方法、装置、拍摄设备及可移动平台 技术领域
本申请涉及图像处理领域,具体而言,涉及一种图像处理方法、装置、拍摄设备及可移动平台。
背景技术
在一些场景中,需要对图像传感器采集的原始图像进行裁切后输出。裁切的过程可能会将原始图像中一些有效的像素区域裁掉,导致裁切得到的图像展示的视场范围变小。以非完整覆盖的光学靶面(图像传感器)为例,对于一些小型化的相机模组,由于其镜头较小,进光量会受到影响,导致图像传感器四个角落位置采集得到的数据无效,即原始图像中有效区域为非矩形,而输出的图像通常为矩形图像,因此,需要从图像传感器采集的原始图像中的有效区域裁剪出一个矩形区域再输出,从而会丢失矩形区域以外的一些有效区域的数据,造成输出的图像展示的视场范围变小,影响用户的体验。
发明内容
有鉴于此,本申请提供一种图像处理方法、装置、拍摄设备以及可移动平台。
根据本申请的第一方面,提供一种图像处理方法,所述方法包括步骤:
获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;
输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。
根据本申请的第二方面，提供一种图像处理装置，所述图像处理装置包括处理器、存储器、存储在所述存储器上可供所述处理器执行的计算机程序，所述处理器执行所述计算机程序时实现以下步骤：
获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;
输出与所述目标像素区域大小对应的目标图像,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。
根据本申请的第三方面,提供一种拍摄设备,所述拍摄设备包括上述第二方面的图像处理装置。
根据本申请的第四方面,提供一种可移动平台,所述可移动平台包括上述第三方面的拍摄设备。
根据本申请的第五方面,提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时,用于实现本申请第一方面的图像处理方法。
应用本申请提供的方案,在获取到包括目标像素区域和参考像素区域的原始图像后,可以基于原始图像得到与该目标像素区域大小对应的目标图像,其中,目标图像中包括该参考区域的至少部分像素点,从而可以将参考像素区域中的像素点展示的内容利用起来,使得目标图像展示的视场范围大于原始图像中的目标像素区域展示的视场范围。通过上述方式,可以扩大最终输出的目标图像的视场范围,通过有限的像素点展示更多的场景内容。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅 是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个实施例的原始图像的示意图。
图2是本申请一个实施例的图像处理方法的流程图。
图3是本申请一个实施例的展示不同像素区域通过不同映射关系映射的示意图。
图4(a)和4(b)是本申请实施例中展示像素点坐标映射前后的变化的示意图。
图5是本申请一个实施例的一个需要处理的原始图像的示意图。
图6是本申请相关技术中直接裁剪处理后得到的图像的示意图。
图7是本申请一个实施例的处理后的图像的视场范围变化示意图。
图8是本申请一个实施例的映射坐标所使用的曲线函数的示意图。
图9是本申请一个实施例的处理后的目标图像的示意图。
图10是本申请一个实施例的图像处理装置的逻辑结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在一些场景中,需要对图像传感器采集的原始图像进行裁切后输出。裁切的过程可能会将原始图像中一些有效的像素区域裁掉,导致裁切得到的图像展示的视场范围变小。
比如,针对一些小型化的相机模组,由于镜头的体积不能太大,导致镜头的进光量受限,无法完整覆盖整个光学靶面(即图像传感器),因而光学靶面四个角落位置采集的数据有部分是无效的。通常,光学靶面是矩形的,由 于存在部分无效区域,导致有效区域是非矩形的,而输出的图像通常是矩形图像,因而需要对有效区域进行裁切,得到矩形图像并输出。以下结合图1加以说明,图1中最外部的虚线围起来的区域是图像传感器采集的原始图像100,原始图像100中四个角落周围的区域是无效的,因此近似圆形的闭合区域102为有效区域,由于需输出矩形图像,因而通常需要从有效区域102中裁切出最大内接矩形区域104后输出。从而导致有效区域102中矩形区域104以外的区域106还存在有效的像素数据被丢弃掉了,导致最终输出的图像展示的视场范围变小了,无法展示更多的内容。
再比如,一些特殊场景下拍摄到的图片,比如,摄像机处在高速运动中,水下等拍摄到的图像,往往部分区域是模糊的,在后期处理的时候,往往需要把模糊的区域剪裁掉,只留下清晰部分的图像;由于清晰部分可能是不规则的形状,而输出的图像需要是矩形图像,因而需要对图像进行裁切,这样便导致部分不在矩形区域内的有效像素没有被利用起来,使得图像展示的视场范围较小。或者,在一些场景中,由于输出的图像尺寸受到限制,可能需要对原始图像进行裁切,裁切的过程会损失一些有效的像素。
类似以上的场景中,由于需要对原始图像进行裁切,裁切过程中不可避免会损失到一些有效的像素,导致最终输出的图像展示的视场范围变小,展示的内容受限。而在绝大多数场景中,用户通常希望图像可以尽可能多的展示场景的内容。
基于此,本申请实施例提供一种图像处理方法,针对待输出的目标像素区域,可以通过坐标映射的方式,以此将目标像素区域以外的参考像素区域中的部分像素点的值赋值给到目标像素区域中部分像素点,从而可以将参考像素区域中的像素点利用起来,使得重新赋值后的目标像素区域展示的视场范围变大。
本申请实施例提供的图像处理方法可以由拍照设备执行,比如,可以由拍照设备上的ISP(Image Signal Processor,图像信号处理)芯片执行。也可以由拍照设备以外的其他图像处理设备执行,例如,手机、笔记本电脑、台 式机、云端服务器等。比如,该图像处理设备安装有图像处理软件,从拍照设备中接收到该原始图像后再通过图像处理软件对该原始图像进行处理。
如图2所示,本申请实施例的图像处理方法可以包括以下步骤:
步骤202,获取原始图像,原始图像包括目标像素区域和目标像素区域之外的参考像素区域;
在步骤S202中,可以先获取原始图像,其中,原始图像可以是需要对其进行裁剪操作后再输出的图像。比如,原始图像可以是非完整覆盖光学靶面采集得到的原始图像,或者是一些非矩形的图像,需要从中裁剪出矩形区域后输出。其中,原始图像包括目标像素区域和目标像素区域以外的参考像素区域,目标像素区域为待输出的像素区域,参考像素区域为目标像素区域以外的像素区域,即裁切后舍弃掉的区域,参考像素区域显示的内容与目标像素区域显示的内容不同。以图1所示的非完整覆盖光学靶面采集的图像为例,其中,最外部的虚线围起来的区域100可以是原始图像,矩形区域104可以是目标像素区域,区域106可以是参考像素区域。
步骤S204,输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。在获取到原始图像后,可以基于原始图像得到和目标像素区域大小对应的目标图像。其中,可以对原始图像进行处理得到目标图像,使得目标图像中包括参考像素区域的至少部分像素点,从而可以将参考像素区域的像素点利用起来,使得目标图像展示的视场范围大于原始图像中目标像素区域展示的视场范围,从而目标图像可以通过和目标像素区域数量相同的像素点,展示更大的视场和更多的内容。
在得到目标图像后,可以直接在用户交互界面显示该目标图像,或者也可以将目标图像发送给其他设备,以便其他设备显示该图像。
在一些实施例中，在获取原始图像后，可以对原始图像进行映射，通过映射的方式将参考像素区域的至少部分像素点映射在目标像素区域的范围内，将映射后的目标像素区域作为目标图像，使得目标图像中包括参考像素区域的部分像素点。
比如,在获得原始图像后,可以对目标像素区域的像素点和参考像素区域的像素点进行坐标映射,得到和目标像素区域大小对应的目标图像并输出。通过坐标映射,可以将参考像素区域的一部分像素点的像素值赋值给目标像素区域的一部分像素点,使映射后的目标像素区域包含参考像素区域的像素点展示的内容,从而映射后的目标像素区域展示的视场范围大于映射前的目标像素区域展示的视场范围。
其中,坐标映射方式可以灵活选择,比如,可以对目标像素区域的像素点的坐标进行映射,使得部分像素点的坐标落入参考像素区域。比如,针对目标像素区域的像素点A,可以使其映射后的坐标位于参考像素区域(假设为像素点B所在的位置),然后利用像素点B的像素值赋值给像素点A。或者也可以对参考像素区域的部分像素点的坐标进行映射,使得映射后的坐标落入目标像素区域,比如,针对参考像素区域的像素点C,可以使其映射后的坐标位于目标像素区域(假设为像素点D所在的位置),然后利用像素点C的像素值赋值给像素点D。在目标像素区域的像素点坐标进行映射的时候,可以对目标像素区域的部分像素点进行映射,也可以对目标像素区域的所有像素点进行映射;对目标像素区域的所有像素点进行映射时,部分像素点映射后的坐标位置位于参考像素区域,另一部分像素点映射后的坐标位置仍位于目标像素区域。
不难理解,所有通过坐标映射的方式,使得将参考像素区域的一部分像素点的像素值赋值给目标像素区域的一部分像素点的方式均在本申请的保护范围内。
当然,除了通过映射的方式,也可以通过其他处理方式,使得输出的目标图像中包括参考像素区域的部分像素点。
在一些实施例中,在对原始图像进行映射时,可以对目标像素区域的像素点的坐标进行映射,以基于映射后的坐标将所述参考像素区域的至少部分 像素点的像素值赋值给所述目标像素区域内的至少部分像素点,从而将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内。比如,可以对目标像素区域的像素点的坐标进行映射,以基于映射后的坐标将参考像素区域内的至少部分像素点的像素值赋值给目标像素区域的至少部分像素点,最后将像素点重新赋值后的目标像素区域作为目标图像。
如图1所示,可以对目标像素区域104中的像素点的坐标进行映射,以对像素点111的坐标进行映射为例,像素点111映射后的坐标,与参考像素区域106中像素点112的坐标相同,因此将像素点112的像素值赋值给像素点111;又如以对像素点115的坐标进行映射为例,像素点115映射后的坐标与目标像素区域104中像素点116的坐标相同,因此将像素点116的像素值赋值给像素点115;从而最后得到的目标图像也可以展示部分参考像素区域的内容,扩大目标像素区域展示的视场范围。通过这种映射的方式,在不扩大图像的像素尺寸(即通过相同数目的像素点)的前提下,可以展示更大的视场范围。
在一些实施例中,原始图像中可以存在有效区域和无效区域,比如,原始图像为非完整覆盖光学靶面采集的原始图像,图像四个角落位置为无效区域。目标像素区域和参考像素区域可以均位于该原始图像的有效区域内,其中,有效区域基于采集原始图像的图像采集装置的设计参数确定。具体的,以数码相机为例,有效区域的形状和大小与数码相机光学靶面的设计参数、数码相机镜头的设计参数有关;如光学靶面的大小,镜头的焦距、光圈、类型等。
参考像素区域可以位于目标像素区域的一侧,或者也可以位于目标像素区域的周围。在一些实施例中,参考像素区域可以环绕于目标像素区域的四周。从而,通过将参考区域的像素点的值赋值给目标像素区域中的部分像素点,可以使得像素点重复赋值后的目标像素区域在周围各个方向展示的视场范围均变大。比如,如图1所示,参考像素区域106环绕于目标像素区域104四周。
在一些实施例中,原始图像有效区域可以是近似圆形的区域,目标像素区域可以是原始图像中的有效区域的最大内接矩形,参考像素区域为有效区域中除目标像素区域以外的区域。为了让输出的目标像素区域展示的内容尽可能多,可以从有效区域中确定出面积最大的内接矩形作为目标像素区域。参考像素区域可以是有效区域中除了该最大内接矩形以外的区域。
如图1所示,最外部的虚线围起来的区域是原始图像100,原始图像中四个顶角周围的区域是无效的,因此近似圆形的闭合区域102为有效区域,带网格的矩形区域104是内接于有效区域102中面积最大的矩形区域,即目标像素区域。
此外,在对目标像素区域中的像素点的坐标进行映射时,其映射的方式也可以基于用户期望最终输出的图像的显示效果进行多样化的设置。比如,在一些实施例中对目标像素区域的像素点的坐标进行映射时,针对目标像素区域中不同的像素点,可以基于不同的映射关系对所述像素点的坐标进行映射。比如,在一些实施例中,针对目标像素区域中的不同的像素区域,也可以采用不同的映射关系对其进行映射。比如,可以将目标像素区域划分成多个像素网格,每个像素网格可以对应一种映射关系,通过利用不同的映射关系对不同的像素网格进行映射,可以使得每个像素网格在映射后可以达到不同的显示效果。比如,如图3所示,将目标像素区域划分为四个像素网格子区域,每个子区域均使用不同的曲线函数对网格中的像素点进行映射;子区域302中的像素点使用曲线312表征的曲线函数进行映射;子区域304中的像素点使用曲线314表征的曲线函数进行映射;子区域306中的像素点使用曲线316表征的曲线函数进行映射;子区域308中的像素点使用曲线318表征的曲线函数进行映射。
在一些实施例中,对目标像素区域的像素点的坐标进行映射时,针对目标像素区域的同一像素点,可以基于不同的映射关系对像素点的不同方向上的坐标进行映射。比如,可以使用一种映射关系,对目标像素区域的像素点一个方向的坐标进行映射,使用另一种映射关系,对所述像素点另一方向上 的坐标进行映射,对不同方向使用不同的映射关系,使得不同方向上的坐标映射后变化的程度不同,即展示视场的调整幅度不同,从而达到不同的显示效果。
比如,在一些实施例中,像素点包括水平方向上的坐标和垂直方向上的坐标,因而可以对像素点的水平方向上的坐标和垂直方向上的坐标进行映射。以图1中的像素点111为例,可以构建对横坐标的映射关系,对横坐标进行映射,保持像素点111的横坐标不变;可以构建对纵坐标的映射关系,对纵坐标进行映射,使像素点111的纵坐标变为像素点112的纵坐标。
在一些实施例中,对目标像素区域的像素点的坐标进行映射,可以是在不同坐标系下的映射。比如,目标像素区域的像素点的坐标可以使用不同的坐标系表示,因而可以构建不同坐标系下映射关系,用于对像素点的坐标进行映射。
在一些实施例中,目标像素区域的像素点的坐标可以是极坐标系、笛卡尔坐标系下坐标。当然,也可以是其他坐标系下的坐标,本申请实施例不做限制。
在一些实施例中,目标像素区域中的像素点可以是极坐标系下的坐标,对目标像素区域的像素点的坐标进行映射时,可以保持目标像素区域中的像素点的角坐标不变,仅对径向坐标进行映射,或者保持目标像素区域中像素点的径向坐标不变,仅对角坐标进行映射。或者同时对角坐标和径向坐标进行映射。具体可以根据用户对映射后的图像在各个方向上的显示效果的区域设置,比如,对于某个方向(比如,角坐标为0~30°)希望其展示的视场范围更大一些,因此,可以设置映射关系,使得该方向的像素点映射后径向坐标变大些,针对其余方向,也可以基于实际需求个性化的设置映射关系。
以图1中的像素点113为例,像素点113的角坐标为a1,径向坐标为b1,分别对像素点113的角坐标和径向坐标进行映射,使得a1变成a2,b1变成b2;或者,只对像素点113的角坐标进行映射,径向坐标不映射,使得a1变成a2,b1保持不变;或者只对像素点113的径向坐标进行映射,角坐标不变, 使得b1变成b2,a1保持不变。
在一些实施例中,可以基于不同的映射关系分别对径向坐标和角坐标进行映射。比如,对目标像素区域中的像素点进行映射时,可以对角坐标建立一个映射关系,对径向坐标建立另一个映射关系。以图1中对像素点111的极坐标映射为例,通过角坐标的映射关系,对像素点111的角坐标进行映射,映射后的角坐标为像素点112的角坐标;通过对径向坐标的映射关系,对像素点111的径向坐标进行映射,映射后的径向坐标为像素点112的径向坐标。
在一些实施例中,在对角坐标和径向坐标进行映射时,映射关系可以随着像素点的角坐标不同而不同,比如,可以基于不同的映射关系对目标像素区域中角坐标不同而径向坐标相同的像素点进行映射。在一些实施例中,在对角坐标和径向坐标进行映射时,映射关系也可以随着像素点的径向坐标不同而不同,比如,可以基于不同的映射关系对目标像素区域中角坐标相同而径向坐标不同的像素点进行映射。
在一些实施例中,目标像素区域中的像素点的坐标可以是笛卡尔坐标系下的坐标,在对目标像素区域中的像素点的坐标进行映射时,可以保持横坐标不变,仅对纵坐标进行映射;或者保持纵坐标不变,仅对横坐标进行映射,或者可以同时对横坐标和纵坐标进行映射。
以图1中的像素点111为例,像素点111的横坐标为x1,纵坐标为y1,分别对像素点111的横和纵坐标进行映射,使得x1变成x2,y1变成y2;或者,只对像素点111的横坐标进行映射,纵坐标不映射,使得x1变成x2,y1保持不变;或者只对像素点111的纵坐标进行映射,横坐标不映射,使得y1变成y2,x1保持不变。
在一些实施例中,可以基于不同的映射关系分别对像素点的横坐标和纵坐标进行映射。当对目标像素区域的像素点的笛卡尔坐标进行映射时,可以构建两个映射关系,分别对横坐标和纵坐标进行映射,以图1中对像素点111的笛卡尔坐标映射为例,通过横坐标对应的映射关系,对像素点111的横坐标进行映射,映射后的横坐标为像素点112的横坐标;通过纵坐标对应的映 射关系,对像素点111的纵坐标进行映射,映射后的纵坐标为像素点112的纵坐标。
在一些实施例中,目标像素区域边缘的部分像素点映射后的坐标可以位于参考像素区域边缘。为了让映射后的目标像素区域展示的视场范围尽可能大,可以让目标像素区域边缘的部分像素点映射后的坐标位于参考像素区域边缘。即尽可能让映射得到的目标图像可以展现参考像素区域所展示的整个视场。比如,如图1所示,目标像素区域边缘的像素点113,映射后的坐标为像素点114的坐标,像素点114位于参考像素区域104的边缘。
在一些实施例中,目标像素区域边缘的部分像素点映射后的坐标位于目标像素区域的边缘与参考像素区域的边缘之间的区域。由于在某些场景,参考区域的面积可能较大,如果将目标像素区域边缘的坐标映射到参考区域边缘,那么最后生成的目标图像整体上展示的视场范围会被拉长很多,导致要映射后的像素点之间的色彩变化不连续,图像变形严重。因此,在对目标像素区域像素点的坐标映射时,需要保证映射后的像素点之间的色彩变化是平滑的。所以,针对目标像素区域某些方向上的边缘像素点,其映射后的坐标,并没有位于参考像素区域的边缘,而是位于目标像素区域的边缘与参考像素区域的边缘之间,以避免映射后得到的不平滑。
如图4(a)所示,目标像素区域中点像素点115,映射后的坐标为像素点116的坐标,像素点116位于目标像素区域的边缘与参考像素区域的边缘之间的区域。
在一些实施例中,可以针对目标像素区域的像素点构建用于对坐标进行映射的曲线函数,利用该曲线函数对像素点的坐标进行映射。其中,曲线函数可以预先建立,也可以在图像处理过程中,基于图像的特点实时构建。本申请实施例不做限制。
在一些实施例中,像素点的坐标为可以是极坐标系下的坐标,对目标像素区域的像素点的坐标进行映射时,可以在保持像素点的角坐标不变的情况下,基于构建的曲线函数对目标像素区域的像素点的径向坐标进行映射,其 中,曲线函数用于使最大内接矩形的顶点像素点在映射前后的径向坐标不变,非顶点像素点映射后的径向坐标值变大。通过采用极坐标对图像中的像素点进行表示,并在极坐标下对像素点的坐标进行映射,得到的目标图像会更加符合人眼特性的需求。可以保持角坐标不变,让像素点的径向坐标映射后变大,即映射后的像素点的坐标整体向外扩展,使得最终得到目标图像的展示视场向周围扩展。所以,可以构建用于进行坐标映射的曲线函数,曲线函数可以使得目标像素区域中心和四个顶点映射后的坐标不变,其余像素点的径向坐标均变大,使得映射得到的图像展示的视场范围变大。
如图4(b)所示,以在图像中心点的像素点120所在的位置为极坐标的中心点,目标像素区域为最大内接矩形,位于最大内接矩形的顶点的四个像素点为像素点121、像素点122、像素点123和像素点124,构建出的曲线函数映射像素点的径向坐标,使得四个位于最大内接矩形顶点的像素点121、像素点122、像素点123和像素点124映射后径向坐标不变,非顶点像素点如像素点113映射后径向坐标值变大。
在一些实施例中,用于进行坐标映射的曲线函数可以基于目标像素区域中的目标像素点映射前后的坐标确定,目标像素点包括以下一种或多种:目标像素区域的中心像素点、目标像素区域的顶点像素点以及目标像素区域的边缘像素点,其中,中心像素点和顶点像素点映射前后的坐标不变,边缘像素点映射后的坐标位于参考像素区域的边缘。比如,可以基于对映射后的坐标的需求大致确定曲线函数的形状,得到曲线函数大致的表达式,然后可以基于映射后的图像想要达到的显示效果确定一些目标像素点映射前后的坐标变化,基于映射前后的坐标可以对曲线函数进行调整,得到最终的曲线函数。
如图4(b)所示,若顶点像素点122的径向坐标为c1,则曲线函数将c1映射到c1;中心像素点120径向坐标为c2,曲线函数将c2映射到c2;边缘像素点113径向坐标为c3,映射后的坐标位于像素点114,像素点114的径向坐标为c4,则曲线函数将c3映射到c4。
图片往往有很多像素点,例如一张长和宽都为1024像素的图片,有 1048576个像素点,对每一个像素点都进行一次坐标映射计算的话,会很耗费计算资源。在一些实施例中,为了节省计算资源,可以将目标像素区域划分为多个像素网格,对每个像素网格的顶点像素点的坐标进行映射,得到顶点像素点映射后的坐标,然后对顶点像素点映射后的坐标进行插值处理,确定像素网格中非顶点像素点映射后的坐标。
以笛卡尔坐标系下的映射为例,若四个顶点映射后的横坐标分别是x1、x2、x3和x4,那么线束网格中非顶点像素点映射后的横坐标可以是x1、x2、x3和x4的均值;若四个顶点映射后的纵坐标分别是y1、y2、y3和y4,那么网格中非顶点像素点映射后的纵坐标可以是y1、y2、y3和y4的均值;实际处理时,也可以计算中位数,或者坐标进行带权重的加权运算,只要最终结果表示出映射后坐标大致方位即可。
为了进一步解释本申请的图像处理方法,以下结合一个具体的实施例加以解释。
如图5所示,是本实施例需要处理的原始图像,原始图像四个角落位置由于进光量不足,因此四个角落附近的区域的像素是无效的,原始图像中的有效区域为近似圆形的区域(图中的504)。
由于通常输出的图像是矩形的,因此需要从有效区域504中剪裁出一个矩形区域,为了最大程度的利用有效区域504中的像素点,裁剪出来的矩形是内接于有效区域的矩形中面积最大的矩形区域502(以下称为目标像素区域)。
相关技术中直接将目标像素区域502裁剪出来并输出,剪裁后得到的目标图像如图6所示。可以看出有效区域504中除目标像素区域502以外的区域503(以下称为参考像素区域)中还存在有效数据没有利用到,导致输出的图像展示的视场范围较小。
为了最大化地利用图像中的有效区域，使得输出的图像展示的视场范围最大化，展示尽可能多的内容，如图5中所示，本申请实施例可以利用参考区域503中的部分像素点的值赋值给目标像素区域502中的部分像素点，使得像素点重新赋值后的目标像素区域展示的视场范围变大。
具体的,本实施例中可以对目标像素区域502中的像素点的坐标进行映射,得到映射后的像素坐标,然后利用映射后的像素坐标对应的像素点的像素值赋值给原来的像素点,由于部分像素点的像素坐标映射后会落入参考像素区域503中,从而可以利用参考像素区域503像素点的像素值赋值给目标像素区域502中部分像素点,使得坐标映射得到的目标像素区域(即目标图像)可以展示参考像素区域的内容,视场变大。
为了让映射后得到的目标图像更加符合人眼特性,本实施例可以在极坐标系建立像素点的坐标映射关系,以目标像素区域502的中心区域为极坐标的原点,在保持角坐标不变的情况下,对目标像素区域502的像素点的径向坐标进行映射。
为了对目标像素区域502的像素点的径向坐标进行映射,本申请可以预先构建出一个曲线函数,通过将目标像素区域502的像素点的径向坐标代入构建出的曲线函数中,就能得到映射后的径向坐标。在构建曲线函数时,可以基于映射后的坐标想要达到的效果确定曲线函数的大致形状,比如,曲线函数可以使用反正切函数。然后可以进一步结合映射得到的目标图像想要达到的显示效果确定几个目标像素点映射后的坐标,基于目标像素点映射前后的坐标确定最终的曲线函数。
比如,假设原始图像的长为9.248,宽为6.944,目标像素区域的长为8,宽为6,有效区域相对原始图像中心点的半径为5,目标像素区域的中心像素点,与原始图像中心点的像素点是同一点,因此映射前后坐标位置不变,径向坐标由0映射成0;目标像素区域的顶点像素点映射前后坐标位置也不变,径向坐标由5映射到5。为了尽可能最大化的利用参考像素区域,使得映射后图像展示的视场范围变大,目标像素区域短边边缘的像素点映射后的坐标可以位于原始图像的短边边缘,即径向坐标由3映射到3.472,这样可以在竖直方向最大化映射得到的图像的视场范围。同理,如果要在水平方向最大化映射得到的图像的视场范围(比如,针对全景图像,希望水平方向的视场范围 尽可能大),目标像素区域长边边缘的像素点映射后的坐标可以位于原始图像的长边边缘,即径向坐标由4映射到4.627。
当然,由于参考像素区域在水平方向的范围较大,如果将参考像素区域水平方向的视场范围都扩展进来,会导致最终得到的图像在水平方向上不连续。所以,在一些实施例中,如果对水平方向的视场范围要求没那么严格,目标像素区域长边边缘的像素点映射后的坐标可以不落在原始图像长边边缘,而是落在目标像素区域长边边缘与原始图像长边边缘之间,比如,径向坐标由4映射到4.377(该值可以通过多次调整后,基于图像的显示效果确定)。
比如,如图7所示,为了最大化的扩展映射后得到的目标图像在竖直方向上的视场范围,目标像素区域502长边边缘5022的部分像素点进行坐标映射后会落在参考区域503的边缘,而为了保证目标图像在水平方向上有尽可能大的范围且图像连续,目标像素区域502短边边缘5021的像素点进行坐标映射后会均落在参考区域503的边缘以内。从而最终得到目标像素区域展示的视场范围即为图中4条圆弧构成的像素区域505。可见,其视场范围远大于原来目标区域展示的视场范围。
目标像素区域的像素点坐标映射后,其坐标会落入像素区域505中(即外围的四条圆弧构成的区域)。即原本目标像区域展示的视场范围为中间的矩形区域,进行坐标映射后得到的目标图像展示的视场范围为四条圆弧构成的区域,可见,其展示的视场范围变大了很多。
通过上述几种类型的像素点映射前后的径向坐标,就能构建出曲线函数。比如,如图8所示,可以基于上述像素点映射前后的径向坐标确定多个锚点,比如,点802对应目标像素区域的中心像素点,点710对应目标像素区域长边的中心像素点,点804对应目标像素区域短边的中心点,点806对应目标像素区域的顶点,然后可以基于上述锚点拟合得到一条曲线函数。
最终得到的目标图像如图9所示,可以看到,相较于图6中直接剪裁目标像素区域获得的图像,图9中的天花板上灯的数目更多了,纸箱子也更加完整了,视场范围更大了。
与上述图像处理方法相对应,本申请实施例还提供了一种图像处理装置,如图10所示,所述装置包括处理器1001、存储器1002、存储于所述存储器1002可供所述处理器1001执行的计算机程序,所述处理器1001执行所述计算机程序时可实现以下步骤:
获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;
输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。
在一些实施例中,所述处理器用于输出与所述目标像素区域大小对应的目标图像时,具体用于:在获取原始图像后,对所述原始图像进行映射,将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内,将映射后的目标像素区域作为目标图像。
在一些实施例中,所述对原始图像进行映射包括对所述目标像素区域的像素点的坐标进行映射,以基于映射后的坐标将所述参考像素区域的至少部分像素点的像素值赋值给所述目标像素区域内的至少部分像素点,从而将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内。在一些实施例中,所述目标像素区域和所述参考像素区域均位于所述原始图像的有效区域内,所述有效区域基于采集所述原始图像的图像采集装置的设计参数确定。
在一些实施例中,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
针对所述目标像素区域中不同的像素点,基于不同的映射关系对所述像素点的坐标进行映射。
在一些实施例中,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
    针对所述目标像素区域的同一像素点，基于不同的映射关系对所述像素点的不同方向上的坐标进行映射。
在一些实施例中,所述基于不同的映射关系对所述像素点的不同方向上的坐标进行映射包括:
对所述像素点的水平方向上的坐标和垂直方向上的坐标进行映射。
在一些实施例中,对所述目标像素区域的像素点的坐标进行映射包括在不同坐标系下的映射。所述坐标系包括极坐标系、笛卡尔坐标系中的至少一种。
在一些实施例中,所述目标像素区域中的像素点为极坐标系下的坐标,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
对所述目标像素区域中的像素点的径向坐标和/或角坐标进行映射。
在一些实施例中,所述处理器用于对所述目标像素区域中的像素点的径向坐标和/或角坐标进行映射时,具体用于:
基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射。
在一些实施例中,所述处理器基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射时,具体用于:
基于不同的映射关系对所述目标像素区域中角坐标不同而径向坐标相同的像素点进行映射,或,基于不同的映射关系对所述目标像素区域中角坐标相同而径向坐标不同的像素点进行映射。
在一些实施例中，所述目标像素区域中的像素点为笛卡尔坐标系下的坐标，所述处理器用于对所述目标像素区域的像素点的坐标进行映射时，具体用于：
对所述目标像素区域中的像素点的横坐标和/或纵坐标进行映射。
在一些实施例中,所述处理器用于对所述目标像素区域中的像素点的横坐标和/或纵坐标进行映射时,具体包括:
基于不同的映射关系分别对所述横坐标和所述纵坐标进行映射。
在一些实施例中,所述参考像素区域环绕于所述目标像素区域的四周。
在一些实施例中，所述目标像素区域为所述原始图像中的有效区域的最大内接矩形，所述参考像素区域为所述有效区域中除所述目标像素区域以外的区域。
在一些实施例中,所述目标像素区域边缘的部分像素点映射后的坐标位于所述参考像素区域边缘。
在一些实施例中,所述目标像素区域边缘的部分像素点映射后的坐标位于所述目标像素区域的边缘与所述参考像素区域的边缘之间的区域。
在一些实施例中,所述像素点的坐标为极坐标系下的坐标,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
在保持所述像素点的角坐标不变的情况下,基于构建的曲线函数对所述目标像素区域的像素点的径向坐标进行映射;所述曲线函数用于使所述最大内接矩形的顶点像素点在映射前后的径向坐标不变,非顶点像素点映射后的径向坐标值变大。
在一些实施例中,所述曲线函数基于所述目标像素区域中的目标像素点映射前后的坐标确定,所述目标像素点包括以下一种或多种:所述目标像素区域的中心像素点、所述目标像素区域的顶点像素点以及所述目标像素区域的边缘像素点,其中,所述中心像素点和所述顶点像素点映射前后的坐标不变,所述边缘像素点映射后的坐标位于所述参考像素区域的边缘。
在一些实施例中,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
将所述目标像素区域划分为多个像素网格,对所述每个像素网格的顶点像素点的坐标进行映射,得到顶点像素点映射后的坐标;
对所述顶点像素点映射后的坐标进行插值处理,确定所述网格中非顶点像素点映射后的坐标。
在一些实施例中,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
将所述目标像素区域划分为多个像素网格,基于不同的映射关系对不同像素网格中的像素点进行映射。
相应地,本申请实施例还提供一种拍摄设备,该拍摄设备包括上述任一实施例中的图像采集装置。该拍摄设备可以是手机、手持云台相机、专业相机等各种相机。
相应地,本申请实施例还提供一种可移动平台,该可移动平台搭载有上述任一实施例中的拍摄设备。该可移动平台可以是无人机、无人车等设备。
相应地,本申请实施例还提供一种计算机存储介质,所述存储介质中存储有程序,所述程序被处理器执行时实现上述任一实施例中的图像处理方法。
本说明书实施例可采用在一个或多个其中包含有程序代码的存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机可用存储介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括但不限于:相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者 暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本发明实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (47)

  1. 一种图像处理方法,其特征在于,包括步骤:
    获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;
    输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。
  2. 根据权利要求1所述的方法,其特征在于,所述输出与所述目标像素区域大小对应的目标图像包括:在获取原始图像后,对所述原始图像进行映射,将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内,将映射后的目标像素区域作为目标图像。
  3. 根据权利要求2所述的方法,其特征在于,所述对原始图像进行映射包括:
    对所述目标像素区域的像素点的坐标进行映射,以基于映射后的坐标将所述参考像素区域的至少部分像素点的像素值赋值给所述目标像素区域内的至少部分像素点,从而将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,所述目标像素区域和所述参考像素区域均位于所述原始图像的有效区域内,所述有效区域基于采集所述原始图像的图像采集装置的设计参数确定。
  5. 根据权利要求3所述的方法,其特征在于,针对所述目标像素区域中不同的像素点,基于不同的映射关系对所述像素点的坐标进行映射。
  6. 根据权利要求3所述的方法,其特征在于,针对所述目标像素区域的同一像素点,基于不同的映射关系对所述像素点的不同方向上的坐标进行映射。
  7. 根据权利要求6所述的方法,其特征在于,所述基于不同的映射关系对所述像素点的不同方向上的坐标进行映射包括:
    对所述像素点的水平方向上的坐标和垂直方向上的坐标进行映射。
  8. 根据权利要求3所述的方法,其特征在于,对所述目标像素区域的像素点的坐标进行映射包括在不同坐标系下的映射。
  9. 根据权利要求8所述的方法,其特征在于,所述坐标系包括极坐标系、笛卡尔坐标系中的至少一种。
  10. 根据权利要求8所述的方法,其特征在于,所述目标像素区域中的像素点为极坐标系下的坐标,对所述目标像素区域的像素点的坐标进行映射,包括:
    对所述目标像素区域中的像素点的径向坐标和/或角坐标进行映射。
  11. 根据权利要求8所述的方法,其特征在于,基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射。
  12. 根据权利要求8所述的方法,其特征在于,基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射,包括:
    基于不同的映射关系对所述目标像素区域中角坐标不同而径向坐标相同的像素点进行映射,或,基于不同的映射关系对所述目标像素区域中角坐标相同而径向坐标不同的像素点进行映射。
  13. 根据权利要求8所述的方法,其特征在于,所述目标像素区域中的像素点的为笛卡尔坐标系下的坐标,对所述目标像素区域的像素点的坐标进行映射,包括:
    对所述目标像素区域中的像素点的横坐标和/或纵坐标进行映射。
  14. 根据权利要求13所述的方法,其特征在于,基于不同的映射关系分别对所述横坐标和所述纵坐标进行映射。
  15. 根据权利要求4所述的方法,其特征在于,所述参考像素区域环绕于所述目标像素区域的四周。
  16. 根据权利要求15所述的方法,其特征在于,所述目标像素区域为所述原始图像中的有效区域的最大内接矩形,所述参考像素区域为所述有效区域中除所述目标像素区域以外的区域。
  17. 根据权利要求15所述的方法,其特征在于,所述目标像素区域边缘的部分像素点映射后的坐标位于所述参考像素区域边缘。
  18. 根据权利要求15所述的方法,其特征在于,所述目标像素区域边缘的部分像素点映射后的坐标位于所述目标像素区域的边缘与所述参考像素区域的边缘之间的区域。
  19. 根据权利要求15所述的方法,其特征在于,所述像素点的坐标为极坐标系下的坐标,对所述目标像素区域的像素点的坐标进行映射的步骤包括:
    在保持所述像素点的角坐标不变的情况下,基于构建的曲线函数对所述目标像素区域的像素点的径向坐标进行映射;所述曲线函数用于使所述最大内接矩形的顶点像素点在映射前后的径向坐标不变,非顶点像素点映射后的径向坐标值变大。
  20. 根据权利要求16所述的方法,其特征在于,所述曲线函数基于所述目标像素区域中的目标像素点映射前后的坐标确定,所述目标像素点包括以下一种或多种:所述目标像素区域的中心像素点、所述目标像素区域的顶点像素点以及所述目标像素区域的边缘像素点,其中,所述中心像素点和所述顶点像素点映射前后的坐标不变,所述边缘像素点映射后的坐标位于所述参考像素区域的边缘。
  21. 根据权利要求3所述的方法,其特征在于,对所述目标像素区域的像素点的坐标进行映射包括:
    将所述目标像素区域划分为多个像素网格,对所述每个像素网格的顶点像素点的坐标进行映射,得到顶点像素点映射后的坐标;
    对所述顶点像素点映射后的坐标进行插值处理,确定所述网格中非顶点像素点映射后的坐标。
  22. 根据权利要求3所述的方法,其特征在于,对所述目标像素区域的像素点的坐标进行映射包括:
    将所述目标像素区域划分为多个像素网格,基于不同的映射关系对不同像素网格中的像素点进行映射。
  23. 一种图像处理装置,其特征在于,所述图像处理装置包括处理器、存储器、存储在所述存储器上可供所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
    获取原始图像,所述原始图像包括目标像素区域和所述目标像素区域之外的参考像素区域;
    输出与所述目标像素区域大小对应的目标图像,其中,所述目标图像包括所述参考像素区域的至少部分像素点,使得所述目标图像展示的视场范围大于所述原始图像中所述目标像素区域展示的视场范围。
  24. 根据权利要求23所述的图像处理装置,其特征在于,所述处理器用于输出与所述目标像素区域大小对应的目标图像时,具体用于:在获取原始图像后,对所述原始图像进行映射,将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内,将映射后的目标像素区域作为目标图像。
  25. 根据权利要求24所述的图像处理装置,其特征在于,所述对原始图像进行映射包括对所述目标像素区域的像素点的坐标进行映射,以基于映射后的坐标将所述参考像素区域的至少部分像素点的像素值赋值给所述目标像素区域内的至少部分像素点,从而将所述参考像素区域的至少部分像素点映射在所述目标像素区域的范围内。
  26. 根据权利要求23至25任一项所述的图像处理装置,其特征在于,所述目标像素区域和所述参考像素区域均位于所述原始图像的有效区域内,所述有效区域基于采集所述原始图像的图像采集装置的设计参数确定。
  27. 根据权利要求25所述的图像处理装置,其特征在于,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
    针对所述目标像素区域中不同的像素点,基于不同的映射关系对所述像素点的坐标进行映射。
  28. 根据权利要求25所述的图像处理装置,其特征在于,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
    针对所述目标像素区域的同一像素点，基于不同的映射关系对所述像素点的不同方向上的坐标进行映射。
  29. 根据权利要求28所述的图像处理装置,其特征在于,所述基于不同的映射关系对所述像素点的不同方向上的坐标进行映射包括:
    对所述像素点的水平方向上的坐标和垂直方向上的坐标进行映射。
  30. 根据权利要求25所述的图像处理装置,其特征在于,对所述目标像素区域的像素点的坐标进行映射包括在不同坐标系下的映射。
  31. 根据权利要求30所述的图像处理装置,其特征在于,所述坐标系包括极坐标系、笛卡尔坐标系中的至少一种。
  32. 根据权利要求30所述的图像处理装置,其特征在于,所述目标像素区域中的像素点为极坐标系下的坐标,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
    对所述目标像素区域中的像素点的径向坐标和/或角坐标进行映射。
  33. 根据权利要求32所述的图像处理装置,其特征在于,所述处理器用于对所述目标像素区域中的像素点的径向坐标和/或角坐标进行映射时,具体用于:
    基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射。
  34. 根据权利要求30所述的图像处理装置,其特征在于,所述处理器基于不同的映射关系分别对所述径向坐标和所述角坐标进行映射时,具体用于:
    基于不同的映射关系对所述目标像素区域中角坐标不同而径向坐标相同的像素点进行映射,或,基于不同的映射关系对所述目标像素区域中角坐标相同而径向坐标不同的像素点进行映射。
  35. 根据权利要求30所述的图像处理装置,其特征在于,所述目标像素区域中的像素点的为笛卡尔坐标系下的坐标,所述处理器用于对所述目标像素区域的像素点的坐标进行映射时,具体用于:
    对所述目标像素区域中的像素点的横坐标和/或纵坐标进行映射。
  36. 根据权利要求35所述的图像处理装置，其特征在于，所述处理器用于对所述目标像素区域中的像素点的横坐标和/或纵坐标进行映射时，具体包括：
    基于不同的映射关系分别对所述横坐标和所述纵坐标进行映射。
  37. The image processing apparatus according to claim 25, wherein the reference pixel area surrounds the target pixel area.
  38. The image processing apparatus according to claim 37, wherein the target pixel area is the largest inscribed rectangle of the effective area in the original image, and the reference pixel area is the part of the effective area other than the target pixel area.
  39. The image processing apparatus according to claim 37, wherein the mapped coordinates of some pixel points on the edge of the target pixel area lie on the edge of the reference pixel area.
  40. The image processing apparatus according to claim 37, wherein the mapped coordinates of some pixel points on the edge of the target pixel area lie in the area between the edge of the target pixel area and the edge of the reference pixel area.
  41. The image processing apparatus according to claim 37, wherein the coordinates of the pixel points are coordinates in a polar coordinate system, and, when mapping the coordinates of the pixel points of the target pixel area, the processor is specifically configured to:
    map the radial coordinates of the pixel points of the target pixel area based on a constructed curve function while keeping the angular coordinates of the pixel points unchanged, the curve function being used to keep the radial coordinates of the vertex pixel points of the largest inscribed rectangle unchanged before and after mapping while increasing the mapped radial coordinate values of the non-vertex pixel points.
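Claim 41's radial mapping — angular coordinate fixed, the rectangle's four vertices fixed, all other radii increased — can be sketched with a linear-in-r stretch that pushes the inscribed rectangle's boundary out to its circumscribed circle. This linear curve is a minimal stand-in for the patent's constructed curve function; the function name and the circle as reference boundary are illustrative assumptions.

```python
import math

def map_radial(r, theta, half_w, half_h):
    """Map a polar pixel (r, theta) of the inscribed rectangle outward.

    half_w, half_h: half-width and half-height of the largest inscribed
    rectangle, centered at the pole. The angular coordinate is preserved;
    the radial coordinate is scaled so the rectangle edge along each
    direction reaches the circumscribed circle, while the four vertices
    (where the rectangle already touches that circle) stay fixed.
    """
    R = math.hypot(half_w, half_h)                 # vertex radius
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    # distance from the center to the rectangle edge along direction theta
    r_rect = min(half_w / c if c > 1e-12 else math.inf,
                 half_h / s if s > 1e-12 else math.inf)
    scale = R / r_rect                             # 1 at vertices, > 1 elsewhere
    return scale * r, theta
```

At a vertex direction the scale is exactly 1, so the vertex radius is unchanged; along any other direction the scale exceeds 1, so mapped radii grow, matching the behavior the claim requires of the curve function.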
  42. The image processing apparatus according to claim 38, wherein the curve function is determined based on the coordinates, before and after mapping, of target pixel points in the target pixel area, the target pixel points comprising one or more of the following: the center pixel point of the target pixel area, the vertex pixel points of the target pixel area, and the edge pixel points of the target pixel area, wherein the coordinates of the center pixel point and of the vertex pixel points are unchanged before and after mapping, and the mapped coordinates of the edge pixel points lie on the edge of the reference pixel area.
  43. The image processing apparatus according to claim 25, wherein, when mapping the coordinates of the pixel points of the target pixel area, the processor is specifically configured to:
    divide the target pixel area into a plurality of pixel grids, and map the coordinates of the vertex pixel points of each pixel grid to obtain the mapped coordinates of the vertex pixel points; and
    interpolate the mapped coordinates of the vertex pixel points to determine the mapped coordinates of the non-vertex pixel points in the grid.
  44. The image processing apparatus according to claim 25, wherein, when mapping the coordinates of the pixel points of the target pixel area, the processor is specifically configured to:
    divide the target pixel area into a plurality of pixel grids, and map the pixel points in different pixel grids based on different mapping relationships.
  45. A photographing device, comprising the image processing apparatus according to any one of claims 23 to 44.
  46. A movable platform, comprising the photographing device according to claim 45.
  47. A computer-readable storage medium having computer instructions stored thereon, wherein the image processing method according to any one of claims 1 to 22 is implemented when the computer instructions are executed.
PCT/CN2021/126803 2021-10-27 2021-10-27 Image processing method and apparatus, photographing device, and movable platform WO2023070387A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/126803 WO2023070387A1 (zh) 2021-10-27 2021-10-27 Image processing method and apparatus, photographing device, and movable platform


Publications (1)

Publication Number Publication Date
WO2023070387A1 2023-05-04

Family

ID=86160298

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126803 WO2023070387A1 (zh) 2021-10-27 2021-10-27 Image processing method and apparatus, photographing device, and movable platform

Country Status (1)

Country Link
WO (1) WO2023070387A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303231A (zh) * 2016-08-05 2017-01-04 深圳市金立通信设备有限公司 Image processing method and terminal
CN107212884A (zh) * 2017-05-11 2017-09-29 天津大学 Supine-position compressed breast imaging method
CN107454468A (zh) * 2016-05-23 2017-12-08 汤姆逊许可公司 Method, apparatus and stream for formatting immersive video
US20180196336A1 (en) * 2013-12-09 2018-07-12 Geo Semiconductor Inc. System and method for automated test-pattern-free projection calibration
CN109242943A (zh) * 2018-08-21 2019-01-18 腾讯科技(深圳)有限公司 Image rendering method and apparatus, image processing device, and storage medium
CN111353946A (zh) * 2018-12-21 2020-06-30 腾讯科技(深圳)有限公司 Image inpainting method, apparatus, device, and storage medium
CN111462205A (zh) * 2020-03-30 2020-07-28 广州虎牙科技有限公司 Image data deformation and live-streaming method, apparatus, electronic device, and storage medium


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 21961756
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE