CN114827385A - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN114827385A
CN114827385A (application number CN202110064752.4A)
Authority
CN
China
Prior art keywords
pixel position
pixel
mapping table
pixels
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110064752.4A
Other languages
Chinese (zh)
Inventor
刘永光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Orion Star Technology Co Ltd
Original Assignee
Beijing Orion Star Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Orion Star Technology Co Ltd filed Critical Beijing Orion Star Technology Co Ltd
Priority to CN202110064752.4A
Publication of CN114827385A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation


Abstract

The application provides an image processing method and apparatus and an electronic device. The method comprises: obtaining a mapping table; querying, according to the mapping table, the second pixel position in the original image corresponding to the first pixel position of any group of first pixels in the target image; determining, in the original image, a group of luminance components matching the second pixel position and a chrominance component matching the second pixel position; determining the luminance component of each first pixel in the group of first pixels in the target image according to the group of luminance components; and determining a shared chrominance component for the group of first pixels according to the chrominance component. The original image is thus remapped based on the mapping table to determine the luminance component and chrominance component of each pixel in the corrected target image, which improves image correction efficiency and meets the requirement of real-time correction.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
In a robot system, a camera with a large field angle is generally configured, and display is performed based on the data it collects. The radial distortion and tangential distortion introduced by such a lens are very pronounced, so the image must undergo distortion correction before the corrected image is displayed to the user; how to correct images efficiently is a technical problem to be solved urgently.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present application is to provide an image processing method, so as to implement mapping on an original image based on a mapping table to determine a luminance component and a chrominance component of each pixel in a corrected target image, thereby improving the efficiency of image correction and meeting the requirement of real-time correction.
A second object of the present application is to provide an image processing apparatus.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a non-transitory computer-readable storage medium.
A fifth object of the present application is to propose a computer program product.
To achieve the above object, an embodiment of a first aspect of the present application provides an image processing method, including:
acquiring a mapping table, wherein the mapping table is used for indicating a mapping relation between a first pixel position in a corrected target image and a second pixel position in an original image before correction;
querying, according to the mapping table, the second pixel position in the original image corresponding to the first pixel position of any group of first pixels in the target image;
determining, in the original image, a set of luminance components that match the second pixel location and determining chrominance components that match the second pixel location;
determining a luminance component of each first pixel in the set of first pixels in the target image according to the set of luminance components, and determining a shared chrominance component of the set of first pixels according to the chrominance component.
To achieve the above object, a second aspect of the present application provides an image processing apparatus, comprising:
an acquisition module, configured to acquire a mapping table, wherein the mapping table is used for indicating the mapping relation between a first pixel position in a corrected target image and a second pixel position in an original image before correction;
the query module is used for querying a corresponding second pixel position of a first pixel position of any group of first pixels in the target image in the original image according to the mapping table;
a first determining module for determining a set of luminance components matching the second pixel position and determining chrominance components matching the second pixel position in the original image;
a second determining module, configured to determine, according to the set of luminance components, a luminance component of each first pixel in the set of first pixels in the target image, and determine, according to the chrominance component, a shared chrominance component of the set of first pixels.
In order to achieve the above object, an embodiment of a third aspect of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image processing method according to the first aspect is implemented.
In order to achieve the above object, a fourth aspect of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is used to implement the image processing method according to the first aspect when executed by a processor.
In order to achieve the above object, an embodiment of a fifth aspect of the present application provides a computer program product, where instructions of the computer program product, when executed by a processor, implement the image processing method according to the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of obtaining a mapping table, inquiring a second pixel position corresponding to a first pixel position of any group of first pixels in a target image in an original image according to the mapping table, determining a group of brightness components matched with the second pixel position in the original image, determining a chroma component matched with the second pixel position, determining the brightness component of each first pixel in a group of first pixels in the target image according to the group of brightness components, determining a group of shared chroma components of the first pixels according to the chroma component, and remapping the original image based on the mapping table to determine the brightness component and the chroma component of each pixel in the corrected target image, so that the image correction efficiency is improved, and the real-time correction requirement is met.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of pixel sampling and storage according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present application;
FIG. 5 is a diagram illustrating mapping table based image processing according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application; and
FIG. 7 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image processing method, an apparatus, and an electronic device according to an embodiment of the present application are described below with reference to the drawings.
Interpretation of terms:
1) Pixel coordinate system: the origin is at the upper left corner of the image; the X axis coincides with the image width direction and points right; the Y axis coincides with the image height direction and points down; the unit is the pixel.
2) Camera coordinate system: the origin is at the center of the sensor (the lens optical center); the X axis points horizontally right and the Y axis points vertically down; the unit is the millimeter.
3) Original image: the image captured directly from the camera is an image with significant distortion.
4) The corrected target image: a distortion corrected image.
5) Internal reference matrix and internal reference inverse matrix: 3 × 3 matrices obtained by calibration, denoted I and I⁻¹ respectively.
6) Radial distortion: distortion distributed along the lens radius, caused by light rays bending more strongly far from the lens center than near it; the radial distortion coefficients k1 and k2 can be obtained by calibration.
7) Tangential distortion: distortion caused by mounting deviations of the lens within the lens module; the tangential distortion coefficients p1 and p2 can be obtained by calibration.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the method comprises the steps of:
step 101, obtaining a mapping table.
The mapping table is used for indicating the mapping relation between a first pixel position in the corrected target image and a second pixel position in the original image before correction.
The corrected target image has the same size as the original image and contains only distortion-free pixel positions; for example, if the original image occupies 1920 × 1080 bytes, the corrected target image also occupies 1920 × 1080 bytes. However, the corrected target image does not contain the luminance component information and chrominance component information of each pixel point; it can be understood as an image containing only the position information of each pixel point, where each pixel point is a position-corrected pixel point. In this embodiment, each time one frame of original image is obtained, the luminance component and chrominance component of each first pixel in the corrected target image are determined based on the corrected target image, the original image, and the mapping table, so that a target image containing the luminance and chrominance information of each position-corrected pixel point is obtained from that frame, thereby implementing real-time correction of the image.
It should be noted that, for brevity, "the target image" in the subsequent embodiments refers to the corrected target image.
In this embodiment, for convenience of distinction, a pixel in the target image is referred to as a first pixel, and the first pixel position is determined according to the position of the first pixel. In one scenario, when there is one first pixel, the first pixel position is the pixel position of that first pixel; in another scenario, when there are multiple first pixels, that is, a group of first pixels, the first pixel position is determined according to the pixel positions of the group, for example as the center position of the group. A pixel in the original image is referred to as a second pixel, and the second pixel position is determined according to the position of the second pixel, in the same manner in principle as the first pixel position, which is not repeated here.
In an implementation manner of this embodiment, because there is a deviation in the installation of the lens module of the image capturing device of the electronic device, the acquired original image exhibits tangential distortion; meanwhile, light rays are bent along the radial direction of the lens, producing radial distortion. Therefore, the pixel positions of the original image acquired by the image capturing device must be distortion-corrected to obtain the target image.
In this embodiment, the mapping table is used to indicate the mapping relationship between a first pixel position in the corrected target image and a second pixel position in the original image before correction. The mapping table may be determined from the positional relationship between pixel points in the original image and the corrected target image and stored when the electronic device is initialized, so that repeated calculation of the mapping table is avoided and the computational load on the electronic device is reduced.
Step 102, querying, according to the mapping table, the second pixel position in the original image corresponding to the first pixel position of any group of first pixels in the target image.
In this embodiment, any one of the sets of first pixels includes one or more first pixels, for example, a set of first pixels includes 4 first pixels.
In an implementation manner of this embodiment, an element in the mapping table corresponds to a first pixel position in the target image, and a value of the element is used to indicate a second pixel position corresponding to the first pixel position. And taking the central position of any group of first pixels in the target image as a first pixel position, and searching a second pixel position in the original image corresponding to the first pixel position in the mapping table.
Step 103, determining, in the original image, a set of luminance components matching the second pixel position, and determining chrominance components matching the second pixel position.
In the present embodiment, according to the second pixel position in the original image determined as corresponding to the first pixel position, a set of luminance components matching the second pixel position is determined in the original image, and a chrominance component matching the second pixel position is determined. The number of matched luminance components corresponds to the number of first pixels contained in the group, because each first pixel in a group has its own luminance component, while the group as a whole shares one U component and one V component.
Step 104, determining a luminance component of each first pixel in a group of first pixels in the target image according to the group of luminance components, and determining a shared chrominance component of the group of first pixels according to the chrominance component.
In this embodiment, the luminance component of each first pixel in the group of first pixels in the target image is replaced according to the set of luminance components. As for the chrominance components, if there are several, any one of them can be used as the shared chrominance component of the group of first pixels, or the average value of the chrominance components can be used, which improves the accuracy of the shared chrominance component.
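The averaging option described above can be sketched as follows (the function name and the byte-rounding behavior are illustrative assumptions, not part of the application):

```python
def shared_chroma(components):
    """Average several matched chrominance samples into the single
    component shared by a group of first pixels (illustrative sketch).
    Rounds to the nearest integer so the result fits a chroma byte."""
    return round(sum(components) / len(components))
```

With four matched U samples, e.g. `shared_chroma([100, 102, 104, 106])`, the group's shared U component would be their average.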
In the image processing method of this embodiment, a mapping table is obtained; the second pixel position in the original image corresponding to the first pixel position of any group of first pixels in the target image is queried according to the mapping table; a group of luminance components and a chrominance component matching the second pixel position are determined in the original image; the luminance component of each first pixel in the group is determined according to the group of luminance components; and the shared chrominance component of the group is determined according to the chrominance component. The original image is thus remapped based on the mapping table to determine the luminance and chrominance components of each pixel in the corrected target image, improving image correction efficiency and meeting the real-time correction requirement.
Based on the previous embodiment, this embodiment provides an implementation manner, and specifically illustrates how to generate the mapping table. Fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present application.
As shown in fig. 2, step 101 may include the steps of:
step 201, transforming the first pixel position to a camera coordinate system by using an internal reference inverse matrix to obtain a third pixel position in the camera coordinate system.
In this embodiment, a specific scenario is provided: the NV21 standard image format of the Android platform. As shown in fig. 3, the NV21 format samples the image according to the YUV420 rule: 1 luminance component (Y) is sampled for every pixel, and 2 chrominance components (U and V) are shared by every 4 adjacent pixels. Under this rule, for an image of the same size, the YUV420 format occupies only half the storage space of the RGB format; for example, for a 1920 × 1080 image stored in memory, the Y component occupies 1920 × 1080 bytes, and the V and U components each occupy 960 × 540 bytes, so the storage requirement is reduced. The NV21 format is arranged in memory as follows: the Y components are stored first, followed by alternately stored V and U components. As shown in fig. 3, Y1-Y16 are the stored Y components, and V00/U00 and V01/U01 are the alternately stored V and U components; the luminance component corresponding to pixel Y00 is Y1, and the luminance component corresponding to pixel Y01 is Y2; pixels Y00, Y01, Y10, and Y11 share the color components V00 and U00. The remaining correspondences shown in fig. 3 are not enumerated in this embodiment.
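A minimal sketch of the NV21 index arithmetic just described, assuming the standard layout of a full-resolution Y plane followed by an interleaved V/U plane at quarter resolution (all names are illustrative):

```python
def nv21_offsets(width, height, x, y):
    """Byte offsets of the Y, V, U samples covering pixel (x, y) in an
    NV21 buffer: full Y plane first, then interleaved V/U bytes, where
    each 2x2 block of pixels shares one V byte and one U byte."""
    y_off = y * width + x                # one Y byte per pixel
    uv_row = y // 2                      # chroma is subsampled 2x2
    uv_col = x // 2
    v_off = width * height + uv_row * width + uv_col * 2  # V comes first
    u_off = v_off + 1                                     # then U
    return y_off, v_off, u_off

# A 1920x1080 NV21 frame: the Y plane takes 1920*1080 bytes and the
# interleaved V/U plane half of that, i.e. 1.5 bytes per pixel overall.
total_bytes = 1920 * 1080 * 3 // 2
```

Pixels (0, 0) and (1, 1) of the same 2×2 block resolve to the same V/U offsets, matching the shared-chrominance rule above.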
For the corrected target image, a first pixel position in the target image is acquired and transformed into the camera coordinate system using the internal reference inverse matrix, obtaining the third pixel position in the camera coordinate system.
Wherein the internal reference inverse matrix, with focal lengths fx and fy and principal point (cx, cy) obtained by calibration, is:

I⁻¹ = | 1/fx    0    -cx/fx |
      |   0   1/fy   -cy/fy |
      |   0     0       1   |

For example, the center point [Ud, Vd, 1]ᵀ of 4 adjacent pixels in the target image is transformed by the internal reference inverse matrix into the camera coordinate system to obtain the third pixel position [Xd, Yd, 1]ᵀ. This down-samples the target image, reducing the number of third pixels and hence the size of the mapping table.
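The transformation into the camera coordinate system can be sketched as below, assuming the internal reference matrix has the standard pinhole form with focal lengths fx, fy and principal point (cx, cy); the numeric values in the usage note are hypothetical:

```python
def pixel_to_camera(u, v, fx, fy, cx, cy):
    """Apply the internal reference inverse matrix I^-1 to the
    homogeneous pixel position [u, v, 1]^T, yielding the third pixel
    position [Xd, Yd, 1]^T in the camera coordinate system (sketch)."""
    return (u - cx) / fx, (v - cy) / fy
```

For instance, with fx = fy = 1000 and the principal point at (960, 540), the image center maps to (0, 0).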
Step 202, performing distortion processing on the third pixel position to obtain a fourth pixel position in the camera coordinate system.
In this embodiment, in the camera coordinate system, distortion processing is performed on each third pixel position in sequence by using a radial distortion formula and a tangential distortion formula, where the distortion processing is to apply distortion to the third pixel position to obtain a fourth pixel position of the original image with distortion in the camera coordinate system.
Wherein the radial distortion formula is as follows:

Xd' = Xd * [1 + K1*r² + K2*r⁴];
Yd' = Yd * [1 + K1*r² + K2*r⁴];
r² = Xd² + Yd²;

where Xd and Yd are respectively the abscissa and ordinate of the third pixel position; Xd' and Yd' are respectively the abscissa and ordinate of the third pixel position after the radial distortion processing is applied; K1 and K2 are the radial distortion coefficients determined by calibration.
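The radial distortion step can be sketched directly from these formulas (names illustrative):

```python
def apply_radial(xd, yd, k1, k2):
    """Apply radial distortion to a third pixel position (Xd, Yd) in
    the camera coordinate system: scale by 1 + K1*r^2 + K2*r^4."""
    r2 = xd * xd + yd * yd               # r^2 = Xd^2 + Yd^2
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xd * scale, yd * scale
```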
The tangential distortion formula is as follows:

Xo = Xd' + 2*P1*Xd*Yd + P2*(r² + 2*Xd²);
Yo = Yd' + P1*(r² + 2*Yd²) + 2*P2*Xd*Yd;

where P1 and P2 are the tangential distortion coefficients determined by calibration, and Xo and Yo are the abscissa and ordinate of the third pixel position after the tangential distortion processing is applied, that is, the fourth pixel position in the camera coordinate system; the fourth pixel position indicates the position before correction corresponding to the third pixel position.
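Likewise, the tangential step can be sketched as follows; as in the formulas above, the offsets computed from the third pixel position (Xd, Yd) are added to the radially distorted coordinates (Xd', Yd'); names are illustrative:

```python
def apply_tangential(xr, yr, xd, yd, p1, p2):
    """Add the tangential distortion offsets, computed from the third
    pixel position (Xd, Yd), to the radially distorted (Xd', Yd')."""
    r2 = xd * xd + yd * yd
    xo = xr + 2.0 * p1 * xd * yd + p2 * (r2 + 2.0 * xd * xd)
    yo = yr + p1 * (r2 + 2.0 * yd * yd) + 2.0 * p2 * xd * yd
    return xo, yo
```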
Step 203, transforming the fourth pixel position in the camera coordinate system into the pixel coordinate system by using the internal reference matrix to obtain the second pixel position.
In this embodiment, for the obtained fourth pixel position in the camera coordinate system, an internal reference matrix is used to transform the fourth pixel position in the camera coordinate system into the pixel coordinate system, so as to obtain the second pixel position. Wherein the second pixel position indicates a pre-correction position in the original image corresponding to the first pixel position in the pixel coordinate system.
Wherein the internal reference matrix is:

I = | fx   0   cx |
    |  0  fy   cy |
    |  0   0    1 |
and 204, rounding the second pixel position to be used as a value of an element corresponding to the first pixel position in the mapping table.
Rounding the second pixel position, as an implementation, discards the fractional part and retains the integer part of the second pixel position. In this embodiment, the second pixel position is rounded and used as the value of the element corresponding to the first pixel position in the mapping table, so that generation of the mapping table is realized, and the mapping relationship between the first pixel position in the corrected target image and the second pixel position in the original image before correction is indicated in the mapping table.
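Steps 201 to 204 together can be sketched as one table-building routine; the pinhole intrinsic form and all names are illustrative assumptions, not the application's own code:

```python
def build_mapping_tables(width, height, fx, fy, cx, cy, k1, k2, p1, p2):
    """Build the abscissa (MapX) and ordinate (MapY) mapping tables:
    one entry per 2x2 group of target-image pixels, so each table is a
    quarter of the image size; values are truncated to integers."""
    map_x, map_y = [], []
    for gy in range(0, height, 2):
        for gx in range(0, width, 2):
            u, v = gx + 1.0, gy + 1.0        # center of the 2x2 group
            # pixel -> camera coordinates via the inverse intrinsics
            xd, yd = (u - cx) / fx, (v - cy) / fy
            r2 = xd * xd + yd * yd
            # radial distortion
            xr = xd * (1.0 + k1 * r2 + k2 * r2 * r2)
            yr = yd * (1.0 + k1 * r2 + k2 * r2 * r2)
            # tangential distortion added to the radial result
            xo = xr + 2.0 * p1 * xd * yd + p2 * (r2 + 2.0 * xd * xd)
            yo = yr + p1 * (r2 + 2.0 * yd * yd) + 2.0 * p2 * xd * yd
            # camera -> pixel coordinates via the intrinsics, then truncate
            map_x.append(int(fx * xo + cx))
            map_y.append(int(fy * yo + cy))
    return map_x, map_y
```

With all four distortion coefficients set to zero, the tables reduce to the identity on group centers.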
It should be noted that the Y component, the U component, and the V component share the mapping table, and the size of the mapping table is one fourth of the image size, that is, the size of the mapping table is reduced, which reduces the time complexity and the space complexity, and realizes the real-time distortion correction on the android platform.
In an implementation manner of the embodiment of the present application, in order to improve the efficiency of reading the mapping table, the mapping table is divided into an abscissa mapping table and an ordinate mapping table, which reduces the storage requirement of each individual table and improves reading efficiency.
Table 1 is a schematic diagram of the mapping tables, using a group of 4 first pixels as an example; in Table 1, MapX on the left is the abscissa mapping table and MapY on the right is the ordinate mapping table.
TABLE 1
In the image processing method of this embodiment, starting from the first pixel position in the corrected target image, radial and tangential distortion processing is performed in the camera coordinate system after coordinate-system conversion, obtaining the fourth pixel position in the camera coordinate system corresponding to the third pixel position before correction; that is, the fourth pixel position is the distorted pixel position. The fourth pixel position is then converted into the image coordinate system and rounded to obtain the second pixel position, which is used as the value of the element for the first pixel position in the mapping table. This establishes the mapping table and thereby the mapping relationship between the pixel positions of the original image and of the corrected target image.
It is explained in the above embodiments that the mapping table may include a mapping table of abscissa and a mapping table of ordinate. Based on the foregoing embodiments, this embodiment provides an implementation manner, which specifically describes a process of determining a luminance component and a chrominance component of a corrected target image corresponding to an original image based on a mapping table of an abscissa and a mapping table of an ordinate. Fig. 4 is a schematic flowchart of another image processing method according to an embodiment of the present application, and as shown in fig. 4, the method includes the following steps:
step 401, obtaining a mapping table.
The mapping table comprises a mapping table of an abscissa and a mapping table of an ordinate.
Fig. 5 is a schematic diagram of image processing according to an embodiment of the present application. As shown in fig. 5, the mapping table 1 is a mapping table of abscissa, and the mapping table 2 is a mapping table of ordinate. The mapping table in fig. 5 only shows a part of the mapping table, which is only an example, and not a whole mapping table, and does not limit the present embodiment.
For the obtaining manner of the mapping table, reference may be made to the description in any of the above embodiments, and the principle is the same, which is not described herein again.
Step 402, the center position of a group of first pixels is taken as a first pixel position.
In this embodiment, a group containing 4 first pixels is used as an example; the principle is the same for other group sizes, which are not enumerated here.
As shown in FIG. 5, in the target image the first pixel position is determined to be [Ud, Vd, 1]ᵀ.
Step 403, querying the abscissa of the second pixel position corresponding to the abscissa of the first pixel position in the abscissa mapping table.
Step 404, querying the ordinate of the second pixel position corresponding to the ordinate of the first pixel position in the ordinate mapping table.
In this embodiment, the abscissa and ordinate of the second pixel position corresponding to the abscissa and ordinate of the first pixel position are queried in the abscissa mapping table and the ordinate mapping table respectively, and the corresponding second pixel position in the original image is obtained, denoted [Uo, Vo, 1]ᵀ.
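Since each table holds one entry per group of 4 first pixels, the lookup for a target pixel (u, v) can be sketched as (illustrative names):

```python
def lookup(map_x, map_y, u, v, width):
    """Return the second pixel position for the group containing target
    pixel (u, v); the tables hold one entry per 2x2 group of pixels."""
    i = (v // 2) * (width // 2) + (u // 2)
    return map_x[i], map_y[i]
```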
Step 405, querying a set of luminance components matching the second pixel position in the luminance component storage area of the original image, and querying at least one chrominance component matching the second pixel position in the chrominance component storage area of the original image.
In this embodiment, according to the determined second pixel position, a group of luminance components matching it is queried in the original image; the group of luminance components belongs to a group of second pixels in the original image, each of whose area ranges contains the second pixel position or whose distance to it is smaller than a threshold. As shown in FIG. 5, for the second pixel position [Uo, Vo, 1]ᵀ, 4 matching luminance components are queried: Y12, Y13, Y22, and Y23, belonging to a group of second pixels whose area range contains the second pixel position corresponding to point A2, or whose distance to the second pixel position A2 is smaller than the threshold.
According to the determined second pixel position, at least one chrominance component matching it is queried in the original image; each chrominance component belongs to a group of second pixels whose center position in the original image is within a threshold distance of the second pixel position. Because the second pixel position determined from the mapping table carries some deviation, at least one matching chrominance component is determined in order to improve the accuracy of the chrominance component of the first pixel position corresponding to the second pixel position. As one implementation, the chrominance component A corresponding to the second pixel position is queried first, and then every chrominance component within a preset distance of A is taken as matching the second pixel position. For example, as shown in fig. 5, the chrominance components matching the second pixel position are 4 U components and 4 V components: V00, V01, V10, and V11, and U00, U01, U10, and U11.
It should be noted that the positions of the chrominance components matching the second pixel position in Fig. 5 are merely illustrative and do not limit this embodiment.
Step 406, determining a luminance component of each first pixel in a set of first pixels in the target image according to the set of luminance components, and determining a shared chrominance component of the set of first pixels according to the chrominance component.
In this embodiment, each first pixel has its own luminance component, while a group of first pixels shares one chrominance component. As one possible implementation, when there are at least two matching chrominance components, their average is determined as the shared chrominance component of the group of first pixels.
As shown in Fig. 5, in the target image, the value of the luminance component of each first pixel in the group is the corresponding determined value, i.e., Y00 is replaced by Y12, Y01 by Y13, Y10 by Y22, and Y11 by Y23. The average of the chrominance components U00, U01, U10 and U11 is taken as the U component of the group of first pixels, and the average of V00, V01, V10 and V11 as its V component. This determines the luminance and chrominance components of every first pixel in the position-corrected target image, so that each acquired frame image is corrected and the displayed image is corrected in real time, improving the image effect.
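The Fig. 5 walkthrough above can be sketched for planar YUV420 data, where every 2×2 group of luma pixels shares one U and one V sample. This is an illustrative sketch only: the function name, the planar array layout, and the choice of the mapped second pixel position as the top-left corner of the 2×2 luma block are our assumptions, not the patent's implementation.

```python
import numpy as np

def remap_yuv420_block(y_src, u_src, v_src, src_x, src_y):
    """For one 2x2 group of first pixels, copy the 2x2 luma block found at
    the mapped second pixel position, and average the chroma samples in the
    matching 2x2 chroma neighborhood (chroma planes are subsampled by 2)."""
    # luma: one component per first pixel, copied from the source block
    y_block = y_src[src_y:src_y + 2, src_x:src_x + 2].copy()
    # chroma: the whole group of first pixels shares one averaged U and V
    cx, cy = src_x // 2, src_y // 2
    u_shared = float(u_src[cy:cy + 2, cx:cx + 2].mean())
    v_shared = float(v_src[cy:cy + 2, cx:cx + 2].mean())
    return y_block, u_shared, v_shared
```

In Fig. 5's terms, the four copied luma values play the role of Y12, Y13, Y22 and Y23, and the two averages correspond to the shared U and V components of the group.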
In this embodiment, according to the generated mapping table, the second pixel position in the original image is determined for each first pixel position in the corrected target image; the luminance component of each first pixel is determined from a group of luminance components matching the second pixel position in the original image, and the shared chrominance component of a group of first pixels is determined by averaging at least one chrominance component matching that position. Thus, each time a frame image is obtained, the luminance and chrominance components of every first pixel in it can be determined, achieving real-time updating and improving the image effect.
In order to implement the above embodiments, the present application also provides an image processing apparatus.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 6, the apparatus includes: an acquisition module 61, a query module 62, a first determination module 63 and a second determination module 64.
An obtaining module 61, configured to obtain a mapping table, where the mapping table is used to indicate a mapping relationship between a first pixel position in the corrected target image and a second pixel position in the original image before correction.
And the query module 62 is configured to query, according to the mapping table, a second pixel position corresponding to the first pixel position of any group of first pixels in the target image in the original image.
A first determining module 63 is configured to determine a set of luminance components matching the second pixel position and determine chrominance components matching the second pixel position in the original image.
A second determining module 64, configured to determine a luminance component of each first pixel in a group of first pixels in the target image according to a group of luminance components, and determine a shared chrominance component of the group of first pixels according to the chrominance component.
Further, in a possible implementation manner of the embodiment of the present application, there are at least two chrominance components matched with the second pixel position;
the second determining module 64 is specifically configured to:
determining a mean of at least two of the chrominance components as a shared chrominance component of the set of first pixels.
In a possible implementation manner of the embodiment of the present application, the mapping table includes a mapping table of an abscissa and a mapping table of an ordinate;
the query module 62 is specifically configured to: taking a center position of the set of first pixels as the first pixel position; inquiring the abscissa of a second pixel position corresponding to the abscissa of the first pixel position in the mapping table of the abscissa; and inquiring the vertical coordinate of the second pixel position corresponding to the vertical coordinate of the first pixel position in the mapping table of the vertical coordinate.
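The two-table lookup performed by the query module can be sketched as follows. The function name is hypothetical, and it assumes map_x and map_y are 2-D arrays indexed by first pixel position, with the group's center used as that position:

```python
import numpy as np

def query_second_position(map_x, map_y, rows, cols):
    """Look up the second pixel position for a group of first pixels.

    map_x / map_y: mapping tables of source abscissas / ordinates,
    indexed by (row, column) of a first pixel position.
    rows / cols: row and column indices covered by the group.
    """
    # take the center position of the group as the first pixel position
    cy = (min(rows) + max(rows)) // 2
    cx = (min(cols) + max(cols)) // 2
    # abscissa from the abscissa table, ordinate from the ordinate table
    return int(map_x[cy, cx]), int(map_y[cy, cx])
```

Splitting the table into separate abscissa and ordinate arrays keeps each lookup a single scalar read, which suits per-frame real-time correction.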
In a possible implementation manner of the embodiment of the present application, the first determining module 63 is specifically configured to:
querying a set of luminance components matching the second pixel position in a luminance component storage area of the original image; the set of luminance components belongs to a group of second pixels in the original image, and for each second pixel, either its region in the original image contains the second pixel position or its distance from that position is less than a threshold.
In a possible implementation manner of the embodiment of the present application, the first determining module 63 is specifically configured to:
querying at least one chrominance component matching the second pixel position in a chrominance component storage area of the original image; wherein each chrominance component belongs to a group of second pixels in the original image, and the distance between the center position of that group in the original image and the second pixel position is smaller than a threshold.
In a possible implementation manner of the embodiment of the present application, an element in the mapping table corresponds to a first pixel position in the target image, and a value of the element is used to indicate a second pixel position corresponding to the first pixel position.
In a possible implementation manner of the embodiment of the present application, the obtaining module 61 is specifically configured to:
transforming the first pixel position to a camera coordinate system by adopting an internal reference inverse matrix to obtain a third pixel position under the camera coordinate system; carrying out distortion processing on the third pixel position to obtain a fourth pixel position under a camera coordinate system; transforming a fourth pixel position under the camera coordinate system to a pixel coordinate system by adopting an internal reference matrix to obtain a second pixel position; and taking the second pixel position after being rounded as the value of the element corresponding to the first pixel position in the mapping table.
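The four steps above can be sketched with a pinhole camera model and a simple two-coefficient radial distortion. The intrinsic matrix K and coefficients k1, k2 are assumed inputs, and this particular distortion model is a common choice rather than one specified by the patent:

```python
import numpy as np

def build_mapping_tables(width, height, K, k1, k2):
    """Fill the abscissa/ordinate mapping tables: for every first pixel
    position (u, v) in the corrected image, compute the rounded second
    pixel position in the original image."""
    K_inv = np.linalg.inv(K)  # internal reference (intrinsic) inverse matrix
    map_x = np.empty((height, width), dtype=np.int32)
    map_y = np.empty((height, width), dtype=np.int32)
    for v in range(height):
        for u in range(width):
            # 1) pixel -> camera coordinates (third pixel position)
            x, y, _ = K_inv @ np.array([u, v, 1.0])
            # 2) distortion processing (fourth pixel position)
            r2 = x * x + y * y
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            xd, yd = x * scale, y * scale
            # 3) camera -> pixel coordinates (second pixel position)
            ud, vd, _ = K @ np.array([xd, yd, 1.0])
            # 4) round and store as the element for this first pixel position
            map_x[v, u] = int(round(ud))
            map_y[v, u] = int(round(vd))
    return map_x, map_y
```

With zero distortion coefficients the tables reduce to the identity mapping, which is a convenient sanity check when wiring the tables into the query step.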
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
The embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the image processing method according to the foregoing method embodiment is implemented.
The electronic device may be a robot, but is not limited to a robot.
The present application also proposes a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method according to the foregoing method embodiment.
The present application further proposes a computer program product, wherein when the instructions of the computer program product are executed by a processor, the image processing method according to the foregoing method embodiment is implemented.
FIG. 7 illustrates a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application. The electronic device 12 shown in fig. 7 is only an example and does not limit the functionality or scope of use of the embodiments of the present application.
As shown in FIG. 7, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In the description of the present specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An image processing method, characterized in that it comprises the steps of:
acquiring a mapping table, wherein the mapping table is used for indicating a mapping relation between a first pixel position in a corrected target image and a second pixel position in an original image before correction;
inquiring a corresponding second pixel position of a first pixel position of any group of first pixels in the target image in the original image according to the mapping table;
determining, in the original image, a set of luminance components that match the second pixel location and determining chrominance components that match the second pixel location;
determining a luminance component of each first pixel in the set of first pixels in the target image according to the set of luminance components, and determining a shared chrominance component of the set of first pixels according to the chrominance component.
2. The image processing method according to claim 1, wherein the chroma components that match the second pixel position are at least two;
said determining a shared chroma component of the set of first pixels from the chroma component comprises:
determining a mean of at least two of the chrominance components as a shared chrominance component of the set of first pixels.
3. The image processing method according to claim 1, wherein the mapping table includes a mapping table of an abscissa and a mapping table of an ordinate;
the querying, according to the mapping table, a second pixel position of a first pixel position of any group of first pixels in the target image, which corresponds to the first pixel position in the original image, includes:
taking a center position of the set of first pixels as the first pixel position;
inquiring the abscissa of a second pixel position corresponding to the abscissa of the first pixel position in the mapping table of the abscissa;
and inquiring the vertical coordinate of the second pixel position corresponding to the vertical coordinate of the first pixel position in the mapping table of the vertical coordinate.
4. The method according to any one of claims 1 to 3, wherein determining a set of luminance components in the original image that match the second pixel position comprises:
inquiring a group of luminance components matching the second pixel position in a luminance component storage area of the original image; the group of luminance components belongs to a group of second pixels in the original image, and for each second pixel, either its region in the original image contains the second pixel position or its distance from that position is less than a threshold value.
5. The method according to any one of claims 1 to 3, wherein determining the chrominance component matching the second pixel position in the original image comprises:
querying at least one chrominance component matching the second pixel position in a chrominance component storage area of the original image; wherein each chrominance component belongs to a group of second pixels in the original image, and the distance between the center position of that group in the original image and the second pixel position is smaller than a threshold value.
6. The image processing method according to any one of claims 1 to 3,
and the element in the mapping table corresponds to a first pixel position in the target image, and the value of the element is used for indicating a second pixel position corresponding to the first pixel position.
7. The image processing method according to claim 6, wherein the obtaining a mapping table comprises:
transforming the first pixel position to a camera coordinate system by adopting an internal reference inverse matrix to obtain a third pixel position under the camera coordinate system;
carrying out distortion processing on the third pixel position to obtain a fourth pixel position under the camera coordinate system;
transforming a fourth pixel position under the camera coordinate system to a pixel coordinate system by adopting an internal reference matrix to obtain a second pixel position;
and taking the second pixel position after being rounded as the value of the element corresponding to the first pixel position in the mapping table.
8. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition module, a correction module and a correction module, wherein the acquisition module is used for acquiring a mapping table, and the mapping table is used for indicating the mapping relation between a first pixel position in a corrected target image and a second pixel position in an original image before correction;
the query module is used for querying a corresponding second pixel position of a first pixel position of any group of first pixels in the target image in the original image according to the mapping table;
a first determining module for determining a set of luminance components matching the second pixel position and determining chrominance components matching the second pixel position in the original image;
a second determining module, configured to determine a luminance component of each first pixel in the set of first pixels in the target image according to the set of luminance components, and determine a shared chrominance component of the set of first pixels according to the chrominance component.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any of claims 1-7 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202110064752.4A 2021-01-18 2021-01-18 Image processing method and device and electronic equipment Pending CN114827385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110064752.4A CN114827385A (en) 2021-01-18 2021-01-18 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN114827385A true CN114827385A (en) 2022-07-29

Family

ID=82524945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110064752.4A Pending CN114827385A (en) 2021-01-18 2021-01-18 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114827385A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751404A (en) * 2013-12-30 2015-07-01 腾讯科技(深圳)有限公司 Image transformation method and device
US20180315170A1 (en) * 2017-04-27 2018-11-01 Apple Inc. Image Warping in an Image Processor
CN109255760A (en) * 2018-08-13 2019-01-22 青岛海信医疗设备股份有限公司 Distorted image correction method and device
CN109308686A (en) * 2018-08-16 2019-02-05 北京市商汤科技开发有限公司 A kind of fish eye images processing method and processing device, equipment and storage medium
CN110599427A (en) * 2019-09-20 2019-12-20 普联技术有限公司 Fisheye image correction method and device and terminal equipment


Similar Documents

Publication Publication Date Title
WO2021115071A1 (en) Three-dimensional reconstruction method and apparatus for monocular endoscope image, and terminal device
EP2870585B1 (en) A method and system for correcting a distorted image
KR101017802B1 (en) Image distortion correction
US20210368147A1 (en) Projection image automatic correction method and system based on binocular vision
US11669942B2 (en) Image de-warping system
CN107749050B (en) Fisheye image correction method and device and computer equipment
US11244431B2 (en) Image processing
US11880993B2 (en) Image processing device, driving assistance system, image processing method, and program
CN109690611B (en) Image correction method and device
US20140111672A1 (en) Image processing device and image capture device
US8600157B2 (en) Method, system and computer program product for object color correction
JPH11250239A (en) Digital image pickup device for operating distortion correction by yuv data
KR20120020821A (en) Method and apparatus for correcting distorted image
CN114820581A (en) Axisymmetric optical imaging parallel simulation method and device
CN111681271B (en) Multichannel multispectral camera registration method, system and medium
CN114827385A (en) Image processing method and device and electronic equipment
US8380006B2 (en) System and method for merging separated pixel blocks into an integral image of an object
CN114399540A (en) Heterogeneous image registration method and system based on two-dimensional iteration closest point
CN114418908A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114466143A (en) Shooting angle calibration method and device, terminal equipment and storage medium
AU2017204848A1 (en) Projecting rectified images on a surface using uncalibrated devices
CN114697533A (en) Image processing method and device, computer readable storage medium and smart television
CN115908105A (en) Method and device for splicing all-round images and storage medium
CN111161148A (en) Panoramic image generation method, device, equipment and storage medium
US20220172318A1 (en) Method for capturing and processing a digital panoramic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination