WO2018214365A1 - Image correction method, apparatus, device and system, imaging device and display device - Google Patents

Image correction method, apparatus, device and system, imaging device and display device

Info

Publication number
WO2018214365A1
WO2018214365A1 (PCT/CN2017/104351)
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinates
coordinate
corrected
pixel
Prior art date
Application number
PCT/CN2017/104351
Other languages
English (en)
French (fr)
Inventor
杨铭
Original Assignee
广州视源电子科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司 filed Critical 广州视源电子科技股份有限公司
Publication of WO2018214365A1 publication Critical patent/WO2018214365A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/80 — Geometric correction
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image correction method, apparatus, device, system, and imaging apparatus and display apparatus.
  • In practical applications, many scenarios require a camera module to capture an image of a blackboard (a blackboard being a writing plane that can be written on repeatedly), convert the captured image into an electronic document, and display it on a display terminal.
  • However, the image captured by the imaging device is distorted, for example radial or tangential distortion caused by the distortion inherent to the optical lens of the camera module, and perspective distortion caused by the relative position between the camera module and the photographed object.
  • Related image processing techniques have difficulty effectively correcting such distortion.
  • In view of this, the present invention provides an image correction method, apparatus, device and system, and an imaging device and a display device, to solve the problem that related image processing techniques cannot effectively correct distortion.
  • an image correction method comprising the steps of:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
  • a corrected image of the image is generated based on the corrected coordinates of each pixel.
  • an image correcting apparatus comprising:
  • An image acquisition module configured to acquire an image to be corrected
  • a coordinate mapping module configured to acquire, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel;
  • the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
  • an image correction module configured to generate a corrected image of the image according to the corrected coordinates of each pixel.
  • an image correction device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
  • a corrected image of the image is generated based on the corrected coordinates of each pixel.
  • an imaging device comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is the correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
  • a corrected image of the captured image is generated based on the corrected coordinates of each pixel.
  • an integrated writing machine comprising a writing device and an imaging device mounted at a predetermined position on the writing device, the imaging device comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
  • a corrected image of the captured image is generated based on the corrected coordinates of each pixel.
  • an image correction system comprising a writing device and an imaging device mounted at a predetermined position, the imaging device comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor,
  • the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is the correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
  • a corrected image of the captured image is generated based on the corrected coordinates of each pixel.
  • a display device comprising a display unit, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
  • the display unit is controlled to display the corrected image.
  • an image correction system comprising a writing device, a display device, and an imaging device associated with the display device, the imaging device being mounted at a predetermined position and used to photograph the writing device,
  • the display device comprising a network interface, a display unit, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
  • acquiring, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
  • the display unit is controlled to display the corrected image.
  • With the embodiments provided by the present invention, when there is an image whose distortion needs to be corrected, the coordinates corresponding to the coordinates of each pixel in the image are acquired based on a predetermined coordinate mapping relationship to form the corrected coordinates of each pixel, and a corrected image of the image is then generated from those corrected coordinates.
  • Because the predetermined coordinate mapping relationship is the correspondence between a pixel's coordinates in the images before and after distortion correction,
  • and because its relationship parameters include internal parameters that characterize the radial distortion and external parameters that characterize the perspective distortion, it is unnecessary to apply different correction operations for different distortions and correct them one after another.
  • Through the coordinate mapping relationship, the coordinates of a pixel in the image to be corrected are mapped directly to the pixel's coordinates in the image with the radial and perspective distortion corrected. While effectively correcting the distortion, this simplifies the correction operation, thereby reducing the damage it does to the image information and the computing resources it consumes.
  • FIG. 1a is a schematic diagram of an application scenario of a captured image according to an exemplary embodiment of the present invention
  • FIG. 1b is a schematic diagram showing a corrected captured image according to an exemplary embodiment of the present invention.
  • FIG. 2 is a flow chart showing an image correction method according to an exemplary embodiment of the present invention.
  • FIG. 3 is a hardware configuration diagram of an image correction device for implementing image correction according to an exemplary embodiment of the present invention
  • FIG. 4 is a hardware configuration diagram of an image pickup apparatus for realizing image correction according to an exemplary embodiment of the present invention
  • FIG. 5 is a schematic diagram showing hardware and hardware interaction of an image correction system according to an exemplary embodiment of the present invention.
  • FIG. 6 is a hardware structural diagram of a display device for implementing image correction according to an exemplary embodiment of the present invention.
  • FIG. 7 is a schematic diagram showing hardware and hardware interaction of another image correction system according to an exemplary embodiment of the present invention.
  • FIG. 8 is a logic block diagram of an image correcting apparatus according to an exemplary embodiment of the present invention.
  • Although the terms first, second, third, etc. may be used in the present invention to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another.
  • For example, without departing from the scope of the invention, first information may also be referred to as second information;
  • similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
  • FIG. 1a is a schematic diagram of an application scenario of a captured image according to an exemplary embodiment of the present invention:
  • The application scenario shown in FIG. 1a includes a writing device, an imaging device, and a fixed display device and/or a mobile display device associated with the imaging device.
  • The imaging device is mounted at a predetermined position and photographs the writing device; it is generally installed in an area from which the writing plane of the writing device can be captured.
  • The captured image, which may be a still image or a frame of a video, carries the writing plane of the writing device.
  • After capturing an image, the imaging device can transmit it to the fixed display device and/or the mobile display device for display.
  • the captured image includes the "writing content” shown in Fig. 1a
  • the displayed image also includes the "writing content” shown in Fig. 1a.
  • paper sheets, pictures, and the like with data information can also be attached to the writing area for imaging by the imaging device.
  • the writing device may be a blackboard, a smart tablet or a combination of a blackboard and a smart tablet, and the blackboard refers to a plane that can be repeatedly written with a specific writing material such as chalk (such as the writing pen shown in FIG. 1a).
  • the plane color is mostly black, dark green, white or beige.
  • A smart writing board may be a device that can sense the touch of a user's finger or a smart writing pen and generate and display the corresponding text/graphic information.
  • the writing device may also be other devices having a writing function, which is not limited in the present invention.
  • the camera device may be a wide-angle camera device or a fisheye camera device with a wide lens angle, or may be other types of camera devices, which is not limited by the present invention.
  • The fixed display device may be a personal computer, an Internet TV, a display wall, etc.; this example uses an LED display for illustration only.
  • The mobile display device may be a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a tablet computer, a wearable device, etc.; this example uses a smart phone for illustration only.
  • In other examples, the application scenario may include multiple mobile display devices and/or multiple fixed display devices, which is not limited by the present invention.
  • In practical applications, the distortion inherent to the optical lens of the camera module may cause radial and/or tangential distortion in the captured image, and the relative position between the imaging device and the photographed object causes perspective distortion in the captured image.
  • In general, the wider the lens angle of the imaging device, the more obvious the radial distortion in the captured image; and the smaller the angle between the lens of the imaging device and the writing board, the more obvious the perspective distortion.
  • As shown in FIG. 1b, the original image 101 is captured by an imaging device with a wide-angle or fisheye lens; it exhibits obvious radial and perspective distortion, and its image quality is poor.
  • the designer of the present invention can first correct the radial distortion in the original image 101 by using the radial distortion correction model, generate the intermediate image 102, and then correct the perspective distortion in the intermediate image 102 by using the perspective mapping model to generate the final image 103.
  • the radial distortion and the perspective distortion in the final image 103 are not obvious, and the image quality is significantly higher than the original image 101.
  • However, the two processes of correcting perspective distortion and correcting radial distortion each cause irreparable damage to the image information; performing the correction step by step accumulates this damage and lowers the image quality of the corrected image.
  • Moreover, the two correction processes are complicated and consume considerable computing resources, resulting in low distortion correction efficiency.
  • The present invention therefore proposes a solution to the poor image quality and low correction efficiency of schemes that correct the radial and perspective distortion in a captured image step by step.
  • The solution of the present invention takes into account that performing the two processes of perspective-distortion correction and radial-distortion correction step by step is complicated, consumes considerable computing resources and easily accumulates image damage. A coordinate mapping relationship that reflects the correspondence between a pixel's coordinates in the images before and after distortion correction (such as perspective and radial distortion correction) can therefore be determined in advance; when there is an image whose distortion needs to be corrected, the coordinates corresponding to the coordinates of each pixel in the image are acquired based on the predetermined coordinate mapping relationship to form the corrected coordinates of each pixel, and a corrected image of the image is then generated from those corrected coordinates.
  • Because the predetermined coordinate mapping relationship is the correspondence between a pixel's coordinates in the images before and after distortion correction, and because its relationship parameters include internal parameters that characterize the radial distortion and external parameters that characterize the perspective distortion, it is unnecessary to apply different correction operations for different distortions and correct them one after another.
  • Through the coordinate mapping relationship, the coordinates of a pixel in the image to be corrected are mapped directly to the pixel's coordinates in the image with the radial and perspective distortion corrected. While effectively correcting the distortion, this simplifies the correction operation, thereby reducing the damage it does to the image information and the computing resources it consumes.
  • FIG. 2 is a flowchart of an image correction method according to an exemplary embodiment of the present invention.
  • the embodiment can be applied to various electronic devices having image processing capabilities, including the following steps S201-S203:
  • Step S201 acquiring an image to be corrected.
  • Step S202: based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction,
  • and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image.
  • Step S203 generating a corrected image of the image according to the corrected coordinates of each pixel.
  • The internal parameters are determined by the inherent properties of the camera module; they are the optical and geometric parameters inherent to the camera module and may include the focal length, the principal point coordinates, and the radial and/or tangential distortion coefficients. In other examples,
  • they may also include the image sensor format. The external parameters may include rotation and translation parameters, which represent the transformation from the three-dimensional real-world coordinate system to the three-dimensional camera coordinate system.
  • The external parameters are determined by the relative position between the camera module and the photographed object.
  • the electronic device may be a device capable of realizing an image correction function such as an imaging device or a display device.
  • the execution subject of the image correction method of the present invention may be the aforementioned various electronic devices.
  • the camera module is a camera of the imaging device, and the image is captured by the camera.
  • Different camera viewing angles give different degrees of radial and/or tangential distortion in the captured image, and different angles of incidence of the camera (determined by the relative position between the imaging device and the photographed object) give different degrees of perspective distortion in the captured image.
  • an embodiment of the present invention can acquire an image to be corrected by the following operations:
  • the image transmitted by the imaging device is received as an image to be corrected.
  • the predetermined display device may pre-store the image captured by the associated imaging device.
  • When a pre-stored image needs to be displayed, it is retrieved and used as the image to be corrected.
  • In practical applications, considering the speed of distortion correction, the designers of the present invention can predetermine the coordinate mapping relationship for an imaging device whose photographed object and mounting position are fixed; the corresponding coordinates are then computed directly, according to that relationship, from the coordinates of each pixel in an arbitrarily captured image, a mapping table is built from the pixel coordinates and the computed coordinates, and the mapping table is stored in association with the coordinate mapping relationship.
  • After the image to be corrected is acquired, the coordinates corresponding to the coordinates of each pixel in the image can be acquired quickly, based on the predetermined coordinate mapping relationship, by the following operations: obtain the mapping table that reflects the coordinate mapping relationship;
  • look up, from the mapping table, the coordinates corresponding to the coordinates of each pixel in the image.
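A minimal sketch of this table-driven correction is given below (Python/NumPy). Here `h_u` and `h_v` are hypothetical vectorized callables implementing the inverse mapping of equations (16)/(17); they are illustrative names, not defined in the patent.

```python
import numpy as np

def build_mapping_table(h_u, h_v, out_h, out_w):
    """Precompute, for every pixel (u'', v'') of the corrected image,
    the source coordinates (u, v) in the original captured image."""
    vv, uu = np.mgrid[0:out_h, 0:out_w]        # v'' (rows), u'' (columns)
    map_u = h_u(uu, vv).astype(np.float32)     # u = h_u(u'', v'')
    map_v = h_v(uu, vv).astype(np.float32)     # v = h_v(u'', v'')
    return map_u, map_v

def correct_with_table(image, map_u, map_v):
    """Nearest-neighbour lookup: each corrected pixel copies the pixel
    it maps to in the distorted input image."""
    h, w = image.shape[:2]
    u = np.clip(np.rint(map_u), 0, w - 1).astype(np.intp)
    v = np.clip(np.rint(map_v), 0, h - 1).astype(np.intp)
    return image[v, u]
```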
  • The coordinates mentioned here are the position parameters of a single pixel in the image, or of certain pixels in a predetermined set of pixels.
  • In general, an image coordinate system can be established in advance and the projection of a pixel in that coordinate system used as the pixel's coordinates in the image.
  • In other examples, the coordinates of a pixel may be expressed in other ways, which is not limited by this embodiment of the present invention.
  • When predetermining the coordinate mapping relationship, multiple images may be captured at random with an imaging device whose photographed object and mounting position are fixed, and the coordinate mapping relationship fitted by function fitting on the coordinates of the pixels in each image before and after distortion correction.
  • The radial distortion correction model and the perspective mapping model can also be combined to derive the coordinate mapping relationship mentioned above.
  • a part of the object in the predetermined real scene is determined as a target point, and the camera module is controlled to capture the real scene at a predetermined position.
  • the predetermined realistic scene may be as shown in FIG. 1a, and the target point may be each area in the writing area where "writing content" exists.
  • the camera module is a camera of the camera device.
  • the coordinate correspondence between the target point in the real world and the captured image can be obtained by the following operations:
  • Based on the coordinate transformation between the world coordinate system of the real scene and the camera coordinate system of the camera module, the correspondence between the target point's coordinates in the world coordinate system and in the camera coordinate system is determined as a first correspondence.
  • The correspondence between the target point's coordinates in the camera coordinate system and its pinhole projection coordinates is determined as a second correspondence.
  • The correspondence between the camera module's angle of incidence and the target point's pinhole projection coordinates is determined as a third correspondence.
  • From the first, second and third correspondences, the correspondence between the camera module's angle of incidence and the target point's coordinates in the world coordinate system is calculated as a fourth correspondence.
  • The distance from the target point in the captured image to the image distortion center is determined.
  • According to the radial distortion correction model, the correspondence between the angle of incidence and that distance is determined as a fifth correspondence; based on the fourth and fifth correspondences, the coordinate correspondence of the target point between the real world and the captured image is obtained.
  • The coordinate correspondence of the target point between the radially corrected captured image and the captured image with both radial and perspective distortion corrected can be obtained by the following operations:
  • correct the radial distortion in the captured image according to the radial distortion correction model, and determine the target point's coordinates in the radially corrected image;
  • extract the boundary coordinates of the trapezoidal region in the radially corrected image;
  • based on the extracted boundary coordinates and the edge coordinates of a predetermined rectangular region, obtain the transformation matrix between the boundary coordinates of the trapezoidal region and the coordinates of the rectangular region, the matrix elements of the transformation matrix being composed of the external parameters of the camera module; based on the transformation matrix, obtain the coordinate correspondence.
  • The size ratio of the rectangular region is consistent with the size ratio of the trapezoidal region in the real scene.
  • In this example, the size ratio (such as the aspect ratio) of the trapezoidal region in the real scene may be the size ratio of the writing area shown in FIG. 1, and the size ratio of the rectangular region is consistent with that of the writing area. There are several ways to extract this trapezoidal region (a sketch of the edge-detection route is given below).
  • For example, infrared lamps can be installed at the four corners of the writing area, and their positions detected in the captured image to complete the extraction of the trapezoidal region.
  • Alternatively, the edges of the writing area can be extracted directly from the image with an algorithm such as edge detection to form the trapezoidal region.
  • The user can also interactively specify the coordinates of the four key points of the writing area.
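As one hedged illustration of the edge-detection route (not the only method the patent allows), the sketch below assumes OpenCV 4.x; the Canny thresholds and the largest-four-sided-contour heuristic are illustrative choices. It returns the boundary coordinates of the writing-area quadrilateral found in the radially corrected image.

```python
import cv2
import numpy as np

def find_writing_area_quad(image_bgr):
    """Return the four corner coordinates of the writing area, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # thresholds are assumptions
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            return approx.reshape(4, 2)              # boundary coordinates of the trapezoid
    return None
```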
  • In other examples, boundary coordinates of regions of other shapes may be extracted from the radially corrected image, such as the boundary coordinates of an elliptical region, and regions of shapes other than a rectangle, such as a circular region, may be predetermined;
  • this is not limited by the present invention.
  • The radial distortion correction model may be the equidistant projection model, the equisolid-angle projection model, the orthographic projection model, the stereographic projection model or the polynomial approximation model shown in equations (1) to (5), respectively.
  • The coordinate transformation between the world coordinate system and the camera coordinate system is shown in equation (6),
  • and the pinhole projection coordinates are shown in equation (7).
  • The relationship between the camera module's angle of incidence and the pinhole projection coordinates is shown in equations (8) and (9),
  • the coordinates of the target point in the captured image are shown in equations (10) and (11),
  • the perspective mapping model containing the transformation matrix is shown in equation (12),
  • and the correspondence between the target point's coordinates in the radially corrected image and in the image with both radial and perspective distortion corrected is shown in equation (13).
  • ⁇ d represents the distance from the point in the image to the center of the distortion
  • f is the focal length of the camera
  • is the angle between the incident ray and the optical axis of the camera, ie the angle of incidence.
  • ⁇ d k 0 ⁇ +k 1 ⁇ 3 +k 2 ⁇ 5 +k 3 ⁇ 7 +... (5)
  • k 0 to k n are polynomial coefficients, which can be obtained by fitting or calibrating.
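For concreteness, the five radial models can be evaluated as plain Python/NumPy functions. Equations (2) and (4) are not reproduced in this extract, so the standard equisolid-angle and stereographic forms 2f·sin(θ/2) and 2f·tan(θ/2) are assumed here.

```python
import numpy as np

def theta_d_equidistant(theta, f):      # eq. (1)
    return f * theta

def theta_d_equisolid(theta, f):        # eq. (2), assumed standard form
    return 2.0 * f * np.sin(theta / 2.0)

def theta_d_orthographic(theta, f):     # eq. (3)
    return f * np.sin(theta)

def theta_d_stereographic(theta, f):    # eq. (4), assumed standard form
    return 2.0 * f * np.tan(theta / 2.0)

def theta_d_polynomial(theta, k):       # eq. (5), k = [k0, k1, k2, ...]
    return sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))
```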
  • (x, y, z) are the coordinates of the target point P in the real world,
  • (x′, y′, z′) are its coordinates in the camera coordinate system,
  • R is the rotation matrix of the camera coordinate system relative to the world coordinate system,
  • and T is the translation (offset) vector.
  • (x″, y″) are the pinhole projection coordinates of the target point P.
  • f_x and f_y are the focal lengths in the x and y directions,
  • and (c_x, c_y) is the principal point, usually located at the image center.
  • These unknown variables, together with the parameters of the radial distortion correction model, are the internal parameters of the camera module and can all be obtained by camera calibration. In other examples they may also be obtained by fitting or other means, which is not limited by the present invention.
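Equations (6)–(11) chain into a single projection from a world point to its pixel in the distorted captured image. The sketch below is a Python/NumPy illustration under the polynomial model (5); since equations (10) and (11) are not reproduced in this extract, the standard fisheye form u = f_x·(θ_d/r)·x″ + c_x, v = f_y·(θ_d/r)·y″ + c_y is assumed.

```python
import numpy as np

def project_fisheye(P_world, R, T, fx, fy, cx, cy, k):
    """Project a 3-D world point into the distorted captured image
    following equations (6)-(11) with the polynomial model (5)."""
    Xc = R @ np.asarray(P_world, dtype=float) + T      # (6): world -> camera (x', y', z')
    x_pp, y_pp = Xc[0] / Xc[2], Xc[1] / Xc[2]          # (7): pinhole projection (x'', y'')
    r = np.hypot(x_pp, y_pp)                           # (8)
    theta = np.arctan(r)                               # (9)
    theta_d = sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))  # (5)
    scale = theta_d / r if r > 1e-12 else 1.0
    u = fx * scale * x_pp + cx                         # (10), assumed standard form
    v = fy * scale * y_pp + cy                         # (11), assumed standard form
    return u, v
```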
  • After the target point's coordinates in the captured image have been determined, the captured image can be corrected for radial distortion and the radially corrected image then corrected for perspective distortion; from the two-step-corrected images, the coordinate correspondence of the target point between the radially corrected image and the fully corrected image can be obtained.
  • The radially corrected image is the intermediate image 102 shown in FIG. 1b, and the image with both radial and perspective distortion corrected is the final image 103 shown in FIG. 1b. The writing device may be a blackboard, and the final
  • image 103 may be a standard blackboard image. Since the distortion of the standard blackboard image is not obvious, the edges of the blackboard area (the writing area) are restored to straight lines, so the blackboard area can be described by a quadrilateral. There are many ways to extract this quadrilateral (trapezoid).
  • For example, infrared lamps can be installed at the four corners of the blackboard, and their positions detected in the captured image to complete the extraction of the quadrilateral.
  • Alternatively, an algorithm such as edge detection can extract the edges of the blackboard directly from the image to form the quadrilateral.
  • The user can also interactively specify the coordinates of the four key points of the blackboard area.
  • Once this quadrilateral has been obtained, the perspective mapping model (12) is obtained by deriving the transformation matrix M from the quadrilateral to the rectangle formed by the standard blackboard area:
  • (u″, v″) are the coordinates of the target point P in the standard blackboard image,
  • and (u′, v′) are the coordinates of the target point P in the radially corrected image.
  • The elements of M constitute the external parameters of the camera module. The solution of the present invention does not limit the method of extracting the blackboard area, nor how M is obtained.
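As a hedged sketch of how M could be obtained from the extracted quadrilateral (the patent does not prescribe a method), OpenCV's four-point perspective transform can be used; the corner values and the 1200×900 target rectangle below are illustrative assumptions only.

```python
import cv2
import numpy as np

# Corners (u', v') of the blackboard quadrilateral found in the radially
# corrected image -- illustrative values.
quad = np.float32([[102, 87], [1180, 64], [1215, 655], [70, 690]])

# Target rectangle (u'', v'') with the same aspect ratio as the real blackboard.
rect = np.float32([[0, 0], [1200, 0], [1200, 900], [0, 900]])

# M realises the perspective mapping model of equation (12); its elements
# play the role of the external parameters described above.
M = cv2.getPerspectiveTransform(quad, rect)
```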
  • Combining equations (1)–(11) yields the coordinate correspondence between the target point P's real-world coordinates (x, y, z) and its coordinates (u, v) in the captured image, equations (14) and (15).
  • In some examples, the world coordinate system of the real world can be made to coincide with the image coordinate system of the captured image after the perspective distortion has been corrected;
  • in that case the coordinates (u′, v′, 1) can be regarded as the real-world coordinates (x, y, z) of the point P. Substituting them into equations (14) and (15) and combining with equation (13) yields the mapping from the target point P's coordinates (u″, v″) in the image corrected for radial and perspective distortion to its coordinates (u, v) in the captured image, equations (16) and (17).
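Putting the pieces together, equations (16)/(17) amount to "undo the perspective mapping, then apply the fisheye projection". The sketch below reuses the hypothetical `project_fisheye` helper from the earlier sketch and assumes, as in equation (12), that M maps the radially corrected image onto the rectified image.

```python
import numpy as np

def inverse_map(u_pp, v_pp, M, R, T, fx, fy, cx, cy, k):
    """Equations (16)/(17): map a corrected-image pixel (u'', v'') back to
    its source coordinates (u, v) in the original captured image."""
    # Equation (13): invert the perspective mapping, (u'', v'') -> (u', v').
    p = np.linalg.inv(M) @ np.array([u_pp, v_pp, 1.0])
    u_p, v_p = p[0] / p[2], p[1] / p[2]
    # Treat (u', v', 1) as the world coordinates (x, y, z) of the point and
    # push it through the fisheye projection of equations (6)-(11).
    return project_fisheye((u_p, v_p, 1.0), R, T, fx, fy, cx, cy, k)
```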
  • The present invention can therefore precompute the coordinates of each pixel in the corrected image together with the coordinates of the corresponding pixel in the original captured image, obtaining a mapping table.
  • To generate the corrected image, the pixel features of each pixel in the image to be corrected may be read, and pixels then added at the corresponding positions of the image editing interface according to the corresponding corrected coordinates,
  • with each pixel's features set to match those of the corresponding pixel in the image to be corrected, generating the corrected image.
  • In other examples, other techniques may be used to generate the corrected image from the corrected coordinates of each pixel, which is not limited by the present invention.
  • In some cases, the corrected coordinates of a pixel are not integers. If the corrected coordinates are simply rounded before the corrected image is generated, adjacent pixels in the corrected image may differ greatly in their pixel features, causing image distortion. To solve this problem,
  • non-integer corrected coordinates can be handled by pixel interpolation. Specifically, if the corrected coordinate of pixel A is not an integer, the average of the coordinates of the pixels surrounding pixel A can be computed and used as pixel A's corrected coordinate.
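A common concrete way to realise this neighbour-averaging idea is bilinear interpolation of the four surrounding pixel values; the following is a hedged Python/NumPy sketch, not the patent's prescribed method.

```python
import numpy as np

def bilinear_sample(image, u, v):
    """Sample the distorted image at a non-integer source coordinate (u, v)
    by blending the four surrounding pixels."""
    h, w = image.shape[:2]
    u0 = int(np.clip(np.floor(u), 0, w - 2))
    v0 = int(np.clip(np.floor(v), 0, h - 2))
    du, dv = u - u0, v - v0
    top = (1 - du) * image[v0, u0] + du * image[v0, u0 + 1]
    bottom = (1 - du) * image[v0 + 1, u0] + du * image[v0 + 1, u0 + 1]
    return (1 - dv) * top + dv * bottom
```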
  • In other embodiments, when the distortion caused by the optical lens of the camera module differs from radial distortion, for example tangential distortion, another related distortion correction model may be used instead of the radial distortion correction model; that related distortion
  • correction model and the perspective mapping model are combined to derive the coordinate mapping relationship mentioned above.
  • The radial distortion correction model and the perspective mapping model can also be combined to derive the coordinate mapping relationship mentioned above.
  • The specific derivation may proceed by the following operations:
  • a part of the object in the predetermined real scene is determined as a target point, and the camera module is controlled to capture the real scene at a predetermined position.
  • the predetermined realistic scene may be as shown in FIG. 1a, and the target point may be each area in the writing area where "writing content" exists.
  • the camera module is a camera of the camera device.
  • The coordinate correspondence of the target point between the real world and the captured image is acquired; the relationship parameters of that correspondence include the internal parameters.
  • The coordinate correspondence of the target point between the captured image with the related distortion corrected and the captured image with both the related distortion and the perspective distortion corrected is acquired; the relationship parameters of that correspondence include the external parameters.
  • When the target point's real-world coordinates coincide with its coordinates in the fully corrected captured image, the coordinate mapping relationship is obtained based on the acquired coordinate correspondences.
  • The related distortion here may refer to tangential distortion, or to radial and/or tangential distortion.
  • the imaging device is assumed to be a wide-angle imaging device
  • the predetermined display device is a fixed display device and/or a mobile display device.
  • the embodiment of the image correcting apparatus of the present invention may be implemented by software, or may be implemented by hardware or a combination of hardware and software.
  • the processor of the device in which it is located reads the corresponding computer program instructions in the non-volatile memory into the memory.
  • As shown in FIG. 3, which is a hardware structure diagram of the device in which the image correcting apparatus 331 is located, in addition to the processor 310, the memory 330, the network interface 320 and the non-volatile memory shown in FIG. 3,
  • the device in which the apparatus is located in this embodiment may also include other hardware according to its actual functions, which will not be described again.
  • the memory of the image correction device may store processor-executable instructions; the processor may couple the memory for reading the program instructions stored by the memory, and in response, perform the operations of: acquiring an image to be corrected; based on the predetermined mapping Relation, acquiring coordinates corresponding to coordinates of each pixel in the image, and forming correction coordinates of each pixel; the coordinate mapping relationship is a direct correspondence relationship between coordinates of pixels in an image before and after distortion correction, and The relationship parameters include an internal parameter and an external parameter of the camera module that captures the image; and a corrected image of the image is generated based on the corrected coordinates of each pixel.
  • the operations performed by the processor may be referred to the related description in the foregoing method embodiments, and details are not described herein.
  • The image correcting apparatus may specifically be an imaging device. At the hardware level, as shown in FIG. 4, which is a hardware structure diagram of the imaging device in which the image correcting apparatus 431 is located, in addition to the components shown in FIG. 4,
  • the device in which the apparatus is located in this embodiment may also include the camera 450 and other hardware according to its actual functions, which will not be described again.
  • The memory 440 of the imaging device may store instructions executable by the processor 410; the processor 410 may be coupled to the memory 440 to read the stored program instructions into the memory 430 and, in response, perform the following operations: control the camera to shoot; based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel, the coordinate mapping relationship being the correspondence between the coordinates of a pixel in the images before and after distortion correction,
  • whose relationship parameters include the internal parameters and external parameters of the camera; and generate a corrected image of the captured image according to the corrected coordinates of each pixel.
  • the image correcting system of the embodiment of the present invention may include a writing device and an image capturing device installed at a predetermined position, as shown in FIG.
  • a camera 450, a memory 430, a network interface 420, a memory 440, a processor 410, and a computer program stored on the memory 440 and executable on the processor 410 can be included as shown in FIG.
  • The memory 440 of the imaging device may store instructions executable by the processor 410; the processor 410 may be coupled to the memory 440 to read the stored program instructions into the memory 430 and, in response, perform the following operations: control the camera to
  • photograph the writing device; based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel, the coordinate mapping relationship being the correspondence between the coordinates of a pixel in the images before and after distortion
  • correction, whose relationship parameters include the internal parameters and external parameters of the camera; and generate a corrected image of the captured image according to the corrected coordinates of each pixel.
  • In this embodiment, the imaging device first photographs the writing area of the writing device and then corrects the captured image:
  • the processor 410 reads the program instructions stored in the memory 440 into the memory 430 and, in response, performs
  • the operations described above to generate the corrected image.
  • the image correction system of the embodiment of the present invention may further include a display device associated with the image capturing device, and the image capturing device and the display device may be connected through a network, and the display device may include, for example, The fixed display device and/or the mobile display device shown in FIG. After the photographing device produces the corrected image, the corrected image may be transmitted to the fixed display device and/or the mobile display device via the network interface 420 for display.
  • the imaging device can be mounted at a predetermined location on the writing device, both of which form an integrated writing machine, the predetermined location referred to herein being the intermediate portion of the outer casing of the writing device.
  • the camera device may include a camera 450 as shown in FIG. 4, a memory 430, a network interface 420, a memory 440, a processor 410, and a computer program stored on the memory 440 and operable on the processor 410.
  • The memory 440 of the imaging device may store instructions executable by the processor 410; the processor 410 may be coupled to the memory 440 to read the stored program instructions into the memory 430 and, in response, perform the following operations: control the camera to
  • photograph the writing device; based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel;
  • the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera; and generate a corrected image of the captured image according to the corrected coordinates of each pixel.
  • the writing device can be set to different types of devices according to actual application needs.
  • the writing device may include a blackboard and a smart writing pad, the predetermined position being a designated position of the bezel of the blackboard.
  • the blackboard and the smart writing board can be combined to form a writing device.
  • The smart writing board can be used to sense the touch of a user's finger or a smart writing pen and to generate and display the corresponding text/graphic information; it can also provide the function of displaying the corrected image.
  • the writing apparatus may include a blackboard and a display, the predetermined position is a designated position of a border of the blackboard, and the display may be used to display the correction. image.
  • the display and the blackboard can be combined to form a writing device.
  • The image correcting apparatus may specifically be a display device. At the hardware level, as shown in FIG. 6, which is a hardware structure diagram of the display device in which the image correcting apparatus 631 is located, in addition to the components shown in FIG. 6,
  • the device in which the apparatus is located in this embodiment may also include the display unit 650 and other hardware according to its actual functions, which will not be described again.
  • The memory 640 of the display device may store instructions executable by the processor 610; the processor 610 may be coupled to the memory 640 to read the stored program instructions into the memory 630 and, in response, perform the following operations: acquire an image to be corrected; based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel, the coordinate mapping relationship being a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, whose relationship parameters include the internal parameters and external parameters of the camera module that captured the image; generate a corrected image of the image according to the corrected coordinates of each pixel; and control the display unit to display the corrected image.
  • the image correcting system of the embodiment of the present invention may include a writing device, a display device, and an imaging device associated with the display device, as shown in FIG. 7 .
  • the display device may include a fixed display device and/or a mobile display device, and the image capturing device is installed at a predetermined position for capturing the writing device. After the image is captured, the image capturing device may separately transmit the captured image to the fixed image. Display devices and mobile display devices.
  • the fixed display device and the mobile display device may respectively include a processor 610, a memory 630, a network interface 620, and a non-volatile memory 640 as shown in FIG. 6, and the device in which the device is located in the embodiment is generally based on the actual device.
  • the function may also include the display unit 650 and other hardware, which will not be described again.
  • The memory 640 of the fixed display device or the mobile display device may store instructions executable by the processor 610; the processor 610 may be coupled to the memory 640 to read the stored program instructions into the memory 630 and, in response, perform the following operations:
  • control the network interface to acquire the image captured by the imaging device; based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel;
  • the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image; generate a corrected image of the image according to the corrected coordinates of each pixel; and control
  • the display unit to display the corrected image.
  • the writing device and the imaging device in the image correction system of the embodiment of the present invention may constitute an integrated writing machine, and the imaging device is installed at a predetermined position on the writing device.
  • the writing device, the display device, and the imaging device in the image correction system of the embodiment of the present invention may constitute an integrated writing machine, and the imaging device is installed at a predetermined position on the writing device, and the writing device and the display device may be combined.
  • the present invention also provides embodiments of the apparatus.
  • FIG. 8 is a logic block diagram of an image correction apparatus according to an exemplary embodiment of the present invention.
  • the apparatus may include an image acquisition module 810, a coordinate mapping module 820, and an image correction module 830.
  • the image obtaining module 810 is configured to acquire an image to be corrected.
  • The coordinate mapping module 820 is configured to acquire, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image.
  • the image correction module 830 is configured to generate a corrected image of the image according to the corrected coordinates of each pixel point.
  • the image correcting device of the embodiment of the present invention is installed in a predetermined image capturing device, and the camera module is a camera of the image capturing device, and the image is captured by the camera.
  • the image correction device of the embodiment of the present invention is installed in a predetermined display device, and the image acquisition module 810 may include:
  • an imaging notification module configured to notify an imaging device associated with the display device to capture an image.
  • an image receiving module configured to receive an image sent by the imaging device as an image to be corrected.
  • the coordinate mapping module 820 can include:
  • The mapping table obtaining module is configured to obtain a mapping table that reflects the coordinate mapping relationship.
  • The coordinate finding module is configured to look up, from the mapping table, the coordinates corresponding to the coordinates of each pixel in the image.
  • The module for predetermining the coordinate mapping relationship is configured to:
  • designate part of the objects in a predetermined real scene as target points, and acquire the coordinate correspondence of a target point between the real world and the captured image;
  • the relationship parameters of that coordinate correspondence include the internal parameters of the camera module;
  • acquire the coordinate correspondence of the target point between the radially corrected captured image and the captured image with both radial and perspective distortion corrected, whose relationship parameters include the external parameters; and,
  • when the target point's real-world coordinates coincide with its coordinates in the captured image with both radial and perspective distortion corrected, generate the coordinate mapping relationship based on the acquired coordinate correspondences.
  • the device embodiment since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment.
  • the device embodiments described above are merely illustrative, wherein the units or modules described as separate components may or may not be physically separate, and the components displayed as units or modules may or may not be physical units. Or modules, which can be located in one place, or distributed to multiple network units or modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present invention. Those of ordinary skill in the art can understand and implement without any creative effort.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image correction method, apparatus, device and system, and an imaging device and a display device. The method comprises: acquiring an image to be corrected (S201); based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel (S202); and generating a corrected image of the image according to the corrected coordinates of each pixel (S203). There is no need to apply different distortion correction operations for different distortions: through the predetermined coordinate mapping relationship, the coordinates of a pixel in the image to be corrected are mapped directly to the coordinates of that pixel in the distortion-corrected image. While effectively correcting the distortion, this simplifies the correction operation, thereby reducing the damage it does to the image information and the computing resources it consumes.

Description

Image correction method, apparatus, device and system, imaging device and display device
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an image correction method, apparatus, device and system, and to an imaging device and a display device.
Background Art
In practical applications, many scenarios require a camera module to capture an image of a blackboard (a blackboard being a writing plane that can be written on repeatedly), convert the captured image into an electronic document, and display it on a display terminal. However, the image captured by the imaging device is distorted, for example: radial or tangential distortion caused by the distortion inherent to the optical lens of the camera module, perspective distortion caused by the relative position between the camera module and the photographed object, and so on. Related image processing techniques have difficulty effectively correcting such distortion.
Summary of the Invention
In view of this, the present invention provides an image correction method, apparatus, device and system, and an imaging device and a display device, to solve the problem that related image processing techniques cannot effectively correct distortion.
According to a first aspect of the present invention, an image correction method is provided, comprising the steps of:
acquiring an image to be corrected;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
generating a corrected image of the image according to the corrected coordinates of each pixel.
According to a second aspect of the present invention, an image correction apparatus is provided, comprising:
an image acquisition module configured to acquire an image to be corrected;
a coordinate mapping module configured to acquire, based on a predetermined coordinate mapping relationship, the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
an image correction module configured to generate a corrected image of the image according to the corrected coordinates of each pixel.
According to a third aspect of the present invention, an image correction device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
acquiring an image to be corrected;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
generating a corrected image of the image according to the corrected coordinates of each pixel.
According to a fourth aspect of the present invention, an imaging device is provided, comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
controlling the camera to shoot;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is the correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
generating a corrected image of the captured image according to the corrected coordinates of each pixel.
According to a fifth aspect of the present invention, an integrated writing machine is provided, comprising a writing device and an imaging device mounted at a predetermined position on the writing device, the imaging device comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
controlling the camera to photograph the writing device;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
generating a corrected image of the captured image according to the corrected coordinates of each pixel.
According to a sixth aspect of the present invention, an image correction system is provided, comprising a writing device and an imaging device, the imaging device being mounted at a predetermined position and comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
controlling the camera to photograph the writing device;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the captured image to form the corrected coordinates of each pixel; the coordinate mapping relationship is the correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera;
generating a corrected image of the captured image according to the corrected coordinates of each pixel.
According to a seventh aspect of the present invention, a display device is provided, comprising a display unit, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
acquiring an image to be corrected;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
generating a corrected image of the image according to the corrected coordinates of each pixel;
controlling the display unit to display the corrected image.
According to an eighth aspect of the present invention, an image correction system is provided, comprising a writing device, a display device, and an imaging device associated with the display device, the imaging device being mounted at a predetermined position and used to photograph the writing device, the display device comprising a network interface, a display unit, a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the program:
controlling the network interface to acquire the image captured by the imaging device;
based on a predetermined coordinate mapping relationship, acquiring the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image;
generating a corrected image of the image according to the corrected coordinates of each pixel;
controlling the display unit to display the corrected image.
With the embodiments provided by the present invention, when there is an image whose distortion needs to be corrected, the coordinates corresponding to the coordinates of each pixel in the image are acquired based on a predetermined coordinate mapping relationship to form the corrected coordinates of each pixel, and a corrected image of the image is then generated from those corrected coordinates. Because the predetermined coordinate mapping relationship is the correspondence between a pixel's coordinates in the images before and after distortion correction, and because its relationship parameters include internal parameters that characterize the radial distortion and external parameters that characterize the perspective distortion, it is unnecessary to apply different correction operations for different distortions and correct them one after another: through this coordinate mapping relationship, the coordinates of a pixel in the image to be corrected are mapped directly to the pixel's coordinates in the image with both radial and perspective distortion corrected. While effectively correcting the distortion, this simplifies the correction operation, thereby reducing the damage the correction does to the image information and the computing resources it consumes.
Brief Description of the Drawings
FIG. 1a is a schematic diagram of an application scenario of image capture according to an exemplary embodiment of the present invention;
FIG. 1b is a schematic diagram of correcting a captured image according to an exemplary embodiment of the present invention;
FIG. 2 is a flowchart of an image correction method according to an exemplary embodiment of the present invention;
FIG. 3 is a hardware structure diagram of an image correction device for implementing image correction according to an exemplary embodiment of the present invention;
FIG. 4 is a hardware structure diagram of an imaging device for implementing image correction according to an exemplary embodiment of the present invention;
FIG. 5 is a schematic diagram of the hardware of an image correction system and of the interaction between the hardware according to an exemplary embodiment of the present invention;
FIG. 6 is a hardware structure diagram of a display device for implementing image correction according to an exemplary embodiment of the present invention;
FIG. 7 is a schematic diagram of the hardware of another image correction system and of the interaction between the hardware according to an exemplary embodiment of the present invention;
FIG. 8 is a logic block diagram of an image correction apparatus according to an exemplary embodiment of the present invention.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with some aspects of the invention as detailed in the appended claims.
The terminology used in the present invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. The singular forms "a", "said" and "the" used in the present invention and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present invention to describe various kinds of information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present invention, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
Please refer to FIG. 1a, which is a schematic diagram of an application scenario of image capture according to an exemplary embodiment of the present invention.
The application scenario shown in FIG. 1a includes a writing device, an imaging device, and a fixed display device and/or a mobile display device associated with the imaging device. The imaging device is mounted at a predetermined position and photographs the writing device; it is generally installed in an area from which the writing plane of the writing device can be captured. The captured image, which may be a still image or a frame of a video, carries the writing plane of the writing device. After an image is captured, the imaging device can transmit it to the fixed display device and/or the mobile display device for display. For example, when the "writing content" shown in FIG. 1a has been written in the writing area, the captured image includes that "writing content", and so does the displayed image. In other examples, sheets of paper, pictures and the like carrying data information may also be attached to the writing area to be photographed by the imaging device.
The writing device may be a blackboard, a smart writing board, or a combination of the two. A blackboard is a plane that can be written on repeatedly with a specific writing material such as chalk (for example, with the writing pen shown in FIG. 1a); it can be used for teaching or meetings, and its surface is usually black, dark green, white or beige.
A smart writing board may be a device that can sense the touch of a user's finger or a smart writing pen and generate and display the corresponding text/graphic information. In other examples, the writing device may also be another device with a writing function, which is not limited by the present invention.
The imaging device may be a wide-angle or fisheye imaging device with a wide lens angle, or another type of imaging device, which is not limited by the present invention.
The fixed display device may be a personal computer, an Internet TV, a display wall, etc.; this example uses an LED display for illustration only.
The mobile display device may be a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a tablet computer, a wearable device, etc.; this example uses a smart phone for illustration only.
In other examples, the application scenario may include multiple mobile display devices and/or multiple fixed display devices, which is not limited by the present invention.
In practical applications, when the imaging device photographs the writing area of the writing device, the distortion inherent to the optical lens of its camera module causes radial and/or tangential distortion in the captured image, and the relative position between the imaging device and the photographed object causes perspective distortion. In general, the wider the lens angle of the imaging device, the more obvious the radial distortion in the captured image; and the smaller the angle between the lens of the imaging device and the writing board, the more obvious the perspective distortion.
As shown in FIG. 1b, the original image 101 is captured by an imaging device with a wide-angle or fisheye lens; it exhibits obvious radial and perspective distortion, and its image quality is poor. To improve the image quality, the designers of the present invention could first correct the radial distortion in the original image 101 with a radial distortion correction model to generate the intermediate image 102, and then correct the perspective distortion in the intermediate image 102 with a perspective mapping model to generate the final image 103, in which neither distortion is obvious and whose quality is clearly higher than that of the original image 101. However, the two processes of correcting perspective distortion and correcting radial distortion each cause irreparable damage to the image information; performing the correction step by step accumulates this damage and lowers the quality of the corrected image, and the two correction processes are complicated and consume considerable computing resources, making distortion correction inefficient. The present invention therefore proposes a solution to the poor image quality and low correction efficiency of schemes that correct the radial and perspective distortion in a captured image step by step.
The solution of the present invention takes into account that performing the two processes of perspective-distortion correction and radial-distortion correction step by step is complicated, consumes considerable computing resources and easily accumulates image damage. A coordinate mapping relationship that reflects the correspondence between a pixel's coordinates in the images before and after distortion correction (for example, correction of perspective and radial distortion) can therefore be determined in advance. When an image needs distortion correction, the coordinates corresponding to the coordinates of each pixel in the image are acquired based on this predetermined coordinate mapping relationship to form the corrected coordinates of each pixel, and a corrected image of the image is then generated from those corrected coordinates. Because the predetermined coordinate mapping relationship is the correspondence between a pixel's coordinates in the images before and after distortion correction, and because its relationship parameters include internal parameters that characterize the radial distortion and external parameters that characterize the perspective distortion, there is no need to apply different correction operations for different distortions and correct them one after another: through this coordinate mapping relationship, the coordinates of a pixel in the image to be corrected are mapped directly to the pixel's coordinates in the image with both radial and perspective distortion corrected. While effectively correcting the distortion, this simplifies the correction operation, thereby reducing the damage it does to the image information and the computing resources it consumes. The image correction process of the present invention is described in detail below with reference to the drawings.
Please refer to FIG. 2, a flowchart of an image correction method according to an exemplary embodiment of the present invention. The embodiment can be applied to various electronic devices with image processing capability and includes the following steps S201–S203:
Step S201: acquire an image to be corrected.
Step S202: based on a predetermined coordinate mapping relationship, acquire the coordinates corresponding to the coordinates of each pixel in the image to form the corrected coordinates of each pixel; the coordinate mapping relationship is a direct correspondence between the coordinates of a pixel in the images before and after distortion correction, and its relationship parameters include the internal parameters and external parameters of the camera module that captured the image.
Step S203: generate a corrected image of the image according to the corrected coordinates of each pixel.
In this embodiment of the present invention, the internal parameters are determined by the inherent properties of the camera module; they are the optical and geometric parameters inherent to the camera module and may include the focal length, the principal point coordinates, and the radial and/or tangential distortion coefficients, and in other examples may also include the image sensor format. The external parameters may include rotation and translation parameters, which represent the transformation from the three-dimensional real-world coordinate system to the three-dimensional camera coordinate system. The external parameters are determined by the relative position between the camera module and the photographed object.
The electronic device may be a device capable of implementing the image correction function, such as an imaging device or a display device. The image correction method of the present invention may be executed by any of the aforementioned electronic devices.
When the electronic device is an imaging device, the camera module is the camera of that imaging device and the image is captured by that camera. Different camera viewing angles give different degrees of radial and/or tangential distortion in the captured image, and different angles of incidence of the camera (determined by the relative position between the imaging device and the photographed object) give different degrees of perspective distortion.
When the electronic device is a predetermined display device, such as the mobile display device or the fixed display device shown in FIG. 1a, embodiments of the present invention may acquire the image to be corrected by the following operations:
notifying the imaging device associated with the display device to capture an image;
receiving the image sent by the imaging device as the image to be corrected.
In other examples, the predetermined display device may pre-store images captured by the associated imaging device; when a pre-stored image needs to be displayed, it is retrieved and corrected as the image to be corrected.
In practical applications, considering the speed of distortion correction, for an imaging device whose photographed object and mounting position are fixed, the designers of the present invention can predetermine the coordinate mapping relationship, compute the corresponding coordinates directly from the coordinates of each pixel in an arbitrarily captured image according to that relationship, build a mapping table from the pixel coordinates and the computed coordinates, and store the mapping table in association with the coordinate mapping relationship. After the image to be corrected is acquired, the coordinates corresponding to the coordinates of each pixel in the image can then be acquired quickly, based on the predetermined coordinate mapping relationship, by the following operations:
obtaining the mapping table that reflects the coordinate mapping relationship;
looking up, from the mapping table, the coordinates corresponding to the coordinates of each pixel in the image.
The coordinates mentioned here are the position parameters of a single pixel in the image, or of certain pixels in a predetermined set of pixels. In general, an image coordinate system can be established in advance and the projection of a pixel in that coordinate system used as the pixel's coordinates in the image; in other examples the pixel coordinates may be expressed in other ways, which is not limited by this embodiment of the present invention.
In the process of predetermining the coordinate mapping relationship, for an imaging device whose photographed object and mounting position are fixed, multiple images may be captured at random and the coordinate mapping relationship fitted by function fitting on the coordinates of the pixels in each image before and after distortion correction. Alternatively, the radial distortion correction model and the perspective mapping model may be combined to derive the coordinate mapping relationship mentioned above; the derivation may proceed as in operations S1–S4 below:
S1: designate part of the objects in a predetermined real scene as target points, and control the camera module to photograph the real scene from a predetermined position.
The predetermined real scene may be as shown in FIG. 1a, and the target points may be the areas of the writing area where "writing content" is present. The camera module is the camera of the imaging device.
S2: acquire the coordinate correspondence of a target point between the real world and the captured image; the relationship parameters of this correspondence include the internal parameters.
S3: acquire the coordinate correspondence of the target point between the captured image with the radial distortion corrected and the captured image with both the radial and the perspective distortion corrected; the relationship parameters of this correspondence include the external parameters.
S4: when the target point's real-world coordinates coincide with its coordinates in the captured image with both radial and perspective distortion corrected, obtain the coordinate mapping relationship from the acquired coordinate correspondences.
In one example, the coordinate correspondence of the target point between the real world and the captured image can be obtained by the following operations:
based on the coordinate transformation between the world coordinate system of the real scene and the camera coordinate system of the camera module, determine the correspondence between the target point's coordinates in the world coordinate system and in the camera coordinate system as a first correspondence;
determine the correspondence between the target point's coordinates in the camera coordinate system and its pinhole projection coordinates as a second correspondence;
determine the correspondence between the camera module's angle of incidence and the target point's pinhole projection coordinates as a third correspondence;
from the first, second and third correspondences, calculate the correspondence between the camera module's angle of incidence and the target point's coordinates in the world coordinate system as a fourth correspondence;
determine the distance from the target point in the captured image to the image distortion center;
according to the distortion correction model for radial distortion, determine the correspondence between the angle of incidence and that distance as a fifth correspondence;
based on the fourth and fifth correspondences, obtain the coordinate correspondence of the target point between the real world and the captured image.
在另一个例子中,可以通过以下操作获取该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系:
根据校正径向畸变的畸变校正模型,校正所述拍摄图像中的径向畸变,并确定该目标点在校正径向畸变后的图像中的坐标。
提取校正径向畸变后的图像中的梯形区域的边界坐标。
基于提取的边界坐标以及预确定的矩形区域的边缘坐标，获取所述梯形区域的边界坐标与所述矩形区域的坐标间的变换矩阵；所述变换矩阵的矩阵元素由所述拍摄模块的外参数构成。其中，所述矩形区域的尺寸比例与所述梯形区域在现实场景中的尺寸比例一致。
基于所述变换矩阵,获得该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系。
本例子中，梯形区域在现实场景中的尺寸比例(如长宽比例)可以是图1a所示的书写区域的尺寸比例(如长宽比例)，所述矩形区域的尺寸比例(如长宽比例)与所述书写区域的尺寸比例(如长宽比例)一致，提取这个梯形区域可以有多种方法。例如，可在书写区域的四个角的位置安装红外灯，利用拍摄图像检测出这四个红外灯的位置即完成梯形区域的提取。再如，可通过边缘检测等算法，直接从图像提取书写区域的边缘构成梯形区域。当然，还可以以交互的方式让用户指定书写区域的四个关键点的坐标。
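以边缘检测方式提取梯形区域为例，以下给出一段示意性的Python代码（假设使用OpenCV 4.x，Canny阈值、多边形逼近精度等参数均为示例性假设，实际可按场景调整，并非对本发明的限定）：

import cv2
import numpy as np

def extract_quad_corners(image_gray):
    # 边缘检测后取面积最大的轮廓，并用多边形逼近得到四个角点
    edges = cv2.Canny(image_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    if len(approx) != 4:
        raise ValueError('未提取到四边形区域，可改用红外灯定位或由用户指定角点')
    return approx.reshape(4, 2).astype(np.float32)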
在其他例子中,还可以提取校正径向畸变后的图像中的其他形状的区域的边界坐标,例如椭圆形区域的边界坐标;预确定除矩形区域外的其他形状的区域,例如圆形区域的边界坐标,本发明对此不做限制。
实际应用中，径向畸变校正模型可以为公式(1)至公式(5)分别所示的等距投影模型、等立体角投影模型、正交投影模型、体视投影模型、多项式近似模型，世界坐标系与相机坐标系之间的坐标转换关系如公式(6)所示，针孔投影坐标如公式(7)所示，摄像模块的入射角与针孔投影坐标的关系如公式(8)和(9)所示，目标点在拍摄的图像内的坐标如公式(10)和(11)所示，含有所述变换矩阵的透视映射模型如公式(12)所示，该目标点在校正径向畸变后的图像和校正径向畸变与透视畸变后的图像中的坐标对应关系，如公式(13)所示，该目标点在现实世界与拍摄图像中的坐标对应关系如公式(14)和(15)所示，坐标映射关系如公式(16)和公式(17)所示，通过各公式(1)至(15)可以推导出公式(16)和公式(17)，以下介绍具体推导过程：
θd=fθ          (1)
其中,θd表示图像中的点到畸变中心的距离,f是相机的焦距,θ是入射光线与相机光轴之间的夹角,即入射角。
θd=2f sin(θ/2)      (2)
θd=f sinθ      (3)
θd=2f tan(θ/2)      (4)
θd=k0θ+k1θ³+k2θ⁵+k3θ⁷+...     (5)
其中,k0至kn是多项式系数,可以通过拟合或标定的方式获得。
(x′, y′, z′)ᵀ=R·(x, y, z)ᵀ+T          (6)
其中，目标点P在现实世界中的坐标为(x,y,z)，在相机坐标系中的坐标为(x′,y′,z′)，相机坐标系相对于世界坐标系的旋转矩阵为R，偏移向量为T。
x″=x′/z′，y″=y′/z′          (7)
其中,目标点P的针孔投影坐标为(x″,y″)。
r²=x″²+y″²         (8)
θ=arctan(r)          (9)
由公式(9)中的θ和径向畸变校正模型,可以计算出图像中的点到畸变中心的距离θd,从而最终算出目标点P在拍摄的图像中的坐标(u,v):
u=fx·(θd/r)·x″+cx          (10)
v=fy·(θd/r)·y″+cy          (11)
其中，fx和fy分别为x方向和y方向上的焦距，cx和cy为主点(principal point)坐标，通常位于图像中心附近。这些变量以及径向畸变校正模型的参数为摄像模块的内参数，均可通过相机标定获得。在其他例子中也可以通过拟合或其他方式获得，本发明对此不做限制。
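作为公式(6)至(11)的示意性实现（仅为便于理解的草稿：径向畸变采用公式(5)的多项式近似模型，公式(10)(11)按常见的鱼眼投影形式书写，函数名与参数均为示例性假设，并非对本发明的限定），可参考如下Python代码：

import numpy as np

def project_world_point(P_world, R, T, fx, fy, cx, cy, dist_coeffs):
    # 公式(6)：世界坐标系 -> 相机坐标系
    Pc = R @ np.asarray(P_world, dtype=float).reshape(3, 1) + T
    x1, y1, z1 = Pc.ravel()
    # 公式(7)：针孔投影坐标(x″, y″)
    x2, y2 = x1 / z1, y1 / z1
    # 公式(8)(9)：入射角θ
    r = np.hypot(x2, y2)
    theta = np.arctan(r)
    # 公式(5)：多项式近似模型，计算到畸变中心的距离θd
    k0, k1, k2, k3 = dist_coeffs
    theta_d = k0 * theta + k1 * theta ** 3 + k2 * theta ** 5 + k3 * theta ** 7
    # 公式(10)(11)：拍摄图像中的像素坐标(u, v)
    scale = theta_d / r if r > 1e-8 else k0
    u = fx * scale * x2 + cx
    v = fy * scale * y2 + cy
    return u, v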
在确定好目标点P在拍摄的图像中的坐标后，可以对拍摄的图像校正径向畸变，然后对校正径向畸变后的图像校正透视畸变，基于两步校正后的图像，可以获得该目标点在校正径向畸变后的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系。
在某个例子中，校正径向畸变后的图像如图1b所示的中间图像102，校正径向畸变和透视畸变后的图像如图1b所示的最终图像103，书写设备可以是黑板，最终图像103可以是标准黑板图像，由于标准黑板图像的畸变不明显，黑板区域(书写区域)的边缘均恢复为直线，因此黑板区域可由四边形描述。提取这个四边形(梯形)可以有多种方法。例如，可在黑板四个角的位置安装红外灯，利用拍摄图像检测出这四个红外灯的位置即完成四边形的提取。再如，可通过边缘检测等算法，直接从图像提取黑板的边缘构成四边形。当然，还可以以交互的方式让用户指定黑板区域的四个关键点的坐标。得到这个四边形后，通过推导这个四边形到标准黑板区域所构成的矩形的变换矩阵M，即可得到透视映射模型(12)：
(u″, v″, 1)ᵀ∝M·(u′, v′, 1)ᵀ          (12)
其中,(u″,v″)是目标点P在标准黑板图像中的坐标,(u′,v′)是目标点P在校正径向畸变后的图像中的坐标。M的组成部分即为摄像模块的外参数,本发明方案对于黑板区域的提取方法不做限制,对M如何获得亦不做限制。
由公式(12)可以得出目标点P在校正径向畸变后的图像中的坐标的表达式(13):
(u′, v′, 1)ᵀ∝M⁻¹·(u″, v″, 1)ᵀ          (13)
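作为公式(12)和(13)的示意性实现（假设使用OpenCV的getPerspectiveTransform求取变换矩阵M，其中的角点坐标均为示例性假设值，本发明对M的获取方式不做限制）：

import cv2
import numpy as np

# 校正径向畸变后图像中提取到的四边形(梯形)角点，按左上、右上、右下、左下排列(示例值)
quad_pts = np.float32([[120, 80], [980, 60], [1010, 640], [90, 660]])
# 标准黑板图像(矩形区域)的四个角点，尺寸比例与真实黑板区域一致(示例值)
rect_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

# 公式(12)：四边形到矩形的变换矩阵M，其矩阵元素即外参数
M = cv2.getPerspectiveTransform(quad_pts, rect_pts)
M_inv = np.linalg.inv(M)

def corrected_to_radial(u2, v2):
    # 公式(13)：由校正后的坐标(u″, v″)反求校正径向畸变后图像中的坐标(u′, v′)
    p = M_inv @ np.array([u2, v2, 1.0])
    return p[0] / p[2], p[1] / p[2]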
然后,联立公式(1)-(11)可导出目标点P在现实世界的坐标(x,y,z)和在拍摄的图像中的坐标(u,v)的坐标对应关系:
u=gu(x,y,z)               (14)
v=gv(x,y,z)     (15)
在某些例子中，所述现实世界的世界坐标系可以与校正透视畸变后的拍摄图像的图像坐标系一致，此时，坐标(u′,v′,1)可以看作是现实世界中P点的坐标(x,y,z)，将之代入公式(14)和(15)中，并结合公式(13)，可导出目标点P在校正径向畸变和透视畸变后的图像中的坐标(u″,v″)到拍摄图像中的坐标(u,v)的映射关系：
u=hu(u″,v″)           (16)
v=hv(u″,v″)          (17)
由上述实施例和公式可知：摄像设备的安装位置、姿态以及对拍摄对象(如书写设备)的拍摄角度固定后，便可以确定出以上提到的坐标映射关系(单一的直接映射关系)。换言之，对于校正图像中的每一个像素的坐标(u″,v″)，其对应的原始的拍摄图像上的像素点的坐标为(hu(u″,v″),hv(u″,v″))。因此，本发明可以把校正图像中的各个像素的坐标，以及其对应的原始的拍摄图像中的像素点的坐标预先算好，得到一张映射表。这样，处理待校正的图像时就不再需要计算hu(u″,v″)和hv(u″,v″)，直接查表即可实现快速映射。
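映射表的离线生成过程可以用如下示意性代码表示（该代码仅为一种可能的组合方式：先按公式(13)做透视逆映射，再按径向畸变模型投影回原始图像；其中图像尺寸、函数与变量名均为示例性假设，并非对本发明的限定）：

import numpy as np

def build_coord_map(width, height, M_inv, fx, fy, cx, cy, dist_coeffs):
    # 对校正图像中的每个像素坐标(u″, v″)，预先算出其在原始拍摄图像中的坐标
    map_u = np.zeros((height, width), dtype=np.float32)
    map_v = np.zeros((height, width), dtype=np.float32)
    k0, k1, k2, k3 = dist_coeffs
    for v2 in range(height):
        for u2 in range(width):
            # 公式(13)：透视逆映射，得到校正径向畸变后图像中的坐标(u′, v′)
            p = M_inv @ np.array([u2, v2, 1.0])
            u1, v1 = p[0] / p[2], p[1] / p[2]
            # 由(u′, v′)还原针孔投影坐标，再按径向畸变模型求出原始图像坐标(u, v)
            x2, y2 = (u1 - cx) / fx, (v1 - cy) / fy
            r = np.hypot(x2, y2)
            theta = np.arctan(r)
            theta_d = k0 * theta + k1 * theta ** 3 + k2 * theta ** 5 + k3 * theta ** 7
            scale = theta_d / r if r > 1e-8 else k0
            map_u[v2, u2] = fx * scale * x2 + cx
            map_v[v2, u2] = fy * scale * y2 + cy
    return map_u, map_v

# 生成后可保存为映射表文件，供在线校正时直接查表(对应前文coord_map.npz的示例)
# map_u, map_v = build_coord_map(1280, 720, M_inv, fx, fy, cx, cy, (1.0, -0.05, 0.01, -0.002))
# np.savez('coord_map.npz', map_u=map_u, map_v=map_v)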
在根据以上提到的坐标映射关系确定好各像素点的校正坐标后,可以读取各像素点在待校正的图像内的像素特征,然后根据对应的校正坐标,在图像编辑界面的对应位置添加像素点,并设置该像素点的像素特征与待校正的图像内的对应像素点的像素特征一致,生成校正图像。在其他例子中,还可以采取其他技术根据各像素点的校正坐标,生成校正图像,本发明对此不做限制。
某些例子中，各像素点的校正坐标并非整数，如果直接将校正坐标取整后生成校正图像，校正图像内相邻像素点的像素特征差距较大，会造成图像失真。为了解决该问题，本发明方案可以通过像素插值法处理非整数的校正坐标。具体处理时，如果像素点A的校正坐标不是整数，可以求取像素点A周围的像素点的坐标的平均值，将求取的平均值作为像素点A的校正坐标。
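像素插值的一种常见实现是双线性插值：用非整数坐标周围四个整数坐标像素的加权平均作为该处的像素值。以下为示意性的Python草稿（仅作说明，并非对本发明具体插值方式的限定）：

import numpy as np

def bilinear_sample(image, x, y):
    # image为numpy数组形式的图像，(x, y)为非整数坐标
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, image.shape[1] - 1), min(y0 + 1, image.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * image[y0, x0] + dx * image[y0, x1]
    bottom = (1 - dx) * image[y1, x0] + dx * image[y1, x1]
    return (1 - dy) * top + dy * bottom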
此外,为了进一步提高校正图像的图像质量,还可以对校正图像进行图像增强、文字/符号/公式检测与识别。
在其他实施例中，当摄像模块的光学透镜固有的透视失真造成的畸变不同于径向畸变时，例如造成切向畸变，还可以采用其他的相关畸变校正模型替代径向畸变校正模型，对相关畸变校正模型和透视映射模型进行联合，推导出以上提到的坐标映射关系；也可以对径向畸变校正模型和透视映射模型进行联合，推导出以上提到的坐标映射关系。具体推导过程可以参阅以下操作：
将预定的现实场景中的部分对象定为目标点,并控制所述摄像模块在预定位置拍摄所述现实场景。
其中,预定的现实场景可以如图1a所示,目标点可以是书写区域中存在“书写内容”的各区域。所述摄像模块为摄像设备的摄像头。
获取该目标点在现实世界与拍摄图像中的坐标对应关系;该坐标对应关系的关系参数包括所述内参数。
获取该目标点在校正相关畸变的拍摄图像中与在校正相关畸变和透视畸变后的拍摄图像中的坐标对应关系;该坐标对应关系的关系参数包括所述外参数。
当该目标点在所述现实世界的坐标与在校正相关畸变和透视畸变后的拍摄图像的坐标一致时，基于获取的各坐标对应关系，获得所述坐标映射关系。
其中,相关畸变可以单指切向畸变,也可以指径向畸变和/或切向畸变。
以下结合图像校正操作的不同执行主体,对本发明实施例进行详细描述,其中假设摄像设备为广角摄像设备,预定的显示设备为固定显示设备和/或移动显示设备。
本发明的图像校正装置的实施例可以通过软件实现，也可以通过硬件或者软硬件结合的方式实现。以软件实现为例，作为一个逻辑意义上的装置，是通过其所在设备的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言，如图3所示，为本发明图像校正装置331所在图像校正设备的一种硬件结构图，除了图3所示的处理器310、内存330、网络接口320、以及非易失性存储器340之外，实施例中装置所在的设备通常根据该设备的实际功能，还可以包括其他硬件，对此不再赘述。图像校正设备的存储器可以存储处理器可执行指令；处理器可以耦合存储器，用于读取所述存储器存储的程序指令，并作为响应，执行如下操作：获取待校正的图像；基于预确定的坐标映射关系，获取与各像素点在所述图像中的坐标所对应的坐标，构成各像素点的校正坐标；所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系，其关系参数包括拍摄所述图像的摄像模块的内参数和外参数；根据各像素点的校正坐标，生成所述图像的校正图像。在其他实施例中，处理器所执行的操作可以参考上文方法实施例中相关的描述，在此不予赘述。
某些场景下，图像校正设备可以具体为摄像设备，从硬件层面而言，如图4所示，为本发明图像校正装置431所在摄像设备的一种硬件结构图，除了图4所示的处理器410、内存430、网络接口420、以及非易失性存储器440之外，实施例中装置所在的设备通常根据该设备的实际功能，还可以包括摄像头450和其他硬件，对此不再赘述。摄像设备的存储器440可以存储处理器410可执行指令；处理器410可以耦合存储器440，用于读取存储器440存储的程序指令到内存430，并作为响应，执行如下操作：控制所述摄像头进行拍摄；基于预确定的坐标映射关系，获取与各像素点在所拍摄的图像中的坐标所对应的坐标，构成各像素点的校正坐标；所述坐标映射关系为像素点在畸变校正前后的图像内的坐标对应关系，其关系参数包括所述摄像头的内参数和外参数；根据各像素点的校正坐标，生成所拍摄的图像的校正图像。
在图像校正设备具体为摄像设备时,结合图1a所示的应用场景,本发明实施例的图像校正系统如图5所示,可以包括书写设备以及安装在预定位置的摄像设备,所述摄像设备可以包括如图4所示的摄像头450、内存430、网络接口420、存储器440、处理器410及存储在存储器440上并可在处理器410上运行的计算机程序。摄像设备的存储器440可以存储处理器410可执行指令;处理器410可以耦合存储器440,用于读取存储器440存储的程序指令到内存430,并作为响应,执行如下操作:控制所述摄像头对所述书写设备进行拍摄;基于预确定的坐标映射关系,获取与各像素点在所拍摄的图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像内的坐标对应关系,其关系参数包括所述摄像头的内参数和外参数;根据各像素点的校正坐标,生成所拍摄的图像的校正图像。
实际应用时,摄像设备先拍摄书写设备的书写区域,对拍摄的图像进行校正,具体校正时,由处理器410用于读取存储器440存储的程序指令到内存430,并作为响应,执行如上所述的操作生成校正图像。
在某些例子中，需要将校正图像呈现给用户，本发明实施例的图像校正系统还可以进一步包括与摄像设备关联的显示设备，摄像设备和显示设备可以通过网络进行连接，显示设备可以包括如图5所示的固定显示设备和/或移动显示设备。在摄像设备生成校正图像后，可以通过网络接口420将校正图像发送给固定显示设备和/或移动显示设备进行显示。
在其他例子中,摄像设备可以安装在书写设备上的预定位置,两者组成一体化书写机,这里提到的预定位置可以是书写设备的外壳的中间部位。其中,摄像设备可以包括如图4所示的摄像头450、内存430、网络接口420、存储器440、处理器410及存储在存储器440上并可在处理器410上运行的计算机程序。摄像设备的存储器440可以存储处理器410可执行指令;处理器410可以耦合存储器440,用于读取存储器440存储的程序指令到内存430,并作为响应,执行如下操作:控制所述摄像头对所述书写设备进行拍摄;基于预确定的坐标映射关系,获取与各像素点在所拍摄的图像中的坐标所对应的坐标,构成各像素点的校正坐标; 所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括所述摄像头的内参数和外参数;根据各像素点的校正坐标,生成所拍摄的图像的校正图像。
书写设备可以根据实际的应用需要，设置为不同类型的设备。例如，既需要传统的书写板又需要智能书写板的场景，所述书写设备可以包括黑板和智能书写板，所述预定位置为所述黑板的边框的指定位置。黑板和智能书写板可以组合在一起，构成书写设备。智能书写板可以用来感应用户手指或智能书写笔的触击，生成并显示对应的文本/图形信息，还可以兼具显示校正图像的功能。
再比如,既需要传统的书写板又需要显示屏的场景,所述书写设备可以包括黑板和显示器,所述预定位置为所述黑板的边框的指定位置,所述显示器可以用于显示所述校正图像。显示器和黑板可以组合在一起,构成书写设备。
在其他场景中,图像校正设备可以具体为显示设备,从硬件层面而言,如图6所示,为本发明图像校正装置631所在显示设备的一种硬件结构图,除了图6所示的处理器610、内存630、网络接口620、以及非易失性存储器640之外,实施例中装置所在的设备通常根据该设备的实际功能,还可以包括显示单元650和其他硬件,对此不再赘述。显示设备的存储器640可以存储处理器610可执行指令;处理器610可以耦合存储器640,用于读取存储器640存储的程序指令到内存630,并作为响应,执行如下操作:获取待校正的图像;基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数;根据各像素点的校正坐标,生成所述图像的校正图像;控制所述显示单元显示所述校正图像。
当图像校正设备具体为显示设备时，结合图1a所示的应用场景，本发明实施例的图像校正系统如图7所示，可以包括书写设备、显示设备以及所述显示设备关联的摄像设备，显示设备可以包括固定显示设备和/或移动显示设备，所述摄像设备安装在预定位置，用于对所述书写设备进行拍摄，拍摄好图像后，摄像设备可以将拍摄的图像分别传输到固定显示设备和移动显示设备。
固定显示设备和移动显示设备可以分别包括如图6所示的处理器610、内存630、网络接口620以及非易失性存储器640，除此之外，实施例中装置所在的设备通常根据该设备的实际功能，还可以包括显示单元650和其他硬件，对此不再赘述。固定显示设备或移动显示设备的存储器640可以存储处理器610可执行指令；处理器610可以耦合存储器640，用于读取存储器640存储的程序指令到内存630，并作为响应，执行如下操作：控制所述网络接口获取所述摄像设备所拍摄的图像；基于预确定的坐标映射关系，获取与各像素点在所述图像中的坐标所对应的坐标，构成各像素点的校正坐标；所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系，其关系参数包括拍摄所述图像的摄像模块的内参数和外参数；根据各像素点的校正坐标，生成所述图像的校正图像；控制所述显示单元显示所述校正图像。
在其他例子中，本发明实施例的图像校正系统中的书写设备和摄像设备可以组成一体化的书写机，摄像设备安装在书写设备上的预定位置。
此外，本发明实施例的图像校正系统中的书写设备、显示设备和摄像设备可以组成一体化的书写机，摄像设备安装在书写设备上的预定位置，书写设备和显示设备可以组合在一起。
与前述方法、系统和设备的实施例相对应,本发明还提供了装置的实施例。
参见图8,图8是本发明一示例性实施例示出的图像校正装置的逻辑框图,该装置可以包括:图像获取模块810、坐标映射模块820和图像校正模块830。
其中,图像获取模块810,用于获取待校正的图像。
坐标映射模块820,用于基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数。
图像校正模块830,用于根据各像素点的校正坐标,生成所述图像的校正图像。
一些例子中,本发明实施例的图像校正装置装设于预定的摄像设备内,所述摄像模块为所述摄像设备的摄像头,所述图像由所述摄像头拍摄所得。
另一些例子中,本发明实施例的图像校正装置装设于预定的显示设备内,图像获取模块810可以包括:
摄像通知模块,用于通知与所述显示设备关联的摄像设备拍摄图像。
图像接收模块,用于接收所述摄像设备发送的图像为待校正的图像。
另一些例子中,坐标映射模块820可以包括:
映射表获取模块,用于获取能反映所述坐标映射关系的映射表。
坐标查找模块，用于从所述映射表中查找与各像素点在所述图像中的坐标所对应的坐标。
另一些例子中,所述坐标映射关系的预确定模块用于:
将预定的现实场景中的部分对象定为目标点。
获取该目标点在现实世界与拍摄图像中的坐标对应关系;该坐标对应关系的关系参数包括所述摄像模块的内参数。
获取该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系。
当该目标点在所述现实世界的坐标与在校正径向畸变和透视畸变后的拍摄图像的坐标一致时，基于获取的各坐标对应关系，生成所述坐标映射关系。
上述装置中各个单元(或模块)的功能和作用的实现过程，具体详见上述方法实施例中对应步骤的实现过程，在此不再赘述。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元或模块可以是或者也可以不是物理上分开的,作为单元或模块显示的部件可以是或者也可以不是物理单元或模块,即可以位于一个地方,或者也可以分布到多个网络单元或模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本发明方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述仅为本发明的较佳实施例而已,并不用以限制本发明,凡在本发明的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本发明保护的范围之内。

Claims (19)

  1. 一种图像校正方法,其特征在于,包括步骤:
    获取待校正的图像;
    基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数;
    根据各像素点的校正坐标,生成所述图像的校正图像。
  2. 根据权利要求1所述的方法,其特征在于,运行于预定的摄像设备,所述摄像模块为所述摄像设备的摄像头,所述图像由所述摄像头拍摄所得。
  3. 根据权利要求1所述的方法,其特征在于,运行于预定的显示设备,所述获取待校正的图像,包括:
    通知与所述显示设备关联的摄像设备拍摄图像;
    接收所述摄像设备发送的图像为待校正的图像。
  4. 根据权利要求1所述的方法,其特征在于,所述基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,包括:
    获取能反映所述坐标映射关系的映射表;
    从所述映射表中查找与各像素点在所述图像中的坐标所对应的坐标。
  5. 根据权利要求1所述的方法,其特征在于,所述内参数包括焦距、主点坐标以及径向畸变系数和/或切向畸变系数。
  6. 根据权利要求1所述的方法,其特征在于,所述外参数包括旋转参数和平移参数。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述坐标映射关系的预确定的步骤包括:
    将预定的现实场景中的部分对象定为目标点,并控制所述摄像模块在预定位置拍摄所述现实场景;
    获取该目标点在现实世界与拍摄图像中的坐标对应关系;该坐标对应关系的关系参数包括所述内参数;
    获取该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系;该坐标对应关系的关系参数包括所述外参数;
    当该目标点在所述现实世界的坐标与在校正径向畸变和透视畸变后的拍摄图像的坐标一致时，基于获取的各坐标对应关系，获得所述坐标映射关系。
  8. 根据权利要求7所述的方法，其特征在于，所述获取该目标点在现实世界与拍摄图像中的坐标对应关系，包括：
    基于所述现实场景的世界坐标系与所述摄像模块的相机坐标系之间的坐标转换关系,确定该目标点在所述世界坐标系与在所述相机坐标系的坐标间的对应关系为第一对应关系;
    确定该目标点在所述相机坐标系中的坐标与其针孔投影坐标间的对应关系为第二对应关系;
    确定所述拍摄模块的入射角与该目标点的针孔投影坐标间的对应关系为第三对应关系;
    根据所确定的第一对应关系、第二对应关系和第三对应关系,计算出所述拍摄模块的入射角与该目标点在所述世界坐标系中的坐标间的对应关系为第四对应关系;
    确定该目标点在所述拍摄图像中到图像畸变中心的距离;
    根据校正径向畸变的畸变校正模型,确定所述入射角与所述距离间的对应关系为第五对应关系;
    基于所述第四对应关系和所述第五对应关系,得到该目标点在现实世界与拍摄图像中的坐标对应关系。
  9. 根据权利要求8所述的方法,其特征在于,所述获取该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系,包括:
    根据校正径向畸变的畸变校正模型,校正所述拍摄图像中的径向畸变,并确定该目标点在校正径向畸变后的图像中的坐标;
    提取校正径向畸变后的图像中的梯形区域的边界坐标;
    基于提取的边界坐标以及预确定的矩形区域的边缘坐标,获取所述梯形区域的边界坐标与所述矩形区域的坐标间的变换矩阵;所述变换矩阵的矩阵元素由所述拍摄模块的外参数构成;
    基于所述变换矩阵,获得该目标点在校正径向畸变的拍摄图像中与在校正径向畸变和透视畸变后的拍摄图像中的坐标对应关系。
  10. 一种图像校正装置,其特征在于,包括:
    图像获取模块,用于获取待校正的图像;
    坐标映射模块,用于基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数;
    图像校正模块,用于根据各像素点的校正坐标,生成所述图像的校正图像。
  11. 一种图像校正设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现以下步骤:
    获取待校正的图像;
    基于预确定的坐标映射关系，获取与各像素点在所述图像中的坐标所对应的坐标，构成各像素点的校正坐标；所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系，其关系参数包括拍摄所述图像的摄像模块的内参数和外参数；
    根据各像素点的校正坐标,生成所述图像的校正图像。
  12. 一种摄像设备,包括摄像头、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现以下步骤:
    控制所述摄像头进行拍摄;
    基于预确定的坐标映射关系,获取与各像素点在所拍摄的图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像内的坐标对应关系,其关系参数包括所述摄像头的内参数和外参数;
    根据各像素点的校正坐标,生成所拍摄的图像的校正图像。
  13. 一种一体化书写机,其特征在于,包括书写设备以及安装在所述书写设备上的预定位置的摄像设备,所述摄像设备包括摄像头、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现以下步骤:
    控制所述摄像头对所述书写设备进行拍摄;
    基于预确定的坐标映射关系,获取与各像素点在所拍摄的图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括所述摄像头的内参数和外参数;
    根据各像素点的校正坐标,生成所拍摄的图像的校正图像。
  14. 根据权利要求13所述的一体化书写机,其特征在于,所述书写设备包括黑板和智能书写板,所述预定位置为所述黑板的边框的指定位置。
  15. 根据权利要求13所述的一体化书写机,其特征在于,所述书写设备包括黑板和显示器,所述预定位置为所述黑板的边框的指定位置。
  16. 一种图像校正系统,其特征在于,包括书写设备以及摄像设备,所述摄像设备安装在预定位置,包括摄像头、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现以下步骤:
    控制所述摄像头对所述书写设备进行拍摄;
    基于预确定的坐标映射关系,获取与各像素点在所拍摄的图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像内的坐标对应关系,其关系参数包括所述摄像头的内参数和外参数;
    根据各像素点的校正坐标,生成所拍摄的图像的校正图像。
  17. 根据权利要求16所述的系统,其特征在于,还包括显示设备,所述显示设备与所述摄像设备关联,所述处理器执行所述程序时还能实现以下步骤:
    将所述校正图像发送到所述显示设备进行显示。
  18. 一种显示设备,包括显示单元、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现以下步骤:
    获取待校正的图像;
    基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数;
    根据各像素点的校正坐标,生成所述图像的校正图像;
    控制所述显示单元显示所述校正图像。
  19. 一种图像校正系统,其特征在于,包括书写设备、显示设备以及所述显示设备关联的摄像设备,所述摄像设备安装在预定位置,用于对所述书写设备进行拍摄,所述显示设备包括网络接口、显示单元、存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现以下步骤:
    控制所述网络接口获取所述摄像设备所拍摄的图像;
    基于预确定的坐标映射关系,获取与各像素点在所述图像中的坐标所对应的坐标,构成各像素点的校正坐标;所述坐标映射关系为像素点在畸变校正前后的图像中的坐标的直接对应关系,其关系参数包括拍摄所述图像的摄像模块的内参数和外参数;
    根据各像素点的校正坐标,生成所述图像的校正图像;
    控制所述显示单元显示所述校正图像。
PCT/CN2017/104351 2017-05-26 2017-09-29 图像校正方法、装置、设备、系统及摄像设备和显示设备 WO2018214365A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710385587.6 2017-05-26
CN201710385587.6A CN107424126A (zh) 2017-05-26 2017-05-26 图像校正方法、装置、设备、系统及摄像设备和显示设备

Publications (1)

Publication Number Publication Date
WO2018214365A1 true WO2018214365A1 (zh) 2018-11-29

Family

ID=60429337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104351 WO2018214365A1 (zh) 2017-05-26 2017-09-29 图像校正方法、装置、设备、系统及摄像设备和显示设备

Country Status (2)

Country Link
CN (1) CN107424126A (zh)
WO (1) WO2018214365A1 (zh)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833893A (zh) * 2018-05-31 2018-11-16 北京邮电大学 一种基于光场显示的3d图像校正方法
WO2020014881A1 (zh) * 2018-07-17 2020-01-23 华为技术有限公司 一种图像校正方法和终端
CN109146781A (zh) * 2018-07-20 2019-01-04 深圳市创客工场科技有限公司 激光切割中的图像校正方法及装置、电子设备
CN110874135B (zh) * 2018-09-03 2021-12-21 广东虚拟现实科技有限公司 光学畸变的校正方法、装置、终端设备及存储介质
US11308592B2 (en) * 2018-10-04 2022-04-19 Canon Kabushiki Kaisha Image processing method, image processing apparatus, imaging apparatus, and storage medium, that correct a captured image using a neutral network
WO2020097851A1 (zh) * 2018-11-15 2020-05-22 深圳市大疆创新科技有限公司 一种图像处理方法、控制终端及存储介质
CN109726697B (zh) * 2019-01-04 2021-07-20 北京灵优智学科技有限公司 融合av视频通讯与ai实物识别的在线视频系统及方法
CN109785265B (zh) * 2019-01-16 2022-11-11 西安全志科技有限公司 畸变矫正图像处理方法及图像处理装置
CN110020656B (zh) * 2019-01-30 2023-06-23 创新先进技术有限公司 图像的校正方法、装置及设备
CN109799073B (zh) * 2019-02-13 2021-10-22 京东方科技集团股份有限公司 一种光学畸变测量装置及方法、图像处理系统、电子设备和显示设备
CN111625210B (zh) * 2019-02-27 2023-08-04 杭州海康威视系统技术有限公司 一种大屏控制方法、装置及设备
CN109978997A (zh) * 2019-04-03 2019-07-05 广东电网有限责任公司 一种基于倾斜影像的输电线路三维建模方法和系统
CN110072045B (zh) * 2019-05-30 2021-11-09 Oppo广东移动通信有限公司 镜头、摄像头及电子设备
CN110209873A (zh) * 2019-06-04 2019-09-06 北京梦想加信息技术有限公司 白板板书记录方法、设备、系统和存储介质
CN110276734B (zh) * 2019-06-24 2021-03-23 Oppo广东移动通信有限公司 图像畸变校正方法和装置
CN112200731A (zh) * 2019-07-08 2021-01-08 上海隽珑信息技术有限公司 一种图像像素的校正方法
CN110675349B (zh) * 2019-09-30 2022-11-29 华中科技大学 内窥镜成像方法及装置
CN110660034B (zh) * 2019-10-08 2023-03-31 北京迈格威科技有限公司 图像校正方法、装置及电子设备
CN112686959B (zh) * 2019-10-18 2024-06-11 菜鸟智能物流控股有限公司 待识别图像的矫正方法及装置
CN111028290B (zh) * 2019-11-26 2024-03-08 北京光年无限科技有限公司 一种用于绘本阅读机器人的图形处理方法及装置
CN111397513A (zh) * 2020-04-14 2020-07-10 东莞明睿机器视觉科技有限公司 一种x-y正交运动平台运动标定系统以及方法
CN113724141B (zh) * 2020-05-26 2023-09-05 杭州海康威视数字技术股份有限公司 一种图像校正方法、装置及电子设备
CN111680662B (zh) * 2020-06-19 2024-03-12 苏州数字地图信息科技股份有限公司 一种轨迹确定方法、系统、设备及计算机可读存储介质
CN111815714B (zh) * 2020-07-01 2024-05-17 广州视源电子科技股份有限公司 一种鱼眼相机标定方法、装置、终端设备及存储介质
CN111915683B (zh) * 2020-07-27 2024-06-25 湖南大学 图像位置标定方法、智能设备及存储介质
CN113420581B (zh) * 2020-10-19 2024-08-23 杨宏伟 书面文档图像的校正方法、装置、电子设备及可读介质
CN112312041B (zh) * 2020-10-22 2023-07-25 北京虚拟动点科技有限公司 基于拍摄的图像校正方法、装置、电子设备及存储介质
JP2022069967A (ja) * 2020-10-26 2022-05-12 住友重機械工業株式会社 歪曲収差補正処理装置、歪曲収差補正方法、及びプログラム
CN112489114B (zh) * 2020-11-25 2024-05-10 深圳地平线机器人科技有限公司 图像转换方法、装置、计算机可读存储介质及电子设备
CN112712474B (zh) * 2020-12-16 2023-07-14 杭州小伴熊科技有限公司 一种视频流动态图像的透视矫正方法和系统
CN114648449A (zh) * 2020-12-18 2022-06-21 华为技术有限公司 一种图像重映射方法以及图像处理装置
CN113223137B (zh) * 2021-05-13 2023-03-24 广州虎牙科技有限公司 透视投影人脸点云图的生成方法、装置及电子设备
CN113487500B (zh) * 2021-06-28 2022-08-02 北京紫光展锐通信技术有限公司 图像畸变校正方法与装置、电子设备和存储介质
CN114331814A (zh) * 2021-12-24 2022-04-12 合肥视涯技术有限公司 一种畸变画面校正方法及显示设备
CN114943764B (zh) * 2022-05-19 2023-05-26 苏州华兴源创科技股份有限公司 曲面屏幕像素定位方法、装置和设备
CN115937010B (zh) * 2022-08-17 2023-10-27 北京字跳网络技术有限公司 一种图像处理方法、装置、设备及介质
CN115393230B (zh) * 2022-10-28 2023-02-03 武汉楚精灵医疗科技有限公司 超声内镜图像标准化方法、装置及其相关装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116889A (zh) * 2013-02-05 2013-05-22 海信集团有限公司 一种定位方法及电子设备
CN104240216A (zh) * 2013-06-07 2014-12-24 光宝电子(广州)有限公司 图像校正方法、模块及其电子装置
CN104361580A (zh) * 2014-10-22 2015-02-18 山东大学 基于平面幕布的投影图像实时校正方法
CN105894467A (zh) * 2016-03-30 2016-08-24 联想(北京)有限公司 一种图像校正方法及系统
CN106303477A (zh) * 2016-08-11 2017-01-04 Tcl集团股份有限公司 一种自适应的投影仪图像校正方法及系统

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114078093A (zh) * 2020-08-19 2022-02-22 武汉Tcl集团工业研究院有限公司 一种图像校正方法、智能终端及存储介质
CN112215886A (zh) * 2020-10-10 2021-01-12 深圳道可视科技有限公司 全景泊车的标定方法及其系统
CN112488966A (zh) * 2020-12-23 2021-03-12 深圳疆程技术有限公司 一种图像的反畸变方法、装置、电子设备及汽车
CN113222997A (zh) * 2021-03-31 2021-08-06 上海商汤智能科技有限公司 神经网络的生成、图像处理方法、装置、电子设备及介质
CN113963065A (zh) * 2021-10-19 2022-01-21 杭州蓝芯科技有限公司 一种基于外参已知的镜头内参标定方法及装置、电子设备
CN114155157A (zh) * 2021-10-20 2022-03-08 成都旷视金智科技有限公司 图像处理方法、装置、电子设备及存储介质
CN116645426A (zh) * 2023-06-20 2023-08-25 成都飞机工业(集团)有限责任公司 一种相机内参标定方法、装置、存储介质及电子设备
CN118351036A (zh) * 2024-06-17 2024-07-16 浙江托普云农科技股份有限公司 基于容器成像畸变的图像校正方法、系统及装置

Also Published As

Publication number Publication date
CN107424126A (zh) 2017-12-01

Similar Documents

Publication Publication Date Title
WO2018214365A1 (zh) 图像校正方法、装置、设备、系统及摄像设备和显示设备
WO2021227360A1 (zh) 一种交互式视频投影方法、装置、设备及存储介质
KR100796849B1 (ko) 휴대 단말기용 파노라마 모자이크 사진 촬영 방법
US10915998B2 (en) Image processing method and device
WO2022179108A1 (zh) 投影校正方法、装置、存储介质和电子设备
JP5437311B2 (ja) 画像補正方法、画像補正システム、角度推定方法、および角度推定装置
CN109474780B (zh) 一种用于图像处理的方法和装置
JP4556813B2 (ja) 画像処理装置、及びプログラム
Ha et al. Panorama mosaic optimization for mobile camera systems
US11282232B2 (en) Camera calibration using depth data
JP2014131257A (ja) 画像補正システム、画像補正方法及びプログラム
JP2017208619A (ja) 画像処理装置、画像処理方法、プログラム及び撮像システム
WO2018045596A1 (zh) 一种处理方法及移动设备
JP2007074578A (ja) 画像処理装置、撮影装置、及びプログラム
WO2022160857A1 (zh) 图像处理方法及装置、计算机可读存储介质和电子设备
CN111866523B (zh) 全景视频合成方法、装置、电子设备和计算机存储介质
CN112085775A (zh) 图像处理的方法、装置、终端和存储介质
TW201839716A (zh) 環景影像的拼接方法及其系統
CN103500471A (zh) 实现高分辨率增强现实系统的方法
CN109785225B (zh) 一种用于图像矫正的方法和装置
JP2010072813A (ja) 画像処理装置および画像処理プログラム
US20080170812A1 (en) Image composition processing method, computer system with image composition processing function
CN111260574B (zh) 一种印章照片矫正的方法、终端及计算机可读存储介质
Ha et al. Embedded panoramic mosaic system using auto-shot interface
CN107563960A (zh) 一种自拍图片的处理方法、存储介质及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17911274

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.04.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17911274

Country of ref document: EP

Kind code of ref document: A1