CN106846407B - Method and device for realizing image correction - Google Patents

Method and device for realizing image correction

Info

Publication number
CN106846407B
Authority
CN
China
Prior art keywords
pixel, coordinate system, coordinate, physical, camera
Prior art date
Legal status
Active
Application number
CN201611054933.4A
Other languages
Chinese (zh)
Other versions
CN106846407A (en)
Inventor
徐爱辉
Current Assignee
Shenzhen Zhihui IOT Technology Co., Ltd
Original Assignee
Shenzhen Zhihui Iot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhihui Iot Technology Co Ltd
Priority to CN201611054933.4A
Publication of CN106846407A
Application granted
Publication of CN106846407B


Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus for implementing image correction, the method comprising: acquiring a first image with a first camera and acquiring a second image with a second camera; correcting the first image according to a preset first parameter of the first camera for correcting images; and correcting the second image according to a preset second parameter of the second camera for correcting images. According to the scheme of the embodiment of the invention, the images obtained by the two cameras are corrected through the preset first parameter and second parameter, so that the position difference of the same point on the images obtained by the two cameras is reduced.

Description

Method and device for realizing image correction
Technical Field
The present disclosure relates to, but is not limited to, optical technologies and terminal technologies, and in particular to a method and apparatus for implementing image correction.
Background
A binocular camera can obtain depth information. However, because there is a certain distance between the two cameras in a binocular camera, their fields of view cannot coincide completely. As a result, when the two cameras photograph the same point simultaneously, the point appears at different positions in the images obtained by the two cameras, which brings difficulty to later applications of the binocular camera. The images obtained by the two cameras therefore need to be corrected, but the related art does not provide an effective correction method.
Disclosure of Invention
The embodiment of the invention provides a method for realizing image correction, which can correct images shot by two cameras in a binocular camera so as to reduce the position difference of the same point on the images shot by the two cameras.
The embodiment of the invention provides a device for realizing image correction, which comprises:
the acquisition module is used for acquiring a first image by adopting a first camera and acquiring a second image by adopting a second camera;
the first correction module is used for correcting the first image according to a preset first parameter of the first camera for correcting the image;
and the second correction module is used for correcting the second image according to a preset second parameter of the second camera for correcting the image.
Optionally, the first parameter includes: a first rotation matrix and a first camera parameter from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system where a first camera is located;
the second parameter includes: a second rotation matrix and second camera parameters from a pre-established fourth physical coordinate system to a pre-established third physical coordinate system where a second camera is located;
the first correction module is specifically configured to:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in a third pixel coordinate system into a coordinate in a fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinates of the first pixel points in the fourth physical coordinate system into the coordinates in the second physical coordinate system according to the first rotation matrix, and converting the coordinates of the first pixel points in the second physical coordinate system into the physical coordinates in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out, from the grid image, the first pixel points whose pixel coordinates in the first pixel coordinate system are smaller than 0 or larger than the first image frame;
for each second pixel point in the filtered grid image, performing shaping processing on the pixel coordinate of the second pixel point under the first pixel coordinate system;
and giving, according to the pixel coordinates in the first image, the color value of the pixel point located at the shaped pixel coordinate of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image.
Optionally, the first camera parameter includes:
a focal length fx0 of the first camera in the x-axis direction of the second physical coordinate system, a focal length fy0 of the first camera in the y-axis direction of the second physical coordinate system, an x-axis coordinate cx0 of the pixel coordinate in the first pixel coordinate system onto which the optical center of the first camera is projected, and a y-axis coordinate cy0 of the pixel coordinate in the first pixel coordinate system onto which the optical center of the first camera is projected;
the second camera parameters include:
a focal length fx1 of the second camera in an x-axis direction of the third physical coordinate system, a focal length fy1 of the second camera in a y-axis direction of the third physical coordinate system, an x-axis coordinate cx1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected, and a y-axis coordinate cy1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected; wherein the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
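As an illustrative aside (not part of the patent text), the four parameters of each camera can be collected into the standard pinhole intrinsic matrix; the function name below is a hypothetical choice:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Assemble the pinhole intrinsic matrix from the focal lengths
    (fx, fy) and the projected optical center (cx, cy); skew is
    assumed to be zero, as in the parameters listed above."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Projecting a normalized physical coordinate (x, y, 1) with this matrix
# yields the pixel coordinate (fx * x + cx, fy * y + cy).
K = intrinsic_matrix(500.0, 500.0, 320.0, 240.0)
pixel = K @ np.array([0.2, -0.1, 1.0])
```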
the first correction module is specifically configured to convert the pixel coordinate of the first pixel point in the third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter in the following manner:
according to the formulas pud0_x_j = (p0_x_j - cx0) / fx0 and pud0_y_j = (p0_y_j - cy0) / fy0, or according to the formulas pud0_x_j = (p0_x_j - cx1) / fx1 and pud0_y_j = (p0_y_j - cy1) / fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
wherein pud0_x_j is the x-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, p0_x_j is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system, pud0_y_j is the y-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, and p0_y_j is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system.
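These formulas are the usual inversion of the pinhole projection. A minimal sketch (the function name is an assumption, not from the patent):

```python
def pixel_to_physical(p0_x_j, p0_y_j, fx, fy, cx, cy):
    """Convert the pixel coordinate of the jth first pixel point in the
    third pixel coordinate system into its physical coordinate, using
    pud0_x_j = (p0_x_j - cx) / fx and pud0_y_j = (p0_y_j - cy) / fy."""
    pud0_x_j = (p0_x_j - cx) / fx
    pud0_y_j = (p0_y_j - cy) / fy
    return pud0_x_j, pud0_y_j

# A point at the projected optical center maps to the physical origin.
center = pixel_to_physical(320.0, 240.0, 500.0, 500.0, 320.0, 240.0)
```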
Optionally, the first correction module is specifically configured to convert the coordinate of the first pixel in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix in the following manner:
according to the formula [pOL_j(x), pOL_j(y), pOL_j(z)]^T = R0 · [pO_j(x), pO_j(y), pO_j(z)]^T, calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein pOL_j(x), pOL_j(y) and pOL_j(z) are the x-axis, y-axis and z-axis coordinates of the jth first pixel point in the second physical coordinate system, R0 is the first rotation matrix, and pO_j(x), pO_j(y) and pO_j(z) are the x-axis, y-axis and z-axis coordinates of the jth first pixel point in the fourth physical coordinate system.
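This step is a plain matrix-vector product. A sketch (names assumed):

```python
import numpy as np

def rotate_fourth_to_second(pO_j, R0):
    """Apply the first rotation matrix R0 to a point expressed in the
    fourth physical coordinate system, giving its coordinates pOL_j in
    the second physical coordinate system."""
    return np.asarray(R0, dtype=float) @ np.asarray(pO_j, dtype=float)

# With a 90-degree rotation about the z-axis, the x-axis maps to the y-axis.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
pOL = rotate_fourth_to_second([1.0, 0.0, 0.0], Rz)
```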
Optionally, the first correction module is specifically configured to implement the shaping processing on the pixel coordinate of the second pixel point in the first pixel coordinate system by using the following method:
rounding the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively, and rounding the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively; the shaped pixel coordinates of the second pixel point in the first pixel coordinate system are the four combinations (ceil(pOL_k(x)), ceil(pOL_k(y))), (ceil(pOL_k(x)), floor(pOL_k(y))), (floor(pOL_k(x)), ceil(pOL_k(y))) and (floor(pOL_k(x)), floor(pOL_k(y)));
wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
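The four ceil/floor combinations can be enumerated as follows (a sketch; the helper name is an assumption):

```python
import math

def shape_pixel_coordinate(pOL_k_x, pOL_k_y):
    """Return the four shaped integer coordinates obtained by rounding
    each axis of the pixel coordinate up and down, as described above."""
    xs = (math.ceil(pOL_k_x), math.floor(pOL_k_x))
    ys = (math.ceil(pOL_k_y), math.floor(pOL_k_y))
    return [(x, y) for x in xs for y in ys]

neighbours = shape_pixel_coordinate(12.3, 40.7)
```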
Optionally, the first correction module is specifically configured to give, according to the pixel coordinates in the first image, the color value of the pixel point located at the shaped pixel coordinate of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image in the following manner:
according to the formula
I(k) = w1 · I(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · I(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · I(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · I(floor(pOL_k(x)), floor(pOL_k(y))),
giving the color value to the second pixel point in the filtered grid image;
wherein I(k) is the gray value of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and I(x, y) is the gray value of the pixel point whose pixel coordinate in the first image is (x, y);
or according to the formulas
IR(k) = w1 · IR(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IR(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IR(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IR(floor(pOL_k(x)), floor(pOL_k(y))),
IG(k) = w1 · IG(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IG(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IG(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IG(floor(pOL_k(x)), floor(pOL_k(y))) and
IB(k) = w1 · IB(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IB(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IB(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IB(floor(pOL_k(x)), floor(pOL_k(y))),
giving the color value to the second pixel point in the filtered grid image;
wherein IR(k), IG(k) and IB(k) are respectively the R, G and B values of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and IR(x, y), IG(x, y) and IB(x, y) are respectively the R, G and B values of the pixel point whose pixel coordinate in the first image is (x, y).
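A hedged sketch of the weighted assignment above for a single channel (the gray value, or any one of R, G, B). The pairing of each weight with a particular corner and the use of bilinear weights summing to 1 are illustrative choices; the patent only states that w1 through w4 are weight coefficients. The image is assumed indexed as image[y][x]:

```python
import math

def interpolate_channel(image, pOL_k_x, pOL_k_y):
    """Weighted sum of the four shaped neighbours of (pOL_k_x, pOL_k_y),
    using bilinear weights w1..w4 that sum to 1.  `image` is a 2-D list
    or array indexed image[y][x]."""
    x0, x1 = math.floor(pOL_k_x), math.ceil(pOL_k_x)
    y0, y1 = math.floor(pOL_k_y), math.ceil(pOL_k_y)
    a = pOL_k_x - x0          # fractional part along x
    b = pOL_k_y - y0          # fractional part along y
    w1, w2, w3, w4 = a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)
    return (w1 * image[y1][x1] + w2 * image[y0][x1]
            + w3 * image[y1][x0] + w4 * image[y0][x0])

img = [[0.0, 10.0],
       [20.0, 30.0]]
value = interpolate_channel(img, 0.5, 0.5)  # midpoint of the four corners
```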
The embodiment of the invention also provides a method for realizing image correction, which comprises the following steps:
acquiring a first image by adopting a first camera, and acquiring a second image by adopting a second camera;
correcting the first image according to a preset first parameter of the first camera for correcting the image;
and correcting the second image according to a preset second parameter of the second camera for correcting the image.
Optionally, the first parameter includes: a first rotation matrix and a first camera parameter from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system where a first camera is located;
the second parameter includes: a second rotation matrix and second camera parameters from a pre-established fourth physical coordinate system to a pre-established third physical coordinate system where a second camera is located;
the correcting the first image according to a preset first parameter of the first camera for correcting the image comprises:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in a third pixel coordinate system into a coordinate in a fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinates of the first pixel points in the fourth physical coordinate system into the coordinates in the second physical coordinate system according to the first rotation matrix, and converting the coordinates of the first pixel points in the second physical coordinate system into the physical coordinates in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out, from the grid image, the first pixel points whose pixel coordinates in the first pixel coordinate system are smaller than 0 or larger than the first image frame;
for each second pixel point in the filtered grid image, performing shaping processing on the pixel coordinate of the second pixel point under the first pixel coordinate system;
and giving, according to the pixel coordinates in the first image, the color value of the pixel point located at the shaped pixel coordinate of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image.
Optionally, the first camera parameter includes:
a focal length fx0 of the first camera in the x-axis direction of the second physical coordinate system, a focal length fy0 of the first camera in the y-axis direction of the second physical coordinate system, an x-axis coordinate cx0 of the pixel coordinate in the first pixel coordinate system onto which the optical center of the first camera is projected, and a y-axis coordinate cy0 of the pixel coordinate in the first pixel coordinate system onto which the optical center of the first camera is projected;
the second camera parameters include:
a focal length fx1 of the second camera in an x-axis direction of the third physical coordinate system, a focal length fy1 of the second camera in a y-axis direction of the third physical coordinate system, an x-axis coordinate cx1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected, and a y-axis coordinate cy1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected; wherein the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
the converting the pixel coordinate of the first pixel point in the third pixel coordinate system into the physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter includes:
according to the formulas pud0_x_j = (p0_x_j - cx0) / fx0 and pud0_y_j = (p0_y_j - cy0) / fy0, or according to the formulas pud0_x_j = (p0_x_j - cx1) / fx1 and pud0_y_j = (p0_y_j - cy1) / fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
wherein pud0_x_j is the x-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, p0_x_j is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system, pud0_y_j is the y-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, and p0_y_j is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system.
Optionally, the converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix includes:
according to the formula [pOL_j(x), pOL_j(y), pOL_j(z)]^T = R0 · [pO_j(x), pO_j(y), pO_j(z)]^T, calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein pOL_j(x), pOL_j(y) and pOL_j(z) are the x-axis, y-axis and z-axis coordinates of the jth first pixel point in the second physical coordinate system, R0 is the first rotation matrix, and pO_j(x), pO_j(y) and pO_j(z) are the x-axis, y-axis and z-axis coordinates of the jth first pixel point in the fourth physical coordinate system.
Optionally, the shaping the pixel coordinate of the second pixel point in the first pixel coordinate system includes:
rounding the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively, and rounding the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively; the shaped pixel coordinates of the second pixel point in the first pixel coordinate system are the four combinations (ceil(pOL_k(x)), ceil(pOL_k(y))), (ceil(pOL_k(x)), floor(pOL_k(y))), (floor(pOL_k(x)), ceil(pOL_k(y))) and (floor(pOL_k(x)), floor(pOL_k(y)));
wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
Optionally, the giving, according to the pixel coordinates in the first image, the color value of the pixel point located at the shaped pixel coordinate of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image includes:
according to the formula
I(k) = w1 · I(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · I(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · I(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · I(floor(pOL_k(x)), floor(pOL_k(y))),
giving the color value to the second pixel point in the filtered grid image;
wherein I(k) is the gray value of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and I(x, y) is the gray value of the pixel point whose pixel coordinate in the first image is (x, y);
or according to the formulas
IR(k) = w1 · IR(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IR(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IR(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IR(floor(pOL_k(x)), floor(pOL_k(y))),
IG(k) = w1 · IG(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IG(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IG(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IG(floor(pOL_k(x)), floor(pOL_k(y))) and
IB(k) = w1 · IB(ceil(pOL_k(x)), ceil(pOL_k(y))) + w2 · IB(ceil(pOL_k(x)), floor(pOL_k(y))) + w3 · IB(floor(pOL_k(x)), ceil(pOL_k(y))) + w4 · IB(floor(pOL_k(x)), floor(pOL_k(y))),
giving the color value to the second pixel point in the filtered grid image;
wherein IR(k), IG(k) and IB(k) are respectively the R, G and B values of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and IR(x, y), IG(x, y) and IB(x, y) are respectively the R, G and B values of the pixel point whose pixel coordinate in the first image is (x, y).
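Putting the method steps together, a minimal end-to-end sketch of correcting the first image (all function and variable names are assumptions; the patent provides no code). The grid image is traversed pixel by pixel, filtered points are simply left black, and (fxg, fyg, cxg, cyg) stand for the intrinsics used with the third pixel coordinate system:

```python
import math
import numpy as np

def correct_first_image(img, R0, fx0, fy0, cx0, cy0, fxg, fyg, cxg, cyg):
    """Sketch of the correction loop: for every first pixel point of a
    grid image the same size as `img`, undo the grid intrinsics, rotate
    by R0 into the second physical coordinate system, re-project with
    the first camera parameters, filter points outside the first image
    frame, and bilinearly interpolate the color value from `img`."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for v in range(h):
        for u in range(w):
            # pixel coordinate -> physical coordinate (third pixel coord. system)
            x = (u - cxg) / fxg
            y = (v - cyg) / fyg
            # lift into the fourth physical coordinate system at unit depth,
            # then rotate into the second physical coordinate system
            pOL = R0 @ np.array([x, y, 1.0])
            # physical coordinate -> pixel coordinate in the first pixel system
            px = fx0 * pOL[0] / pOL[2] + cx0
            py = fy0 * pOL[1] / pOL[2] + cy0
            # filter out points smaller than 0 or larger than the image frame
            if px < 0 or py < 0 or px > w - 1 or py > h - 1:
                continue
            # shaping (floor/ceil neighbours) plus bilinear weighting
            x0, y0 = math.floor(px), math.floor(py)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            a, b = px - x0, py - y0
            out[v, u] = ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x1]
                         + (1 - a) * b * img[y1, x0] + a * b * img[y1, x1])
    return out

# With the identity rotation and identical intrinsics the image is unchanged.
src = np.arange(12.0).reshape(3, 4)
same = correct_first_image(src, np.eye(3), 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0)
```

The second image would be corrected the same way with R1 and the second camera parameters.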
Compared with the related art, the embodiment of the invention includes: simultaneously acquiring a first image with a first camera and a second image with a second camera; correcting the first image according to a preset first parameter of the first camera for correcting images; and correcting the second image according to a preset second parameter of the second camera for correcting images. According to the scheme of the embodiment of the invention, the images obtained by the two cameras are corrected through the preset first parameter and second parameter, so that the position difference of the same point on the images obtained by the two cameras is reduced.
Drawings
The accompanying drawings in the embodiments of the present invention are described below, and the drawings in the embodiments are provided for further understanding of the present invention, and together with the description serve to explain the present invention without limiting the scope of the present invention.
FIG. 1 is a diagram illustrating an alternative hardware configuration of a mobile terminal implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
FIG. 3 is a flowchart illustrating a method for implementing image correction according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of a coordinate system established by the first embodiment of the present invention;
FIG. 5(a) is a schematic diagram of a first image and a second image according to the first embodiment of the present invention;
FIG. 5(b) is a schematic diagram of the error between the first image and the second image according to the first embodiment of the present invention;
FIG. 6(a) is a schematic diagram of the first image and the second image after being corrected according to the first embodiment of the present invention;
FIG. 6(b) is a schematic diagram of the error between the corrected first image and the corrected second image according to the first embodiment of the present invention;
FIG. 7 is a flowchart illustrating a method for obtaining a first parameter and a second parameter according to a first embodiment of the present invention;
FIG. 8 is a flowchart of a method for calculating a first parameter and a second parameter according to a first embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an apparatus for implementing image correction according to a second embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
To facilitate the understanding of those skilled in the art, the present invention is further described below in conjunction with the accompanying drawings; the description is not intended to limit the scope of the present invention. In the present application, the embodiments and various aspects of the embodiments may be combined with each other without conflict.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 is a schematic diagram of an alternative hardware configuration of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an a/V (audio/video) input unit 120, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190, etc. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. Elements of the mobile terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include a mobile communication module 112.
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The a/V input unit 120 is used to receive an audio or video signal. The a/V input unit 120 may include a camera 121, and the camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the cameras 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlaid in layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent so that the exterior is visible through them; these may be referred to as transparent displays, a typical example being the TOLED (transparent organic light emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 1810 for reproducing (or playing back) multimedia data, and the multimedia module 1810 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, implementations such as procedures or functions may be realized with separate software modules, each of which performs at least one function or operation. The software codes may be implemented by a software application (or program) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, for the sake of brevity, a slide-type mobile terminal will be described as an example from among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type terminals. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC280 is also configured to interface with a BSC275, which may be coupled to the base station 270 via a backhaul. The backhaul may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or by an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS270 may be configured to support multiple frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS270 may also be referred to as a Base Transceiver Subsystem (BTS) or by other equivalent terminology. In such a case, the term "base station" may be used to refer collectively to a single BSC275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS270 may each be referred to as cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in fig. 1 is generally configured to cooperate with the satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may alternatively or additionally handle satellite DMB transmissions.
As a typical operation of the wireless communication system, the BS270 receives reverse link signals from various mobile terminals 100. The mobile terminal 100 is generally engaged in conversations, messaging, and other types of communications. Each reverse link signal received by a particular base station 270 is processed within the particular BS 270. The obtained data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN290 interfaces with the MSC280, the MSC interfaces with the BSCs 275, and the BSCs 275 accordingly control the BS270 to transmit forward link signals to the mobile terminal 100.
Based on the above mobile terminal hardware structure and communication system, the present invention provides various embodiments of the method.
As shown in fig. 3, a first embodiment of the present invention proposes a method for implementing image correction, including:
and 300, acquiring a first image by adopting a first camera, and acquiring a second image by adopting a second camera.
In this step, the first camera and the second camera form a binocular camera, and have a common view field, which may be a left camera and a right camera located on the same horizontal plane, an upper camera and a lower camera located on the same vertical plane, or other situations, and the embodiment of the present invention is not limited thereto.
Step 301, correcting the first image according to a preset first parameter of the first camera for correcting the image.
In this step, the first parameter includes: a first rotation matrix R0 from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system in which the first camera is located, and first camera parameters.
Wherein, the first camera parameter includes: the focal length fx0 of the first camera in the x-axis direction of the second physical coordinate system, the focal length fy0 of the first camera in the y-axis direction of the second physical coordinate system, the x-axis coordinate cx0 of the pixel coordinate of the first pixel coordinate system projected by the optical center of the first camera (i.e., the origin of the second physical coordinate system), and the y-axis coordinate cy0 of the pixel coordinate of the first pixel coordinate system projected by the optical center of the first camera.
The second parameters include: a second rotation matrix R1 from the fourth physical coordinate system to a pre-established third physical coordinate system in which the second camera is located, and second camera parameters.
The second camera parameters include: the focal length fx1 of the second camera in the x-axis direction of the third physical coordinate system, the focal length fy1 of the second camera in the y-axis direction of the third physical coordinate system, the optical center of the second camera (i.e. the origin of the third physical coordinate system) projected to the x-axis coordinate cx1 of the pixel coordinates in the second pixel coordinate system, and the optical center of the second camera projected to the y-axis coordinate cy1 of the pixel coordinates in the second pixel coordinate system.
As shown in fig. 4, the first physical coordinate system P, the third physical coordinate system OR, the fourth physical coordinate system O, and the second physical coordinate system OL are three-dimensional coordinate systems, and the first pixel coordinate system Pl, the second pixel coordinate system Pr, and the third pixel coordinate system P0 are two-dimensional coordinate systems.
The first physical coordinate system is the coordinate system in which the photographed object is located and can be set arbitrarily according to actual needs. The z-axis of the second physical coordinate system can be set parallel to the optical axis of the first camera, and the z-axis of the third physical coordinate system can be set parallel to the optical axis of the second camera. The fourth physical coordinate system is a virtual physical coordinate system whose origin is set in advance to be equidistant from the origin of the second physical coordinate system, in which the first camera is located, and the origin of the third physical coordinate system, in which the second camera is located.
The first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system, namely a coordinate system corresponding to the detector of the first camera; the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system, namely a coordinate system corresponding to the detector of the second camera; the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system, that is, a coordinate system corresponding to the detector of the virtual camera where the fourth physical coordinate system is located. The three pixel coordinate systems can be set according to actual requirements.
In this step, correcting the first image according to a preset first parameter for correcting the image of the first camera includes:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in the third pixel coordinate system into the coordinate in the fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix, and converting the coordinate of the first pixel point in the second physical coordinate system into the physical coordinate in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein, the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out, from the grid image, first pixel points whose pixel coordinates in the first pixel coordinate system are smaller than 0 or exceed the first image frame;
for each second pixel point in the filtered grid image, performing rounding processing on the pixel coordinate of the second pixel point in the first pixel coordinate system;
and according to the pixel coordinates in the first image, giving the second pixel point in the filtered grid image the color value of the pixel points at the rounded pixel coordinates of the second pixel point in the first pixel coordinate system.
When the first image and the second image are both grayscale images, the gray level of the grid image can be set arbitrarily, for example to 255 or 0, although other values may also be set; when the first image and the second image are both color images, the R, G, and B values of the grid image may be set arbitrarily, for example all to 255 or 0, although other values may also be set, which is not limited in this embodiment of the present invention.
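The inverse-mapping procedure described above can be sketched end to end as follows. This is a minimal illustration under stated assumptions: the function name, example intrinsics, the perspective-normalization step, and the clamping of out-of-frame points are my own choices, not the patented implementation.

```python
import numpy as np

def correct_image(img, R, fx, fy, cx, cy, fxv, fyv, cxv, cyv):
    """Inverse-map a grid the size of the corrected (virtual-camera) image back
    into the captured image and sample it bilinearly. (fx, fy, cx, cy) are the
    real camera's intrinsics; (fxv, fyv, cxv, cyv) those of the virtual camera."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # pixel -> physical coordinates in the virtual (third) pixel coordinate system
    px = (xs - cxv) / fxv
    py = (ys - cyv) / fyv
    pz = np.ones_like(px)
    # rotate from the fourth (virtual) physical system into the camera's system
    pts = np.stack([px, py, pz], axis=-1) @ R.T
    # perspective-normalize, then physical -> pixel with the real intrinsics
    u = pts[..., 0] / pts[..., 2] * fx + cx
    v = pts[..., 1] / pts[..., 2] * fy + cy
    # filter grid points that fall outside the source image frame
    valid = (u >= 0) & (u <= w - 1) & (v >= 0) & (v <= h - 1)
    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    # bilinear sampling: x0/y0 are the floor neighbours, x0+1/y0+1 play the
    # role of the ceil neighbours (clamped at the image border)
    x0, y0 = np.floor(u).astype(int), np.floor(v).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    ax, ay = u - x0, v - y0
    out = ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x1]
           + (1 - ax) * ay * img[y1, x0] + ax * ay * img[y1, x1])
    out[~valid] = 0.0
    return out
```

With an identity rotation and identical real and virtual intrinsics, the mapping is the identity and the corrected image equals the input, which is a convenient sanity check.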
Converting the pixel coordinate of the first pixel point in the third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter includes:
according to the formulas pud0_x_j = (p0_x_j − cx0)/fx0 and pud0_y_j = (p0_y_j − cy0)/fy0, or according to the formulas pud0_x_j = (p0_x_j − cx1)/fx1 and pud0_y_j = (p0_y_j − cy1)/fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
wherein pud0_x_j is the x-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, p0_x_j is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system, pud0_y_j is the y-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, and p0_y_j is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system.
The conversion of the physical coordinate of the first pixel point in the third pixel coordinate system into the coordinate in the fourth physical coordinate system may be implemented by using techniques well known to those skilled in the art, which is not used to limit the protection scope of the embodiment of the present invention and will not be described herein again.
Converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix comprises:
according to the formula [pOL_j(x), pOL_j(y), pOL_j(z)]ᵀ = R0 · [pO_j(x), pO_j(y), pO_j(z)]ᵀ, calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein pOL_j(x) is the x-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(y) is the y-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(z) is the z-axis coordinate of the jth first pixel point in the second physical coordinate system, R0 is the first rotation matrix, pO_j(x) is the x-axis coordinate of the jth first pixel point in the fourth physical coordinate system, pO_j(y) is the y-axis coordinate of the jth first pixel point in the fourth physical coordinate system, and pO_j(z) is the z-axis coordinate of the jth first pixel point in the fourth physical coordinate system.
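The rotation step above is a plain matrix-vector product. A sketch (the function name and the example matrix are illustrative):

```python
import numpy as np

def rotate_to_camera(R0, p_O):
    # p_OL = R0 @ p_O : fourth (virtual) physical system -> second physical system
    return np.asarray(R0, dtype=float) @ np.asarray(p_O, dtype=float)
```

In practice R0 is close to the identity for a well-aligned rig; the test below uses an exaggerated 90° rotation about the z-axis purely to make the effect visible.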
The method for converting the coordinate of the first pixel point in the second physical coordinate system into the physical coordinate of the first pixel point in the first pixel coordinate system may be implemented by using techniques well known to those skilled in the art, which are not used to limit the protection scope of the embodiment of the present invention and will not be described herein again.
The method for converting the physical coordinate of the first pixel point in the first pixel coordinate system into the pixel coordinate in the first pixel coordinate system according to the first camera parameter comprises the following steps:
calculating the pixel coordinate of the first pixel point in the first pixel coordinate system according to the formulas POL_j_pixel(x) = c_OL_j(x)·fx0 + cx0 and POL_j_pixel(y) = c_OL_j(y)·fy0 + cy0;
wherein POL_j_pixel(x) is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the first pixel coordinate system, POL_j_pixel(y) is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the first pixel coordinate system, c_OL_j(x) is the x-axis coordinate of the physical coordinate of the jth first pixel point in the first pixel coordinate system, and c_OL_j(y) is the y-axis coordinate of the physical coordinate of the jth first pixel point in the first pixel coordinate system.
And if the x-axis coordinate of the pixel coordinate of the first pixel point in the first pixel coordinate system is smaller than 0 or larger than the width of the first image, or the y-axis coordinate of the pixel coordinate of the first pixel point in the first pixel coordinate system is smaller than 0 or larger than the length of the first image, filtering the first pixel point.
The rounding processing of the pixel coordinate of the second pixel point in the first pixel coordinate system comprises the following steps:
rounding the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively, and rounding the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down respectively, so that the rounded pixel coordinates of the second pixel point in the first pixel coordinate system are (⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋), (⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋), (⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) and (⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉);
wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
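The four floor/ceil combinations can be enumerated directly; a minimal sketch (the helper name is my own):

```python
import math

def rounded_neighbours(x, y):
    # the four integer pixel coordinates surrounding (x, y):
    # (floor, floor), (ceil, floor), (floor, ceil), (ceil, ceil)
    return [(math.floor(x), math.floor(y)), (math.ceil(x), math.floor(y)),
            (math.floor(x), math.ceil(y)), (math.ceil(x), math.ceil(y))]
```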
Wherein, according to the pixel coordinates in the first image, giving the second pixel point in the filtered grid image the color value of the pixel points at the rounded pixel coordinates of the second pixel point in the first pixel coordinate system comprises:
according to the formula I(k) = w1·I1(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·I1(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w3·I1(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w4·I1(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉), giving the color value of the second pixel point in the filtered grid image;
wherein I(k) is the gray value of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and I1(u, v) denotes the gray value of the pixel point whose pixel coordinate in the first image is (u, v);
or according to the formulas IR(k) = w1·IR1(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IR1(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w3·IR1(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w4·IR1(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉), IG(k) = w1·IG1(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IG1(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w3·IG1(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w4·IG1(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉) and IB(k) = w1·IB1(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IB1(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w3·IB1(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w4·IB1(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉), giving the color value of the second pixel point in the filtered grid image;
wherein IR(k), IG(k) and IB(k) are the R, G and B values of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and IR1(u, v), IG1(u, v) and IB1(u, v) denote the R, G and B values of the pixel point whose pixel coordinate in the first image is (u, v).
Wherein w1 is calculated according to the formula w1 = (⌈pk(x)⌉ − pk(x))·(⌈pk(y)⌉ − pk(y)); where pk(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pk(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
w2 is calculated according to the formula w2 = (pk(x) − ⌊pk(x)⌋)·(⌈pk(y)⌉ − pk(y));
w3 is calculated according to the formula w3 = (⌈pk(x)⌉ − pk(x))·(pk(y) − ⌊pk(y)⌋);
w4 is calculated according to the formula w4 = (pk(x) − ⌊pk(x)⌋)·(pk(y) − ⌊pk(y)⌋).
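These are the standard bilinear-interpolation weights; for non-integer coordinates they sum to 1. A sketch (the pairing of each weight with a particular floor/ceil neighbour follows the reconstruction above and is an assumption):

```python
import math

def bilinear_weights(x, y):
    # w1..w4 pair with the (floor,floor), (ceil,floor), (floor,ceil), (ceil,ceil)
    # neighbours respectively; assumes x and y are non-integer
    w1 = (math.ceil(x) - x) * (math.ceil(y) - y)
    w2 = (x - math.floor(x)) * (math.ceil(y) - y)
    w3 = (math.ceil(x) - x) * (y - math.floor(y))
    w4 = (x - math.floor(x)) * (y - math.floor(y))
    return w1, w2, w3, w4
```

Each weight grows as the sampling point moves toward its paired neighbour, so the interpolated value varies continuously across the pixel grid.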
And step 302, correcting the second image according to a preset second parameter of the second camera for correcting the image.
In this step, correcting the second image according to a preset second parameter of the second camera for correcting the image includes:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in the third pixel coordinate system into the coordinate in the fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the third physical coordinate system according to the second rotation matrix, and converting the coordinate of the first pixel point in the third physical coordinate system into the physical coordinate in the second pixel coordinate system; converting the physical coordinates of the first pixel points in the second pixel coordinate system into pixel coordinates in the second pixel coordinate system according to the second camera parameters; wherein, the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
filtering out, from the grid image, first pixel points whose pixel coordinates in the second pixel coordinate system are smaller than 0 or exceed the second image frame;
for each second pixel point in the filtered grid image, performing rounding processing on the pixel coordinate of the second pixel point in the second pixel coordinate system;
and according to the pixel coordinates in the second image, giving the second pixel point in the filtered grid image the color value of the pixel points at the rounded pixel coordinates of the second pixel point in the second pixel coordinate system.
When the first image and the second image are both grayscale images, the gray level of the grid image can be set arbitrarily, for example to 255 or 0, although other values may also be set; when the first image and the second image are both color images, the R, G, and B values of the grid image may be set arbitrarily, for example all to 255 or 0, although other values may also be set, which is not limited in this embodiment of the present invention.
Converting the pixel coordinate of the first pixel point in the third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter includes:
according to the formulas pud0_x_j = (p0_x_j − cx0)/fx0 and pud0_y_j = (p0_y_j − cy0)/fy0, or according to the formulas pud0_x_j = (p0_x_j − cx1)/fx1 and pud0_y_j = (p0_y_j − cy1)/fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
wherein pud0_x_j is the x-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, p0_x_j is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system, pud0_y_j is the y-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, and p0_y_j is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system.
The conversion of the physical coordinate of the first pixel point in the third pixel coordinate system into the coordinate in the fourth physical coordinate system may be implemented by using techniques well known to those skilled in the art, which is not used to limit the protection scope of the embodiment of the present invention and will not be described herein again.
Converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the third physical coordinate system according to the second rotation matrix comprises:
according to the formula [pOR_j(x), pOR_j(y), pOR_j(z)]ᵀ = R1 · [pO_j(x), pO_j(y), pO_j(z)]ᵀ, calculating the coordinate of the jth first pixel point in the third physical coordinate system;
wherein pOR_j(x) is the x-axis coordinate of the jth first pixel point in the third physical coordinate system, pOR_j(y) is the y-axis coordinate of the jth first pixel point in the third physical coordinate system, pOR_j(z) is the z-axis coordinate of the jth first pixel point in the third physical coordinate system, R1 is the second rotation matrix, pO_j(x) is the x-axis coordinate of the jth first pixel point in the fourth physical coordinate system, pO_j(y) is the y-axis coordinate of the jth first pixel point in the fourth physical coordinate system, and pO_j(z) is the z-axis coordinate of the jth first pixel point in the fourth physical coordinate system.
The method for converting the coordinate of the first pixel point in the third physical coordinate system into the physical coordinate of the first pixel point in the second pixel coordinate system may be implemented by using techniques well known to those skilled in the art, which are not used to limit the protection scope of the embodiment of the present invention and will not be described herein again.
Wherein, converting the physical coordinate of the first pixel point under the second pixel coordinate system into the pixel coordinate under the second pixel coordinate system according to the second camera parameter comprises:
calculating the pixel coordinate of the first pixel point in the second pixel coordinate system according to the formulas POR_j_pixel(x) = c_OR_j(x)·fx1 + cx1 and POR_j_pixel(y) = c_OR_j(y)·fy1 + cy1;
wherein POR_j_pixel(x) is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the second pixel coordinate system, POR_j_pixel(y) is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the second pixel coordinate system, c_OR_j(x) is the x-axis coordinate of the physical coordinate of the jth first pixel point in the second pixel coordinate system, and c_OR_j(y) is the y-axis coordinate of the physical coordinate of the jth first pixel point in the second pixel coordinate system.
And if the x-axis coordinate of the pixel coordinate of the first pixel point in the second pixel coordinate system is smaller than 0 or larger than the width of the second image, or the y-axis coordinate of the pixel coordinate of the first pixel point in the second pixel coordinate system is smaller than 0 or larger than the length of the second image, filtering the first pixel point.
The rounding processing of the pixel coordinate of the second pixel point in the second pixel coordinate system comprises the following steps:
rounding the x-axis coordinate of the pixel coordinate of the second pixel point in the second pixel coordinate system up and down respectively, and rounding the y-axis coordinate of the pixel coordinate of the second pixel point in the second pixel coordinate system up and down respectively, so that the rounded pixel coordinates of the second pixel point in the second pixel coordinate system are (⌊po_k(x)⌋, ⌊po_k(y)⌋), (⌈po_k(x)⌉, ⌊po_k(y)⌋), (⌊po_k(x)⌋, ⌈po_k(y)⌉) and (⌈po_k(x)⌉, ⌈po_k(y)⌉);
wherein po_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the second pixel coordinate system, and po_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the second pixel coordinate system.
Wherein, according to the pixel coordinate in the second image, giving the color value of the second pixel point in the filtered grid image to the color value of the pixel point of the pixel coordinate of the second pixel point in the second pixel coordinate system after the reshaping process comprises:
according to the formula
I(k) = w1·I(⌈po_k(x)⌉, ⌈po_k(y)⌉) + w2·I(⌈po_k(x)⌉, ⌊po_k(y)⌋) + w3·I(⌊po_k(x)⌋, ⌈po_k(y)⌉) + w4·I(⌊po_k(x)⌋, ⌊po_k(y)⌋),
the color value of the second pixel point in the filtered grid image is assigned.
Wherein I(k) is the gray value of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and I(x, y) is the gray value of the pixel point whose pixel coordinate in the second image is (x, y);
or according to the formulas
IR(k) = w1·IR(⌈po_k(x)⌉, ⌈po_k(y)⌉) + w2·IR(⌈po_k(x)⌉, ⌊po_k(y)⌋) + w3·IR(⌊po_k(x)⌋, ⌈po_k(y)⌉) + w4·IR(⌊po_k(x)⌋, ⌊po_k(y)⌋),
IG(k) = w1·IG(⌈po_k(x)⌉, ⌈po_k(y)⌉) + w2·IG(⌈po_k(x)⌉, ⌊po_k(y)⌋) + w3·IG(⌊po_k(x)⌋, ⌈po_k(y)⌉) + w4·IG(⌊po_k(x)⌋, ⌊po_k(y)⌋) and
IB(k) = w1·IB(⌈po_k(x)⌉, ⌈po_k(y)⌉) + w2·IB(⌈po_k(x)⌉, ⌊po_k(y)⌋) + w3·IB(⌊po_k(x)⌋, ⌈po_k(y)⌉) + w4·IB(⌊po_k(x)⌋, ⌊po_k(y)⌋),
the color value of the second pixel point in the filtered grid image is assigned.
Wherein IR(k), IG(k) and IB(k) are the R, G and B values of the kth second pixel point in the grid image, w1, w2, w3 and w4 are weight coefficients, and IR(x, y), IG(x, y) and IB(x, y) are the R, G and B values of the pixel point whose pixel coordinate in the second image is (x, y).
Wherein w1 is calculated according to the formula w1 = (po_k(x) − ⌊po_k(x)⌋)·(po_k(y) − ⌊po_k(y)⌋);
w2 is calculated according to the formula w2 = (po_k(x) − ⌊po_k(x)⌋)·(⌈po_k(y)⌉ − po_k(y));
w3 is calculated according to the formula w3 = (⌈po_k(x)⌉ − po_k(x))·(po_k(y) − ⌊po_k(y)⌋);
and w4 is calculated according to the formula w4 = (⌈po_k(x)⌉ − po_k(x))·(⌈po_k(y)⌉ − po_k(y));
where po_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the second pixel coordinate system, and po_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the second pixel coordinate system.
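The rounding-and-weighting scheme above is ordinary bilinear interpolation. A minimal sketch follows; the pairing of each weight with its ceil/floor neighbour is an assumption, since the original weight formulas are reproduced from images:

```python
import math

def bilinear_gray(img, x, y):
    # img: 2-D list of gray values indexed as img[row][col] = img[y][x].
    # Sample at a fractional pixel coordinate (x, y) using the four
    # ceil/floor neighbours; weights follow w1..w4 described above.
    xf, xc = math.floor(x), math.ceil(x)
    yf, yc = math.floor(y), math.ceil(y)
    w1 = (x - xf) * (y - yf)   # weight of neighbour (ceil x,  ceil y)
    w2 = (x - xf) * (yc - y)   # weight of neighbour (ceil x,  floor y)
    w3 = (xc - x) * (y - yf)   # weight of neighbour (floor x, ceil y)
    w4 = (xc - x) * (yc - y)   # weight of neighbour (floor x, floor y)
    # Note: at exactly integer coordinates ceil == floor and all four
    # weights vanish, so integer coordinates need separate handling.
    return (w1 * img[yc][xc] + w2 * img[yf][xc]
            + w3 * img[yc][xf] + w4 * img[yf][xf])
```

The four weights always sum to 1 for non-integer coordinates, so the sampled value stays within the range of the four neighbours.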
Fig. 5(a) is a schematic diagram of the first image and the second image. As shown in fig. 5(a), the left image is the first image, and the right image is the second image. Fig. 5(b) is a difference diagram of the first image and the second image. The black in fig. 5(b) represents the difference between the gray values of the first image and the second image at the same pixel point. As can be seen from fig. 5(b), the difference between the gray values of the first image and the second image is large, and therefore the first image and the second image need to be corrected.
Fig. 6(a) is a schematic diagram of the corrected first image and the corrected second image. As shown in fig. 6(a), the left image is the corrected first image, and the right image is the corrected second image. Fig. 6(b) is a difference diagram of the corrected first image and the corrected second image. The black in fig. 6(b) represents the difference between the gray values of the corrected first image and the corrected second image at the same pixel point. As can be seen from fig. 6(b), the difference between the gray values of the corrected first image and the corrected second image is much reduced compared with fig. 5(b), so the position difference of the same point on the images captured by the two cameras is reduced by the method of the embodiment of the present invention.
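The difference maps of Fig. 5(b) and Fig. 6(b) amount to a per-pixel absolute gray difference; a minimal sketch (the uint8 output convention is an assumption):

```python
import numpy as np

def gray_difference(img_a, img_b):
    # Per-pixel absolute difference of two equally sized gray images,
    # the quantity visualised in Fig. 5(b) and Fig. 6(b).
    a = np.asarray(img_a, dtype=np.int32)
    b = np.asarray(img_b, dtype=np.int32)
    return np.abs(a - b).astype(np.uint8)
```

A shrinking mean of this map after correction is exactly the improvement the embodiment claims.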
According to the scheme of the embodiment of the invention, the images obtained by the two cameras are corrected through the preset first parameter and second parameter, so that the position difference of the same point on the images obtained by the two cameras is reduced.
The first parameter and the second parameter can be obtained by the following method, and after the first parameter and the second parameter are obtained, the first parameter and the second parameter are stored in a terminal with a binocular camera in advance, so that the image can be corrected.
Referring to fig. 7, the method of acquiring the first parameter and the second parameter includes:
step 700, establishing a coordinate system: establishing a first physical coordinate system where a preset object is located, a second physical coordinate system where a first camera is located, a corresponding first pixel coordinate system, a third physical coordinate system where a second camera is located, a corresponding second pixel coordinate system, a fourth physical coordinate system arranged between the second physical coordinate system and the third physical coordinate system, and a corresponding third pixel coordinate system.
In this step, as shown in fig. 4, the first physical coordinate system P, the second physical coordinate system OL, the third physical coordinate system OR, and the fourth physical coordinate system O are three-dimensional coordinate systems, and the first pixel coordinate system Pl, the second pixel coordinate system Pr, and the third pixel coordinate system P0 are two-dimensional coordinate systems.
The first physical coordinate system can be set arbitrarily according to actual needs, the z-axis of the second physical coordinate system can be set to be parallel to the optical axis of the first camera, and the z-axis of the third physical coordinate system is parallel to the optical axis of the second camera. The fourth physical coordinate system is a virtual physical coordinate system, and the distance from its origin to the origin of the second physical coordinate system and the distance from its origin to the origin of the third physical coordinate system can be set to be equal.
The first pixel coordinate system is a coordinate system corresponding to the detector of the first camera, the second pixel coordinate system is a coordinate system corresponding to the detector of the second camera, the third pixel coordinate system is a coordinate system corresponding to the detector of the virtual camera where the fourth physical coordinate system is located, and the three pixel coordinate systems can be set according to actual requirements.
In this step, the first camera and the second camera form a binocular camera, and have a common view field, which may be a left camera and a right camera located on the same horizontal plane, an upper camera and a lower camera located on the same vertical plane, or other situations, and the embodiment of the present invention is not limited thereto.
And 701, acquiring a third image of the preset object by adopting the first camera, and acquiring a fourth image of the preset object by adopting the second camera.
Step 702 is to acquire a fifth image overlapping the fourth image from the third image, and acquire a sixth image overlapping the third image from the fourth image.
And 703, calculating a first parameter of the first camera for correcting the image and a second parameter of the second camera for correcting the image according to the established coordinate system, the fifth image and the sixth image.
In this step, the first parameter includes: a first rotation matrix R0 from the fourth physical coordinate system to the second physical coordinate system and the first camera parameters.
The second parameters include: a second rotation matrix R1 from the fourth physical coordinate system to the third physical coordinate system and second camera parameters.
Wherein, the first camera parameter includes: the focal length fx0 of the first camera in the x-axis direction of the second physical coordinate system, the focal length fy0 of the first camera in the y-axis direction of the second physical coordinate system, the x-axis coordinate cx0 of the pixel coordinate of the first pixel coordinate system projected by the optical center of the first camera (i.e., the origin of the second physical coordinate system), and the y-axis coordinate cy0 of the pixel coordinate of the first pixel coordinate system projected by the optical center of the first camera.
The second camera parameters include: the focal length fx1 of the second camera in the x-axis direction of the third physical coordinate system, the focal length fy1 of the second camera in the y-axis direction of the third physical coordinate system, the optical center of the second camera (i.e. the origin of the third physical coordinate system) projected to the x-axis coordinate cx1 of the pixel coordinates in the second pixel coordinate system, and the optical center of the second camera projected to the y-axis coordinate cy1 of the pixel coordinates in the second pixel coordinate system.
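The camera parameters enter the computation only through the linear pixel/physical conversions used in the later steps (e.g. Pc0_i_pixiel(x) = c_y0_i(x)·fx0 + cx0 and its inverse); a sketch with hypothetical argument names:

```python
def physical_to_pixel(phys, fx, fy, cx, cy):
    # Image-plane physical coordinate -> pixel coordinate,
    # e.g. Pc0_i_pixiel(x) = c_y0_i(x) * fx0 + cx0.
    return (phys[0] * fx + cx, phys[1] * fy + cy)

def pixel_to_physical(pix, fx, fy, cx, cy):
    # Inverse conversion, as used when undoing a camera's intrinsics
    # in the correction module.
    return ((pix[0] - cx) / fx, (pix[1] - cy) / fy)
```

The two functions are exact inverses, which is what lets the correction pipeline hop between pixel and physical coordinates freely.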
In this step, referring to fig. 8, calculating a first parameter for correcting an image of the first camera and a second parameter for correcting an image of the second camera according to the established coordinate system, the fifth image and the sixth image includes:
step 800, initializing a first weight coefficient matrix a and a second weight coefficient matrix b;
in this step, the first weight coefficient matrix a is a matrix with 6 rows and 1 column, and the second weight coefficient matrix b is a matrix with 3 rows and 1 column, that is, a = [a1; a2; a3; a4; a5; a6], b = [b1; b2; b3].
In initialization, the first weight coefficient matrix a and the second weight coefficient matrix b may be initialized to zero matrices, that is, a = [0; 0; 0; 0; 0; 0], b = [0; 0; 0]; of course, the first weight coefficient matrix a and the second weight coefficient matrix b may also be initialized to other values, which is not limited in the embodiment of the present invention.
Step 801, calculating a first rotation matrix R0 and a second rotation matrix R1 according to the second weight coefficient matrix b;
in this step, the first rotation matrix R0 and the second rotation matrix R1 are calculated from the second weight coefficient matrix b.
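The formulas relating b to R0 and R1 appear only as images in the source. A common construction in stereo rectification — presented here purely as an assumption, not as the patent's formula — treats b as a rotation vector and splits it evenly between the two cameras via the Rodrigues formula:

```python
import numpy as np

def rodrigues(rvec):
    # Rodrigues formula: rotation vector (axis * angle) -> 3x3 rotation matrix.
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def rotation_matrices(b):
    # Hypothetical split: each camera absorbs half of the rotation encoded
    # by the second weight coefficient matrix b (a 3-element vector here),
    # so the virtual fourth coordinate system sits midway between them.
    R0 = rodrigues([-bi / 2.0 for bi in b])  # fourth -> second physical system
    R1 = rodrigues([bi / 2.0 for bi in b])   # fourth -> third physical system
    return R0, R1
```

With this split, R1·R0ᵀ recovers the full inter-camera rotation encoded by b.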
Step 802, projecting a coordinate pw of the ith preset point in the first physical coordinate system to the second physical coordinate system according to the first weight coefficient matrix and the first rotation matrix to obtain a coordinate Pc0_ i of the ith preset point in the second physical coordinate system, and projecting the coordinate pw of the ith preset point in the first physical coordinate system to the third physical coordinate system according to the first weight coefficient matrix and the second rotation matrix to obtain a coordinate Pc1_ i of the ith preset point in the third physical coordinate system; wherein i is an integer greater than or equal to 1; the method comprises the following steps:
calculating a first transformation matrix M of projecting the first physical coordinate system to a fourth physical coordinate system according to the first weight coefficient matrix a; a second transformation matrix M0 of the first physical coordinate system projected to the second physical coordinate system is calculated from the first transformation matrix M and the first rotation matrix R0, and a third transformation matrix M1 of the first physical coordinate system projected to the third physical coordinate system is calculated from the first transformation matrix M and the second rotation matrix R1.
Wherein the first transformation matrix M is composed of the offsets of the projection of the first physical coordinate system P onto the fourth physical coordinate system O along the x-axis, the y-axis and the z-axis, together with the rotation matrices by which the x-axis, y-axis and z-axis of the first physical coordinate system P are rotated around the x-axis, y-axis and z-axis of the fourth physical coordinate system O; these three offsets and three rotations are determined by the six elements of the first weight coefficient matrix a.
Wherein the second transformation matrix M0 is calculated according to the formula M0 = [R0, −R0·C0; 0, 0, 0, 1]·M, and the third transformation matrix M1 is calculated according to the formula M1 = [R1, −R1·C1; 0, 0, 0, 1]·M.
Where C0 is the distance from the origin of the fourth physical coordinate system O to the origin of the second physical coordinate system OL, and C1 is the distance from the origin of the fourth physical coordinate system O to the origin of the third physical coordinate system OR.
Wherein the coordinate Pc0_i of the ith preset point in the second physical coordinate system is calculated according to the formula [Pc0_i(x); Pc0_i(y); Pc0_i(z); 1] = M0·[pw(x); pw(y); pw(z); 1], and the coordinate Pc1_i of the ith preset point in the third physical coordinate system is calculated according to the formula [Pc1_i(x); Pc1_i(y); Pc1_i(z); 1] = M1·[pw(x); pw(y); pw(z); 1].
Wherein Pc0_ i (x) is the x-axis coordinate of the i-th preset point in the second physical coordinate system, Pc0_ i (y) is the y-axis coordinate of the i-th preset point in the second physical coordinate system, Pc0_ i (z) is the z-axis coordinate of the i-th preset point in the second physical coordinate system, Pc1_ i (x) is the x-axis coordinate of the i-th preset point in the third physical coordinate system, Pc1_ i (y) is the y-axis coordinate of the i-th preset point in the third physical coordinate system, Pc1_ i (z) is the z-axis coordinate of the i-th preset point in the third physical coordinate system, pw (x) is the x-axis coordinate of the i-th preset point in the first physical coordinate system, pw (y) is the y-axis coordinate of the i-th point in the first physical coordinate system, and pw (z) is the z-axis coordinate of the i-th preset point in the first physical coordinate system.
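Step 802 can be sketched in homogeneous coordinates as follows; the 4×4 block layout follows the formula M0 = [R0, −R0·C0; 0, 0, 0, 1]·M, and treating C0 as a 3-vector offset of the origin is an assumption:

```python
import numpy as np

def rigid(R, t):
    # Assemble the 4x4 block matrix [R, t; 0, 0, 0, 1].
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def world_to_camera(M, R0, C0, pw):
    # Sketch of step 802: form M0 = [R0, -R0*C0; 0,0,0,1] * M, then apply
    # M0 to the preset point pw (first physical coordinate system) in
    # homogeneous coordinates to get its second-physical-system coordinate.
    M0 = rigid(R0, -R0 @ np.asarray(C0, dtype=float)) @ M
    pc = M0 @ np.append(np.asarray(pw, dtype=float), 1.0)
    return pc[:3]
```

The same routine with R1 and C1 yields Pc1_i in the third physical coordinate system.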
Step 803, converting the coordinate Pc0_i of the ith preset point in the second physical coordinate system into a first pixel coordinate Pc0_i_pixiel of the ith preset point in the first pixel coordinate system according to the first camera parameters, and converting the coordinate Pc1_i of the ith preset point in the third physical coordinate system into a second pixel coordinate Pc1_i_pixiel of the ith preset point in the second pixel coordinate system according to the second camera parameters. The method comprises the following steps:
projecting a coordinate Pc0_ i of the ith preset point in a second physical coordinate system to a first pixel coordinate system to obtain a first physical coordinate c _ y0_ i of the ith preset point in the first pixel coordinate system, and projecting a coordinate Pc1_ i of the ith preset point in the third physical coordinate system to a second pixel coordinate system to obtain a second physical coordinate c _ y1_ i of the ith preset point in the second pixel coordinate system; calculating a first pixel coordinate Pc0_ i _ pixiel of the ith preset point in a first pixel coordinate system according to the first physical coordinate c _ y0_ i and the first camera parameter, and calculating a second pixel coordinate Pc1_ i _ pixiel of the ith preset point in a second pixel coordinate system according to the second physical coordinate c _ y1_ i and the second camera parameter;
the coordinates Pc0_ i of the ith preset point in the second physical coordinate system may be projected to the first pixel coordinate system by using a technique known to those skilled in the art to obtain a first physical coordinate c _ y0_ i of the ith preset point in the first pixel coordinate system, and the coordinates Pc1_ i of the ith preset point in the third physical coordinate system may be projected to the second pixel coordinate system to obtain a second physical coordinate c _ y1_ i of the ith preset point in the second pixel coordinate system.
Wherein the first pixel coordinate Pc0_i_pixiel of the ith preset point in the first pixel coordinate system is calculated according to the formulas Pc0_i_pixiel(x) = c_y0_i(x)·fx0 + cx0 and Pc0_i_pixiel(y) = c_y0_i(y)·fy0 + cy0, and the second pixel coordinate Pc1_i_pixiel of the ith preset point in the second pixel coordinate system is calculated according to the formulas Pc1_i_pixiel(x) = c_y1_i(x)·fx1 + cx1 and Pc1_i_pixiel(y) = c_y1_i(y)·fy1 + cy1.
Wherein Pc0_ i _ pixiel (x) is an x-axis coordinate of a first pixel coordinate of the i-th preset point in the first pixel coordinate system, Pc0_ i _ pixiel (y) is a y-axis coordinate of the first pixel coordinate of the i-th preset point in the first pixel coordinate system, c _ y0_ i (x) is an x-axis coordinate of the first physical coordinate, c _ y0_ i (y) is a y-axis coordinate of the first physical coordinate, Pc1_ i _ pixiel (x) is an x-axis coordinate of a second pixel coordinate of the i-th preset point in the second pixel coordinate system, Pc1_ i _ pixiel (y) is a y-axis coordinate of the second pixel coordinate of the i-th preset point in the second pixel coordinate system, c _ y1_ i (x) is an x-axis coordinate of the second physical coordinate, and c _ y1_ i) is a y-axis coordinate of the second physical coordinate.
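The projection from the camera physical coordinate system onto the image plane is left above to "techniques known to those skilled in the art"; assuming the usual pinhole normalisation by depth, the whole of step 803 for one camera looks like:

```python
def camera_point_to_pixel(pc, fx, fy, cx, cy):
    # Normalise by depth to get the physical coordinate on the image plane
    # (assumed pinhole model; the patent does not spell this step out),
    # then apply pixel_x = c_y(x) * fx + cx and pixel_y = c_y(y) * fy + cy.
    x, y, z = pc
    c_y = (x / z, y / z)
    return (c_y[0] * fx + cx, c_y[1] * fy + cy)
```

Running this with (fx0, fy0, cx0, cy0) on Pc0_i gives Pc0_i_pixiel, and with (fx1, fy1, cx1, cy1) on Pc1_i gives Pc1_i_pixiel.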
And 804, calculating an increment matrix according to the coordinates of all the preset points in the third image, the first pixel coordinates, the coordinates of all the preset points in the fourth image, the first physical coordinates and the second physical coordinates of all the preset points. The method comprises the following steps:
calculating a first difference error0_i of the ith preset point according to the coordinate P0_i of the ith preset point in the fifth image and the first pixel coordinate Pc0_i_pixiel, calculating a second difference error1_i of the ith preset point according to the coordinate P1_i of the ith preset point in the sixth image and the second pixel coordinate Pc1_i_pixiel, and forming a difference matrix residual from the first differences error0_i and the second differences error1_i of all the preset points; calculating a Jacobian matrix Jac of the first physical coordinates c_y0_i and the second physical coordinates c_y1_i of all the preset points with respect to the first weight coefficient matrix a, the second weight coefficient matrix b, the first camera parameters and the second camera parameters; and calculating an increment matrix plus according to the Jacobian matrix Jac and the difference matrix residual;
in this step, the first difference error0_i of the ith preset point is calculated according to the formulas error0_i(x) = P0_i(x) − Pc0_i_pixiel(x) and error0_i(y) = P0_i(y) − Pc0_i_pixiel(y), and the second difference error1_i of the ith preset point is calculated according to the formulas error1_i(x) = P1_i(x) − Pc1_i_pixiel(x) and error1_i(y) = P1_i(y) − Pc1_i_pixiel(y).
Wherein, P0_ i (x) is the x-axis coordinate of the i-th preset point in the third image, P0_ i (y) is the y-axis coordinate of the i-th preset point in the third image, error0_ i (x) is the first difference value of the i-th preset point in the x-axis, error0_ i (y) is the first difference value of the i-th preset point in the y-axis, error1_ i (x) is the second difference value of the i-th preset point in the x-axis, and error1_ i (y) is the second difference value of the i-th preset point in the y-axis.
In this step, the difference matrix residual is a matrix with 4n rows and 1 column, where n is the number of preset points, that is, residual = [error0_1(x); error0_1(y); error1_1(x); error1_1(y); error0_2(x); error0_2(y); error1_2(x); error1_2(y); …; error0_n(x); error0_n(y); error1_n(x); error1_n(y)].
Wherein the Jacobian matrix Jac is a matrix with 4n rows and 17 columns, the 17 parameters being the 6 elements of the first weight coefficient matrix a, the 3 elements of the second weight coefficient matrix b, the first camera parameters and the second camera parameters. Row 1 of the Jacobian matrix contains the partial derivatives of the x-axis coordinate of the first physical coordinate of the first preset point with respect to each of the 17 parameters, row 2 the partial derivatives of the y-axis coordinate of the first physical coordinate of the first preset point, row 3 the partial derivatives of the x-axis coordinate of the second physical coordinate of the first preset point, and row 4 the partial derivatives of the y-axis coordinate of the second physical coordinate of the first preset point; rows 5 to 8 are formed in the same way from the second preset point, and so on, so that row (4n−3) contains the partial derivatives of the x-axis coordinate of the first physical coordinate of the nth preset point, row (4n−2) those of the y-axis coordinate of the first physical coordinate of the nth preset point, row (4n−1) those of the x-axis coordinate of the second physical coordinate of the nth preset point, and row 4n those of the y-axis coordinate of the second physical coordinate of the nth preset point.
Wherein the increment matrix plus is calculated according to the formula plus = (Jac′·Jac)\(Jac′·residual), where ′ denotes the matrix transpose and \ the matrix left division.
The increment matrix plus is a matrix with 17 rows and 1 column, each row corresponding to the increment of one of the 17 parameters, that is, plus = [Δa1; Δa2; Δa3; Δa4; Δa5; Δa6; Δb1; Δb2; Δb3; Δfx0; Δfy0; Δcx0; Δcy0; Δfx1; Δfy1; Δcx1; Δcy1].
Where Δa1 is the increment of a1, Δa2 is the increment of a2, Δa3 is the increment of a3, Δa4 is the increment of a4, Δa5 is the increment of a5, Δa6 is the increment of a6, Δb1 is the increment of b1, Δb2 is the increment of b2, Δb3 is the increment of b3, Δfx0 is the increment of fx0, Δfy0 is the increment of fy0, Δcx0 is the increment of cx0, Δcy0 is the increment of cy0, Δfx1 is the increment of fx1, Δfy1 is the increment of fy1, Δcx1 is the increment of cx1, and Δcy1 is the increment of cy1.
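Step 804's increment is the normal-equation solution (Jac′·Jac)\(Jac′·residual); in floating point a least-squares solver yields the same vector more stably — a sketch:

```python
import numpy as np

def increment_matrix(jac, residual):
    # Solve (Jac' * Jac) * plus = Jac' * residual for plus. lstsq computes
    # the identical least-squares solution of Jac * plus = residual without
    # explicitly forming the (worse-conditioned) normal equations.
    jac = np.asarray(jac, dtype=float)
    residual = np.asarray(residual, dtype=float).ravel()
    plus, *_ = np.linalg.lstsq(jac, residual, rcond=None)
    return plus
```

For the patent's sizes, jac would be 4n×17 and residual 4n×1, giving a 17-element increment vector.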
Step 805, updating the first weight coefficient matrix a, the second weight coefficient matrix b, the first parameter and the second parameter according to the increment matrix plus;
in this step, the updated parameter is obtained by adding the corresponding increment to the current value of each parameter; for example, in the first iteration the updated a1 is 0 + Δa1, and so on.
Step 806, continuing to execute steps 801 to 805 with the updated first weight coefficient matrix a, second weight coefficient matrix b, first parameters and second parameters until the number of iterations is greater than or equal to a preset number, and then outputting the updated first parameters and second parameters.
In this step, in the odd number of iterations, the first rotation matrix R0 is calculated according to the updated second weight coefficient matrix b, and the second rotation matrix R1 still adopts the last value; during even iterations, the second rotation matrix R1 is calculated based on the updated second weight coefficient matrix b, while the first rotation matrix R0 still takes the last value.
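The outer loop of steps 801 to 806, with the odd/even refresh of R0 and R1, can be skeletonised as follows; compute_increment is a stand-in for steps 801–804 (not the patent's actual computation):

```python
def refine(params, compute_increment, preset_iterations):
    # Steps 801-806: alternate which rotation matrix is refreshed, form the
    # increment, update all parameters, and stop after a preset number of
    # iterations. compute_increment(params, refresh_r0) is a placeholder
    # returning one increment per parameter.
    for it in range(1, preset_iterations + 1):
        refresh_r0 = (it % 2 == 1)  # odd iteration: recompute R0, keep R1
        inc = compute_increment(params, refresh_r0)
        params = [p + d for p, d in zip(params, inc)]
    return params
```

A fixed iteration count (rather than a convergence test) matches the patent's stopping rule.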
Referring to fig. 9, a second embodiment of the present invention proposes an apparatus for implementing image correction, including:
the acquisition module is used for acquiring a first image by adopting a first camera and acquiring a second image by adopting a second camera;
the first correction module is used for correcting the first image according to a preset first parameter of the first camera for correcting the image;
and the second correction module is used for correcting the second image according to a preset second parameter of the second camera for correcting the image.
Optionally, the first parameter includes: a first rotation matrix and a first camera parameter from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system where a first camera is located;
the second parameters include: a second rotation matrix and second camera parameters from a pre-established fourth physical coordinate system to a pre-established third physical coordinate system where a second camera is located;
the first correction module is specifically configured to:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in the third pixel coordinate system into the coordinate in the fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix, and converting the coordinate of the first pixel point in the second physical coordinate system into the physical coordinate in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein, the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out the first pixel points whose pixel coordinates in the first pixel coordinate system are less than 0 or fall outside the frame of the first image;
for each second pixel point in the filtered grid image, performing reshaping treatment on the pixel coordinate of the second pixel point in the first pixel coordinate system;
and assigning, according to the pixel coordinates in the first image, the color values of the pixel points at the reshaped pixel coordinates of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image.
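Putting the module's steps together for a grayscale image — a sketch assuming the virtual (fourth-system) camera shares the first camera's intrinsics, and using floor-based bilinear sampling in place of the ceil/floor formulation above:

```python
import math
import numpy as np

def correct_image(src, R0, fx, fy, cx, cy):
    # Sketch of the first correction module: for every grid pixel, undo the
    # virtual camera's intrinsics, rotate from the fourth into the second
    # physical coordinate system with R0, reproject through the first
    # camera's intrinsics, filter out-of-frame points, and sample the
    # source image bilinearly.
    h, w = src.shape
    out = np.zeros_like(src, dtype=float)
    for v in range(h):
        for u in range(w):
            ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # pixel -> physical
            p = R0 @ ray                                          # fourth -> second system
            if p[2] <= 0:
                continue
            x = p[0] / p[2] * fx + cx                             # physical -> first pixel
            y = p[1] / p[2] * fy + cy
            xf, yf = int(math.floor(x)), int(math.floor(y))
            if xf < 0 or yf < 0 or xf + 1 >= w or yf + 1 >= h:
                continue                                          # filter: outside the frame
            ax, ay = x - xf, y - yf                               # bilinear weights
            out[v, u] = ((1 - ax) * (1 - ay) * src[yf, xf]
                         + ax * (1 - ay) * src[yf, xf + 1]
                         + (1 - ax) * ay * src[yf + 1, xf]
                         + ax * ay * src[yf + 1, xf + 1])
    return out
```

With R0 equal to the identity the mapping is the identity on interior pixels, which is a convenient sanity check.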
Optionally, the first camera parameter includes:
a focal length fx0 of the first camera in the x-axis direction of the second physical coordinate system, a focal length fy0 of the first camera in the y-axis direction of the second physical coordinate system, an x-axis coordinate cx0 of the physical coordinate of the first camera projected to the first pixel coordinate system, and a y-axis coordinate cy0 of the physical coordinate of the first camera projected to the first pixel coordinate system;
the second camera parameters include:
a focal length fx1 of the second camera in the x-axis direction of the third physical coordinate system, a focal length fy1 of the second camera in the y-axis direction of the third physical coordinate system, an x-axis coordinate cx1 of the physical coordinate of the second camera projected to the second pixel coordinate system, and a y-axis coordinate cy1 of the physical coordinate of the second camera projected to the second pixel coordinate system; wherein, the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
the first correction module is specifically configured to convert the pixel coordinates of the first pixel point in the third pixel coordinate system into the physical coordinates in the third pixel coordinate system according to the first camera parameter or the second camera parameter in the following manner:
according to the formulas pud0_x_j = (p0_x_j − cx0)/fx0 and pud0_y_j = (p0_y_j − cy0)/fy0, or according to the formulas pud0_x_j = (p0_x_j − cx1)/fx1 and pud0_y_j = (p0_y_j − cy1)/fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
wherein pud0_x_j is the x-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, p0_x_j is the x-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system, pud0_y_j is the y-axis coordinate of the physical coordinate of the jth first pixel point in the third pixel coordinate system, and p0_y_j is the y-axis coordinate of the pixel coordinate of the jth first pixel point in the third pixel coordinate system.
Optionally, the first correction module is specifically configured to convert the coordinate of the first pixel point in the fourth physical coordinate system into the coordinate in the second physical coordinate system according to the first rotation matrix in the following manner:
according to the formula [pOL_j(x); pOL_j(y); pOL_j(z)] = R0·[pO_j(x); pO_j(y); pO_j(z)], calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein, pOL _ j (x) is an x-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL _ j (y) is a y-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL _ j (z) is a z-axis coordinate of the jth first pixel point in the second physical coordinate system, R0 is a first rotation matrix, pO _ j (x) is an x-axis coordinate of the jth first pixel point in the fourth physical coordinate system, pO _ j (y) is a y-axis coordinate of the jth first pixel point in the fourth physical coordinate system, and pO _ j (z) is a z-axis coordinate of the jth first pixel point in the fourth physical coordinate system.
Optionally, the first correction module is specifically configured to perform the reshaping processing on the pixel coordinate of the second pixel point in the first pixel coordinate system in the following manner:
rounding the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down, and rounding the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system up and down, to obtain the reshaped pixel coordinates of the second pixel point in the first pixel coordinate system: (⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉), (⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋), (⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) and (⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋).
Wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
Optionally, the first correction module is specifically configured to assign, according to the pixel coordinates in the first image, the color values of the pixel points at the reshaped pixel coordinates of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image in the following manner:
according to the formula

I(k) = w1·I(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·I(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w3·I(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w4·I(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉),

giving the color value of the second pixel point in the filtered grid image;

wherein I(k) is the gray value of the kth second pixel point in the grid image; w1, w2, w3 and w4 are weight coefficients; and I(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋), I(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉), I(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) and I(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉) are the gray values of the pixel points in the first image whose pixel coordinates are the four shaped pixel coordinates, respectively;
or according to the formulas

IR(k) = w1·IR(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IR(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w3·IR(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w4·IR(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉),

IG(k) = w1·IG(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IG(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w3·IG(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w4·IG(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉) and

IB(k) = w1·IB(⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋) + w2·IB(⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉) + w3·IB(⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) + w4·IB(⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉),

giving the color value of the second pixel point in the filtered grid image;

wherein IR(k), IG(k) and IB(k) are the R, G and B values of the kth second pixel point in the grid image; w1, w2, w3 and w4 are weight coefficients; and IR(·, ·), IG(·, ·) and IB(·, ·) are the R, G and B values of the pixel point in the first image at the given pixel coordinate.
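The weighted sums above are a form of bilinear interpolation applied per channel. A minimal sketch, assuming the standard bilinear weights for w1–w4 (the patent leaves the weight coefficients unspecified):

```python
import numpy as np

def interpolate(channel, x, y):
    """Weighted sum of the four shaped neighbours of (x, y).

    `channel` is a single colour plane indexed as channel[row, col];
    the weights below are the usual bilinear coefficients.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    w1 = (1 - dx) * (1 - dy)   # weight of (x0, y0)
    w2 = (1 - dx) * dy         # weight of (x0, y1)
    w3 = dx * (1 - dy)         # weight of (x1, y0)
    w4 = dx * dy               # weight of (x1, y1)
    return (w1 * channel[y0, x0] + w2 * channel[y1, x0]
            + w3 * channel[y0, x1] + w4 * channel[y1, x1])
```

For an RGB image the same weights are applied to the R, G and B planes separately, matching the three per-channel formulas above.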
For the specific implementation process of the apparatus, reference may be made to the implementation process of the method in the first embodiment, which is not described herein again.
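Putting the steps of the first correction module together, the whole grid remap can be sketched as follows. This is a simplified illustration, not the patented implementation: it assumes the third pixel coordinate system shares the first camera's intrinsics (fx0, fy0, cx0, cy0) and uses nearest-neighbour lookup in place of the weighted combination of the four shaped neighbours:

```python
import numpy as np

def correct_first_image(img, R0, fx0, fy0, cx0, cy0):
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]                # grid image, same size
    # Pixel -> physical coordinates in the third pixel coordinate system
    # (assumption: same intrinsics as the first camera).
    pud_x = (xs - cx0) / fx0
    pud_y = (ys - cy0) / fy0
    pts = np.stack([pud_x, pud_y, np.ones_like(pud_x)], axis=-1)
    pts = pts @ R0.T                           # fourth -> second system
    # Physical -> pixel coordinates in the first pixel coordinate system.
    px = pts[..., 0] / pts[..., 2] * fx0 + cx0
    py = pts[..., 1] / pts[..., 2] * fy0 + cy0
    # Nearest-neighbour lookup (the patent instead combines the four
    # shaped neighbours with weight coefficients).
    xi = np.clip(np.rint(px).astype(int), 0, w - 1)
    yi = np.clip(np.rint(py).astype(int), 0, h - 1)
    out = img[yi, xi]
    # Filter out points falling outside the first image frame.
    out[(px < 0) | (px > w - 1) | (py < 0) | (py > h - 1)] = 0
    return out
```

With R0 set to the identity matrix the remap leaves the image unchanged, which is a convenient sanity check.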
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. An apparatus for performing image correction, comprising:
the acquisition module is used for acquiring a first image by adopting a first camera and acquiring a second image by adopting a second camera;
the first correction module is used for correcting the first image according to a preset first parameter of the first camera for correcting the image;
the second correction module is used for correcting the second image according to a preset second parameter of the second camera for correcting the image;
the first parameter includes: a first rotation matrix and a first camera parameter from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system where a first camera is located;
the second parameter includes: a second rotation matrix and second camera parameters from a pre-established fourth physical coordinate system to a pre-established third physical coordinate system where a second camera is located;
the first correction module is specifically configured to:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in a third pixel coordinate system into a coordinate in a fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinates of the first pixel points in the fourth physical coordinate system into the coordinates in the second physical coordinate system according to the first rotation matrix, and converting the coordinates of the first pixel points in the second physical coordinate system into the physical coordinates in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out first pixel points of the grid image whose pixel coordinates in the first pixel coordinate system are smaller than 0 or larger than the first image frame;
for each second pixel point in the filtered grid image, performing shaping processing on the pixel coordinate of the second pixel point under the first pixel coordinate system;
and assigning, according to the pixel coordinates in the first image, the color values of the pixel points at the shaped pixel coordinates of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image.
2. The apparatus of claim 1, wherein the first camera parameters comprise:
a focal length fx0 of the first camera in an x-axis direction of the second physical coordinate system, a focal length fy0 of the first camera in a y-axis direction of the second physical coordinate system, an x-axis coordinate cx0 of a pixel coordinate of the first pixel coordinate system onto which an optical center of the first camera is projected, and a y-axis coordinate cy0 of a pixel coordinate of the first pixel coordinate system onto which an optical center of the first camera is projected;
the second camera parameters include:
a focal length fx1 of the second camera in an x-axis direction of the third physical coordinate system, a focal length fy1 of the second camera in a y-axis direction of the third physical coordinate system, an x-axis coordinate cx1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected, and a y-axis coordinate cy1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected; wherein the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
the first correction module is specifically configured to convert the pixel coordinate of the first pixel point in the third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter in the following manner:
according to the formulaAndor according to a formulaAndcalculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
the pud0_ x _ j is an x-axis coordinate of a physical coordinate of the jth first pixel point in the third pixel coordinate system, the p0_ x _ j is an x-axis coordinate of a pixel coordinate of the jth first pixel point in the third pixel coordinate system, the pud0_ y _ j is a y-axis coordinate of a physical coordinate of the jth first pixel point in the third pixel coordinate system, and the p0_ y _ j is a y-axis coordinate of a pixel coordinate of the jth first pixel point in the third pixel coordinate system.
3. The apparatus of claim 1, wherein the first calibration module is specifically configured to implement the converting of the coordinates of the first pixel point in the fourth physical coordinate system into the coordinates in the second physical coordinate system according to the first rotation matrix by:
according to the formula (pOL_j(x), pOL_j(y), pOL_j(z))ᵀ = R0 · (pO_j(x), pO_j(y), pO_j(z))ᵀ, calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein pOL_j(x) is the x-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(y) is the y-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(z) is the z-axis coordinate of the jth first pixel point in the second physical coordinate system, R0 is the first rotation matrix, pO_j(x) is the x-axis coordinate of the jth first pixel point in the fourth physical coordinate system, pO_j(y) is the y-axis coordinate of the jth first pixel point in the fourth physical coordinate system, and pO_j(z) is the z-axis coordinate of the jth first pixel point in the fourth physical coordinate system.
4. The apparatus according to claim 1, wherein the first correction module is specifically configured to shape pixel coordinates of the second pixel point in the first pixel coordinate system by:
rounding up and rounding down the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system respectively, and rounding up and rounding down the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system respectively, to obtain the pixel coordinates of the shaped second pixel point in the first pixel coordinate system as (⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋), (⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉), (⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) and (⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉);
wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
5. A method for implementing image correction, comprising:
acquiring a first image by adopting a first camera, and acquiring a second image by adopting a second camera;
correcting the first image according to a preset first parameter of the first camera for correcting the image;
correcting the second image according to a preset second parameter of the second camera for correcting the image;
the first parameter includes: a first rotation matrix and a first camera parameter from a pre-established fourth physical coordinate system to a pre-established second physical coordinate system where a first camera is located;
the second parameter includes: a second rotation matrix and second camera parameters from a pre-established fourth physical coordinate system to a pre-established third physical coordinate system where a second camera is located;
the correcting the first image according to a preset first parameter of the first camera for correcting the image comprises:
predefining a grid image with the same size as the first image or the second image;
for each first pixel point in the grid image, converting the pixel coordinate of the first pixel point in a third pixel coordinate system into a physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter; converting the physical coordinate of the first pixel point in a third pixel coordinate system into a coordinate in a fourth physical coordinate system; the fourth physical coordinate system is a coordinate system between a second physical coordinate system where the first camera is located and a third physical coordinate system where the second camera is located, which are established in advance, and the third pixel coordinate system is a coordinate system corresponding to the fourth physical coordinate system;
converting the coordinates of the first pixel points in the fourth physical coordinate system into the coordinates in the second physical coordinate system according to the first rotation matrix, and converting the coordinates of the first pixel points in the second physical coordinate system into the physical coordinates in the first pixel coordinate system; converting the physical coordinates of the first pixel points in the first pixel coordinate system into pixel coordinates in the first pixel coordinate system according to the first camera parameters; wherein the first pixel coordinate system is a coordinate system corresponding to the second physical coordinate system;
filtering out first pixel points of the grid image whose pixel coordinates in the first pixel coordinate system are smaller than 0 or larger than the first image frame;
for each second pixel point in the filtered grid image, performing shaping processing on the pixel coordinate of the second pixel point under the first pixel coordinate system;
and assigning, according to the pixel coordinates in the first image, the color values of the pixel points at the shaped pixel coordinates of the second pixel point in the first pixel coordinate system to the second pixel point in the filtered grid image.
6. The method of claim 5, wherein the first camera parameters comprise:
a focal length fx0 of the first camera in an x-axis direction of the second physical coordinate system, a focal length fy0 of the first camera in a y-axis direction of the second physical coordinate system, an x-axis coordinate cx0 of a pixel coordinate of the first pixel coordinate system onto which an optical center of the first camera is projected, and a y-axis coordinate cy0 of a pixel coordinate of the first pixel coordinate system onto which an optical center of the first camera is projected;
the second camera parameters include:
a focal length fx1 of the second camera in an x-axis direction of the third physical coordinate system, a focal length fy1 of the second camera in a y-axis direction of the third physical coordinate system, an x-axis coordinate cx1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected, and a y-axis coordinate cy1 of a pixel coordinate of the second pixel coordinate system onto which an optical center of the second camera is projected; wherein the second pixel coordinate system is a coordinate system corresponding to the third physical coordinate system;
the converting the pixel coordinate of the first pixel point in the third pixel coordinate system into the physical coordinate in the third pixel coordinate system according to the first camera parameter or the second camera parameter includes:
according to the formulas pud0_x_j = (p0_x_j − cx0)/fx0 and pud0_y_j = (p0_y_j − cy0)/fy0, or according to the formulas pud0_x_j = (p0_x_j − cx1)/fx1 and pud0_y_j = (p0_y_j − cy1)/fy1, calculating the physical coordinate of the jth first pixel point in the third pixel coordinate system;
the pud0_ x _ j is an x-axis coordinate of a physical coordinate of the jth first pixel point in the third pixel coordinate system, the p0_ x _ j is an x-axis coordinate of a pixel coordinate of the jth first pixel point in the third pixel coordinate system, the pud0_ y _ j is a y-axis coordinate of a physical coordinate of the jth first pixel point in the third pixel coordinate system, and the p0_ y _ j is a y-axis coordinate of a pixel coordinate of the jth first pixel point in the third pixel coordinate system.
7. The method of claim 5, wherein converting the coordinates of the first pixel in the fourth physical coordinate system to the coordinates in the second physical coordinate system according to the first rotation matrix comprises:
according to the formula (pOL_j(x), pOL_j(y), pOL_j(z))ᵀ = R0 · (pO_j(x), pO_j(y), pO_j(z))ᵀ, calculating the coordinate of the jth first pixel point in the second physical coordinate system;
wherein pOL_j(x) is the x-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(y) is the y-axis coordinate of the jth first pixel point in the second physical coordinate system, pOL_j(z) is the z-axis coordinate of the jth first pixel point in the second physical coordinate system, R0 is the first rotation matrix, pO_j(x) is the x-axis coordinate of the jth first pixel point in the fourth physical coordinate system, pO_j(y) is the y-axis coordinate of the jth first pixel point in the fourth physical coordinate system, and pO_j(z) is the z-axis coordinate of the jth first pixel point in the fourth physical coordinate system.
8. The method of claim 5, wherein the shaping the pixel coordinates of the second pixel point in the first pixel coordinate system comprises:
rounding up and rounding down the x-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system respectively, and rounding up and rounding down the y-axis coordinate of the pixel coordinate of the second pixel point in the first pixel coordinate system respectively, to obtain the pixel coordinates of the shaped second pixel point in the first pixel coordinate system as (⌊pOL_k(x)⌋, ⌊pOL_k(y)⌋), (⌊pOL_k(x)⌋, ⌈pOL_k(y)⌉), (⌈pOL_k(x)⌉, ⌊pOL_k(y)⌋) and (⌈pOL_k(x)⌉, ⌈pOL_k(y)⌉);
wherein pOL_k(x) is the x-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system, and pOL_k(y) is the y-axis coordinate of the pixel coordinate of the kth second pixel point in the first pixel coordinate system.
CN201611054933.4A 2016-11-25 2016-11-25 Method and device for realizing image correction Active CN106846407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611054933.4A CN106846407B (en) 2016-11-25 2016-11-25 Method and device for realizing image correction


Publications (2)

Publication Number Publication Date
CN106846407A CN106846407A (en) 2017-06-13
CN106846407B true CN106846407B (en) 2019-12-20

Family

ID=59146125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611054933.4A Active CN106846407B (en) 2016-11-25 2016-11-25 Method and device for realizing image correction

Country Status (1)

Country Link
CN (1) CN106846407B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI677413B (en) * 2018-11-20 2019-11-21 財團法人工業技術研究院 Calibration method and device for robotic arm system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473771A (en) * 2013-09-05 2013-12-25 上海理工大学 Method for calibrating camera
CN105931188A (en) * 2016-05-06 2016-09-07 安徽伟合电子科技有限公司 Method for image stitching based on mean value duplication removal
CN106023073A (en) * 2016-05-06 2016-10-12 安徽伟合电子科技有限公司 Image splicing system
CN106131527A (en) * 2016-07-26 2016-11-16 深圳众思科技有限公司 Dual camera color synchronization method, device and terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191126

Address after: 518000 703, Fangda building, No. 011, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Zhihui IOT Technology Co., Ltd

Address before: 518000 Guangdong Province, Shenzhen high tech Zone of Nanshan District City, No. 9018 North Central Avenue's innovation building A, 6-8 layer, 10-11 layer, B layer, C District 6-10 District 6 floor

Applicant before: Nubian Technologies Ltd.

GR01 Patent grant