CN107730462A - Image processing method, terminal and computer-readable storage medium - Google Patents
- Publication number
- CN107730462A CN107730462A CN201710913611.9A CN201710913611A CN107730462A CN 107730462 A CN107730462 A CN 107730462A CN 201710913611 A CN201710913611 A CN 201710913611A CN 107730462 A CN107730462 A CN 107730462A
- Authority
- CN
- China
- Prior art keywords
- image
- characteristic point
- camera
- correction
- camera shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/80 — G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T5/00: Image enhancement or restoration; G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- H04N23/951 — H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television; H04N23/00: Cameras or camera modules comprising electronic image sensors and control thereof; H04N23/95: Computational photography systems, e.g. light-field imaging systems; H04N23/951: using two or more images to influence resolution, frame rate or aspect ratio
- H04N23/45 — H04N23/00: Cameras or camera modules comprising electronic image sensors and control thereof; H04N23/45: generating image signals from two or more image sensors of different type or operating in different modes, e.g. a CMOS sensor for moving images combined with a charge-coupled device [CCD] for still images
Abstract
The invention discloses an image processing method, a terminal and a computer-readable storage medium. The method includes the steps of: obtaining a first image captured by a first camera and a second image captured by a second camera; correcting the first image and the second image respectively to obtain a corrected first image and a corrected second image; extracting feature points from the corrected first and second images respectively; computing the row error between the feature points of the first image and the feature points of the second image; and comparing the computed row error with a preset threshold and processing accordingly according to the comparison result. By correcting the images of the binocular camera, extracting feature points, and comparing the row error between feature points with a preset threshold, the invention improves the accuracy of stereo-rectification assessment and processing, and thereby improves the stability and accuracy of background blurring.
Description
Technical field
The present invention relates to the field of terminal technology, and in particular to an image processing method, a terminal and a computer-readable storage medium.
Background technology
With the development of mobile terminal technology, mobile terminals with camera functions have become widespread in everyday life, and their increasingly rich features greatly facilitate users. In recent years image processing technology has advanced rapidly and the camera capability of mobile terminals has grown accordingly; combined with the portability of mobile terminals, more and more users prefer to take photographs with them.
To improve the photographic quality of mobile terminals, more and more of them use dual cameras. Photos taken by a dual-camera mobile terminal are markedly better and sharper than those taken by a single-camera terminal. However, a dual-camera terminal cannot directly produce photos with different imaging effects; the user still has to post-process the photos. In image post-processing, background blurring is a frequently used technique, well known to and widely used by photography enthusiasts because it quickly makes the subject stand out.
In the course of making the present invention, the inventors found the following problem in the prior art: during image processing, the accuracy of stereo-rectification assessment and processing is low, which easily affects the stability and accuracy of background blurring.
Summary of the invention
The main object of the present invention is to provide an image processing method, a terminal and a computer-readable storage medium intended to solve the above problem in the prior art.
To achieve the above object, a first aspect of the embodiments of the present invention provides an image processing method comprising the steps of:
obtaining a first image captured by a first camera and a second image captured by a second camera;
correcting the first image captured by the first camera and the second image captured by the second camera respectively, to obtain a corrected first image and a corrected second image;
extracting feature points from the corrected first image and the corrected second image respectively;
computing the row error between the feature points of the first image and the feature points of the second image; and
comparing the computed row error between the feature points of the first image and the feature points of the second image with a preset threshold, and processing accordingly according to the comparison result.
Optionally, obtaining the first image captured by the first camera and the second image captured by the second camera includes the steps of:
calibrating the first camera and the second camera respectively; and
obtaining the first image captured by the calibrated first camera and the second image captured by the calibrated second camera.
Optionally, correcting the first image captured by the first camera and the second image captured by the second camera respectively includes the step of:
performing distortion correction and stereo rectification on the first image captured by the first camera and the second image captured by the second camera respectively.
Optionally, the distortion correction includes lens (radial) distortion correction and/or tangential distortion correction.
Optionally, extracting the feature points of the corrected first image and second image respectively specifically includes the step of:
extracting the feature points of the corrected first image and second image respectively using the ORB algorithm.
Optionally, after extracting the feature points of the corrected first image and second image, the method further includes the step of:
saving the extracted feature points of the first image and the second image into a first array and a second array respectively.
Optionally, processing accordingly according to the comparison result specifically includes the steps of:
if the row error between the feature points of the first image and the feature points of the second image exceeds the preset threshold, displaying prompt information and repeating the step of obtaining the first image captured by the first camera and the second image captured by the second camera, or obtaining the disparity map of the previous frame and outputting that disparity map;
if the row error between the feature points of the first image and the feature points of the second image does not exceed the preset threshold, performing stereo matching on the corrected first image and second image and outputting a disparity map.
Optionally, after outputting the disparity map, the method further includes the step of:
saving the output disparity map.
In addition, to achieve the above object, a second aspect of the embodiments of the present invention provides a terminal comprising: a memory, a processor, and an image processing program stored in the memory and runnable on the processor, the image processing program implementing the steps of the image processing method of the first aspect when executed by the processor.
Furthermore, to achieve the above object, a third aspect of the embodiments of the present invention provides a computer-readable storage medium storing an image processing program which, when executed by a processor, implements the steps of the image processing method of the first aspect.
The image processing method, terminal and computer-readable storage medium provided by the embodiments of the present invention correct the images of a binocular camera, extract feature points, and compare the row error between feature points with a preset threshold, thereby improving the accuracy of stereo-rectification assessment and processing while improving the stability and accuracy of background blurring.
Brief description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal implementing the embodiments of the present invention;
Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of the image processing method of an embodiment of the present invention;
Fig. 4 is a schematic diagram of the image processing flow of the mobile terminal of an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the terminal of an embodiment of the present invention.
The realization, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves; accordingly, "module", "part" and "unit" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a hardware structure diagram of a mobile terminal implementing the embodiments of the present invention, the mobile terminal 100 may include components such as an RF (radio frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110 and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not limit the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The components of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a mode such as a call-signal reception mode, call mode, recording mode, speech recognition mode or broadcast reception mode. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a telephone call mode, recording mode or speech recognition mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, at rest, the magnitude and direction of gravity; it can be used for applications that recognize the posture of the phone (such as landscape/portrait switching, related games and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer and infrared sensor may also be configured on the phone; they are not described in detail here.
The display unit 106 is used to display information input by the user or provided to the user. It may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects touch operations by the user on or near it (for example, operations on or near the touch panel 1071 using a finger, stylus or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared and surface-acoustic-wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick; no specific limitation is made here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are shown as two independent components realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the mobile terminal; no specific limitation is made here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 may be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. It may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function or an image playback function) and the like; the data storage area may store data created according to the use of the phone (such as audio data and a phone book) and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or another solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the whole mobile terminal through various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) supplying power to all components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charge management, discharge management and power-consumption management are realized through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be repeated here.
To facilitate understanding of the embodiments of the present invention, the communications network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communications network system provided by an embodiment of the present invention. The communications network system is an LTE system of the universal mobile communications technology, and includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP services 204.
Specifically, the UE 201 may be the above-described terminal 100, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022 and so on. The eNodeB 2021 may be connected with the other eNodeBs 2022 via backhaul (for example an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is a control node handling signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 provides registers for management functions such as a home location register (not shown) and stores user-specific information about service features, data rates and the like. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 may include the Internet, intranets, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes an LTE system as an example, those skilled in the art should understand that the present invention is applicable not only to LTE systems but also to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems; no limitation is made here.
Based on the above mobile terminal hardware structure and communications network system, the embodiments of the method of the present invention are proposed.
First embodiment
As shown in Fig. 3, a first embodiment of the present invention provides an image processing method comprising the steps of:
S31: obtaining a first image captured by a first camera and a second image captured by a second camera.
In actual shooting, some cameras produce distortion, and the epipolar lines of the captured images intersect. To reduce the difficulty of subsequent image matching, it is necessary to obtain parameters such as the focal lengths, principal point coordinates, skew coefficients and distortion coefficients of the two cameras and the rotation vector between them, and to calibrate the cameras according to the obtained parameters. Any calibration algorithm in the prior art may be used; this is not restricted or repeated here.
In this embodiment, obtaining the first image captured by the first camera and the second image captured by the second camera includes the steps of:
calibrating the first camera and the second camera respectively; and
obtaining the first image captured by the calibrated first camera and the second image captured by the calibrated second camera.
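The disclosure contains no code, so as a hedged illustration only (not part of the patent) the role of the intrinsic parameters estimated by calibration — focal lengths, principal point and skew — can be sketched with the standard pinhole projection model; the function name and the numeric values below are assumptions for demonstration:

```python
def project_point(point_3d, fx, fy, cx, cy, skew=0.0):
    """Project a 3-D point in camera coordinates to pixel coordinates
    using the pinhole model parametrized by the intrinsics that the
    calibration step estimates: focal lengths (fx, fy), principal
    point (cx, cy) and the skew coefficient."""
    X, Y, Z = point_3d
    x, y = X / Z, Y / Z            # normalized image coordinates
    u = fx * x + skew * y + cx     # column (horizontal) pixel coordinate
    v = fy * y + cy                # row (vertical) pixel coordinate
    return u, v

# A point 1 m in front of the camera, offset right and above the optical axis:
print(project_point((0.25, 0.5, 1.0), fx=800, fy=800, cx=320, cy=240))  # → (520.0, 640.0)
```

Calibration estimates these intrinsics (plus distortion coefficients and the rotation/translation between the two cameras) so that both views can later be brought into a common, row-aligned geometry.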
S32: correcting the first image captured by the first camera and the second image captured by the second camera respectively, to obtain a corrected first image and a corrected second image.
In this embodiment, correcting the first image captured by the first camera and the second image captured by the second camera respectively includes the step of:
performing distortion correction and stereo rectification on the first image captured by the first camera and the second image captured by the second camera respectively.
Camera distortion arises because the imaging model is not exact: to increase the luminous flux, a lens is used in place of a pinhole aperture for imaging, and since this replacement cannot fully obey the properties of pinhole imaging, distortion is introduced. In this embodiment, the distortion correction includes lens (radial) distortion correction and/or tangential distortion correction.
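For concreteness, the radial and tangential terms can be sketched with the widely used Brown-Conrady distortion model; this is a generic textbook model offered as an illustration under assumed coefficient names (k1, k2, p1, p2), not the specific correction claimed by the patent:

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to a
    normalized image point (x, y), following the Brown-Conrady model.
    Distortion *correction* inverts this mapping, usually iteratively."""
    r2 = x * x + y * y                        # squared distance from the optical axis
    radial = 1.0 + k1 * r2 + k2 * r2 * r2     # radial scaling factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the point is unchanged; a positive k1
# pushes points outward (barrel/pincushion behaviour):
print(distort(0.1, 0.2))          # → (0.1, 0.2)
print(distort(0.1, 0.2, k1=0.5))  # radius scaled by 1 + 0.5 * 0.05 = 1.025
```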
For stereo matching, the image planes of the binocular camera must be row-aligned, so the images have to be rectified; stereo rectification effectively reduces the amount of computation required for stereo matching.
S33: extracting feature points from the corrected first image and the corrected second image respectively.
In this embodiment, extracting the feature points of the corrected first image and second image specifically includes the step of:
extracting the feature points of the corrected first image and second image respectively using the ORB algorithm.
In this embodiment, ORB (Oriented FAST and Rotated BRIEF) is a fast algorithm for feature-point extraction and description. The ORB algorithm has two parts: feature-point extraction and feature-point description. The extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and the description is improved from the BRIEF (Binary Robust Independent Elementary Features) descriptor. ORB combines the FAST feature-point detector with the BRIEF descriptor, improving and optimizing both over their original forms.
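One practical consequence of ORB's binary BRIEF-style descriptors is that matching reduces to comparing bit strings with the Hamming distance. The toy sketch below illustrates this with 4-bit descriptors; real ORB descriptors are 256-bit strings, and this brute-force matcher is a generic illustration rather than the patent's implementation:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_descriptors(desc_left, desc_right):
    """For each left-image descriptor, return the index of the
    right-image descriptor with the smallest Hamming distance
    (brute-force nearest-neighbour matching)."""
    return [min(range(len(desc_right)), key=lambda j: hamming(d, desc_right[j]))
            for d in desc_left]

left = [(0, 1, 1, 0), (1, 1, 0, 0)]   # toy 4-bit descriptors
right = [(1, 1, 0, 0), (0, 1, 1, 1)]
print(match_descriptors(left, right))  # → [1, 0]
```

In practice a ratio test or cross-check would be added to reject ambiguous matches before using the matched points for the row-error statistics below.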
In one embodiment, after extracting the feature points of the corrected first image and second image, the method further includes the step of:
saving the extracted feature points of the first image and the second image into a first array and a second array respectively.
In this embodiment, saving the extracted feature points into arrays makes the subsequent statistics over the feature points easier.
S34: count the row error between the feature points of the first image and the feature points of the second image.
S35: compare the counted row error between the feature points of the first image and the feature points of the second image with a preset threshold, and perform corresponding processing according to the comparison result.
In the present embodiment, performing corresponding processing according to the comparison result specifically includes the steps of:
if the row error between the feature points of the first image and the feature points of the second image exceeds the preset threshold, displaying prompt information and re-executing the step of obtaining the first image captured by the first camera and the second image captured by the second camera, or obtaining the disparity map of the previous frame and outputting that disparity map;
if the row error between the feature points of the first image and the feature points of the second image does not exceed the preset threshold, performing stereo matching on the corrected first image and second image and outputting a disparity map.
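A minimal sketch of the step-S35 decision, assuming the feature points of the two images have already been put into correspondence; the function name and the 1-pixel threshold are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

def check_rectification(rows_first, rows_second, threshold=1.0):
    """Compare the mean row error between matched feature points with
    a preset threshold.

    Returns "stereo_match" when the error is within the threshold
    (rectification is accurate enough to run stereo matching), and
    "reprompt" otherwise (display prompt information, or fall back to
    the previous frame's disparity map).
    """
    row_error = np.mean(np.abs(np.asarray(rows_first, dtype=float) -
                               np.asarray(rows_second, dtype=float)))
    return "stereo_match" if row_error <= threshold else "reprompt"

# Well-rectified pair: matched points lie on (almost) the same rows.
good = check_rectification([10.0, 52.1, 199.8], [10.2, 52.0, 200.1])
# Poorly rectified pair: rows disagree by several pixels.
bad = check_rectification([10.0, 52.1, 199.8], [14.0, 57.3, 191.0])
```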
In the present embodiment, the goal of stereo matching is to find corresponding points between the two rectified images; the parallax of each pair of corresponding points is computed as the difference between their coordinates in the two images, and a disparity map is finally output.
Stereo matching algorithms include local stereo matching algorithms and global stereo matching algorithms. Local stereo matching algorithms mainly use a sliding-window approach and estimate disparity with a locally optimal cost function; examples include the SAD (Sum of Absolute Differences) and SSD (Sum of Squared Differences) algorithms. Global stereo matching algorithms estimate disparity with a globally optimal method: a global energy function is established, and the optimal disparity map is obtained by minimizing this global energy function. Stereo matching algorithms that may be used for reference include, for example, the BM algorithm, the SGBM algorithm, and graph-cut algorithms.
In one embodiment, after outputting the disparity map, the method further includes the step of:
saving the output disparity map.
In this embodiment, the saved disparity map is the current disparity map, which facilitates computation in subsequent stereo matching.
To further illustrate the present embodiment, a smartphone is now taken as an example and described with reference to Fig. 4.
As shown in Fig. 4, the smartphone includes a binocular camera, i.e., a main camera and a secondary camera. A left-eye image and a right-eye image are obtained by the main camera and the secondary camera, respectively.
Distortion correction and stereo rectification are applied to the left-eye image captured by the main camera, yielding the corrected left-eye image. The corrected right-eye image is obtained in the same way.
The ORB feature points of the corrected left-eye image are extracted and stored in array a. Similarly, the ORB feature points extracted from the corrected right-eye image are stored in array b.
The row error between the ORB feature points in arrays a and b is counted, and it is judged whether the error exceeds a threshold T. If the error exceeds the threshold T, the rectification accuracy is low; prompt information can then be displayed, such as a message informing the user that rectification has failed or a message asking the user to re-shoot, or the disparity map saved in c can be output, that disparity map being the disparity map of the previous frame. If the error does not exceed the threshold T, the rectification accuracy is high; stereo matching can then be performed, a disparity map is output, and the output disparity map is saved in c.
In the image processing method provided by this embodiment of the present invention, the images of the binocular camera are rectified and their feature points extracted, and the row error between the feature points is compared with a preset threshold; this improves the precision with which stereo rectification is judged and handled, and at the same time improves the stability and accuracy of background blurring.
Second embodiment
Referring to Fig. 5, a second embodiment of the present invention provides a terminal. The terminal 40 includes a memory 41, a processor 42, and an image processing program stored on the memory 41 and executable on the processor 42. When executed by the processor 42, the image processing program is used to implement the following steps of the image processing method:
S31: obtain a first image captured by a first camera and a second image captured by a second camera;
S32: correct the first image captured by the first camera and the second image captured by the second camera respectively, obtaining a corrected first image and a corrected second image;
S33: extract the feature points of the corrected first image and of the corrected second image, respectively;
S34: count the row error between the feature points of the first image and the feature points of the second image;
S35: compare the counted row error between the feature points of the first image and the feature points of the second image with a preset threshold, and perform corresponding processing according to the comparison result.
When executed by the processor 42, the image processing program is further used to implement the following steps of the image processing method:
obtaining the first image captured by the first camera and the second image captured by the second camera includes the steps of:
calibrating the first camera and the second camera respectively;
obtaining the first image captured by the calibrated first camera and the second image captured by the calibrated second camera.
When executed by the processor 42, the image processing program is further used to implement the following step of the image processing method:
correcting the first image captured by the first camera and the second image captured by the second camera respectively includes the step of:
performing distortion correction and stereo rectification on the first image captured by the first camera and on the second image captured by the second camera, respectively.
When executed by the processor 42, the image processing program is further used to implement the following step of the image processing method:
the distortion correction includes radial distortion correction and/or tangential distortion correction.
When executed by the processor 42, the image processing program is further used to implement the following step of the image processing method:
extracting the feature points of the corrected first image and second image respectively specifically includes the step of:
extracting the feature points of the corrected first image and second image respectively by means of the ORB algorithm.
When executed by the processor 42, the image processing program is further used to implement the following step of the image processing method:
after extracting the feature points of the corrected first image and second image respectively, the method further includes the step of:
saving the extracted feature points of the first image and of the second image into a first array and a second array, respectively.
When executed by the processor 42, the image processing program is further used to implement the following steps of the image processing method:
performing corresponding processing according to the comparison result specifically includes the steps of:
if the row error between the feature points of the first image and the feature points of the second image exceeds the preset threshold, displaying prompt information and re-executing the step of obtaining the first image captured by the first camera and the second image captured by the second camera, or obtaining the disparity map of the previous frame and outputting that disparity map;
if the row error between the feature points of the first image and the feature points of the second image does not exceed the preset threshold, performing stereo matching on the corrected first image and second image and outputting a disparity map.
When executed by the processor 42, the image processing program is further used to implement the following step of the image processing method:
after outputting the disparity map, the method further includes the step of:
saving the output disparity map.
To further illustrate the present embodiment, a smartphone is now taken as an example and described with reference to Fig. 4.
As shown in Fig. 4, the smartphone includes a binocular camera, i.e., a main camera and a secondary camera. A left-eye image and a right-eye image are obtained by the main camera and the secondary camera, respectively.
Distortion correction and stereo rectification are applied to the left-eye image captured by the main camera, yielding the corrected left-eye image. The corrected right-eye image is obtained in the same way.
The ORB feature points of the corrected left-eye image are extracted and stored in array a. Similarly, the ORB feature points extracted from the corrected right-eye image are stored in array b.
The row error between the ORB feature points in arrays a and b is counted, and it is judged whether the error exceeds a threshold T. If the error exceeds the threshold T, the rectification accuracy is low; prompt information can then be displayed, such as a message informing the user that rectification has failed or a message asking the user to re-shoot, or the disparity map saved in c can be output, that disparity map being the disparity map of the previous frame. If the error does not exceed the threshold T, the rectification accuracy is high; stereo matching can then be performed, a disparity map is output, and the output disparity map is saved in c.
In the terminal provided by this embodiment of the present invention, the images of the binocular camera are rectified and their feature points extracted, and the row error between the feature points is compared with a preset threshold; this improves the precision with which stereo rectification is judged and handled, and at the same time improves the stability and accuracy of background blurring.
3rd embodiment
A third embodiment of the present invention provides a computer-readable storage medium. An image processing program is stored on the computer-readable storage medium, and when executed by a processor, the image processing program implements the steps of the image processing method described in the first embodiment.
With the computer-readable storage medium provided by this embodiment of the present invention, the images of the binocular camera are rectified and their feature points extracted, and the row error between the feature points is compared with a preset threshold; this improves the precision with which stereo rectification is judged and handled, and at the same time improves the stability and accuracy of background blurring.
It should be noted that, as used herein, the terms "comprising", "including" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The above embodiment numbers of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software together with a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described above, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can devise many other forms without departing from the spirit of the invention and the scope of the claimed protection, all of which fall within the protection of the present invention.
Claims (10)
1. An image processing method, characterized in that the method includes the steps of:
obtaining a first image captured by a first camera and a second image captured by a second camera;
correcting the first image captured by the first camera and the second image captured by the second camera respectively, obtaining a corrected first image and a corrected second image;
extracting the feature points of the corrected first image and of the corrected second image, respectively;
counting the row error between the feature points of the first image and the feature points of the second image;
comparing the counted row error between the feature points of the first image and the feature points of the second image with a preset threshold, and performing corresponding processing according to the comparison result.
2. The image processing method according to claim 1, characterized in that obtaining the first image captured by the first camera and the second image captured by the second camera includes the steps of:
calibrating the first camera and the second camera respectively;
obtaining the first image captured by the calibrated first camera and the second image captured by the calibrated second camera.
3. The image processing method according to claim 1, characterized in that correcting the first image captured by the first camera and the second image captured by the second camera respectively includes the step of:
performing distortion correction and stereo rectification on the first image captured by the first camera and on the second image captured by the second camera, respectively.
4. The image processing method according to claim 3, characterized in that the distortion correction includes radial distortion correction and/or tangential distortion correction.
5. The image processing method according to claim 1, characterized in that extracting the feature points of the corrected first image and second image respectively specifically includes the step of:
extracting the feature points of the corrected first image and second image respectively by means of the ORB algorithm.
6. The image processing method according to claim 1, characterized in that after extracting the feature points of the corrected first image and second image respectively, the method further includes the step of:
saving the extracted feature points of the first image and of the second image into a first array and a second array, respectively.
7. The image processing method according to claim 1, characterized in that performing corresponding processing according to the comparison result specifically includes the steps of:
if the row error between the feature points of the first image and the feature points of the second image exceeds the preset threshold, displaying prompt information and re-executing the step of obtaining the first image captured by the first camera and the second image captured by the second camera, or obtaining the disparity map of the previous frame and outputting that disparity map;
if the row error between the feature points of the first image and the feature points of the second image does not exceed the preset threshold, performing stereo matching on the corrected first image and second image and outputting a disparity map.
8. The image processing method according to claim 7, characterized in that after outputting the disparity map, the method further includes the step of:
saving the output disparity map.
9. A terminal, characterized in that the terminal includes a memory, a processor, and an image processing program stored on the memory and executable on the processor, wherein when executed by the processor, the image processing program implements the steps of the image processing method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that an image processing program is stored on the computer-readable storage medium, and when executed by a processor, the image processing program implements the steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710913611.9A CN107730462A (en) | 2017-09-30 | 2017-09-30 | A kind of image processing method, terminal and computer-readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710913611.9A CN107730462A (en) | 2017-09-30 | 2017-09-30 | A kind of image processing method, terminal and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107730462A true CN107730462A (en) | 2018-02-23 |
Family
ID=61209421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710913611.9A Pending CN107730462A (en) | 2017-09-30 | 2017-09-30 | A kind of image processing method, terminal and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107730462A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108376384A (en) * | 2018-03-12 | 2018-08-07 | 海信集团有限公司 | Antidote, device and the storage medium of disparity map |
CN108737735A (en) * | 2018-06-15 | 2018-11-02 | Oppo广东移动通信有限公司 | Method for correcting image, electronic equipment and computer readable storage medium |
CN109559353A (en) * | 2018-11-30 | 2019-04-02 | Oppo广东移动通信有限公司 | Camera module scaling method, device, electronic equipment and computer readable storage medium |
CN109658459A (en) * | 2018-11-30 | 2019-04-19 | Oppo广东移动通信有限公司 | Camera calibration method, device, electronic equipment and computer readable storage medium |
CN109886894A (en) * | 2019-02-26 | 2019-06-14 | 长沙八思量信息技术有限公司 | Calculation method, device and the computer readable storage medium of laser marking figure adjustment value |
CN110636276A (en) * | 2019-08-06 | 2019-12-31 | RealMe重庆移动通信有限公司 | Video shooting method and device, storage medium and electronic equipment |
CN111292380A (en) * | 2019-04-02 | 2020-06-16 | 展讯通信(上海)有限公司 | Image processing method and device |
CN112257713A (en) * | 2020-11-12 | 2021-01-22 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
US11538175B2 (en) | 2019-09-29 | 2022-12-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130135439A1 (en) * | 2011-11-29 | 2013-05-30 | Fujitsu Limited | Stereoscopic image generating device and stereoscopic image generating method |
CN103649997A (en) * | 2011-07-13 | 2014-03-19 | 高通股份有限公司 | Method and apparatus for calibrating an imaging device |
CN106910222A (en) * | 2017-02-15 | 2017-06-30 | 中国科学院半导体研究所 | Face three-dimensional rebuilding method based on binocular stereo vision |
2017-09-30: CN application CN201710913611.9A filed; patent/CN107730462A/en; status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103649997A (en) * | 2011-07-13 | 2014-03-19 | 高通股份有限公司 | Method and apparatus for calibrating an imaging device |
US20130135439A1 (en) * | 2011-11-29 | 2013-05-30 | Fujitsu Limited | Stereoscopic image generating device and stereoscopic image generating method |
CN106910222A (en) * | 2017-02-15 | 2017-06-30 | 中国科学院半导体研究所 | Face three-dimensional rebuilding method based on binocular stereo vision |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108376384A (en) * | 2018-03-12 | 2018-08-07 | 海信集团有限公司 | Antidote, device and the storage medium of disparity map |
CN108376384B (en) * | 2018-03-12 | 2020-09-18 | 海信集团有限公司 | Method and device for correcting disparity map and storage medium |
CN108737735B (en) * | 2018-06-15 | 2019-09-17 | Oppo广东移动通信有限公司 | Method for correcting image, electronic equipment and computer readable storage medium |
CN108737735A (en) * | 2018-06-15 | 2018-11-02 | Oppo广东移动通信有限公司 | Method for correcting image, electronic equipment and computer readable storage medium |
CN109658459B (en) * | 2018-11-30 | 2020-11-24 | Oppo广东移动通信有限公司 | Camera calibration method, device, electronic equipment and computer-readable storage medium |
CN109658459A (en) * | 2018-11-30 | 2019-04-19 | Oppo广东移动通信有限公司 | Camera calibration method, device, electronic equipment and computer readable storage medium |
CN109559353A (en) * | 2018-11-30 | 2019-04-02 | Oppo广东移动通信有限公司 | Camera module scaling method, device, electronic equipment and computer readable storage medium |
CN109559353B (en) * | 2018-11-30 | 2021-02-02 | Oppo广东移动通信有限公司 | Camera module calibration method and device, electronic equipment and computer readable storage medium |
CN109886894A (en) * | 2019-02-26 | 2019-06-14 | 长沙八思量信息技术有限公司 | Calculation method, device and the computer readable storage medium of laser marking figure adjustment value |
CN111292380A (en) * | 2019-04-02 | 2020-06-16 | 展讯通信(上海)有限公司 | Image processing method and device |
CN111292380B (en) * | 2019-04-02 | 2022-12-02 | 展讯通信(上海)有限公司 | Image processing method and device |
CN110636276A (en) * | 2019-08-06 | 2019-12-31 | RealMe重庆移动通信有限公司 | Video shooting method and device, storage medium and electronic equipment |
CN110636276B (en) * | 2019-08-06 | 2021-12-28 | RealMe重庆移动通信有限公司 | Video shooting method and device, storage medium and electronic equipment |
US11538175B2 (en) | 2019-09-29 | 2022-12-27 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for detecting subject, electronic device, and computer readable storage medium |
CN112257713A (en) * | 2020-11-12 | 2021-01-22 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107730462A (en) | A kind of image processing method, terminal and computer-readable recording medium | |
CN107680060A (en) | A kind of image distortion correction method, terminal and computer-readable recording medium | |
CN108322644A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN107194963A (en) | A kind of dual camera image processing method and terminal | |
CN108024065A (en) | A kind of method of terminal taking, terminal and computer-readable recording medium | |
CN107133939A (en) | A kind of picture synthesis method, equipment and computer-readable recording medium | |
CN107690065A (en) | A kind of white balance correcting, device and computer-readable recording medium | |
CN107704176A (en) | A kind of picture-adjusting method and terminal | |
CN107707821A (en) | Modeling method and device, bearing calibration, terminal, the storage medium of distortion parameter | |
CN107317963A (en) | A kind of double-camera mobile terminal control method, mobile terminal and storage medium | |
CN107682627A (en) | A kind of acquisition parameters method to set up, mobile terminal and computer-readable recording medium | |
CN107295269A (en) | A kind of light measuring method and terminal, computer-readable storage medium | |
CN107959795A (en) | A kind of information collecting method, equipment and computer-readable recording medium | |
CN107948360A (en) | Image pickup method, terminal and the computer-readable recording medium of flexible screen terminal | |
CN107333056A (en) | Image processing method, device and the computer-readable recording medium of moving object | |
CN107239205A (en) | A kind of photographic method, mobile terminal and storage medium | |
CN107566731A (en) | A kind of focusing method and terminal, computer-readable storage medium | |
CN107909540A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107357500A (en) | A kind of picture-adjusting method, terminal and storage medium | |
CN107613208A (en) | Adjusting method and terminal, the computer-readable storage medium of a kind of focusing area | |
CN107483804A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium | |
CN107240072A (en) | A kind of screen luminance adjustment method, terminal and computer-readable recording medium | |
CN107689029A (en) | Image processing method, mobile terminal and computer-readable recording medium | |
CN107040723A (en) | A kind of imaging method based on dual camera, mobile terminal and storage medium | |
CN107705247A (en) | A kind of method of adjustment of image saturation, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180223 |