CN108151647A - Image processing method, device and mobile terminal - Google Patents

Image processing method, device and mobile terminal

Info

Publication number
CN108151647A
CN108151647A (application CN201611109479.8A)
Authority
CN
China
Prior art keywords
image
image data
point
target
capture module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611109479.8A
Other languages
Chinese (zh)
Inventor
杨起
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201611109479.8A priority Critical patent/CN108151647A/en
Publication of CN108151647A publication Critical patent/CN108151647A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 — Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method applied to a mobile terminal, the mobile terminal comprising a first image capture module and a second image capture module. The method includes: obtaining a first image from the first image capture module and a second image from the second image capture module, where the first image and the second image both contain the same target object; obtaining, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object; obtaining characteristic parameters of the first image capture module and the second image capture module; and, based on the characteristic parameters, the first image data and the second image data, calculating the distance between the at least two target points according to a preset strategy, thereby obtaining the size of the target object. The embodiment of the invention also discloses an image data processing device and a mobile terminal.

Description

Image processing method, device and mobile terminal
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method, an image processing device and a mobile terminal.
Background technology
With the rapid development of intelligent terminals, especially mobile phones, they have become tools that users rely on every day, reaching into every aspect of daily life. At the same time, users often need to measure the length of surrounding objects, but traditional measuring tools such as rulers or tape measures are inconvenient to carry. How to realize the function of a traditional measuring tool with a mobile phone has therefore attracted wide attention.
Ruler tools for mobile phones have already been developed, such as the "super ruler", which mainly uses the screen size and screen pixel count of the phone to measure the size of an object. When measuring with this method, the object to be measured must be placed against the phone screen according to certain rules. This not only requires that the object not be too large — in general, its size must be of the same order of magnitude as the phone screen — but also requires that the object be in direct contact with the screen. The problems with the prior art are therefore that the range of measurable objects is too narrow and the measurement is inconvenient.
Summary of the invention
In view of the above technical problems, the embodiments of the present invention aim to provide an image processing method, device and mobile terminal that expand the range of objects that can be measured, improve the convenience of measurement, and enhance the user experience.
To achieve the above objectives, the technical solution of the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides an image processing method applied to a mobile terminal, the mobile terminal comprising a first image capture module and a second image capture module, where the first image capture module and the second image capture module lie in the same plane and have the same focal length and capture direction. The method includes: obtaining a first image from the first image capture module and a second image from the second image capture module, where the first image and the second image both contain the same target object; obtaining, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object; obtaining characteristic parameters of the first image capture module and the second image capture module, where the characteristic parameters are optical parameters of the two modules; and, based on the characteristic parameters, the first image data and the second image data, calculating the distance between the at least two target points according to a preset strategy, thereby obtaining the size of the target object.
In a second aspect, an embodiment of the present invention provides an image data processing device, the device comprising a first obtaining unit, a first acquisition unit, a second acquisition unit and a second obtaining unit. The first obtaining unit obtains a first image from a first image capture module and a second image from a second image capture module, where the two modules lie in the same plane and have the same focal length and capture direction, and the first image and the second image both contain the same target object. The first acquisition unit obtains, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object. The second acquisition unit obtains characteristic parameters of the first and second image capture modules, where the characteristic parameters are optical parameters of the two modules. The second obtaining unit calculates, based on the characteristic parameters, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, thereby obtaining the size of the target object.
In a third aspect, an embodiment of the present invention provides a mobile terminal comprising a first camera, a second camera and a processor. The first camera lies in the same plane as the second camera and has the same focal length and capture direction, and is used to capture a first image; the second camera is used to capture a second image, where the first image and the second image both contain the same target object. The processor obtains the first image and the second image through the first camera and the second camera; obtains, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object; obtains characteristic parameters of the first camera and the second camera, where the characteristic parameters are optical parameters of the two cameras; and, based on the characteristic parameters, the first image data and the second image data, calculates the distance between the at least two target points according to a preset strategy, thereby obtaining the size of the target object.
The embodiments of the present invention provide an image processing method, device and mobile terminal. First, the mobile terminal obtains a first image from the first image capture module and a second image from the second image capture module, where both images contain the same target object. Second, it obtains, from the first image and the second image respectively, the first image data and the second image data corresponding to at least two target points on the target object. It then obtains the characteristic parameters of the first and second image capture modules, where the characteristic parameters are the optical parameters of the two modules. Finally, based on the characteristic parameters, the first image data and the second image data, it calculates the distance between the at least two target points according to a preset strategy. In this way, the mobile terminal can obtain the size of a target object from two images that both contain it, so that the user can measure the size of an arbitrary object in daily life. This expands the range of measurable objects, makes everyday measurement more convenient, and greatly improves the user experience.
Description of the drawings
Fig. 1 is a kind of flow diagram of the image processing method in the embodiment of the present invention one;
Fig. 2 is the matching schematic diagram of the characteristic point and match point in the embodiment of the present invention one;
Fig. 3 is another flow diagram of the image processing method in the embodiment of the present invention one;
Fig. 4 is the vision mode schematic diagram in the embodiment of the present invention one;
Fig. 5 is the structure diagram of the image data processing system in the embodiment of the present invention two;
Fig. 6 is the structure diagram of the mobile terminal in the embodiment of the present invention three.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
Embodiment one
This embodiment provides an image processing method applied to a mobile terminal, the mobile terminal comprising a first image capture module and a second image capture module, where the two modules lie in the same plane and have the same focal length and capture direction.
In practical applications, the image sensing units in the first and second image capture modules may be complementary metal oxide semiconductor (CMOS) image sensing units or charge-coupled device (CCD) image sensing units; of course, other types of image sensing units are also possible. The embodiment of the present invention is not specifically limited in this respect.
It should be noted that the first image capture module and the second image capture module have the same type and physical parameters. Here, the type may refer to the type of image sensor in the module, such as CMOS or CCD, and the physical parameters may refer to the pixel count of the module.
In practical applications, the mobile terminal may be a smartphone, tablet computer, smart glasses or the like equipped with a binocular camera, as long as the mobile terminal is provided with two image capture modules on the same horizontal line with the same focal length and capture direction.
Fig. 1 is a flow diagram of the image processing method in Embodiment 1 of the present invention. As shown in Fig. 1, the image processing method includes:
S101: Obtain the first image from the first image capture module, and obtain the second image from the second image capture module;
Here, the first image and the second image both contain the same target object.
The target object may be any real object within the capture range of the mobile terminal; for example, it may be, without limitation, a book, a cup, a carton, a desk, and so on.
In practical applications, when the user uses a measurement application on the mobile terminal to measure the size of a target object, the mobile terminal captures two images of the object through its image capture modules: specifically, the first image of the target object through the first image capture module, and the second image through the second image capture module.
Further, images collected by the image capture modules of a mobile terminal often contain various kinds of noise. This may be external noise caused by the light, dust particles and the like of the external environment, or internal noise caused by the internal circuitry and sensor materials of the image capture module. Such noise can blur the objects in the image or even make them indistinguishable, so that the first image data and the second image data obtained would be inaccurate.
Therefore, in a specific implementation, to ensure that the size of the target object is measured accurately, the mobile terminal may also denoise the first image and the second image after obtaining them, and then use the denoised images to obtain the first image data and the second image data corresponding to at least two points on the target object.
In practical applications, the denoising method used by the mobile terminal may be a spatial-domain method such as linear filtering, median filtering or Wiener filtering, or a frequency-domain method such as Fourier-transform or wavelet-transform denoising; of course, other denoising methods such as color histogram equalization are also possible. The embodiment of the present invention is not specifically limited in this respect.
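As a concrete illustration of the median filtering mentioned above, the following is a minimal sketch in Python (the function name, the 3×3 window, and the choice to leave border pixels unchanged are assumptions of this sketch, not details from the embodiment; a production implementation would normally use an image library):

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image given as a list
    of lists of pixel values. Border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Collect the 3x3 neighbourhood and take its median value.
            window = sorted(
                img[yy][xx]
                for yy in (y - 1, y, y + 1)
                for xx in (x - 1, x, x + 1)
            )
            out[y][x] = window[4]  # 5th of 9 sorted values
    return out
```

A single salt-noise pixel surrounded by uniform neighbours is replaced by the neighbourhood median, which is what makes median filtering well suited to the sensor noise described above.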
Further, to identify the target object more accurately, the mobile terminal may also use an image recognition algorithm after obtaining the first image and the second image to judge whether the target object can be recognized in both. If the same target object can be recognized in both images, the obtained first image and second image are usable; otherwise they are unusable, and the first image and the second image must be reacquired.
Therefore, after S101, the method may further include: first establishing background models of the first image and the second image respectively, then identifying the target object in each image based on the respective static background model. When the target object is recognized in both the first image and the second image, S102 is performed; otherwise the first image and the second image are reacquired until the target object can be recognized in both.
Specifically, when establishing the static background model, common modeling algorithms such as Gaussian mixture models or the codebook (Code Book) algorithm may be used, as determined by those skilled in the art according to the actual situation; the embodiment of the present invention is not specifically limited in this respect.
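The background-model check can be reduced to a toy form: compare each pixel of the current frame against a static background estimate and flag large deviations as foreground. This stand-in (the names and the fixed threshold are assumptions of this sketch) omits the per-pixel statistics that a Gaussian mixture or codebook model would maintain:

```python
def foreground_mask(background, frame, threshold=25):
    """Return a boolean mask marking pixels of `frame` that deviate from a
    static background model by more than `threshold` grey levels."""
    return [
        [abs(f - b) > threshold for f, b in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]
```

If the mask contains a coherent foreground region in both images, the target object is considered recognized and S102 can proceed.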
S102: From the first image and the second image respectively, obtain the first image data and the second image data corresponding to at least two target points;
Here, the at least two target points are points on the target object.
After obtaining the first image and the second image, the mobile terminal can obtain from the first image the first image data corresponding to each of at least two target points on the target object, and obtain from the second image the second image data corresponding to the same target points. For example, if the target object is a pencil, the at least two target points may be the two endpoints of the pencil.
In a specific implementation, S102 may include: calibrating the target object on the first image, and obtaining at least two feature points in one-to-one correspondence with at least two target points on the target object; and, using an image matching algorithm, obtaining from the second image at least two match points that match the at least two feature points one by one.
Specifically, the mobile terminal may display the acquired first image on its screen, on which the user manually calibrates the target object to be measured. The mobile terminal then obtains at least two feature points on the target object according to the user's operation on the first image, and finally processes the second image with a preset image matching algorithm to obtain, from the second image, the match points that correspond one by one to the feature points on the first image. For example, referring to Fig. 2, suppose the target object has endpoints A and B, the optical center of the first image capture module is c1, the first image containing the target object acquired by the first image capture module is 201, the optical center of the second image capture module is c2, and the second image containing the target object acquired by the second image capture module is 202. Endpoints A and B are then the two target points on the target object: the match point a2 on the second image 202 matches the feature point a1 on the first image 201, and the match point b2 on the second image 202 matches the feature point b1 on the first image 201.
In practical applications, the mobile terminal may use the block matching (BM) algorithm or the graph cut (GC) algorithm to obtain the match points; of course, other stereo matching algorithms may also be used. The embodiment of the present invention is not specifically limited in this respect.
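At the core of block matching is a windowed cost search along a scan line: because the two modules lie in the same plane with the same focal length and capture direction, the match for a left-image pixel lies on the same row of the right image at a smaller x. A minimal sum-of-absolute-differences version (the function name, window half-width and disparity range are assumptions of this sketch):

```python
def sad_match(left_row, right_row, x_left, half=2, max_disp=8):
    """Find the column in `right_row` best matching position `x_left` of
    `left_row`, by minimising the sum of absolute differences over a
    1-D window of width 2*half+1, searching disparities 0..max_disp."""
    best_x, best_cost = x_left, float("inf")
    for d in range(max_disp + 1):
        x_right = x_left - d
        if x_right - half < 0:  # window would fall off the image
            break
        cost = sum(
            abs(left_row[x_left + k] - right_row[x_right + k])
            for k in range(-half, half + 1)
        )
        if cost < best_cost:
            best_cost, best_x = cost, x_right
    return best_x
```

The returned column plays the role of the match point's abscissa; real BM implementations add 2-D windows, uniqueness checks and sub-pixel refinement.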
Here, to obtain the size of the target object, the mobile terminal needs to obtain at least two feature points on the target object according to the user's operation on the first image. For example, to measure the length of a pencil, at least the feature points corresponding to its two endpoints are needed. To measure the width of a book, the two endpoints of a short edge of the book can serve as the two target points, so the mobile terminal needs the feature points corresponding to those two endpoints; to measure the perimeter of a book, the mobile terminal can obtain the feature points corresponding to its four corners. The number of target points determined on the target object thus varies with the user's measurement needs — the two endpoints of a pencil, the two endpoints of a book's short edge, the four corners of a book — and the number of feature points obtained by the mobile terminal varies accordingly: two, three, four, and so on. The embodiment of the present invention is not specifically limited in this respect. It should also be noted that, since feature points and match points correspond one to one, the number of match points the mobile terminal obtains from the second image is the same as the number of feature points it obtains from the first image.
In practical applications, when calibrating the target object on the first image, the user may "click" on the part of the target object to be measured to select the target points directly; the pixels on the first image corresponding to the selected target points are then the feature points the mobile terminal needs. The user may also use "line selection" to select a target line; the pixels on the first image corresponding to the endpoints of the selected line are then the required feature points. Of course, the user may select the part to be measured in other ways; the embodiment of the present invention is not specifically limited in this respect.
In addition, when S102 is implemented, the user may instead calibrate the feature points manually on the second image, and the mobile terminal then obtains from the first image, through the image matching algorithm, the match points that match the feature points on the second image.
It should be noted that a feature point and its matching match point both correspond to the same physically existing point on the target object — for example, an endpoint of a pencil or a corner of a book. The difference is that the feature point represents where the actual point on the target object is mapped onto the first image, while the corresponding match point represents where the same point is mapped onto the second image. Either way, the point has a definite mapping relation to the actual point on the target object, so that through a binocular vision algorithm the mobile terminal can obtain a first transformation rule from a feature point on the first image to the actual point on the target object, or a second transformation rule from a match point on the second image to the actual point on the target object, and can thereby obtain the actual position information of the point on the target object.
S103: Obtain the characteristic parameters of the first image capture module and the second image capture module;
Here, the characteristic parameters are the optical parameters of the first image capture module and the second image capture module.
To calculate the distance between the above at least two target points, the mobile terminal, after obtaining the first image data and the second image data, also needs to obtain the fixed optical parameters that the two image capture modules possess after binocular calibration.
Specifically, the characteristic parameters may be the distance between the optical center of the first image capture module and the optical center of the second image capture module, or the focal length of the first or second image capture module; of course, other physical parameters are also possible. The embodiment of the present invention is not specifically limited in this respect.
In a specific implementation, S103 may include: obtaining the distance between the optical center of the first image capture module and the optical center of the second image capture module, and obtaining the focal length of the first or second image capture module.
In practical applications, the distance between the optical centers of the two image capture modules may be the distance between the optical centers of the optical lenses in the two modules, i.e. the distance between the optical centers of the two cameras' convex lenses. In addition, in the embodiments of the present invention, the focal lengths of the two image capture modules are identical.
S104: Based on the characteristic parameters, the first image data and the second image data, calculate the distance between the at least two target points according to a preset strategy, and obtain the size of the target object.
After the mobile terminal has obtained the first image data and the second image data corresponding to at least two target points on the target object, it can calculate the distance between those target points from the two sets of image data according to the preset strategy, and thereby obtain the size of the target object.
In a specific implementation, as shown in Fig. 3, S104 may include:
S301: From the first image data and the second image data, determine the disparity (vision deviation) parameters corresponding to the at least two target points;
Here, the number of disparity parameters the mobile terminal determines from the first image data and the second image data matches the number of target points on the target object; that is, each of the at least two target points corresponds to one disparity parameter.
In other embodiments of the invention, S301 may include: obtaining the first position information corresponding to each of the at least two feature points, and obtaining the second position information corresponding to each of the at least two match points, where each piece of second position information corresponds one to one with a piece of first position information; and calculating the deviation between each pair of corresponding first and second position information.
Specifically, from the content of S102 above, the first image data obtained by the mobile terminal may be the at least two feature points on the first image, and the second image data may be the at least two match points on the second image in one-to-one correspondence with them. The mobile terminal then computes a disparity parameter as follows: first obtain the first position information of a feature point on the first image, then obtain the second position information of the corresponding match point on the second image, and finally calculate the deviation between the two. In this way the mobile terminal determines the disparity that arises when a point on the target object is mapped onto the first image and the second image.
Further, the first position information may be the first image coordinates of the feature point in the first image, and the second position information may be the second image coordinates of the match point in the second image.
For example, suppose the target object to be measured is a pencil with endpoints A and B. Taking endpoint A as an example, the method for determining its disparity parameter is described below.
In a specific implementation, referring to Fig. 4, from the coordinates of the feature point a1 corresponding to endpoint A in the first image and the coordinates of the corresponding match point a2 in the second image, the mobile terminal may use the following formula (1) to calculate the disparity of endpoint A between the first image and the second image:
d_A = x_a1 − x_a2 (1)
where x_a1 denotes the abscissa of feature point a1, x_a2 denotes the abscissa of match point a2, and d_A denotes the disparity of endpoint A.
S302: Based on the characteristic parameters and the disparity parameters, process the first image data and the second image data through a binocular vision algorithm to obtain the spatial position information corresponding to the at least two target points;
After obtaining the characteristic parameters and the disparity parameters, the mobile terminal can process, through a binocular vision algorithm, the first image data and the second image data of the at least two target points on the target object, and thereby obtain the spatial position information of those target points in the real world.
In other embodiments of the invention, S302 may include: obtaining the first position information corresponding to each of the at least two feature points, and the second position information corresponding to each of the at least two match points, where each piece of second position information corresponds one to one with a piece of first position information; generating a transformation matrix from the characteristic parameters and the disparity parameters; and processing each piece of first position information and second position information through the transformation matrix, to obtain the first spatial position information corresponding to each feature point or the second spatial position information corresponding to each match point.
Specifically, from the content of S301 above, the first position information may be the first image coordinates of a feature point in the first image, and the second position information may be the second image coordinates of a match point in the second image. Through the transformation matrix, the mobile terminal can convert the first image coordinates of the feature point from the two-dimensional image coordinate system to the three-dimensional world coordinate system, obtaining the first spatial coordinates of the feature point in the world coordinate system; alternatively, it can convert the second image coordinates of the match point from the two-dimensional image coordinate system to the three-dimensional world coordinate system, obtaining the second spatial coordinates of the match point in the world coordinate system.
As an example, still assuming the target object to be measured is a pencil with endpoints A and B, the method for determining the spatial position information of endpoint A is described below.
In specific implementation process, referring still to shown in Fig. 4, according to terminal A in the first image corresponding characteristic point a1 Coordinate, the first image capture module optical center c1With the optical center c of the second image capture module2The distance between T and two image The corresponding vision deviation value d of focal length f and terminal A of acquisition moduleA, equation below (2) may be used to obtain in mobile terminal The space coordinate of terminal A.
Here, (XA, YA, ZA) denotes the space coordinate of endpoint A, (xa1, ya1) denotes the coordinate of the feature point a1, T denotes the distance between the optical centers of the two image capture modules, dA denotes the vision deviation value corresponding to endpoint A, and f denotes the focal length of the two image capture modules.
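Formula (2) itself is not reproduced in this text, but the symbols listed above describe the standard parallel-binocular triangulation relation ZA = f·T/dA, XA = xa1·ZA/f, YA = ya1·ZA/f. The Python sketch below implements that relation under the assumption that image coordinates are expressed in pixels relative to the principal point; the function name and all numeric values are invented for illustration.

```python
def triangulate(x, y, disparity, T, f):
    """Recover the space coordinate (X, Y, Z) of a point from its image
    coordinate (x, y) in the first image, its vision deviation value
    (disparity, in pixels), the optical-centre distance T, and the shared
    focal length f (in pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    Z = f * T / disparity   # depth from similar triangles
    X = x * Z / f           # back-project x through the pinhole model
    Y = y * Z / f
    return (X, Y, Z)

# Endpoint A of the pencil: pixel (120, -40) in the first image,
# disparity 60 px, baseline 100 mm, focal length 600 px.
XA, YA, ZA = triangulate(120, -40, 60, 100.0, 600.0)
```

With these illustrative numbers the depth is ZA = 600 × 100 / 60 = 1000 mm, from which XA and YA follow directly.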
S303: based on the spatial position information corresponding to the at least two target points, calculating the distance between the at least two target points using a distance formula.
Here, after the mobile terminal has obtained the spatial position information corresponding to the at least two target points on the target object, it can calculate the distance between the at least two target points through a distance formula and thereby determine the size of the target object.
In practical applications, the mobile terminal may calculate the distance between the at least two target points with the Euclidean distance formula or with the Mahalanobis distance formula; of course, the mobile terminal may also determine the distance between the at least two target points with other distance calculation methods. The embodiment of the present invention places no specific limitation here.
Illustratively, assume again that the target object to be measured is a pencil with endpoints A and B. Taking the Euclidean distance formula as an example, the method of determining the length of the pencil is described below.
In a specific implementation, the mobile terminal may use formula (3) below to obtain the distance between endpoint A and endpoint B from their space coordinates, thereby determining the length of the pencil.
Here, (XA, YA, ZA) denotes the space coordinate of endpoint A, (XB, YB, ZB) denotes the space coordinate of endpoint B, and D denotes the distance between endpoint A and endpoint B.
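Formula (3) is the ordinary Euclidean distance D = √((XA−XB)² + (YA−YB)² + (ZA−ZB)²). A minimal sketch with illustrative endpoint coordinates:

```python
import math

def euclidean_distance(p, q):
    """Formula (3): straight-line distance between two space coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Illustrative space coordinates of the two pencil endpoints (mm).
A = (200.0, -66.7, 1000.0)
B = (60.0, -60.0, 1010.0)
D = euclidean_distance(A, B)  # the measured length of the pencil
```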
At this point, the processing of the image data is complete.
As can be seen from the above, the technical solution provided by the embodiment of the present invention is applied to a mobile terminal provided with two image capture modules that lie in the same plane and have the same focal length and acquisition direction. After obtaining the first image from the first image capture module and the second image from the second image capture module, the mobile terminal can obtain, from the first image and the second image respectively, the first image data and the second image data corresponding to at least two target points on the target object; it then obtains the characteristic parameter of the first image capture module and the second image capture module, the characteristic parameter being an optical parameter of the two modules; finally, based on the characteristic parameter, the first image data and the second image data, it calculates the distance between the at least two target points according to a preset strategy. In this way, with the image processing method provided by the embodiment of the present invention, the mobile terminal can calculate the size of a target object from two images that both contain that object.
In addition, when the technical solution provided by the embodiment of the present invention is used to determine the size of a target object, on the one hand the size of the target object need not be limited to dimensions comparable to those of the mobile terminal, and on the other hand the target object need not be in contact with the mobile terminal. This solves the prior-art problems of an overly narrow range of measurable target objects and poor convenience of measurement, thereby expanding the range of target objects that can be measured, improving the convenience of measurement, and providing a good user experience.
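The whole pipeline summarized above can be sketched end-to-end. The sketch below assumes already-matched pixel coordinates for the two endpoints in both images (image capture modules in the same plane with the same acquisition direction, coordinates relative to the principal point); the function name and all numbers are hypothetical.

```python
import math

def measure_object(pA1, pA2, pB1, pB2, T, f):
    """Return the size of an object given the matched image coordinates of
    its two target points.  pA1/pB1: coordinates in the first image;
    pA2/pB2: matched coordinates in the second image; T: distance between
    the two optical centres; f: shared focal length (pixels)."""
    def to_space(p1, p2):
        d = p1[0] - p2[0]   # vision deviation value (disparity)
        Z = f * T / d       # depth by binocular triangulation
        return (p1[0] * Z / f, p1[1] * Z / f, Z)
    return math.dist(to_space(pA1, pA2), to_space(pB1, pB2))

# Two pencil endpoints seen by both cameras (illustrative pixel values).
length = measure_object((120, 0), (60, 0), (-120, 0), (-180, 0),
                        T=100.0, f=600.0)
```

With these values both endpoints have disparity 60 px and depth 1000 mm, so the measured length is the 400 mm horizontal separation between them.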
Embodiment two
Based on the same inventive concept, the present Embodiment Two provides an image data processing device. Fig. 5 is a structural diagram of the image data processing device in Embodiment Two of the present invention. As shown in Fig. 5, the image data processing device includes: a first obtaining unit 501, a first acquisition unit 502, a second acquisition unit 503 and a second obtaining unit 504. The first obtaining unit 501 is configured to obtain a first image from a first image capture module and a second image from a second image capture module, where the first image capture module and the second image capture module lie in the same plane and have the same focal length and acquisition direction, and the first image and the second image both contain the same target object. The first acquisition unit 502 is configured to obtain, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object. The second acquisition unit 503 is configured to obtain a characteristic parameter of the first image capture module and the second image capture module, where the characteristic parameter is an optical parameter of the first image capture module and the second image capture module. The second obtaining unit 504 is configured to calculate, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, obtaining the size of the target object.
In a specific implementation, the image data processing device above may be co-located with or separate from the first image capture module and the second image capture module. That is, the first image capture module, the second image capture module and the image data processing device may all be set in the same mobile terminal; alternatively, the first image capture module and the second image capture module may be set in a mobile terminal one while the image data processing device is set in a mobile terminal two. Illustratively, when the two image capture modules and the image data processing device are divided between two mobile terminals, mobile terminal two can receive from mobile terminal one the two images containing the same target object together with the characteristic parameter of the image capture modules that acquired them, and then process them with the image data processing device provided by the present invention; mobile terminal two thus obtains the size of the target object and finally sends that size back to mobile terminal one.
Further, the second obtaining unit is further configured to: determine, through the first image data and the second image data, the vision deviation parameter corresponding to the at least two target points; process the first image data and the second image data with a binocular vision algorithm based on the characteristic parameter and the vision deviation parameter, obtaining the spatial position information corresponding to the at least two target points; and calculate, based on the spatial position information corresponding to the at least two target points, the distance between the at least two target points using a distance formula.
Further, the first acquisition unit is further configured to calibrate the target object on the first image and obtain at least two feature points, where the at least two feature points correspond one-to-one to the at least two target points; and to obtain, from the second image using an image matching algorithm, at least two match points matched one-to-one with the at least two feature points.
Further, the second obtaining unit is further configured to obtain the first position information corresponding to each of the at least two feature points and the second position information corresponding to each of the at least two match points, where each item of second position information corresponds one-to-one to an item of first position information; and to calculate, one-to-one, the deviation between each item of first position information and the corresponding item of second position information.
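For image capture modules in the same plane with the same acquisition direction, the deviation between corresponding position information reduces to the horizontal pixel difference between matched image coordinates (the disparity). A minimal sketch of the one-to-one calculation; the function name and coordinates are hypothetical.

```python
def vision_deviations(first_positions, second_positions):
    """Compute, one-to-one, the deviation between each first position
    (feature point coordinate in the first image) and the corresponding
    second position (match point coordinate in the second image)."""
    if len(first_positions) != len(second_positions):
        raise ValueError("position lists must correspond one-to-one")
    return [x1 - x2
            for (x1, _y1), (x2, _y2) in zip(first_positions, second_positions)]

# Feature points in the first image and their match points in the second.
deviations = vision_deviations([(120, 40), (80, 40)], [(60, 40), (30, 40)])
```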
Further, the second obtaining unit is further configured to obtain the first position information corresponding to each of the at least two feature points and the second position information corresponding to each of the at least two match points, where each item of second position information corresponds one-to-one to an item of first position information; generate a transformation matrix from the characteristic parameter and the vision deviation parameter; and process each item of first position information and each item of second position information through the transformation matrix, obtaining either the first spatial position information corresponding to each of the at least two feature points or the second spatial position information corresponding to each of the at least two match points.
Further, the second acquisition unit is further configured to obtain the distance between the optical center of the first image capture module and the optical center of the second image capture module, and to obtain the focal length of the first image capture module or the second image capture module.
In practical applications, the first obtaining unit, first acquisition unit, second acquisition unit and second obtaining unit above may be realized by a central processing unit (CPU, Central Processing Unit), a graphics processor (GPU, Graphics Processing Unit), a microprocessor (MPU, Micro Processor Unit), a digital signal processor (DSP, Digital Signal Processor), a field programmable gate array (FPGA, Field Programmable Gate Array), or the like.
It should be noted that the description of the device embodiment above is similar to the description of the method embodiment above and has similar advantageous effects, so it is not repeated. For technical details not disclosed in the device embodiment of the present invention, please refer to the description of the method embodiment of the present invention; to save length, they are not repeated here.
Embodiment three
Based on the same inventive concept, the present Embodiment Three provides a mobile terminal. Fig. 6 is a structural diagram of the mobile terminal in Embodiment Three of the present invention. As shown in Fig. 6, the mobile terminal includes: a first camera 601, a second camera 602 and a processor 603. The first camera 601 lies in the same plane as the second camera 602 and has the same focal length and acquisition direction, and is configured to acquire a first image. The second camera 602 is configured to acquire a second image, where the second image and the first image both contain the same target object. The processor 603 is configured to: obtain the first image and the second image through the first camera 601 and the second camera 602; obtain, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, where the at least two target points are points on the target object; obtain a characteristic parameter of the first camera and the second camera, where the characteristic parameter is an optical parameter of the first camera and the second camera; and calculate, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, obtaining the size of the target object.
Further, the processor is further configured to: determine, through the first image data and the second image data, the vision deviation parameter corresponding to the at least two target points; process the first image data and the second image data with a binocular vision algorithm based on the characteristic parameter and the vision deviation parameter, obtaining the spatial position information corresponding to the at least two target points; and calculate, based on the spatial position information corresponding to the at least two target points, the distance between the at least two target points using a distance formula.
Further, the processor is further configured to calibrate the target object on the first image and obtain at least two feature points, where the at least two feature points correspond one-to-one to the at least two target points; and to obtain, from the second image using an image matching algorithm, at least two match points matched one-to-one with the at least two feature points.
Further, the processor is further configured to obtain the first position information corresponding to each of the at least two feature points and the second position information corresponding to each of the at least two match points, where each item of second position information corresponds one-to-one to an item of first position information; and to calculate, one-to-one, the deviation between each item of first position information and the corresponding item of second position information.
Further, the processor is further configured to obtain the first position information corresponding to each of the at least two feature points and the second position information corresponding to each of the at least two match points, where each item of second position information corresponds one-to-one to an item of first position information; generate a transformation matrix from the characteristic parameter and the vision deviation parameter; and process each item of first position information and each item of second position information through the transformation matrix, obtaining either the first spatial position information corresponding to each of the at least two feature points or the second spatial position information corresponding to each of the at least two match points.
Further, the processor is further configured to obtain the distance between the optical center of the first camera and the optical center of the second camera, and to obtain the focal length of the first camera or the second camera.
In practical applications, the image sensor in the first camera and the second camera above may be a CMOS image sensor or a CCD image sensor; of course, it may also be another kind of image sensor. The present embodiment places no specific limitation here.
It should be noted that the description of the mobile terminal embodiment above is similar to the method description above and has the same advantageous effects as the method embodiment, so it is not repeated. For technical details not disclosed in the mobile terminal embodiment of the present invention, those skilled in the art may refer to the description of the method embodiment of the present invention; to save length, they are not repeated here.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device generate a device for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, the instruction device realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention.

Claims (11)

1. An image processing method, characterized in that it is applied to a mobile terminal, the mobile terminal comprising: a first image capture module and a second image capture module, wherein the first image capture module and the second image capture module lie in the same plane and have the same focal length and acquisition direction;

the method comprising:

obtaining a first image from the first image capture module and a second image from the second image capture module, wherein the first image and the second image contain the same target object;

obtaining, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, wherein the at least two target points are points on the target object;

obtaining a characteristic parameter of the first image capture module and the second image capture module, wherein the characteristic parameter is an optical parameter of the first image capture module and the second image capture module;

calculating, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, obtaining the size of the target object.
2. The method according to claim 1, characterized in that calculating, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy comprises:

determining, through the first image data and the second image data, a vision deviation parameter corresponding to the at least two target points;

processing the first image data and the second image data with a binocular vision algorithm based on the characteristic parameter and the vision deviation parameter, obtaining spatial position information corresponding to the at least two target points;

calculating, based on the spatial position information corresponding to the at least two target points, the distance between the at least two target points using a distance formula.
3. The method according to claim 2, characterized in that obtaining, from the first image and the second image respectively, the first image data and the second image data corresponding to the at least two target points comprises:

calibrating the target object in the first image and obtaining at least two feature points, wherein the at least two feature points correspond one-to-one to the at least two target points;

obtaining, from the second image using an image matching algorithm, at least two match points matched one-to-one with the at least two feature points.
4. The method according to claim 3, characterized in that determining, through the first image data and the second image data, the vision deviation parameter corresponding to the at least two target points comprises:

obtaining first position information corresponding to each of the at least two feature points and second position information corresponding to each of the at least two match points, wherein each item of second position information corresponds one-to-one to an item of first position information;

calculating, one-to-one, the deviation between each item of first position information and the corresponding item of second position information.
5. The method according to claim 3, characterized in that processing the first image data and the second image data with the binocular vision algorithm based on the characteristic parameter and the vision deviation parameter, obtaining the spatial position information corresponding to the at least two target points, comprises:

obtaining first position information corresponding to each of the at least two feature points and second position information corresponding to each of the at least two match points, wherein each item of second position information corresponds one-to-one to an item of first position information;

generating a transformation matrix from the characteristic parameter and the vision deviation parameter;

processing each item of first position information and each item of second position information through the transformation matrix, obtaining either first spatial position information corresponding to each of the at least two feature points or second spatial position information corresponding to each of the at least two match points.
6. The method according to claim 1, characterized in that obtaining the characteristic parameter of the first image capture module and the second image capture module comprises:

obtaining the distance between the optical center of the first image capture module and the optical center of the second image capture module, and obtaining the focal length of the first image capture module or the second image capture module.
7. An image data processing device, characterized in that the device comprises: a first obtaining unit, a first acquisition unit, a second acquisition unit and a second obtaining unit, wherein

the first obtaining unit is configured to obtain a first image from a first image capture module and a second image from a second image capture module, wherein the first image capture module and the second image capture module lie in the same plane and have the same focal length and acquisition direction, and the first image and the second image contain the same target object;

the first acquisition unit is configured to obtain, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, wherein the at least two target points are points on the target object;

the second acquisition unit is configured to obtain a characteristic parameter of the first image capture module and the second image capture module, wherein the characteristic parameter is an optical parameter of the first image capture module and the second image capture module;

the second obtaining unit is configured to calculate, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, obtaining the size of the target object.
8. The device according to claim 7, characterized in that the second obtaining unit is further configured to: determine, through the first image data and the second image data, the vision deviation parameter corresponding to the at least two target points; process the first image data and the second image data with a binocular vision algorithm based on the characteristic parameter and the vision deviation parameter, obtaining the spatial position information corresponding to the at least two target points; and calculate, based on the spatial position information corresponding to the at least two target points, the distance between the at least two target points using a distance formula.
9. The device according to claim 8, characterized in that the first acquisition unit is further configured to calibrate the target object on the first image and obtain at least two feature points, wherein the at least two feature points correspond one-to-one to the at least two target points; and to obtain, from the second image using an image matching algorithm, at least two match points matched one-to-one with the at least two feature points.
10. The device according to claim 7, characterized in that the second acquisition unit is further configured to obtain the distance between the optical center of the first image capture module and the optical center of the second image capture module, and to obtain the focal length of the first image capture module or the second image capture module.
11. A mobile terminal, characterized in that the mobile terminal comprises: a first camera, a second camera and a processor, wherein

the first camera lies in the same plane as the second camera and has the same focal length and acquisition direction, and is configured to acquire a first image;

the second camera is configured to acquire a second image, wherein the second image and the first image contain the same target object;

the processor is configured to: obtain the first image and the second image through the first camera and the second camera; obtain, from the first image and the second image respectively, first image data and second image data corresponding to at least two target points, wherein the at least two target points are points on the target object; obtain a characteristic parameter of the first camera and the second camera, wherein the characteristic parameter is an optical parameter of the first camera and the second camera; and calculate, based on the characteristic parameter, the first image data and the second image data, the distance between the at least two target points according to a preset strategy, obtaining the size of the target object.
CN201611109479.8A 2016-12-06 2016-12-06 A kind of image processing method, device and mobile terminal Pending CN108151647A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611109479.8A CN108151647A (en) 2016-12-06 2016-12-06 A kind of image processing method, device and mobile terminal


Publications (1)

Publication Number Publication Date
CN108151647A true CN108151647A (en) 2018-06-12

Family

ID=62467716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611109479.8A Pending CN108151647A (en) 2016-12-06 2016-12-06 A kind of image processing method, device and mobile terminal

Country Status (1)

Country Link
CN (1) CN108151647A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859265A (en) * 2018-12-28 2019-06-07 维沃通信科技有限公司 A kind of measurement method and mobile terminal
CN109859265B (en) * 2018-12-28 2024-04-19 维沃移动通信有限公司 Measurement method and mobile terminal
CN111336073A (en) * 2020-03-04 2020-06-26 南京航空航天大学 Wind driven generator tower clearance visual monitoring device and method
CN111336073B (en) * 2020-03-04 2022-04-05 南京航空航天大学 Wind driven generator tower clearance visual monitoring device and method
CN112528728A (en) * 2020-10-16 2021-03-19 深圳市银星智能科技股份有限公司 Image processing method and device for visual navigation and mobile robot
CN112528728B (en) * 2020-10-16 2024-03-29 深圳银星智能集团股份有限公司 Image processing method and device for visual navigation and mobile robot
CN112378333A (en) * 2020-10-30 2021-02-19 支付宝(杭州)信息技术有限公司 Method and device for measuring warehoused goods
CN112378333B (en) * 2020-10-30 2022-05-06 支付宝(杭州)信息技术有限公司 Method and device for measuring warehoused goods


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180612