CN106324976A - Test system and test method - Google Patents

Test system and test method

Info

Publication number
CN106324976A
CN106324976A CN201510386898.5A
Authority
CN
China
Prior art keywords
image
depth
light panel
those
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510386898.5A
Other languages
Chinese (zh)
Other versions
CN106324976B (en)
Inventor
蔡逸杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chicony Electronics Co Ltd
Original Assignee
Chicony Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chicony Electronics Co Ltd filed Critical Chicony Electronics Co Ltd
Priority to CN201510386898.5A priority Critical patent/CN106324976B/en
Publication of CN106324976A publication Critical patent/CN106324976A/en
Application granted granted Critical
Publication of CN106324976B publication Critical patent/CN106324976B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a test system comprising a light box, a plurality of light panels, and a carrier base. The light panels are separately arranged in the light box, each located at a different depth within the light box. The carrier base fixedly carries a lens module under test and makes it face the light box and the light panels; the spacing distances between the lens module under test and the respective light panels are all different. When the lens module under test shoots a first image frame, the first image frame contains the images of the light panels. With this test system, multiple pieces of different depth information can be obtained by capturing one picture inside a single light box that contains multiple depths of field, and this depth information can be used to inspect the camera module under test and to correct it.

Description

Test system and test method
Technical field
The present invention relates to a test system and a test method. More particularly, the present invention relates to a test system in which a plurality of light panels is arranged inside a single light box, and to a corresponding test method.
Background art
In general, the sensor sensitivity of cameras varies from unit to unit; even cameras of the same model may perform inconsistently. Moreover, a camera's sensor is easily affected by factors such as the lighting environment of the photographed scene and the distance between the camera and the subject, so different cameras shooting the same scene often produce inconsistent images. Factory testing of cameras before shipment is therefore particularly important.
Furthermore, when testing a camera module, multiple light boxes with different configurations are typically required for different test items, and the camera module under test must photograph these light boxes one by one to obtain the image information corresponding to each test item. However, this approach requires setting up different test conditions in multiple separate light boxes; the shooting results may be affected by differences between the light boxes themselves, or by slight differences in the position at which the camera module under test is placed relative to each light box, leading to inaccurate test results. In addition, because the camera module under test must photograph the differently configured light boxes one by one, the traditional test method is time-consuming.
Therefore, how to improve the traditional camera-module test method and provide a test method and test system that are both time-saving and accurate has become a problem the industry needs to solve.
Summary of the invention
A brief overview of the present invention is given below in order to provide a basic understanding of certain aspects of the invention. It should be understood that this overview is not an exhaustive summary of the invention. It is not intended to identify key or essential parts of the invention, nor to limit the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description given later.
To solve the above problems, one embodiment of the invention provides a test system. The test system comprises a light box, a plurality of light panels, and a carrier base. The light panels are separately arranged inside the light box, each located at a different depth within the light box. The carrier base fixedly carries a lens module under test and makes the lens module under test face the light box and the light panels; the spacing distances between the lens module under test and the respective light panels are all different. When the lens module under test shoots a first image frame, the first image frame contains the images of the light panels.
The light panels comprise at least one first light panel, at least one second light panel, and at least one third light panel.
The distance between the at least one first light panel and the lens module under test is a first distance, the distance between the at least one second light panel and the lens module under test is a second distance, and the distance between the at least one third light panel and the lens module under test is a third distance; the first distance is greater than the second distance, and the second distance is greater than the third distance.
Each edge of the at least one first light panel abuts the wall of the light box; the at least one second light panel is respectively placed at at least one corner of the light box; and the at least one third light panel does not abut the wall of the light box.
The at least one first light panel, the at least one second light panel, and the at least one third light panel each carry a plurality of images; those images comprise a plurality of positioning points and an analysis chart.
Those images further comprise at least one of a check chart, a color-block chart, and an object chart.
The lens module under test comprises a plurality of camera lenses. Each camera lens shoots a second image frame, each second image frame containing the images of the at least one first light panel, the at least one second light panel, and the at least one third light panel, and the first image frame is composed of those second image frames.
Further, in one embodiment, the test system comprises:
a processing unit that calculates an image resolution according to the analysis chart in at least one of those second image frames.
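The patent does not specify how the processing unit derives an image resolution from the analysis chart. As a hedged illustration only — the variance-of-Laplacian focus measure below is a common stand-in, not something the patent names — such a computation might look like:

```python
import numpy as np

def sharpness_score(gray):
    """Variance of a discrete Laplacian over a grayscale crop of the
    analysis chart; higher values indicate a sharper (higher-resolution)
    image of the chart."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# A checkerboard (many sharp edges) scores higher than a flat patch.
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
flat = np.full((32, 32), 128.0)
assert sharpness_score(sharp) > sharpness_score(flat)
```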
Further, in one embodiment, the test system comprises:
a processing unit that calculates image depth-of-field information according to the angles between the positioning points in at least two of those second image frames and the lens module under test;
wherein the image depth-of-field information comprises far-field depth-of-field information, mid-field depth-of-field information, and near-field depth-of-field information.
The processing unit also compares the image depth-of-field information with known depth-of-field information to obtain a near-field correction value, a mid-field correction value, and a far-field correction value, and corrects the far-field, mid-field, and near-field depth-of-field information with the near-field, mid-field, and far-field correction values, so as to output a full-depth-of-field image; the full-depth-of-field image comprises the corrected second images.
The processing unit also calculates the sharpness of the full-depth-of-field image according to the analysis charts in the second images of the full-depth-of-field image, performs complementation on the corrected second images to produce a complemented full-depth-of-field image, and calculates a depth-of-field distribution map of the complemented full-depth-of-field image.
Another embodiment of the invention provides a test method. The test method comprises the following steps: separately arranging a plurality of light panels inside a light box, each light panel located at a different depth within the light box; and fixedly carrying a lens module under test on a carrier base and making the lens module under test face the light box and the light panels, the spacing distances between the lens module under test and the respective light panels being all different; wherein, when the lens module under test shoots a first image frame, the first image frame contains the images of the light panels.
By applying the above embodiment, the present invention can obtain multiple pieces of different depth information by capturing a single picture inside a single light box that contains multiple depths of field, and this depth information can be used to inspect the camera module under test and to correct it.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the light box according to an embodiment of the invention;
Fig. 2 is a bottom view of the light-panel accommodation space of the light box depicted in Fig. 1;
Fig. 3 is a side view of the light-panel accommodation space according to an embodiment of the invention;
Fig. 4 is a flow chart of the test method according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the images on a light panel according to an embodiment of the invention;
Fig. 6 is a flow chart of the sub-steps of step S420 of Fig. 4;
Fig. 7 is a schematic diagram of a depth-of-field correction method according to an embodiment of the invention;
Fig. 8 is a schematic diagram of another depth-of-field correction method according to an embodiment of the invention.
Reference numerals:
200: light-panel accommodation space; 220: first light panel; 240: second light panel; 260: third light panel
124: top
A: viewing direction
120: light box
122: carrier base
140: lens module under test
d1, d2, d3: distances
S410–S460, S421–S423: steps
500, 500a–500e: positioning charts
810, 820, 830: pictures
811, 812, 821, 822: imaged positions
831: fixed point
x1, x2: angles
520: check chart
530: object chart
510: analysis chart
Detailed description of the invention
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Elements and features described in one drawing or embodiment of the invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, the drawings and description omit the representation and description of components and processes that are irrelevant to the invention or known to those of ordinary skill in the art. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The terms "comprising", "including", "having", "containing", and the like used herein are open-ended terms, meaning including but not limited to. Please refer to Fig. 1 and Fig. 2 together. Fig. 1 is a schematic diagram of the light box according to an embodiment of the invention; Fig. 2 is a bottom view of the light-panel accommodation space of the light box depicted in Fig. 1. The test system 100 comprises a light box 120, a plurality of light panels 220, 240, 260, and a carrier base 122. The light panels 220, 240, 260 are separately arranged inside the light box 120, each located at a different depth within the light box 120. The carrier base 122 fixedly carries a lens module under test 140 and makes the lens module under test 140 face the light box 120 and the light panels 220, 240, 260; the spacing distances between the lens module under test 140 and the respective light panels 220, 240, 260 are all different. When the lens module under test 140 shoots a first image frame, the first image frame contains the images of the light panels 220, 240, 260.
More specifically, as shown in Fig. 1, in one embodiment the light panels 220, 240, 260 are placed inside a light-panel accommodation space 200 whose ceiling is the top 124 of the light box 120, and the camera module under test 140 can be placed on the carrier base 122. However, the arrangement of the camera module under test 140 relative to the accommodation space 200 is not limited to this; in another embodiment, the camera module under test 140 need only be placed at a position from which each of the light panels 220, 240, 260 can be photographed. In this way, when the lens module under test 140 shoots an image frame toward the accommodation space 200, the captured image frame simultaneously contains the images of the light panels 220, 240, 260.
In one embodiment, looking toward the top 124 of the light box 120 along the viewing direction A shown in Fig. 1, the light panels are arranged as shown in Fig. 2. In this embodiment, the accommodation space 200 contains at least one first light panel 220, at least one second light panel 240, and at least one third light panel 260. Each edge of the first light panel 220 abuts the wall of the light box 120 — for example, all four edges of a rectangular first light panel 220 abut the four walls of the light box 120. The second light panels 240 are respectively placed at corners of the light box — for example, a plurality of second light panels 240 are fixed at the four corners of the light box by suspension or by brackets. The third light panel 260 does not abut the wall of the light box — for example, like the second light panels 240, the third light panel 260 is fixed in the middle of the accommodation space 200 by suspension or by brackets, without abutting any wall.
With this configuration, the light panels do not completely occlude one another in the captured image frame, and each can be photographed at a size sufficient for subsequent analysis.
Next, please refer to Figs. 3–5. Fig. 3 is a side view of the light-panel accommodation space according to an embodiment of the invention. Fig. 4 is a flow chart of the test method according to an embodiment of the invention. Fig. 5 is a schematic diagram of the images on a light panel according to an embodiment of the invention.
In step S410, the lens module under test is placed at the position to be tested. In one embodiment, the carrier base 122 fixedly carries the lens module under test and makes it face the light box 120 and the light panels 220, 240, 260, the spacing distances between the lens module under test and the respective light panels 220, 240, 260 being all different. An embodiment of these spacing distances is detailed below.
In one embodiment, as shown in Fig. 3, the lens module under test 140 is placed on the carrier base 122; the first distance between the first light panel 220 and the lens module under test 140 is d1, the second distance between the second light panel 240 and the lens module under test 140 is d2, and the third distance between the third light panel 260 and the lens module under test 140 is d3, where the first distance d1 is greater than the second distance d2, and the second distance d2 is greater than the third distance d3. In one embodiment, the first distance d1 may be 95–105 cm, the second distance d2 may be 55–65 cm, and the third distance d3 may be 5–15 cm.
With the configuration of this embodiment, by shooting the light panels 220, 240, 260, the lens module under test 140 acquires a picture containing image portions at different depths of field — far field, mid field, and near field — which facilitates the subsequent analysis of the test results.
On the other hand, as shown in Fig. 5, the surfaces of the light panels 220, 240, 260 facing the lens module under test 140 each carry a plurality of images, including a positioning chart 500 and an analysis chart 510. The positioning chart 500 contains a plurality of positioning points. In one embodiment, the images may also include a check chart 520, a color-block chart (not shown), an object chart 530, or other images that can serve as test items for the module under test.
In addition, in one embodiment, the lens module under test 140 may be an array camera comprising a plurality of camera lenses. Each camera lens shoots a second image frame containing the images of the first light panel 220, the second light panel 240, and the third light panel 260, and these second image frames together make up a first image frame. In one embodiment, four second image frames are stitched into one first image frame. In another embodiment, a processing unit may calculate an image resolution according to the analysis chart in at least one second image frame.
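The patent states that four second image frames are stitched into one first image frame but does not describe the stitching. Assuming the simplest case of four equally sized, pre-aligned frames in a 2×2 layout (an assumption for illustration, not the patent's method), the composition can be sketched as:

```python
import numpy as np

def stitch_2x2(tl, tr, bl, br):
    """Tile four equally sized second image frames into one
    first image frame (top-left, top-right, bottom-left, bottom-right)."""
    return np.vstack([np.hstack([tl, tr]), np.hstack([bl, br])])

# Four 2x3 dummy frames, one per camera lens of the array camera.
tiles = [np.full((2, 3), v) for v in range(4)]
first = stitch_2x2(*tiles)
assert first.shape == (4, 6)
assert first[0, 0] == 0 and first[3, 5] == 3
```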
Next, return to step S420 of Fig. 4. In step S420, depth-of-field information is calculated from the second image frames, and the image depth-of-field information is compared with known depth-of-field information to obtain a near-field correction value, a mid-field correction value, and a far-field correction value.
In one embodiment, the lens module under test 140 shoots the light panels 220, 240, 260 with each of its camera lenses, so that each camera lens obtains a second image frame, and a processing unit (not shown) calculates image depth-of-field information according to the angles between the positioning points in at least two of the second image frames and the lens module under test 140. The image depth-of-field information comprises far-field, mid-field, and near-field depth-of-field information.
In another embodiment, the lens module under test 140 has four camera lenses, each of which shoots all of the light panels 220, 240, 260 to obtain a second image frame. Since every light panel 220, 240, 260 carries at least a positioning chart 500 and an analysis chart 510, each of the four second image frames includes a positioning chart 500 and an analysis chart 510. The processing unit can calculate image depth information from the positioning charts 500 in at least two of the four second image frames shot by the four camera lenses.
A specific embodiment of producing the depth-of-field information is described below. Please refer to Figs. 6–8. Fig. 6 is a flow chart of the sub-steps of step S420 of Fig. 4. Fig. 7 is a schematic diagram of a depth-of-field correction method according to an embodiment of the invention. Fig. 8 is a schematic diagram of a depth-of-field correction method according to another embodiment of the invention.
In step S421, the processing unit obtains the coordinate positions of the far-field, mid-field, and near-field positioning points from each second image frame.
For example, as shown in Fig. 7, positioning charts 500a–500d belong to four different second image frames. In the positioning charts 500a–500d, circular patterns represent far-field positioning points, square patterns represent mid-field positioning points, and triangular patterns represent near-field positioning points. In one embodiment, the processing unit selects the positioning charts 500a–500d of all the second image frames for computation and obtains the coordinate positions of the far-field, mid-field, and near-field positioning points from each second image frame. It will be understood that, in general, the more second image frames are selected for computation, the higher the precision of the depth calculation in the subsequent steps.
In step S422, the processing unit calculates the angles between the positioning points in at least two second image frames and the lens module under test 140 to obtain image depth-of-field information, which comprises far-field, mid-field, and near-field depth-of-field information.
For example, as shown in Fig. 8, based on the concept of stereo photographic imaging, when an object is shot from two camera positions, the two captured pictures 810, 820 are superimposed to produce a combined picture 830. The two imaged positions 811, 821 of an object farther from the camera lens subtend a smaller angle x1 at a fixed point 831, while the two imaged positions 812, 822 of an object nearer to the camera lens subtend a larger angle x2 at the fixed point 831. Accordingly, by superimposing multiple pictures taken from different camera positions and examining, in the combined picture, the angle between the multiple imaged positions of the same object and the camera lens (or a certain fixed point), one can distinguish whether an object in the image frame is far, mid, or near, and thus calculate the image depth-of-field information.
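The angle concept above corresponds to the standard stereo triangulation relation. As an illustrative sketch only — the patent reasons about angles to a fixed point rather than this exact formula — depth can be recovered from the pixel shift of the same positioning point between two second image frames; the focal length and baseline values are assumptions:

```python
def depth_from_disparity(focal_px, baseline_cm, x_left, x_right):
    """Classic stereo relation: depth = focal length x baseline / disparity.

    The disparity is the horizontal shift (in pixels) of the same
    positioning point between two second image frames; a larger shift
    corresponds to a nearer point, matching the larger angle x2
    described for near objects."""
    disparity = abs(x_left - x_right)
    if disparity == 0:
        return float("inf")  # no shift: point effectively at infinity
    return focal_px * baseline_cm / disparity

# A near point shifts 40 px between views; a far point only 4 px.
near = depth_from_disparity(800, 6.0, 410, 370)  # -> 120.0 cm
far = depth_from_disparity(800, 6.0, 402, 398)   # -> 1200.0 cm
assert near < far
```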
In one embodiment, as shown in Fig. 7, the positioning charts 500a–500d are the positioning charts in the second image frames shot by the four camera lenses. The processing unit can superimpose the positioning charts 500a–500d into a merged positioning chart 500e according to the coordinate position of each positioning point in the charts 500a–500d. Based on the above concept — that in any two second images the corresponding far-field positioning points subtend a smaller angle at the lens module under test 140 (or at any fixed point), while the corresponding near-field positioning points subtend a larger angle — the distance relationship between each positioning point and the lens module under test 140 can be learned from the merged positioning chart 500e.
For example, in the merged positioning chart 500e, the circular positioning points are the densest: any two circular positioning points subtend a smaller angle at the lens module under test 140 (or at any fixed point) and their density is higher, so they can be judged to be far-field positioning points. The density of the square positioning points is intermediate, so they can be judged to be mid-field positioning points. The triangular positioning points are the sparsest: any two triangular positioning points subtend a larger angle at the lens module under test 140 and their density is the lowest, so they can be judged to be near-field positioning points.
Accordingly, after the positioning charts in the second images are superimposed, each positioning point can be classified as far, mid, or near in the picture according to the density of the positioning points or the angle subtended by two positioning points at the camera lens.
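A minimal sketch of this far/mid/near classification, using the pixel shift between superimposed positioning charts as a proxy for the subtended angle — the threshold values below are illustrative assumptions, not values from the patent:

```python
def classify_points(disparities, near_thresh, far_thresh):
    """Bucket positioning points into near / mid / far bands by their
    disparity: a larger shift between superimposed charts means a
    nearer point (denser clusters correspond to far-field points)."""
    labels = []
    for d in disparities:
        if d >= near_thresh:
            labels.append("near")
        elif d <= far_thresh:
            labels.append("far")
        else:
            labels.append("mid")
    return labels

assert classify_points([40, 12, 3], near_thresh=30, far_thresh=5) == \
    ["near", "mid", "far"]
```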
In step S423, the processing unit compares the image depth-of-field information with known depth-of-field information to obtain a near-field correction value, a mid-field correction value, and a far-field correction value.
For example, as in the paragraph describing Fig. 3 above, the actual distances between the light panels 220, 240, 260 and the lens module under test 140 are known and can serve as the known depth-of-field information. Therefore, by comparing the image depth-of-field information acquired in step S422 with the known depth-of-field information, the processing unit can calculate the near-field, mid-field, and far-field correction values.
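Step S423 can be sketched as a per-band comparison. The band names and the additive form of the correction values below are assumptions for illustration; the patent does not define the arithmetic:

```python
def correction_values(measured, known):
    """Per-band offsets between measured and known depths (cm); adding
    a band's offset to a later measurement in that band corrects it."""
    return {band: known[band] - measured[band] for band in known}

measured = {"near": 9.0, "mid": 58.0, "far": 103.0}   # from step S422
known = {"near": 10.0, "mid": 60.0, "far": 100.0}     # panel distances
corr = correction_values(measured, known)
assert corr == {"near": 1.0, "mid": 2.0, "far": -3.0}
assert measured["far"] + corr["far"] == known["far"]
```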
In step S424, a storage device (not shown) stores correction parameters comprising the near-field correction value, the mid-field correction value, and the far-field correction value.
The processing unit and the storage device may be placed inside the light box 120 or independently outside the light box 120, and are electrically coupled to the lens module under test 140. The processing unit may be implemented by a circuit such as a microcontroller, a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), or a logic circuit. The storage device stores various data and may be, for example, a memory, a hard disk, a portable disk, or a memory card.
Next, return to step S430 of Fig. 4. In step S430, the far-field, mid-field, and near-field depth-of-field information is corrected with the near-field, mid-field, and far-field correction values to output a full-depth-of-field image, which comprises the corrected second images.
In step S440, the processing unit calculates the sharpness of the full-depth-of-field image according to the analysis charts in the second images of the full-depth-of-field image, performs complementation on the corrected second images to produce a complemented full-depth-of-field image, and calculates a depth-of-field distribution map of the complemented full-depth-of-field image. The depth-of-field distribution map may be, for example, a depth-of-field histogram corresponding to the complemented full-depth-of-field image, or another numerically presented distribution map.
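The histogram form of the distribution map can be sketched directly with NumPy; the bin edges below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def depth_histogram(depth_map, edges):
    """Depth distribution of a full-depth-of-field image: pixel counts
    per depth bin, a numeric form of the distribution map."""
    counts, _ = np.histogram(depth_map, bins=edges)
    return counts

depths = np.array([[10, 10, 60], [60, 100, 100]])  # per-pixel depth, cm
counts = depth_histogram(depths, edges=[0, 30, 80, 120])
assert list(counts) == [2, 2, 2]  # near / mid / far pixel counts
```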
In one embodiment, the processing unit may use at least two corrected second images for complementation; the complementation may be performed by voting, by computing pixel averages, or by another algorithm that can compensate the images against one another. For example, the processing unit uses three corrected second images for complementation, all three of which contain the same object; if the object's position is identical in only two of the three corrected second images, the object position in those two images is taken as the standard, and the object position in the remaining corrected second image is adjusted to match it.
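The voting variant of this complementation step can be sketched as follows; representing an object by a single (x, y) position is a simplifying assumption for illustration:

```python
from collections import Counter

def vote_position(positions):
    """Majority vote over an object's position in several corrected
    second images: if at least two images agree, that position wins;
    otherwise the first image's position is kept unchanged."""
    winner, count = Counter(positions).most_common(1)[0]
    return winner if count >= 2 else positions[0]

# Two of three images place the object at (120, 80); the outlier is
# adjusted to match them.
assert vote_position([(120, 80), (120, 80), (124, 78)]) == (120, 80)
```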
In step S450, the processing unit applies blur processing to the far-field and mid-field depth-of-field information in the complemented full-depth-of-field image to produce a synthetic segmentation depth map.
In this way, objects with near-field depth-of-field information stand out in the output synthetic segmentation depth map while the other parts of the picture are blurred, and a human eye or the processing unit can judge whether the near-field part of the synthetic segmentation depth map is correct. In one embodiment, the judgment is made by comparing the near-field part of the synthetic segmentation depth map with the known actual near-field part, to determine whether the error between the synthetic segmentation depth map and the actual environment is below an error threshold. For example, if the near-field part of the segmentation depth map is a basketball and the known actual near-field part is indeed a basketball, it is judged that the error between the synthetic segmentation depth map and the actual environment is below the error threshold. The user can thus learn whether the synthetic segmentation depth map produced from the complemented full-depth-of-field image matches the actual environment within an acceptable range.
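Step S450's blur processing can be sketched as a region-selective blur; the simple box filter and the per-pixel label map below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def blur_non_near(img, labels, radius=1):
    """Box-blur every pixel whose depth label is not 'near', so that
    near-field objects stand out in the synthetic segmentation map."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if labels[y][x] == "near":
                continue  # near-field pixels are left sharp
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

img = np.array([[0.0, 255.0], [0.0, 255.0]])
labels = [["near", "far"], ["near", "far"]]
blurred = blur_non_near(img, labels)
assert blurred[0, 0] == 0.0          # near pixel untouched
assert 0.0 < blurred[0, 1] < 255.0   # far pixel smoothed toward mean
```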
In step S460, the processing unit compares the known depth-of-field information with a far-field picture, a mid-field picture, and a near-field picture of the synthetic segmentation depth map to produce an analysis result, and displays the analysis result on a display (not shown). In this way, it can be judged whether the synthetic segmentation depth map produced through the correction, complementation, and other steps performs correctly and clearly in the far-field, mid-field, and near-field pictures, and the analysis result is displayed.
Through the above test system and test method, the present invention can obtain multiple pieces of different depth information by capturing a single picture inside a single light box that contains multiple depths of field; this depth information can be used to inspect the far-field, mid-field, and near-field shooting performance of the camera module under test, and can be applied to correct the camera module under test.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (28)

1. A test system, characterized by comprising:
a light box;
a plurality of light panels, separately disposed in the light box, the light panels being respectively located at different depths in the light box; and
a carrying base, for fixedly carrying a lens module under test such that the lens module under test faces the light box and the light panels, the spacing distances between the lens module under test and the respective light panels being different from one another;
wherein, when the lens module under test captures a first image picture, the first image picture simultaneously comprises images of the light panels.
2. The test system as claimed in claim 1, characterized in that the light panels comprise at least one first light panel, at least one second light panel, and at least one third light panel.
3. The test system as claimed in claim 2, characterized in that the distance between the at least one first light panel and the lens module under test is a first distance, the distance between the at least one second light panel and the lens module under test is a second distance, and the distance between the at least one third light panel and the lens module under test is a third distance, wherein the first distance is greater than the second distance, and the second distance is greater than the third distance.
4. The test system as claimed in claim 2, characterized in that each side of the at least one first light panel abuts the walls of the light box, the at least one second light panel is respectively disposed at at least one corner of the light box, and the at least one third light panel does not abut the walls of the light box.
5. The test system as claimed in claim 2, characterized in that the at least one first light panel, the at least one second light panel, and the at least one third light panel each comprise a plurality of images, the images comprising a plurality of anchor points and a resolution chart.
6. The test system as claimed in claim 5, characterized in that the images further comprise at least one of a checker chart, a color-block chart, and an object chart.
7. The test system as claimed in claim 5, characterized in that the lens module under test comprises a plurality of imaging lenses, each of the imaging lenses captures a second image picture, each second image picture comprises images of the at least one first light panel, the at least one second light panel, and the at least one third light panel, and the first image picture is composed of the second image pictures.
8. The test system as claimed in claim 7, characterized by further comprising:
a processing unit, for calculating an image resolution according to the resolution chart in at least one of the second image pictures.
9. The test system as claimed in claim 7, characterized by further comprising:
a processing unit, for calculating image depth-of-field information according to the anchor points in at least two of the second image pictures and the angles between the anchor points and the lens module under test;
wherein the image depth-of-field information comprises far-view depth-of-field information, middle-view depth-of-field information, and near-view depth-of-field information.
10. The test system as claimed in claim 9, characterized in that the processing unit further compares the image depth-of-field information with known depth-of-field information to obtain a near-view correction value, a middle-view correction value, and a far-view correction value, and corrects the far-view depth-of-field information, the middle-view depth-of-field information, and the near-view depth-of-field information with the near-view correction value, the middle-view correction value, and the far-view correction value, so as to output a panoramic depth image comprising the corrected second images.
11. The test system as claimed in claim 10, characterized in that the processing unit further calculates a sharpness of the panoramic depth image according to the resolution charts in the second images in the panoramic depth image, performs complementation on the corrected second images to produce a complemented panoramic depth image, and calculates a depth-of-field distribution map of the complemented panoramic depth image.
12. The test system as claimed in claim 10, characterized by further comprising:
a display, for displaying the depth-of-field distribution map; and
a storage, for storing a correction parameter, the correction parameter comprising the near-view correction value, the middle-view correction value, and the far-view correction value.
13. The test system as claimed in claim 11, characterized in that the processing unit further performs a blurring process on the far-view depth-of-field information and the middle-view depth-of-field information in the complemented panoramic depth image to produce a synthesized segmented depth map.
14. The test system as claimed in claim 13, characterized in that the processing unit further compares the known depth-of-field information with a far-view picture, a middle-view picture, and a near-view picture of the synthesized segmented depth map to produce an analysis result, and the display displays the analysis result.
15. A test method, characterized by comprising:
disposing a plurality of light panels separately in a light box, the light panels being respectively located at different depths in the light box; and
fixedly carrying a lens module under test on a carrying base such that the lens module under test faces the light box and the light panels, the spacing distances between the lens module under test and the respective light panels being different from one another;
wherein, when the lens module under test captures a first image picture, the first image picture simultaneously comprises images of the light panels.
16. The test method as claimed in claim 15, characterized in that the light panels comprise at least one first light panel, at least one second light panel, and at least one third light panel.
17. The test method as claimed in claim 16, characterized in that the distance between the at least one first light panel and the lens module under test is a first distance, the distance between the at least one second light panel and the lens module under test is a second distance, and the distance between the at least one third light panel and the lens module under test is a third distance, wherein the first distance is greater than the second distance, and the second distance is greater than the third distance.
18. The test method as claimed in claim 16, characterized in that each side of the at least one first light panel abuts the walls of the light box, the at least one second light panel is respectively disposed at at least one corner of the light box, and the at least one third light panel does not abut the walls of the light box.
19. The test method as claimed in claim 16, characterized in that the at least one first light panel, the at least one second light panel, and the at least one third light panel each comprise a plurality of images, the images comprising a plurality of anchor points and a resolution chart.
20. The test method as claimed in claim 19, characterized in that the images further comprise at least one of a checker chart, a color-block chart, and an object chart.
21. The test method as claimed in claim 19, characterized in that the lens module under test comprises a plurality of imaging lenses, each of the imaging lenses captures a second image picture, each second image picture comprises images of the at least one first light panel, the at least one second light panel, and the at least one third light panel, and the first image picture is composed of the second image pictures.
22. The test method as claimed in claim 21, characterized by further comprising: calculating an image resolution according to the resolution chart in at least one of the second image pictures.
23. The test method as claimed in claim 21, characterized by further comprising:
calculating image depth-of-field information according to the anchor points in at least two of the second image pictures and the angles between the anchor points and the lens module under test;
wherein the image depth-of-field information comprises far-view depth-of-field information, middle-view depth-of-field information, and near-view depth-of-field information.
24. The test method as claimed in claim 23, characterized by further comprising: comparing the image depth-of-field information with known depth-of-field information to obtain a near-view correction value, a middle-view correction value, and a far-view correction value, and correcting the far-view depth-of-field information, the middle-view depth-of-field information, and the near-view depth-of-field information with the near-view correction value, the middle-view correction value, and the far-view correction value, so as to output a panoramic depth image comprising the corrected second images.
25. The test method as claimed in claim 24, characterized by further comprising: calculating a sharpness of the panoramic depth image according to the resolution charts in the second images in the panoramic depth image, performing complementation on the corrected second images to produce a complemented panoramic depth image, and calculating a depth-of-field distribution map of the complemented panoramic depth image.
26. The test method as claimed in claim 25, characterized by further comprising:
displaying the depth-of-field distribution map; and
storing a correction parameter, the correction parameter comprising the near-view correction value, the middle-view correction value, and the far-view correction value.
27. The test method as claimed in claim 25, characterized by further comprising: performing a blurring process on the far-view depth-of-field information and the middle-view depth-of-field information in the complemented panoramic depth image to produce a synthesized segmented depth map.
28. The test method as claimed in claim 27, characterized by further comprising: comparing the known depth-of-field information with a far-view picture, a middle-view picture, and a near-view picture of the synthesized segmented depth map to produce an analysis result, and displaying the analysis result on a display.
CN201510386898.5A 2015-07-03 2015-07-03 Test system and test method Expired - Fee Related CN106324976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510386898.5A CN106324976B (en) 2015-07-03 2015-07-03 Test system and test method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510386898.5A CN106324976B (en) 2015-07-03 2015-07-03 Test system and test method

Publications (2)

Publication Number Publication Date
CN106324976A true CN106324976A (en) 2017-01-11
CN106324976B CN106324976B (en) 2019-04-05

Family

ID=57728156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510386898.5A Expired - Fee Related CN106324976B (en) 2015-07-03 2015-07-03 Test system and test method

Country Status (1)

Country Link
CN (1) CN106324976B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107645659A (en) * 2017-08-11 2018-01-30 江西盛泰光学有限公司 Camera module optics depth of field test device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1573400A (en) * 2003-06-13 2005-02-02 明基电通股份有限公司 Method of aligning lens and sensor of camera
CN1908809A (en) * 2006-08-23 2007-02-07 无锡凯尔科技有限公司 Universal testing lamp box for mobile camera module and testing method thereof
WO2008084548A1 (en) * 2007-01-12 2008-07-17 Pioneer Corporation Camera unit inspection equipment and camera unit inspection method
CN101493646A (en) * 2008-01-21 2009-07-29 鸿富锦精密工业(深圳)有限公司 Optical lens detection device and method
JP2010016464A (en) * 2008-07-01 2010-01-21 Iwate Toshiba Electronics Co Ltd Test device for camera module and test method thereof
TW201028788A (en) * 2009-01-22 2010-08-01 Foxconn Tech Co Ltd Detection device and method for detecting displacement of auto-focus lens
CN203120072U (en) * 2013-02-22 2013-08-07 广东欧珀移动通信有限公司 Camera testing device
CN103676454A (en) * 2012-09-18 2014-03-26 亚旭电脑股份有限公司 Testing tool

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107645659A (en) * 2017-08-11 2018-01-30 江西盛泰光学有限公司 Camera module optics depth of field test device
CN107645659B (en) * 2017-08-11 2024-06-28 江西盛泰精密光学有限公司 Camera module optical depth-of-field test device

Also Published As

Publication number Publication date
CN106324976B (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CA2969482C (en) Method and apparatus for multiple technology depth map acquisition and fusion
WO2019200837A1 (en) Method and system for measuring volume of parcel, and storage medium and mobile terminal
US11145077B2 (en) Device and method for obtaining depth information from a scene
EP3028252B1 (en) Rolling sequential bundle adjustment
US7554575B2 (en) Fast imaging system calibration
TWI253006B (en) Image processing system, projector, information storage medium, and image processing method
TWI554976B (en) Surveillance systems and image processing methods thereof
CN103765870B (en) Image processing apparatus, projector and projector system including image processing apparatus, image processing method
CN105453546B (en) Image processing apparatus, image processing system and image processing method
WO2017076106A1 (en) Method and device for image splicing
CN112753217B (en) Hybrid depth processing
CN105637852B (en) A kind of image processing method, device and electronic equipment
CN103824303A (en) Image perspective distortion adjusting method and device based on position and direction of photographed object
Li et al. HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor
CN109146906A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN113129383A (en) Hand-eye calibration method and device, communication equipment and storage medium
CN109974659A (en) A kind of embedded range-measurement system based on binocular machine vision
CN115830131A (en) Method, device and equipment for determining fixed phase deviation
CN107527323B (en) Calibration method and device for lens distortion
KR102156998B1 (en) A method for detecting motion in a video sequence
CN106324976A (en) Test system and test method
US20210027439A1 (en) Orientation adjustment of objects in images
TW201644271A (en) Testing system and testing method
CN108062741B (en) Binocular image processing method, imaging device and electronic equipment
CN110827230A (en) Method and device for improving RGB image quality by TOF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190405

Termination date: 20200703