CN104819693B - Depth of field discriminating gear and depth of field method of discrimination - Google Patents
- Publication number
- CN104819693B CN104819693B CN201510170371.9A CN201510170371A CN104819693B CN 104819693 B CN104819693 B CN 104819693B CN 201510170371 A CN201510170371 A CN 201510170371A CN 104819693 B CN104819693 B CN 104819693B
- Authority
- CN
- China
- Prior art keywords
- depth
- laser
- testee
- field
- laser spots
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Measurement Of Optical Distance (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a depth of field discriminating device, including: a laser point light source, which emits laser light of a specific wavelength; a spherically shaped lattice array grating with multiple evenly distributed apertures, for forming multiple laser points on a measured object; first and second CMOS image sensors, which capture an image of the measured object and an image of the laser points, respectively; a wavelength filter, which allows only the laser light of the specific wavelength to reach the second CMOS image sensor; and a processing unit. The processing unit superimposes the images captured by the first and second CMOS image sensors into a composite image, divides the composite image into multiple unit areas of equal area, detects the number of laser points in each unit area, and discriminates the depth of field of the measured object according to that number. The present invention can conveniently discriminate both the relative and the absolute depth of field of a measured object.
Description
Technical field
The present invention relates to the field of image communication technology, and more particularly to a depth of field discriminating device and a depth of field discriminating method.
Background art
A motion-sensing game is a new type of electronic game played with the whole body. It breaks through the traditional operating mode of simple handle button input and is controlled by changes in limb movement.
The earliest input device for electronic games was the computer keyboard; the professional game consoles that appeared later delivered game content through handles or control panels. Still later, with technological progress and the need to enhance the player's gaming experience, game companies developed specialized, customized game input devices and large arcade projects. For example, the early games played with a light gun are the prototype of motion-sensing games, and games that required the player to hold a light gun while coordinating arm, waist and other limb movements can be regarded as the basis of motion-sensing gaming.
Motion-sensing games carry out human-computer interaction over an Internet platform and rely on video recognition technology, with a camera capturing the player's motion in three-dimensional space. An important technology for realizing motion-sensing games is therefore how to recognize the depth of field of a measured object, that is, how to recognize depth information.
Conventionally, a motion-sensing game obtains depth of field information in the following way. A device capable of recognizing the depth of field of a person takes a reference plane at fixed intervals and records the speckle pattern in each reference plane. Suppose the defined user space is the range from 1 meter to 4 meters from the television set and a reference plane is taken every 10 cm; 30 speckle images are then saved. At measurement time, a speckle image of the scene is shot and cross-correlated in turn with the 30 saved reference images, producing 30 correlation images. Positions where objects exist in the space show peaks in the correlation images; these peaks are superimposed, and the three-dimensional shape of the whole scene is then obtained by interpolation. However, this method needs to build depth of field information from the speckle information of every image, which is relatively complex.
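As an illustration only, the reference-plane cross-correlation scheme described above can be sketched in a few lines of Python. All names here are our own, the match is computed per window rather than over the full images, and a real system would additionally superimpose the correlation peaks and interpolate as described:

```python
import numpy as np

def correlate(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def depth_map_from_speckle(test_img, ref_imgs, ref_depths, win=8):
    """For each win x win window of the test speckle image, pick the saved
    reference plane whose speckle pattern correlates best and assign the
    depth of that plane to the window."""
    h, w = test_img.shape
    depth = np.zeros((h // win, w // win))
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            patch = test_img[i:i + win, j:j + win]
            scores = [correlate(patch, r[i:i + win, j:j + win]) for r in ref_imgs]
            depth[i // win, j // win] = ref_depths[int(np.argmax(scores))]
    return depth
```

Even in this simplified form, every window must be compared against all 30 stored reference images, which illustrates the complexity the invention seeks to avoid.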
Summary of the invention
A primary object of the present invention is to overcome the defects of the prior art by providing a depth of field discriminating device and a discriminating method that can conveniently obtain the relative and absolute depth of field information of measured objects within an illuminated area.
To achieve the above object, the present invention provides a depth of field discriminating device, including: a laser point light source, which emits laser light of a specific wavelength; a first CMOS image sensor, for capturing an image of multiple measured objects; a lattice array grating, spherically shaped and having multiple evenly distributed apertures, for converting the laser light of the specific wavelength into an optical array corresponding to the multiple apertures, the optical array forming multiple laser points on the multiple measured objects; a second CMOS image sensor, sensitive only to the laser light of the specific wavelength, for capturing an image of the laser points; a wavelength filter, for allowing only the laser light of the specific wavelength to reach the second CMOS image sensor; and a processing unit, which includes a synthesis module, a unit area determining module and a depth of field discrimination module. The synthesis module superimposes the images captured by the first and second CMOS image sensors into a composite image; the unit area determining module divides the composite image into multiple unit areas of equal area; and the depth of field discrimination module detects the number of laser points in each unit area and discriminates the relative depth of field of the multiple measured objects according to that number.
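For illustration only, the processing pipeline just described can be sketched as follows. The function names and the point-list input are assumptions of this sketch; in practice the laser points would be detected in the filtered image from the second sensor:

```python
import numpy as np

def superimpose(object_img, laser_img):
    """Synthesis module: overlay the object image and the laser point image
    into one composite image (here a simple per-pixel maximum)."""
    return np.maximum(object_img, laser_img)

def count_points_per_cell(laser_points, cell_size, grid_shape):
    """Unit area determining module: divide the composite image into equal
    cells and count the laser points falling in each cell."""
    counts = np.zeros(grid_shape, dtype=int)
    for x, y in laser_points:
        counts[int(y // cell_size), int(x // cell_size)] += 1
    return counts

def rank_cells_nearest_first(cell_counts):
    """Depth discrimination module: more points per cell means a smaller
    depth of field, so rank the occupied cells nearest-first."""
    occupied = [(c, idx) for idx, c in np.ndenumerate(cell_counts) if c > 0]
    return [idx for c, idx in sorted(occupied, key=lambda t: -t[0])]
```

The ranking step already yields the relative depth of field ordering; the quantitative relation between point counts and depths is derived in Embodiment one below.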
Preferably, each measured object occupies multiple continuously distributed unit areas in the composite image. The depth of field discrimination module selects one of the unit areas of each measured object as a reference unit area representing that object, and discriminates the relative depth of field of the multiple measured objects according to the number of laser points in each reference unit area.
Preferably, the reference unit area is the unit area of the measured object with the fewest laser points, or the one with the most laser points, or one whose laser point count equals the average laser point count over all unit areas of the measured object, or one of the unit areas sharing the laser point count that occurs most often.
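The four alternative selection rules for the reference unit area can be illustrated as follows (a sketch with assumed names, operating on a mapping from unit-area index to laser point count):

```python
from collections import Counter

def pick_reference_cell(cell_counts, strategy="mode"):
    """Choose one unit area to represent a measured object.

    cell_counts: dict mapping unit-area index -> laser point count.
    strategy:
      "min"  - fewest laser points (farthest part of the object)
      "max"  - most laser points (nearest part of the object)
      "mean" - count closest to the average over the object's unit areas
      "mode" - one of the unit areas sharing the most frequent count
    """
    if strategy == "min":
        return min(cell_counts, key=cell_counts.get)
    if strategy == "max":
        return max(cell_counts, key=cell_counts.get)
    if strategy == "mean":
        avg = sum(cell_counts.values()) / len(cell_counts)
        return min(cell_counts, key=lambda c: abs(cell_counts[c] - avg))
    if strategy == "mode":
        common = Counter(cell_counts.values()).most_common(1)[0][0]
        return next(c for c in cell_counts if cell_counts[c] == common)
    raise ValueError(strategy)
```

Which rule is appropriate depends on the application; the gesture-recognition example in Embodiment one, for instance, favors the "max" rule.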
Preferably, the size of a unit area equals the size of the region covered by the laser points that the optical array within a unit emission angle forms on a datum plane.
The present invention also provides a depth of field discrimination method based on CMOS image sensors, comprising the following steps:
S1: emitting laser light of a specific wavelength from a laser point light source;
S2: converting the laser light of the specific wavelength, by means of a spherically shaped lattice array grating with multiple evenly distributed apertures, into an optical array corresponding to the multiple apertures, so as to form multiple laser points on multiple measured objects;
S3: capturing an image of the multiple measured objects with a first CMOS image sensor, and capturing an image of the laser points with a second CMOS image sensor sensitive only to the laser light of the specific wavelength, a wavelength filter allowing only the laser light of the specific wavelength to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors into a composite image;
S5: dividing the composite image into multiple unit areas of equal area;
S6: detecting the number of laser points in each unit area, and discriminating the relative depth of field of the multiple measured objects according to that number.
Preferably, each measured object occupies multiple continuously distributed unit areas in the composite image. In step S6, one of the unit areas of each measured object is selected as a reference unit area representing that object, and the relative depth of field of the multiple measured objects is discriminated according to the number of laser points in each reference unit area.
Preferably, the reference unit area selected in step S6 is one of the unit areas of the measured object sharing the laser point count that occurs most often, or the unit area of the measured object with the most laser points.
Preferably, a unit area is the region covered by the laser points that the optical array within a unit emission angle forms on a datum plane.
According to another aspect of the present invention, a depth of field discriminating device is also provided, including: a laser point light source, which emits laser light of a specific wavelength; a first CMOS image sensor, for capturing an image of a measured object; a lattice array grating, spherically shaped and having multiple evenly distributed apertures, for converting the laser light of the specific wavelength into an optical array corresponding to the multiple apertures, the optical array forming multiple laser points on the measured object; a second CMOS image sensor, sensitive only to the laser light of the specific wavelength, for capturing an image of the laser points; a wavelength filter, for allowing only the laser light of the specific wavelength to reach the second CMOS image sensor; and a processing unit. The processing unit includes a synthesis module, a unit area determining module, a memory module and a depth of field discrimination module. The synthesis module superimposes the images captured by the first and second CMOS image sensors into a composite image; the unit area determining module divides the composite image into multiple unit areas of equal area; the memory module stores a datum depth of field and the number of laser points in a unit area formed at the datum depth of field; and the depth of field discrimination module detects the number of laser points in each unit area and discriminates the absolute depth of field of the measured object according to the number of laser points in the unit areas of the measured object and the information stored in the memory module.
Preferably, the measured object occupies multiple continuously distributed unit areas in the composite image, and the depth of field discrimination module discriminates the absolute depth of field of each unit area of the measured object according to the number of laser points in that unit area and the information stored in the memory module.
The present invention also provides a depth of field discrimination method based on CMOS image sensors, comprising the following steps:
S1: emitting laser light of a specific wavelength from a laser point light source;
S2: converting the laser light of the specific wavelength, by means of a spherically shaped lattice array grating with multiple evenly distributed apertures, into an optical array corresponding to the multiple apertures, so as to form multiple laser points on a measured object;
S3: capturing an image of the measured object with a first CMOS image sensor, and capturing an image of the laser points with a second CMOS image sensor sensitive only to the laser light of the specific wavelength, a wavelength filter allowing only the laser light of the specific wavelength to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors into a composite image;
S5: dividing the composite image into multiple unit areas of equal area;
S6: detecting the number of laser points in each unit area, and discriminating the absolute depth of field of the measured object according to the number of laser points in its unit areas, the datum depth of field, and the number of laser points in a unit area formed at the datum depth of field.
Preferably, the measured object occupies multiple continuously distributed unit areas in the composite image, and in step S6 the absolute depth of field of each unit area of the measured object is discriminated according to the number of laser points in that unit area, the datum depth of field, and the number of laser points in a unit area formed at the datum depth of field.
Compared to the prior art, the beneficial effect of the present invention is that, by arranging a laser point light source and a spherical lattice array grating, and by using two CMOS image sensors to capture the measured object image and the laser point image, the relative depth of field of each measured object and the absolute depth of field of each measured object can be determined from the number of laser points in the unit areas representing each measured object. This overcomes the defect of the prior art that equipment with a human-computer interaction function discriminates the depth of field of a measured object in a relatively complex way; the invention is more convenient and also reduces cost.
Brief description of the drawings
Fig. 1 shows a schematic diagram of the depth of field discriminating device of an embodiment of the invention;
Fig. 2 shows a block diagram of the processing unit of the depth of field discriminating device of an embodiment of the invention;
Fig. 3 shows a schematic diagram of the composite image of an embodiment of the invention;
Fig. 4 shows a flow chart of the depth of field discrimination method of an embodiment of the invention;
Fig. 5 shows a block diagram of the processing unit of the depth of field discriminating device of another embodiment of the invention;
Fig. 6 shows a flow chart of the depth of field discrimination method of another embodiment of the invention.
Detailed description of the embodiments
To make the content of the present invention clearer and easier to understand, it is further described below in conjunction with the accompanying drawings. The invention is of course not limited to these specific embodiments; general replacements known to those skilled in the art are also covered within the scope of the present invention.
Embodiment one
As shown in Fig. 1, the depth of field discriminating device includes a laser point light source 10, a lattice array grating 11, a first CMOS image sensor 12, a wavelength filter 13, a second CMOS image sensor 14 and a processing unit 15. The laser point light source 10 emits a beam of laser light of a specific wavelength; the diameter of the beam is very small. The laser light of the specific wavelength lies in a non-visible band, to avoid affecting the shooting of the second CMOS image sensor 14. Common laser sources include: argon fluoride laser (ultraviolet) λ=193 nm; krypton fluoride laser (ultraviolet) λ=248 nm; xenon chloride laser (ultraviolet) λ=308 nm; nitrogen laser (ultraviolet) λ=337 nm; argon laser (blue) λ=488 nm; argon laser (green) λ=514 nm; helium-neon laser (green) λ=543 nm; helium-neon laser (red) λ=633 nm; rhodamine 6G dye laser (tunable) λ=570-650 nm; ruby (CrAlO3) laser (red) λ=694 nm; neodymium-doped yttrium aluminium garnet laser (near infrared) λ=1064 nm; carbon dioxide laser (far infrared) λ=10600 nm; and so on. In this embodiment, the laser with the specific wavelength λ is monochromatic; for example, a nitrogen laser emits monochromatic light with an exact wavelength of 337 nm. Because the laser source 10 is a point source, the emitted laser light diverges.
The lattice array grating 11 is located between the laser point light source 10 and the measured objects. It is spherically shaped and evenly distributed with a large number of small holes (grating points). The diverging laser light of the specific wavelength emitted by the laser point light source 10 becomes an optical array after passing through the lattice array grating 11, and can form multiple laser points on multiple measured objects. Each beam of the optical array corresponds to one grating point of the lattice array grating 11. The grating points have good focusing (they preserve the point shape): even if a measured object is relatively far away, the scattered laser light remains focused after passing through the grating points, and at a distance still forms laser points whose number matches the number of grating points. The first CMOS image sensor 12 captures the image of the multiple measured objects; the second CMOS image sensor 14 is sensitive only to the laser light of the specific wavelength and captures the image of the laser points. The two CMOS image sensors face the same direction and are set close to the laser point light source 10; their distance x1 from the laser point light source 10 is far smaller than the actual distance x2 between the laser point light source 10 and the measured objects, x1 << x2. The wavelength filter 13 allows only the laser light of the specific wavelength to reach the second CMOS image sensor 14.
The processing unit 15 is connected with the two CMOS image sensors 12 and 14, and discriminates the relative depth of field of the current measured objects according to the images the two sensors acquire.
Specifically, referring to Fig. 1 and Fig. 2, the processing unit 15 includes a synthesis module 151, a unit area determining module 152 and a depth of field discrimination module 153. The beams of the optical array converted by the lattice array grating 11 radiate out at all angles, reach the planes where the multiple measured objects lie, and form multiple laser points there. The second CMOS image sensor 14 captures the image of these laser points, while the first CMOS image sensor 12 captures the image of the measured objects. The synthesis module 151 superimposes the images captured by the two CMOS image sensors into a composite image; the unit area determining module 152 divides the composite image into multiple unit areas of equal area by an image processing algorithm; and the depth of field discrimination module 153 detects the number of laser points in each unit area and discriminates the relative depth of field of the multiple measured objects according to that number.
In the embodiment shown in Fig. 1, there are two measured objects, at depths of field d2 and d3 respectively. The synthesis module 151 superimposes the images captured by the two CMOS image sensors into a composite image containing both the measured objects and the laser points, as shown in Fig. 3. Preferably, the unit area determining module 152 takes as the unit area size the size of the region s covered by the laser points that the optical array within a unit emission angle forms on the datum plane at depth of field d1, and divides the composite image into multiple unit areas s by an image processing algorithm; the unit areas formed in Fig. 3, for example, are circular. The unit emission angle can be set as required, and the present invention places no limitation on it.
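For illustration, the unit-area size implied by a unit emission angle and the datum plane at d1 can be estimated with elementary geometry. This sketch assumes a square cell, whereas the unit areas in Fig. 3 are circular; the quadratic growth of the covered area with depth, which the later derivation relies on, is the same either way:

```python
import math

def unit_area_size(emission_angle, datum_depth):
    """Side length and area of the region that the optical array within one
    emission angle (in radians) covers on a plane at the datum depth."""
    side = 2.0 * datum_depth * math.tan(emission_angle / 2.0)
    return side, side * side
```

Doubling the datum depth doubles the side length and quadruples the area, which is the proportionality used in the depth of field derivations below.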
As a rule, the optical array forms many laser points on a measured object. When the unit areas are small, each measured object in the composite image occupies multiple continuously distributed unit areas, so the depth of field discrimination module 153 can select one of the unit areas of each measured object as the reference unit area representing that object, and take the position of the reference unit area as the position of the measured object, such as unit area 1 and unit area 2 in Fig. 3. Of course, for a measured object that does not itself lie in a single plane along the depth direction, the laser point counts in its multiple unit areas may differ; in this case, the depth of field discrimination module 153 can select an appropriate unit area as the reference unit area, as required, to determine the position of the measured object. For example, among the unit areas of the measured object, one of the unit areas sharing the most frequent laser point count may be selected as the reference unit area representing the object. Suppose the measured object has X unit areas, of which X1 have a1 laser points, X2 have a2 laser points, and the remaining X3 have a3 laser points, with X1 > X2 > X3; this indicates that most of the measured object lies at the depth of field of the X1 unit areas, so the depth of field discrimination module selects one of the X1 unit areas as the reference unit area representing the measured object. Alternatively, among the unit areas of the measured object, the unit area with the most laser points, that is, the unit area with the smallest depth of field, may be selected as the reference unit area; this applies, for example, to automatically catching the object nearest to the video screen (an outstretched palm) in human-computer interaction, so as to discriminate gesture operations. In other embodiments, the unit area with the fewest laser points among all unit areas of the measured object, or a unit area whose laser point count equals the average over all unit areas of the measured object, may also be selected as the reference unit area.
Then, the depth of field discrimination module 153 discriminates the relative depth of field of the measured objects according to the number of laser points in each reference unit area representing each measured object. Take again the two measured objects 1 and 2, at depths of field d2 and d3 respectively. Here d2 and d3 are calculated with the position of the laser point light source 10 as the zero point (since the two CMOS image sensors face the same direction and are set close to the laser point light source 10, and their distance x1 from the laser point light source 10 is far smaller than the actual distance x2 between the laser point light source 10 and the measured objects, x1 << x2, the position of the two CMOS image sensors can equally serve as the zero point). The depth of field discrimination module 153 has selected unit area 1 and unit area 2 as the reference unit areas representing the two measured objects. Suppose unit area 1 contains n2 laser points and unit area 2 contains n3 laser points. Then, for the same unit area, when it is located at depth of field d2 there are n2 laser points within its area S, and when it is located at depth of field d3 there are only n3 laser points within its area S; that is, the measured object whose reference unit area holds more laser points has the smaller depth of field, and the one whose reference unit area holds fewer laser points has the larger depth of field. If a region at depth of field d3 likewise holds n2 laser points, its area S' must satisfy S' = (n2/n3)S. Since the area of a unit area is proportional to the square of the depth of field, n2/n3 = S'/S = (d3/d2)²; the square of the ratio between the depth of field of measured object 2 and that of measured object 1 equals the ratio of the laser point count in reference unit area 1 to that in reference unit area 2. Thus, by identifying the number of laser points in the reference unit area representing each measured object, the relative depth of field relation of the measured objects can be discriminated.
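The relation n2/n3 = (d3/d2)² can be applied directly; a minimal sketch (names are our own):

```python
import math

def depth_ratio(n_a, n_b):
    """Given the laser point counts n_a and n_b in two equal-size reference
    unit areas, return d_b / d_a = sqrt(n_a / n_b): the unit area holding
    more laser points lies at the smaller depth of field."""
    return math.sqrt(n_a / n_b)

def order_nearest_first(reference_counts):
    """Sort object labels nearest-first: more points means smaller depth."""
    return sorted(reference_counts, key=reference_counts.get, reverse=True)
```

For example, if reference unit area 1 holds 16 points and reference unit area 2 holds 4, then d3/d2 = sqrt(16/4) = 2: object 2 is twice as far as object 1.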
Fig. 4 shows the flow chart of the depth of field discrimination method of this embodiment, which comprises the following steps:
S1: emitting laser light of a specific wavelength from a laser point light source;
S2: converting the laser light of the specific wavelength, by means of a spherically shaped lattice array grating with multiple evenly distributed apertures, into an optical array corresponding to the multiple apertures, so as to form multiple laser points on multiple measured objects;
S3: capturing an image of the multiple measured objects with a first CMOS image sensor, and capturing an image of the laser points with a second CMOS image sensor sensitive only to the laser light of the specific wavelength, a wavelength filter allowing only the laser light of the specific wavelength to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors into a composite image;
S5: dividing the composite image into multiple unit areas of equal area;
S6: detecting the number of laser points in each unit area, and discriminating the relative depth of field of the multiple measured objects according to that number.
Embodiment two
In this embodiment, the depth of field discriminating device can obtain the absolute depth of field of all measured objects within the illumination range, including all parts of each measured object, so as to further carry out accurate depth of field identification.
Referring to Fig. 1 and Fig. 5, the depth of field discriminating device includes a laser point light source 10, a lattice array grating 11, a first CMOS image sensor 12, a wavelength filter 13, a second CMOS image sensor 14 and a processing unit 15. The laser point light source 10, lattice array grating 11, first CMOS image sensor 12 and second CMOS image sensor 14 are identical to those of Embodiment one and are not described again here. Note that when calculating the absolute depth of field, the position of the laser point light source 10 still serves as the zero point; since the two CMOS image sensors face the same direction and are set close to the laser point light source 10, with their distance x1 from the laser point light source 10 far smaller than the actual distance x2 between the laser point light source 10 and the measured object, x1 << x2, the position of the two CMOS image sensors can equally serve as the zero point. The absolute depth of field in this embodiment therefore refers to the distance from the measured object to the laser point light source (the two CMOS image sensors).
The processing unit 15 is connected with the two CMOS image sensors 12 and 14 and discriminates the absolute depth of field of the measured object according to the images they acquire. Specifically, the processing unit 15 includes a synthesis module 151', a unit area determining module 152', a memory module 154' and a depth of field discrimination module 153'. The synthesis module 151' superimposes the images captured by the two CMOS image sensors 12 and 14 into a composite image; the unit area determining module 152' divides the composite image into multiple unit areas of equal area by an image processing algorithm; the memory module 154' stores in advance a datum depth of field and the number of laser points in a unit area formed at the datum depth of field; and the depth of field discrimination module 153' detects the number of laser points in each unit area of the current composite image and discriminates the absolute depth of field of the measured object according to that number and the information stored in the memory module 154'.
Fig. 1 again serves as the illustration; as shown, suppose the measured object is located at depth of field d2. The synthesis module 151' superimposes the images captured by the two CMOS image sensors into a composite image containing both the measured object and the laser points. In this embodiment, the datum depth of field is defined as a distance d1 from the laser point light source. The unit area determining module 152' takes as the unit area size the size of the region covered by the laser points that the optical array within a unit emission angle forms on the datum plane at datum depth of field d1, and divides the current composite image into multiple unit areas s by an image processing algorithm. In this embodiment there are n1 laser points in a unit area s formed on the datum plane at datum depth of field d1. The datum depth of field and the laser point count in a unit area at the datum depth of field can be obtained in several ways. For example, before absolute depth of field identification, a planar object is first set at depth of field d1, the two CMOS image sensors shoot it, and the depth of field discrimination module detects that the laser point count in unit area s is n1. Alternatively, the depth of field discrimination module first detects a position where the laser point count in a unit area is n1, and the depth of field d1 of that position is measured manually. Or, using the autofocus function of a CMOS image sensor camera, a position where the unit-area laser point count is n1 is found and autofocused, and the image processing algorithm then gives the focusing distance from the two CMOS image sensors, namely depth of field d1. By any of these methods, the datum depth of field d1 and the laser point count in a unit area at the datum depth of field are obtained and stored by the memory module 154'.
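For illustration, the record kept by the memory module 154' reduces to the pair (d1, n1), independent of which of the three calibration routes produced it; a sketch with assumed names:

```python
class DepthCalibration:
    """Holds the datum depth of field d1 and the laser point count n1
    measured in one unit area on a plane at d1 (the memory module's role
    in Embodiment two)."""

    def __init__(self):
        self.datum_depth = None   # d1, e.g. in meters
        self.datum_count = None   # n1, laser points per unit area at d1

    def calibrate(self, measured_depth, measured_count):
        """Store the pair obtained by any of the calibration methods."""
        self.datum_depth = measured_depth
        self.datum_count = measured_count

    def is_ready(self):
        """Absolute depth discrimination requires both values."""
        return self.datum_depth is not None and self.datum_count is not None
```

Once this pair is stored, the absolute depth of any unit area follows from its point count alone, as derived next.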
When the unit areas are small, the measured object in the composite image occupies multiple continuously distributed unit areas, and the laser point counts in these unit areas may differ (for example, when the parts of the measured object do not lie in the same plane along the depth direction). The depth of field discrimination module 153' therefore discriminates the absolute depth of field of each unit area of the measured object according to the laser point count in that unit area and the information stored in the memory module, and can thereby further identify the absolute depth of field of the various parts of the measured object. In one embodiment of the invention, the depth of field discrimination module 153' may also select one of these unit areas, as needed, as the reference unit area representing the position of the measured object, and take the depth of field of the reference unit area as the depth of field of the measured object. As in Embodiment one, the reference unit area may be the unit area of the measured object with the fewest laser points, or the one with the most laser points, or one whose laser point count equals the average over all unit areas of the measured object, or one of the unit areas sharing the most frequent laser point count.
The depth-of-field discrimination module 153' determines the absolute depth of field of a unit region of the measured object from the number of laser spots in that unit region and the information stored by the memory module 154'. In this example, assume a unit region of the measured object contains n2 laser spots, and consider the information stored for the same unit region: when the unit region lies at depth of field d1, its area S contains n1 laser spots; when it lies at depth of field d2, the area S contains only n2 laser spots, which is to say that the region containing n1 laser spots at depth d2 has area S' = (n1/n2)S. Since the area of a unit region is proportional to the square of its depth of field, n1/n2 = S'/S = (d2/d1)², so with d1 and n1 known, the absolute depth of field of the unit region of the measured object is easily obtained as d2 = d1·√(n1/n2). Likewise, if the unit region of the measured object is at depth of field d3, then by detecting that the unit region now contains n3 laser spots, the depth of field d3 can be obtained from n1/n3 = (d3/d1)².
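The relation n1/n = (d/d1)² above can be expressed in code (a minimal sketch; the function and variable names are assumptions for illustration):

```python
import math

def absolute_depth(n_ref, d_ref, n_measured):
    """Estimate the absolute depth of field of a unit region from
    laser-spot counts.

    At the reference depth d_ref, a unit region contains n_ref laser
    spots.  Because the area covered by a fixed bundle of laser beams
    grows with the square of the depth, the spot count in a unit region
    of fixed area falls off with the square of the depth:
    n_ref / n_measured = (d / d_ref)**2, hence d = d_ref * sqrt(n_ref / n_measured).
    """
    return d_ref * math.sqrt(n_ref / n_measured)

# Example: 16 spots per unit region at a 1.0 m reference depth; a region
# now showing only 4 spots lies at twice the reference depth.
print(absolute_depth(16, 1.0, 4))  # 2.0
```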
Fig. 6 shows the flowchart of the depth-of-field discrimination method of this embodiment, which comprises the following steps:
S1: emitting laser light of a specific wavelength with a laser point light source;
S2: converting the specific-wavelength laser, through a dot-matrix array grating that is spherical in shape and uniformly perforated with multiple apertures, into an optical array corresponding to the multiple apertures, so as to form multiple laser spots on the measured object;
S3: capturing the image of the measured object with a first CMOS image sensor, and capturing the image of the laser spots with a second CMOS image sensor sensitive only to the specific wavelength, wherein a wavelength filter allows only the specific-wavelength laser to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors to form a composite image;
S5: dividing the composite image into multiple unit regions of equal area;
S6: detecting the number of laser spots in each unit region, and determining the absolute depth of field of the measured object from the number of laser spots in a unit region of the measured object, the reference depth of field, and the number of laser spots in a unit region formed at the reference depth of field.
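Steps S5 and S6 can be sketched as follows (a minimal illustration using NumPy; the grid size, brightness threshold, and the assumption that each laser spot occupies a single bright pixel are hypothetical, not from the patent):

```python
import numpy as np

def count_spots_per_region(spot_image, grid=(4, 4), threshold=128):
    """Divide the laser-spot image into equal-area unit regions (step S5)
    and count the bright laser spots in each region (part of step S6).

    spot_image: 2-D grayscale array from the second CMOS sensor, in which
    each laser spot is assumed to appear as one bright pixel.
    """
    h, w = spot_image.shape
    rows, cols = grid
    counts = np.zeros(grid, dtype=int)
    for r in range(rows):
        for c in range(cols):
            region = spot_image[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols]
            counts[r, c] = int((region > threshold).sum())
    return counts

# Regions with fewer spots lie farther away (relative depth of field).
img = np.zeros((8, 8), dtype=np.uint8)
img[1, 1] = img[1, 2] = 255   # two spots in the top-left region
img[5, 5] = 255               # one spot in the bottom-right region
print(count_spots_per_region(img, grid=(2, 2)))
# [[2 0]
#  [0 1]]
```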
In summary, the present invention uses a laser point light source and a dot-matrix array grating together with two CMOS image sensors that capture the measured-object image and the laser-spot image, and can determine both the relative depth of field of the measured objects and the absolute depth of field of each measured object from the number of laser spots in each unit region of each measured object. It thereby overcomes the complexity with which prior-art devices with human-computer interaction functions determine the depth of field of a measured object during interaction, making interaction more convenient while also reducing cost.
Although the present invention has been disclosed above by way of preferred embodiments, these embodiments are illustrative only and do not limit the invention. Those skilled in the art may make changes and refinements without departing from the spirit and scope of the present invention; the scope of protection claimed shall be as defined by the appended claims.
Claims (10)
1. A depth-of-field discrimination device, characterized by comprising:
a laser point light source, which emits laser light of a specific wavelength;
a first CMOS image sensor for capturing an image of a plurality of measured objects;
a dot-matrix array grating, spherical in shape and uniformly perforated with a plurality of apertures, for converting the specific-wavelength laser into an optical array corresponding to the plurality of apertures, the optical array forming a plurality of laser spots on the plurality of measured objects;
a second CMOS image sensor, sensitive only to the specific-wavelength laser, for capturing an image of the laser spots;
a wavelength filter for allowing only the specific-wavelength laser to reach the second CMOS image sensor; and
a processing unit, which comprises:
a synthesis module for superimposing the images captured by the first and second CMOS image sensors to form a composite image;
a unit-region determining module for dividing the composite image into a plurality of unit regions of equal area; and
a depth-of-field discrimination module for detecting the number of laser spots in each unit region and determining the relative depth of field of the plurality of measured objects from the number of laser spots in each unit region.
2. The depth-of-field discrimination device according to claim 1, characterized in that each measured object in the composite image has a plurality of continuously distributed unit regions; the depth-of-field discrimination module selects one of the unit regions of each measured object as a reference unit region representing that measured object, and determines the relative depth of field of the plurality of measured objects from the number of laser spots in the reference unit region of each measured object.
3. The depth-of-field discrimination device according to claim 2, characterized in that the reference unit region is, among all the unit regions of the measured object, the region with the fewest laser spots, the region with the most laser spots, a region whose laser-spot count equals the average laser-spot count over all unit regions of the measured object, or the most numerous of the unit regions sharing the same laser-spot count.
4. The depth-of-field discrimination device according to claim 1, characterized in that the size of each unit region equals the size of the region covered, on a datum plane, by the laser spots formed by the optical array within a unit emission-angle range.
5. A depth-of-field discrimination method based on CMOS image sensors, characterized by comprising the following steps:
S1: emitting laser light of a specific wavelength with a laser point light source;
S2: converting the specific-wavelength laser, through a dot-matrix array grating that is spherical in shape and uniformly perforated with a plurality of apertures, into an optical array corresponding to the plurality of apertures, so as to form a plurality of laser spots on a plurality of measured objects;
S3: capturing an image of the plurality of measured objects with a first CMOS image sensor, and capturing an image of the laser spots with a second CMOS image sensor sensitive only to the specific-wavelength laser, wherein a wavelength filter allows only the specific-wavelength laser to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors to form a composite image;
S5: dividing the composite image into a plurality of unit regions of equal area;
S6: detecting the number of laser spots in each unit region, and determining the relative depth of field of the plurality of measured objects from the number of laser spots in each unit region.
6. The depth-of-field discrimination method according to claim 5, characterized in that each measured object in the composite image has a plurality of continuously distributed unit regions; in step S6, one of the unit regions of each measured object is selected as a reference unit region representing that measured object, and the relative depth of field of the plurality of measured objects is determined from the number of laser spots in the reference unit region of each measured object.
7. A depth-of-field discrimination device, characterized by comprising:
a laser point light source, which emits laser light of a specific wavelength;
a first CMOS image sensor for capturing an image of a measured object;
a dot-matrix array grating, spherical in shape and uniformly perforated with a plurality of apertures, for converting the specific-wavelength laser into an optical array corresponding to the plurality of apertures, the optical array forming a plurality of laser spots on the measured object;
a second CMOS image sensor, sensitive only to the specific-wavelength laser, for capturing an image of the laser spots;
a wavelength filter for allowing only the specific-wavelength laser to reach the second CMOS image sensor; and
a processing unit, which comprises:
a synthesis module for superimposing the images captured by the first and second CMOS image sensors to form a composite image;
a unit-region determining module for dividing the composite image into a plurality of unit regions of equal area;
a memory module for storing a reference depth of field and the number of laser spots in a unit region formed at the reference depth of field; and
a depth-of-field discrimination module for detecting the number of laser spots in each unit region and determining the absolute depth of field of the measured object from the number of laser spots in a unit region of the measured object and the information stored by the memory module.
8. The depth-of-field discrimination device according to claim 7, characterized in that the measured object in the composite image has a plurality of continuously distributed unit regions, and the depth-of-field discrimination module determines the absolute depth of field of each unit region of the measured object from the number of laser spots in that unit region and the information stored by the memory module.
9. A depth-of-field discrimination method based on CMOS image sensors, characterized by comprising the following steps:
S1: emitting laser light of a specific wavelength with a laser point light source;
S2: converting the specific-wavelength laser, through a dot-matrix array grating that is spherical in shape and uniformly perforated with a plurality of apertures, into an optical array corresponding to the plurality of apertures, so as to form a plurality of laser spots on a measured object;
S3: capturing an image of the measured object with a first CMOS image sensor, and capturing an image of the laser spots with a second CMOS image sensor sensitive only to the specific-wavelength laser, wherein a wavelength filter allows only the specific-wavelength laser to reach the second CMOS image sensor;
S4: superimposing the images captured by the first and second CMOS image sensors to form a composite image;
S5: dividing the composite image into a plurality of unit regions of equal area;
S6: detecting the number of laser spots in each unit region, and determining the absolute depth of field of the measured object from the number of laser spots in a unit region of the measured object, the reference depth of field, and the number of laser spots in a unit region formed at the reference depth of field.
10. The depth-of-field discrimination method according to claim 9, characterized in that the measured object in the composite image has a plurality of continuously distributed unit regions, and in step S6 the absolute depth of field of each unit region of the measured object is determined from the number of laser spots in that unit region, the reference depth of field, and the number of laser spots in a unit region formed at the reference depth of field.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510170371.9A CN104819693B (en) | 2015-04-10 | 2015-04-10 | Depth of field discriminating gear and depth of field method of discrimination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104819693A CN104819693A (en) | 2015-08-05 |
CN104819693B true CN104819693B (en) | 2017-08-22 |
Family
ID=53730066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510170371.9A Active CN104819693B (en) | 2015-04-10 | 2015-04-10 | Depth of field discriminating gear and depth of field method of discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104819693B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102445181B (en) * | 2010-10-13 | 2013-08-14 | 原相科技股份有限公司 | Ranging method, ranging system and processing method thereof |
CN102760234B (en) * | 2011-04-14 | 2014-08-20 | 财团法人工业技术研究院 | Depth image acquisition device, system and method |
CN103869593B (en) * | 2014-03-26 | 2017-01-25 | 深圳科奥智能设备有限公司 | Three-dimension imaging device, system and method |
CN104359405B (en) * | 2014-11-27 | 2017-11-07 | 上海集成电路研发中心有限公司 | Three-dimensional scanner |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||