CN104837002A - Shooting device, three-dimensional measuring system, and video intra-frame interpolation method and apparatus - Google Patents

Shooting device, three-dimensional measuring system, and video intra-frame interpolation method and apparatus

Info

Publication number
CN104837002A
CN104837002A (application CN201510226284.0A; granted publication CN104837002B)
Authority
CN
China
Prior art keywords: video, sequence, frames, frame, interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510226284.0A
Other languages
Chinese (zh)
Other versions
CN104837002B (en)
Inventor
王敏捷
梁雨时
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI TUYANG INFORMATION TECHNOLOGY CO., LTD.
Original Assignee
BEIJING WEICHUANG SHIJIE TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING WEICHUANG SHIJIE TECHNOLOGY Co Ltd filed Critical BEIJING WEICHUANG SHIJIE TECHNOLOGY Co Ltd
Priority to CN201510226284.0A priority Critical patent/CN104837002B/en
Publication of CN104837002A publication Critical patent/CN104837002A/en
Application granted granted Critical
Publication of CN104837002B publication Critical patent/CN104837002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a shooting device, a three-dimensional measuring system, and a video frame interpolation method and apparatus. The shooting device is a three-dimensional image shooting device in which a first imaging device and a second imaging device are arranged in a predetermined relative positional relationship. The first and second imaging devices each shoot a shooting area, obtaining a first video frame sequence and a second video frame sequence respectively. A processor comprises, or is connected to, a timer, and associates each video frame in the first and second video frame sequences with its shooting moment. The processor performs interpolation on one of the video frame sequences to generate interpolated video frames corresponding to the shooting moments of the video frames in the other sequence. Three-dimensional image data can thus be acquired from two independent imaging devices without performing a synchronization operation, which correspondingly facilitates the measurement of three-dimensional data.

Description

Shooting device, three-dimensional measuring system, and video frame interpolation method and apparatus
Technical field
The present invention relates to the field of three-dimensional image shooting and processing, and in particular to a three-dimensional image shooting device, a three-dimensional measuring system, and a video frame interpolation method and apparatus.
Background technology
At present, various schemes involving three-dimensional spatial data, such as the shooting of three-dimensional images and the measurement of three-dimensional data, are developing widely.
Generally speaking, when shooting a three-dimensional image, two imaging devices (for example, image sensors) with a predetermined relative positional relationship shoot the shooting area or object separately, simultaneously obtaining two two-dimensional images from different viewing angles.
In addition, using the position difference (parallax) of the same object in these two images, the depth data of the object (the distance between the object and the two imaging devices) can be calculated.
Particularly when shooting a moving object, it is very important that the images produced by the two imaging devices be synchronous, that is, the two images should be taken at the same moment; only then can they accurately describe the actual state of the shot object and the environment.
In a conventional three-dimensional image shooting device, a synchronization operation is performed on the two imaging devices so that they shoot synchronously.
Fig. 1 shows a timing diagram of image acquisition by two synchronized imaging devices. A and B represent the timing with which the two imaging devices acquire images, respectively.
It can be seen that the image-capturing actions 1001 and 1101 of the two imaging devices occur at the same moment, so the captured images can describe the object and the environment at that moment.
If the two imaging devices are not synchronized, that is, they do not shoot at the same moment, then the position difference of the same object in the two obtained images does not reflect the true parallax at any single moment.
Fig. 2 is a timing diagram of image acquisition by two unsynchronized imaging devices. A1 and B1 represent the timing with which the two imaging devices acquire images, respectively.
It can be seen that the image-capturing actions 2001 and 2101 occur at different moments, separated by a time interval ΔT. Therefore, when the shooting environment changes, for example when the position or shape of the shot object changes continuously, the captured images cannot describe the environment at a single moment; instead they describe the environment at two moments ΔT apart.
When two such two-dimensional images are used to calculate depth data, the result deviates accordingly.
Specifically, suppose an object moves with speed V along the baseline direction (that is, the direction parallel to the line between the two imaging devices). If two synchronized imaging devices capture the object at the same moment, let the image parallax be D.
If the same object, still moving with speed V along the baseline direction, is instead captured separately by two unsynchronized imaging devices, let the image parallax be D'.
When the time interval between the two unsynchronized captures of the object is ΔT, we obtain D' = D + ΔT·V.
According to the relationship between depth (depth data) and image parallax, Z = b·f/d (where Z is the depth data, i.e. the actual distance from the object to the imaging devices; b is the baseline length, i.e. the distance between the two imaging devices; f is the focal length of the imaging devices; and d is the parallax of the object between the two two-dimensional images), it can be seen that with unsynchronized imaging devices the calculated depth data deviates from the true depth of the target because of the offset in image capture timing.
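The depth/parallax relation and the deviation introduced by unsynchronized capture can be illustrated with a minimal Python sketch. The function names are illustrative assumptions, not part of the patent; speeds are expressed in image pixels per second so that the parallax arithmetic stays in one unit.

```python
# Illustrative sketch of the depth/parallax relation Z = b * f / d, where
# b is the baseline (metres), f the focal length (pixels) and d the parallax
# of the object between the two two-dimensional images (pixels).
def depth_from_parallax(baseline_m, focal_px, parallax_px):
    """Return the depth Z (metres) for a given parallax (pixels)."""
    return baseline_m * focal_px / parallax_px

# With unsynchronized sensors, an object moving along the baseline at speed V
# shifts between the two exposures, so the measured parallax becomes
# D' = D + dT * V (V here in pixels per second, dT in seconds).
def unsynchronized_parallax(true_parallax_px, delta_t_s, speed_px_per_s):
    return true_parallax_px + delta_t_s * speed_px_per_s
```

For example, a 10 px true parallax with b = 0.1 m and f = 500 px gives Z = 5 m; a 10 ms offset with the target moving at 100 px/s inflates the parallax to 11 px and therefore underestimates the depth.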
Therefore, a conventional three-dimensional image shooting device needs to perform a synchronization operation on the two imaging devices.
Moreover, even if the same frame-rate parameter is configured, the actual frame rates of two independent imaging devices differ slightly. Thus, even if they are synchronized in the initial stage, they gradually drift out of synchronization as time goes on, and the gap between their shooting moments widens.
Consequently, the synchronization operation must be performed on the two independent imaging devices continually.
For example, if the frame rate is 30 Hz, that is, the pulse train that triggers the imaging devices to shoot has a frequency of 30 Hz (30 pulses per second), and keeping the two imaging devices synchronized costs the time of two pulses, then only 28 frames can actually be shot per second.
Evidently, performing synchronization operations on two independent imaging devices adds complexity to the system and even reduces its efficiency.
Therefore, a new technical scheme is needed to solve the problems brought about by unsynchronized imaging actions of the imaging devices.
Summary of the invention
An object of the present invention is to provide, in the field of three-dimensional image shooting and processing, a three-dimensional image shooting device, a three-dimensional measuring system, and a video frame interpolation method and apparatus, which make it possible to acquire three-dimensional image data with two independent imaging devices without performing a synchronization operation, and correspondingly to carry out three-dimensional measurement.
According to one aspect of the present invention, there is provided a three-dimensional image shooting device comprising: a first imaging device, which shoots a shooting area to obtain a first video frame sequence; a second imaging device, which has a predetermined relative positional relationship with the first imaging device and shoots the shooting area to obtain a second video frame sequence; and a processor connected to the first imaging device and the second imaging device. The processor comprises, or is connected to, a timer, and associates each video frame in the first and second video frame sequences with its shooting moment. The processor performs interpolation on one of the first and second video frame sequences to produce interpolated video frames corresponding to the shooting moments of the video frames in the other of the two sequences.
According to another aspect of the present invention, there is provided a three-dimensional measuring system comprising: a first imaging device, which shoots a shooting area to obtain a first video frame sequence; a second imaging device, which has a predetermined relative positional relationship with the first imaging device and shoots the shooting area to obtain a second video frame sequence; a processor connected to the first imaging device and the second imaging device, which comprises or is connected to a timer, associates each video frame in the first and second video frame sequences with its shooting moment, and performs interpolation on one of the two sequences to produce interpolated video frames corresponding to the shooting moments of the video frames in the other sequence; and a depth data calculation device, which calculates the depth data of a measured object based on the position difference of the measured object between a video frame and an interpolated video frame corresponding to the same shooting moment.
According to another aspect of the present invention, there is provided a video frame interpolation method comprising: obtaining a first video frame sequence shot of a shooting area by a first imaging device, each video frame in the first sequence being associated with its shooting moment; obtaining a second video frame sequence shot of the shooting area by a second imaging device having a predetermined relative positional relationship with the first imaging device, each video frame in the second sequence being associated with its shooting moment; and performing interpolation on the second video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
Preferably, the video frame interpolation method may further comprise: forming, from multiple consecutive interpolated video frames produced by interpolating the second video frame sequence, a second interpolated video frame sequence that is synchronous with the first video frame sequence.
Preferably, the video frame interpolation method may further comprise: inserting the interpolated video frames produced by interpolating the second video frame sequence into the second video frame sequence, thereby producing a second supplemented video frame sequence; performing interpolation on the first video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the second video frame sequence; and inserting the interpolated video frames produced by interpolating the first video frame sequence into the first video frame sequence, thereby producing a first supplemented video frame sequence.
Preferably, the step of performing interpolation on one of the first and second video frame sequences may comprise: determining the shooting moment of a video frame of interest in the other of the two sequences, i.e. the shooting moment of interest; obtaining, from the one sequence, a first video frame shot at the first shooting moment immediately before the shooting moment of interest and a second video frame shot at the second shooting moment immediately after it; and interpolating between the first and second video frames, based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolated video frame.
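The three interpolation steps just described can be sketched in Python. This is a minimal illustration under the assumption of time-weighted linear blending of pixel values between the bracketing frames; the patent does not fix a particular interpolation formula, and the function name is an assumption.

```python
# Sketch: synthesise a frame for the shooting moment of interest t from a
# timestamped sequence, by blending the frames immediately before and after t.
from bisect import bisect_left

def interpolate_frame(sequence, t):
    """sequence: list of (timestamp, frame) pairs sorted by timestamp,
    where frame is a flat list of pixel values.
    Returns the (virtual) frame interpolated at time t."""
    times = [ts for ts, _ in sequence]
    i = bisect_left(times, t)
    if i < len(times) and times[i] == t:   # exact hit: no interpolation needed
        return sequence[i][1]
    t1, f1 = sequence[i - 1]               # first frame: immediately before t
    t2, f2 = sequence[i]                   # second frame: immediately after t
    w = (t - t1) / (t2 - t1)               # temporal weight of the later frame
    return [(1 - w) * a + w * b for a, b in zip(f1, f2)]
```

With two frames at t = 0.0 and t = 1.0, the frame interpolated at t = 0.5 is the pixel-wise midpoint of the two.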
According to another aspect of the present invention, there is provided a video frame interpolation apparatus comprising: a first video frame sequence acquisition unit, for obtaining a first video frame sequence shot of a shooting area by a first imaging device, each video frame in the first sequence being associated with its shooting moment; a second video frame sequence acquisition unit, for obtaining a second video frame sequence shot of the shooting area by a second imaging device having a predetermined relative positional relationship with the first imaging device, each video frame in the second sequence being associated with its shooting moment; and a second interpolation processing unit, for performing interpolation on the second video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
Preferably, the video frame interpolation apparatus may further comprise: a second interpolated video frame sequence generation unit, which forms, from multiple consecutive interpolated video frames produced by interpolating the second video frame sequence, a second interpolated video frame sequence synchronous with the first video frame sequence.
Preferably, the video frame interpolation apparatus may further comprise: a second supplemented video frame sequence generation unit, for inserting the interpolated video frames produced by interpolating the second video frame sequence into the second video frame sequence, thereby producing a second supplemented video frame sequence; a first interpolation processing unit, for performing interpolation on the first video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the second video frame sequence; and a first supplemented video frame sequence generation unit, for inserting the interpolated video frames produced by interpolating the first video frame sequence into the first video frame sequence, thereby producing a first supplemented video frame sequence.
Preferably, the first and second interpolation processing units may each comprise: a shooting-moment-of-interest determination unit, for determining the shooting moment of a video frame of interest in the other of the two sequences, i.e. the shooting moment of interest; a video frame acquisition unit, for obtaining, from the one sequence, a first video frame shot at the first shooting moment immediately before the shooting moment of interest and a second video frame shot at the second shooting moment immediately after it; and an interpolated video frame generation unit, for interpolating between the first and second video frames, based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolated video frame.
Accompanying drawing explanation
The above and other objects, features and advantages of the present disclosure will become more apparent from the following more detailed description of exemplary embodiments of the disclosure with reference to the accompanying drawings, in which the same reference numerals generally denote the same components.
Fig. 1 is a timing diagram of image acquisition by two synchronized imaging devices.
Fig. 2 is a timing diagram of image acquisition by two unsynchronized imaging devices.
Fig. 3 is an exemplary timing diagram, according to the present invention, after dynamic compensation has been applied to the images acquired by two unsynchronized imaging devices.
Fig. 4A schematically shows video frames 3001 and 3002 in timing A2 of Fig. 3 and the virtual synchronous frame 3001'.
Fig. 4B schematically shows video frames 3101 and 3102 in timing B2 of Fig. 3 and the virtual synchronous frame 3101'.
Fig. 4C schematically shows the two pairs of images, 3101 and 3001' and 3101' and 3002, corresponding to two synchronized moments in the timing of Fig. 3.
Fig. 5 shows a schematic block diagram of the three-dimensional image shooting device according to the present invention.
Fig. 6 shows a schematic block diagram of the three-dimensional measuring system according to the present invention.
Fig. 7 shows a schematic flowchart of the video frame interpolation method according to the present invention.
Fig. 8 shows a schematic flowchart of the video frame interpolation method of a modified embodiment.
Fig. 9 shows a schematic flowchart of the video frame interpolation method of another modified embodiment.
Fig. 10 shows a schematic flowchart of one implementation of the interpolation process.
Fig. 11 shows a schematic block diagram of the video frame interpolation apparatus according to the present invention.
Fig. 12 shows a schematic block diagram of the video frame interpolation apparatus of a modified embodiment.
Fig. 13 shows a schematic block diagram of the video frame interpolation apparatus of another modified embodiment.
Fig. 14 shows a schematic block diagram of one implementation of the first/second interpolation processing unit.
Detailed description of embodiments
Preferred embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show preferred embodiments of the disclosure, it should be understood that the disclosure can be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
First, taking three-dimensional measurement as the application scenario, the general idea of one embodiment of the invention is briefly outlined to facilitate understanding of the invention.
According to the present invention, the problem of depth deviation in measurement with unsynchronized imaging devices can be solved by means of dynamic compensation.
Fig. 3 is an exemplary timing diagram, according to the present invention, after dynamic compensation has been applied to the images acquired by two unsynchronized imaging devices. A2 and B2 represent the timing with which the two imaging devices acquire images, respectively.
As shown in Fig. 3, the processor can obtain the time interval ΔT between the image-capturing actions 3001 and 3101 of the unsynchronized imaging devices; the interval between successive frames shot by each imaging device is known to be Δt = 1/FPS (FPS being the frame rate); and the processor can detect the displacement ΔS3001 of the captured target between frames 3001 and 3002 of the first imaging device, as well as the displacement ΔS3101 of the captured target between frames 3101 and 3102 of the second imaging device.
When the imaging devices 11 and 12 can capture images of a moving target at the same frame rate, and in general when the frame rate exceeds 30 frames/second, the target's motion within one inter-frame interval can be regarded as uniform. Corresponding virtual synchronous frames can therefore be inserted into the image sequences captured by imaging devices 11 and 12.
As shown in Fig. 3, frame 3001' is inserted for the first imaging device corresponding to frame 3101 of the second imaging device, and frame 3101' is inserted for the second imaging device corresponding to frame 3002 of the first imaging device.
Suppose the processor calculates the parallax of the target images captured at 3001 and 3101 as D3101. After the virtual synchronous frame 3001' is inserted, the compensated synchronous parallax is:
D'3101 = D3101 − ΔS3001 · ΔT / (1/FPS).
Suppose the processor calculates the parallax of the target images captured at 3101 and 3002 as D3002. After the virtual synchronous frame 3101' is inserted, the compensated synchronous parallax is:
D'3002 = D3002 − ΔS3101 · (1/FPS − ΔT) / (1/FPS).
As described above, by inserting virtual synchronous frames, the depth distances obtained by two unsynchronized imaging devices can be corrected.
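Under the uniform-motion assumption, each compensation amounts to subtracting the target's per-frame displacement scaled by the fraction of a frame interval separating the two exposures. The sketch below illustrates this arithmetic; the function names are illustrative, and the exact formulas are a reconstruction from the surrounding derivation rather than a verbatim quotation of the patent's equations.

```python
# Sketch of the dynamic compensation of parallax for unsynchronized captures.
# ds_* is the target displacement over one full frame interval (pixels),
# delta_t the capture-time offset between the two devices (seconds),
# fps the common frame rate (frames per second).
def compensate_first(d_3101, ds_3001, delta_t, fps):
    """Corrected parallax after inserting virtual frame 3001' at the
    capture time of 3101: D'_3101 = D_3101 - dS_3001 * dT / (1/FPS)."""
    frame_interval = 1.0 / fps
    return d_3101 - ds_3001 * delta_t / frame_interval

def compensate_second(d_3002, ds_3101, delta_t, fps):
    """Corrected parallax after inserting virtual frame 3101' at the
    capture time of 3002: D'_3002 = D_3002 - dS_3101 * (1/FPS - dT) / (1/FPS)."""
    frame_interval = 1.0 / fps
    return d_3002 - ds_3101 * (frame_interval - delta_t) / frame_interval
```

For instance, at 30 fps with the devices offset by exactly half a frame interval, both corrections subtract half of the per-frame displacement; with zero offset, the measured parallax is left unchanged.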
On the other hand, when, as described above, interpolation is applied to the images (video frame sequences) acquired by both imaging devices and both results are used, the effective image capture frame rate of the system can also be doubled, yielding a high-performance system built from low-cost, low-performance components.
For example, with two unsynchronized CMOS sensors whose continuous capture frame rates are both 30 fps (frames per second), inserting virtual frames for compensation can raise the depth measurement frame rate of the whole system to 60 fps.
Fig. 4A schematically shows the video frames 3001 and 3002 in timing A2 of Fig. 3 and the virtual synchronous frame 3001' obtained by interpolation.
Fig. 4B schematically shows the video frames 3101 and 3102 in timing B2 of Fig. 3 and the virtual synchronous frame 3101' obtained by interpolation.
Because video frames 3001 and 3101 were not shot at the same moment, using them for, say, depth calculation would yield a parallax inconsistent with reality: it might be too large or too small. The same problem exists for video frames 3002 and 3102.
Fig. 4C schematically shows the two pairs of frames, 3101 and 3001' and 3101' and 3002, corresponding to two synchronized moments in the timing of Fig. 3.
By inserting into timing A2 the virtual video frame 3001' corresponding to the shooting moment of video frame 3101, and into timing B2 the virtual video frame 3101' corresponding to the shooting moment of video frame 3002, as described above, the parallax deviation caused by the mismatch between the shooting moments of the two imaging devices is corrected. The parallax obtained after this correction can properly be used in calculations such as depth data.
The three-dimensional image shooting device, three-dimensional measuring system, and video frame interpolation method and apparatus according to the present invention are described in detail below with reference to Figs. 5 to 14.
Fig. 5 shows a schematic block diagram of the three-dimensional image shooting device according to the present invention.
As shown in Fig. 5, the three-dimensional image shooting device according to the present invention may comprise a first imaging device 10, a second imaging device 20 and a processor 30.
The processor 30 may itself contain a timer 32, or may be connected to a timer (not shown) external to the processor 30.
The timer 32 is used to provide time information. The time information provided by the timer 32 may be a standard time expressed as date, hour, minute, second and millisecond, such as Beijing time, or may be time information measured from an arbitrary starting point; that is, the timer 32 need not agree with the standard time. Alternatively, the time information provided by the timer 32 may be expressed as, for example, a count of clock pulses.
In addition, the timer 32 may itself have timekeeping capability and thus provide the time information, or it may receive time information from outside (for example an external timer, the Internet, GPS, etc.) and supply it to the other devices in the three-dimensional image shooting device.
The first imaging device 10 shoots the shooting area and obtains the first video frame sequence.
The second imaging device 20 shoots the shooting area and obtains the second video frame sequence.
The first imaging device 10 and the second imaging device 20 may be, for example, image sensors such as CMOS image sensors.
There is a predetermined relative positional relationship between the first imaging device 10 and the second imaging device 20 (for example, the distance between them is the baseline length b described above). In this way, a three-dimensional image can be presented, or depth data (also called "depth-of-field data") obtained, from the two-dimensional images shot by the two imaging devices 10 and 20.
The processor 30 is connected to the first imaging device 10 and the second imaging device 20, and associates each video frame in the first and second video frame sequences with its shooting moment.
For example, the shooting time information can be added to the data of each video frame.
Alternatively, an identifier of each video frame (for example its frame number) and the corresponding shooting moment can be stored in a memory.
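Both bookkeeping options just described, embedding the timestamp in each frame's data or keeping a separate frame-number-to-timestamp map, can be sketched as follows. Class and field names are illustrative assumptions; any monotonic time source can play the role of the timer.

```python
# Sketch: associate each captured video frame with its shooting moment.
import time

class TimestampedCapture:
    def __init__(self, clock=time.monotonic):
        self.clock = clock   # the "timer": any monotonic time source
        self.frames = []     # option 1: timestamp embedded in the frame record
        self.index = {}      # option 2: frame number -> shooting moment map

    def record(self, frame_number, frame_data):
        t = self.clock()
        self.frames.append({"frame": frame_number, "t": t, "data": frame_data})
        self.index[frame_number] = t
        return t
```

In use, one `TimestampedCapture` per imaging device lets the interpolation stage look up, for any shooting moment of interest, the frames immediately before and after it.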
The processor 30 performs interpolation on the first video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the second video frame sequence. In this way, even with two unsynchronized imaging devices, the shot object and the environment at a given shooting moment can be described accurately.
The interpolated video frames obtained by interpolating the first video frame sequence are synchronous with the video frames in the second video frame sequence. If these interpolated video frames are arranged in chronological order to form a first interpolated video frame sequence, that sequence is synchronous with the second video frame sequence, and the two can be used together for three-dimensional measurement calculation. This solves the problem that, because the first imaging device 10 and the second imaging device 20 are not synchronized, the first and second video frame sequences cannot accurately describe the dynamic changes of the shot object or environment.
On the other hand, the processor 30 can also perform interpolation on the second video frame sequence to produce interpolated video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
The interpolated video frames obtained by interpolating the second video frame sequence are synchronous with the video frames in the first video frame sequence. If these interpolated video frames are arranged in chronological order to form a second interpolated video frame sequence, that sequence is synchronous with the first video frame sequence, and likewise the two can be used together for three-dimensional measurement calculation.
In other words, the processor 30 may perform interpolation on only one of the first and second video frame sequences, and pair the resulting interpolated video frame sequence with the other sequence to form a synchronous pair of video frame sequences for three-dimensional measurement calculation. In this case, the processor 30 can output one original video frame sequence and one synchronous interpolated video frame sequence.
Alternatively, the processor 30 may perform interpolation on both the first and the second video frame sequences and insert the interpolated video frames into the corresponding sequences, thereby producing a first supplemented video frame sequence and a second supplemented video frame sequence respectively. The frame rates of the first and second supplemented video frame sequences are twice those of the original sequences. This not only solves the problem that the unsynchronized first and second video frame sequences cannot accurately describe the shot object or environment, but also increases the frame rate of the video frame sequences.
In this case, the first and second supplemented video frame sequences can be used for three-dimensional measurement calculation, and the processor 30 can output them externally.
In addition, the three-dimensional image shooting device may further comprise a memory (not shown) for storing the first video frame sequence, the second video frame sequence, the interpolated video frames, the first/second interpolated video frame sequences, or the first/second supplemented video frame sequences.
The following describes how the video frame sequences output by the three-dimensional image capture apparatus shown in Fig. 5 are used for three-dimensional measurement.
Fig. 6 shows a schematic block diagram of a three-dimensional measuring system that uses the three-dimensional image capture apparatus of Fig. 5 to carry out three-dimensional measurement.
As shown in Fig. 6, in addition to the above-described first imaging device 10, second imaging device 20 and processor 30 (and timer 32) shown in Fig. 5, the three-dimensional measuring system also comprises a depth data calculation device 40.
Depth data calculation device 40 is connected to processor 30 to receive video frame sequences from processor 30, and calculates the depth data of the measured object based on the position difference of the measured object between an originally shot video frame and the corresponding interpolation video frame for the same shooting moment.
For example, when processor 30 carries out interpolation processing on only one video frame sequence, such as the second video frame sequence, depth data calculation device 40 receives the second interpolation video frame sequence and the first video frame sequence from processor 30. It can then calculate the depth data of the measured object (i.e. the distance between the measured object and the first and second imaging devices 10 and 20, for example the midpoint of the line connecting them) based on the position difference of the measured object between a video frame in the first video frame sequence and the interpolation video frame in the second interpolation video frame sequence corresponding to the same shooting moment.
If processor 30 carries out interpolation processing on both the first video frame sequence and the second video frame sequence, depth data calculation device 40 can receive the first supplemented video frame sequence and the second supplemented video frame sequence from processor 30. Depth data calculation device 40 can then calculate the depth data of the measured object based on the original video frame and the corresponding interpolation video frame for the same shooting moment in the first and second supplemented video frame sequences.
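The depth calculation from a position difference follows the standard rectified-stereo relation, which can be sketched as below. This assumes rectified views and known calibration; the function name and parameters (`focal_px`, `baseline_m`) are illustrative, not taken from the patent.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Standard rectified-stereo relation Z = f * B / d, where d is the
    position difference (disparity, in pixels) of the measured object
    between the two views corresponding to the same shooting moment,
    f is the focal length in pixels and B is the camera baseline."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object must appear shifted toward the left in the right view")
    return focal_px * baseline_m / disparity
```

With a focal length of 500 px and a baseline of 0.1 m, a 10-pixel position difference corresponds to a depth of 5 m, illustrating why synchronized (or interpolated-to-synchronous) frame pairs are needed: an unsynchronized pair of a moving object would yield a spurious disparity.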
The video frame interpolation method according to the present invention is described in detail below.
Fig. 7 shows a schematic flowchart of the video frame interpolation method according to the present invention.
First, in step S100, the first video frame sequence obtained by the first imaging device 10 shooting the shooting area is acquired. As mentioned above, each video frame in the first video frame sequence is associated with its shooting moment. The first video frame sequence can be obtained directly from the first imaging device 10, or can be read from memory.
On the other hand, in step S200, the second video frame sequence obtained by the second imaging device 20 shooting the shooting area is acquired. As mentioned above, each video frame in the second video frame sequence is associated with its shooting moment. There is a predetermined relative positional relationship between the second imaging device 20 and the first imaging device 10. The second video frame sequence can be obtained directly from the second imaging device 20, or can be read from memory.
The order of steps S100 and S200 can be exchanged, or they can be carried out synchronously.
Then, in step S300, interpolation processing is carried out on the second video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
As previously mentioned, the interpolation video frames so produced are synchronous with the corresponding video frames in the first video frame sequence, and can describe the shooting object or environment information at the same moment.
Fig. 8 shows a schematic flowchart of the video frame interpolation method of a modified embodiment.
In the video frame interpolation method shown in Fig. 8, after steps S100, S200 and S300 shown in Fig. 7 above, in step S400 the consecutive interpolation video frames produced by carrying out interpolation processing on the second video frame sequence are formed into a second interpolation video frame sequence synchronous with the first video frame sequence.
In this way, the first video frame sequence and the second interpolation video frame sequence produced by interpolation can be used for three-dimensional measurement, solving the problem that the first and second video frame sequences, being asynchronous because the first imaging device 10 and the second imaging device 20 are not synchronized, cannot accurately describe the dynamic changes of the shooting object or environment information.
Fig. 9 shows a schematic flowchart of the video frame interpolation method of another modified embodiment.
In the video frame interpolation method shown in Fig. 9, after steps S100, S200 and S300 shown in Fig. 7 above, in step S410 the interpolation video frames produced by carrying out interpolation processing on the second video frame sequence are inserted into the second video frame sequence, thereby producing a second supplemented video frame sequence.
On the other hand, in step S500, interpolation processing is carried out on the first video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the second video frame sequence.
In step S600, the interpolation video frames produced by carrying out interpolation processing on the first video frame sequence are inserted into the first video frame sequence, thereby producing a first supplemented video frame sequence.
In Fig. 9, steps S300 and S410 are performed first, followed by S500 and S600. In fact, this execution order can be exchanged, the steps can be performed synchronously, or they can alternate frame by frame.
In this way, besides solving the problem that the first video frame sequence and the second video frame sequence are not synchronized, the frame rate of the video frame sequences is also increased.
A specific implementation of the above interpolation processing is briefly described below.
Figure 10 shows a schematic flowchart of one implementation of the interpolation processing carried out on one of the first and second video frame sequences (for example, the second video frame sequence).
First, in step S310, the shooting moment of the video frame of interest in the other of the first and second video frame sequences (for example, the first video frame sequence) is determined, namely the shooting moment of interest. In other words, a corresponding interpolation video frame is currently to be produced by interpolation for one video frame; this video frame is called the "video frame of interest", and its shooting moment is called the "shooting moment of interest".
In step S320, from this one video frame sequence (for example, the second video frame sequence), the first video frame shot at the first shooting moment immediately before the shooting moment of interest and the second video frame shot at the second shooting moment immediately after the shooting moment of interest are obtained.
In step S330, interpolation processing is carried out on the first video frame and the second video frame based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolation video frame.
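Step S330 can be sketched as a temporal blend of the two bracketing frames. This is one possible interpolation only (the patent does not fix the method; motion-compensated interpolation would be another choice); frames are modeled here as flat lists of pixel intensities, and the function name is invented.

```python
def interpolate_frame(frame1, t1, frame2, t2, t):
    """Steps S310-S330 as a simple linear blend. frame1 was shot at the
    first shooting moment t1, immediately before the shooting moment of
    interest t; frame2 at the second shooting moment t2, immediately after.
    Each frame is a flat list of pixel intensities."""
    assert t1 < t2 and t1 <= t <= t2
    w = (t - t1) / (t2 - t1)  # temporal weight of the later frame
    return [(1 - w) * a + w * b for a, b in zip(frame1, frame2)]
```

For a shooting moment of interest exactly halfway between the two bracketing moments, each interpolated pixel is the average of the corresponding pixels in the two frames.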
The video frame interpolation apparatus according to the present invention for performing the above video frame interpolation method is described in detail below. In the following description, some details are the same as in the description of the video frame interpolation method given above with reference to Figs. 7 to 10; to avoid repetition, they are not repeated here.
Figure 11 shows a schematic block diagram of the video frame interpolation apparatus according to the present invention.
The first video frame sequence acquiring unit 110 is used to acquire the first video frame sequence obtained by the first imaging device 10 shooting the shooting area. As mentioned above, each video frame in the first video frame sequence is associated with its shooting moment. The first video frame sequence can be obtained directly from the first imaging device 10, or can be read from memory.
The second video frame sequence acquiring unit 120 is used to acquire the second video frame sequence obtained by the second imaging device 20 shooting the shooting area. As mentioned above, each video frame in the second video frame sequence is associated with its shooting moment. There is a predetermined relative positional relationship between the second imaging device 20 and the first imaging device 10. The second video frame sequence can be obtained directly from the second imaging device 20, or can be read from memory.
The second interpolation processing unit 130 is used to carry out interpolation processing on the second video frame sequence, so as to produce interpolation video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
Figure 12 shows a schematic block diagram of the video frame interpolation apparatus of a modified embodiment.
In addition to the first video frame sequence acquiring unit 110, second video frame sequence acquiring unit 120 and second interpolation processing unit 130 illustrated in Fig. 11, the video frame interpolation apparatus of the modified embodiment shown in Figure 12 also comprises a second interpolation video frame sequence generation unit 140.
The second interpolation video frame sequence generation unit 140 forms the consecutive interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into a second interpolation video frame sequence synchronous with the first video frame sequence.
Figure 13 shows a schematic block diagram of the video frame interpolation apparatus of another modified embodiment.
In addition to the first video frame sequence acquiring unit 110, second video frame sequence acquiring unit 120 and second interpolation processing unit 130 illustrated in Fig. 11, the video frame interpolation apparatus of the modified embodiment shown in Figure 13 also comprises a second supplemented video frame sequence generation unit 150, a first interpolation processing unit 160 and a first supplemented video frame sequence generation unit 170.
The second supplemented video frame sequence generation unit 150 inserts the interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into the second video frame sequence, thereby producing the second supplemented video frame sequence.
The first interpolation processing unit 160 is used to carry out interpolation processing on the first video frame sequence, so as to produce interpolation video frames corresponding to the shooting moments of the video frames in the second video frame sequence.
The first supplemented video frame sequence generation unit 170 inserts the interpolation video frames produced by carrying out interpolation processing on the first video frame sequence into the first video frame sequence, thereby producing the first supplemented video frame sequence.
Figure 14 shows a schematic block diagram of one implementation of the first/second interpolation processing unit 130/160.
The first/second interpolation processing unit 130/160 may comprise: a shooting-moment-of-interest determining unit 210, a video frame acquiring unit 220, and an interpolation video frame generation unit 230.
The shooting-moment-of-interest determining unit 210 determines the shooting moment of the video frame of interest in the other of the first video frame sequence and the second video frame sequence, namely the shooting moment of interest.
The video frame acquiring unit 220 obtains, from the one video frame sequence, the first video frame shot at the first shooting moment immediately before the shooting moment of interest and the second video frame shot at the second shooting moment immediately after the shooting moment of interest.
The interpolation video frame generation unit 230 carries out interpolation processing on the first video frame and the second video frame based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolation video frame.
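The bracketing-frame lookup performed by the video frame acquiring unit 220 can be sketched with a binary search over the shooting moments. This is an illustrative sketch under the assumption that the sequence is a timestamp-sorted list of (shooting_moment, frame) pairs; the function name is invented and not part of the patent.

```python
import bisect

def bracketing_frames(sequence, t):
    """Return the (shooting_moment, frame) pairs shot immediately before and
    immediately after the shooting moment of interest t.
    sequence: list of (shooting_moment, frame) pairs sorted by moment."""
    moments = [m for m, _ in sequence]
    i = bisect.bisect_left(moments, t)  # first index with moment >= t
    if i == 0 or i == len(sequence):
        raise ValueError("shooting moment of interest is not bracketed by the sequence")
    return sequence[i - 1], sequence[i]
```

Using `bisect` keeps the lookup O(log n) per interpolation video frame, which matters when every frame of one sequence must be matched against the other sequence's shooting moments.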
The capture apparatus, three-dimensional measuring system, and video frame interpolation method and apparatus according to the present invention have been described in detail above with reference to the accompanying drawings.
In addition, the method according to the present invention can also be embodied as a computer program product comprising a computer-readable medium on which a computer program for performing the above-defined functions of the method of the present invention is stored. Those skilled in the art will also understand that the various illustrative logical blocks, modules, circuits and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both.
The flowcharts and block diagrams in the accompanying drawings show possible architectures, functions and operations of systems and methods according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, program segment or portion of code, which comprises one or more executable instructions for realizing the specified logic function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be realized by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Various embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and changes will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application or improvement over technology in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A three-dimensional image capture apparatus, comprising:
a first imaging device, for shooting a shooting area to obtain a first video frame sequence;
a second imaging device, having a predetermined relative positional relationship with the first imaging device, for shooting the shooting area to obtain a second video frame sequence; and
a processor, connected to the first imaging device and the second imaging device, the processor comprising a timer or being connected to a timer so that each video frame in the first video frame sequence and the second video frame sequence is respectively associated with its shooting moment, the processor being used to carry out interpolation processing on one of the first video frame sequence and the second video frame sequence, to produce interpolation video frames corresponding to the shooting moments of the video frames in the other of the first video frame sequence and the second video frame sequence.
2. A three-dimensional measuring system, comprising:
a first imaging device, for shooting a shooting area to obtain a first video frame sequence;
a second imaging device, having a predetermined relative positional relationship with the first imaging device, for shooting the shooting area to obtain a second video frame sequence;
a processor, connected to the first imaging device and the second imaging device, the processor comprising a timer or being connected to a timer so that each video frame in the first video frame sequence and the second video frame sequence is respectively associated with its shooting moment, the processor being used to carry out interpolation processing on one of the first video frame sequence and the second video frame sequence, to produce interpolation video frames corresponding to the shooting moments of the video frames in the other of the first video frame sequence and the second video frame sequence; and
a depth data calculation device, which calculates the depth data of a measured object based on the position difference of the measured object between the video frame and the interpolation video frame corresponding to the same shooting moment.
3. A video frame interpolation method, comprising:
acquiring a first video frame sequence obtained by a first imaging device shooting a shooting area, each video frame in the first video frame sequence being associated with its shooting moment;
acquiring a second video frame sequence obtained by a second imaging device shooting the shooting area, each video frame in the second video frame sequence being associated with its shooting moment, there being a predetermined relative positional relationship between the second imaging device and the first imaging device; and
carrying out interpolation processing on the second video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
4. The video frame interpolation method according to claim 3, further comprising:
forming the consecutive interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into a second interpolation video frame sequence synchronous with the first video frame sequence.
5. The video frame interpolation method according to claim 3, further comprising:
inserting the interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into the second video frame sequence, thereby producing a second supplemented video frame sequence;
carrying out interpolation processing on the first video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the second video frame sequence; and
inserting the interpolation video frames produced by carrying out interpolation processing on the first video frame sequence into the first video frame sequence, thereby producing a first supplemented video frame sequence.
6. The video frame interpolation method according to any one of claims 3 to 5, wherein the step of carrying out interpolation processing on one of the first video frame sequence and the second video frame sequence comprises:
determining the shooting moment of a video frame of interest in the other of the first video frame sequence and the second video frame sequence, namely a shooting moment of interest;
obtaining, from the one video frame sequence, a first video frame shot at a first shooting moment immediately before the shooting moment of interest and a second video frame shot at a second shooting moment immediately after the shooting moment of interest; and
carrying out interpolation processing on the first video frame and the second video frame based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolation video frame.
7. A video frame interpolation apparatus, comprising:
a first video frame sequence acquiring unit, for acquiring a first video frame sequence obtained by a first imaging device shooting a shooting area, each video frame in the first video frame sequence being associated with its shooting moment;
a second video frame sequence acquiring unit, for acquiring a second video frame sequence obtained by a second imaging device shooting the shooting area, each video frame in the second video frame sequence being associated with its shooting moment, there being a predetermined relative positional relationship between the second imaging device and the first imaging device; and
a second interpolation processing unit, for carrying out interpolation processing on the second video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the first video frame sequence.
8. The video frame interpolation apparatus according to claim 7, further comprising:
a second interpolation video frame sequence generation unit, which forms the consecutive interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into a second interpolation video frame sequence synchronous with the first video frame sequence.
9. The video frame interpolation apparatus according to claim 7, further comprising:
a second supplemented video frame sequence generation unit, for inserting the interpolation video frames produced by carrying out interpolation processing on the second video frame sequence into the second video frame sequence, thereby producing a second supplemented video frame sequence;
a first interpolation processing unit, for carrying out interpolation processing on the first video frame sequence to produce interpolation video frames corresponding to the shooting moments of the video frames in the second video frame sequence; and
a first supplemented video frame sequence generation unit, for inserting the interpolation video frames produced by carrying out interpolation processing on the first video frame sequence into the first video frame sequence, thereby producing a first supplemented video frame sequence.
10. The video frame interpolation apparatus according to any one of claims 7 to 9, wherein the first interpolation processing unit and the second interpolation processing unit each comprise:
a shooting-moment-of-interest determining unit, for determining the shooting moment of a video frame of interest in the other of the first video frame sequence and the second video frame sequence, namely a shooting moment of interest;
a video frame acquiring unit, for obtaining, from the one video frame sequence, a first video frame shot at a first shooting moment immediately before the shooting moment of interest and a second video frame shot at a second shooting moment immediately after the shooting moment of interest; and
an interpolation video frame generation unit, for carrying out interpolation processing on the first video frame and the second video frame based on the first shooting moment, the second shooting moment and the shooting moment of interest, to obtain the interpolation video frame.
CN201510226284.0A 2015-05-06 2015-05-06 Capture apparatus, three-dimension measuring system, video frame interpolation method and apparatus Active CN104837002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510226284.0A CN104837002B (en) 2015-05-06 2015-05-06 Capture apparatus, three-dimension measuring system, video frame interpolation method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510226284.0A CN104837002B (en) 2015-05-06 2015-05-06 Capture apparatus, three-dimension measuring system, video frame interpolation method and apparatus

Publications (2)

Publication Number Publication Date
CN104837002A true CN104837002A (en) 2015-08-12
CN104837002B CN104837002B (en) 2016-10-26

Family

ID=53814609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510226284.0A Active CN104837002B (en) 2015-05-06 2015-05-06 Capture apparatus, three-dimension measuring system, video frame interpolation method and apparatus

Country Status (1)

Country Link
CN (1) CN104837002B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101277454A (en) * 2008-04-28 2008-10-01 清华大学 Method for generating real time tridimensional video based on binocular camera
CN102215416A (en) * 2010-04-09 2011-10-12 汤姆森特许公司 Method for processing stereoscopic images and corresponding device
CN102918584A (en) * 2010-05-12 2013-02-06 联发科技股份有限公司 Graphics procesing method for three-dimensional images applied to first buffer for storing right-view contents and second buffer for storing left-view contents and related graphics processing apparatus thereof
WO2014096400A1 (en) * 2012-12-21 2014-06-26 Imcube Labs Gmbh Method, apparatus and computer program usable in synthesizing a stereoscopic image

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105354819A (en) * 2015-09-29 2016-02-24 上海图漾信息科技有限公司 Depth data measurement system, depth data determination method and apparatus
CN105354819B (en) * 2015-09-29 2018-10-09 上海图漾信息科技有限公司 Depth data measuring system, depth data determine method and apparatus
CN107454377A (en) * 2016-05-31 2017-12-08 深圳市微付充科技有限公司 A kind of algorithm and system that three-dimensional imaging is carried out using camera
CN107454377B (en) * 2016-05-31 2019-08-02 深圳市微付充科技有限公司 A kind of algorithm and system carrying out three-dimensional imaging using camera
CN106131529A (en) * 2016-06-30 2016-11-16 联想(北京)有限公司 A kind of method of video image processing and device
CN106604016A (en) * 2017-01-26 2017-04-26 上海图漾信息科技有限公司 Stereoscopic video capture system

Also Published As

Publication number Publication date
CN104837002B (en) 2016-10-26

Similar Documents

Publication Publication Date Title
US7616885B2 (en) Single lens auto focus system for stereo image generation and method thereof
EP3248374B1 (en) Method and apparatus for multiple technology depth map acquisition and fusion
CN102272796B (en) Motion vector generation apparatus and motion vector generation method
JP3745117B2 (en) Image processing apparatus and image processing method
CN102027752B (en) For measuring the system and method for the potential eye fatigue of stereoscopic motion picture
CN101247530A (en) Three-dimensional image display apparatus and method for enhancing stereoscopic effect of image
JP2005151534A5 (en)
CN104837002A (en) Shooting device, three-dimensional measuring system, and video intra-frame interpolation method and apparatus
CN107917701A (en) Measuring method and RGBD camera systems based on active binocular stereo vision
JP2011064894A (en) Stereoscopic image display apparatus
JP4928476B2 (en) Stereoscopic image generating apparatus, method thereof and program thereof
JP6219997B2 (en) Dynamic autostereoscopic 3D screen calibration method and apparatus
US9628777B2 (en) Method of 3D reconstruction of a scene calling upon asynchronous sensors
CN105222717A (en) A kind of subject matter length measurement method and device
KR20120053536A (en) Image display device and image display method
CN109917419A (en) A kind of depth fill-in congestion system and method based on laser radar and image
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
US20150199817A1 (en) Position detection device and position detection program
CN103561257A (en) Interference-free light-encoded depth extraction method based on depth reference planes
US9538161B2 (en) System and method for stereoscopic photography
CN110969706B (en) Augmented reality device, image processing method, system and storage medium thereof
EP3705844B1 (en) Three-dimensional dataset and two-dimensional image localisation
CN113438388A (en) Processing method, camera assembly, electronic device, processing device and medium
CN101566784B (en) Method for establishing depth of field data for three-dimensional image and system thereof
JP5871113B2 (en) Stereo image generation apparatus, stereo image generation method, and stereo image generation program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20151230

Address after: 201203 Shanghai City, Pudong New Area Jinke road lane 2889 Changtai Plaza C building 11 layer

Applicant after: SHANGHAI TUYANG INFORMATION TECHNOLOGY CO., LTD.

Address before: 100068, No. 14 Majiabao West Road, Beijing, Fengtai District, 4, 8, 915

Applicant before: Beijing Weichuang Shijie Technology Co., Ltd.

C14 Grant of patent or utility model
GR01 Patent grant