CN105578015A - Object tracking image processing method and object tracking image processing system - Google Patents
- Publication number: CN105578015A
- Application number: CN201410526672.6A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an object tracking image processing method and system. The method comprises the following steps: receiving a first wide-field image and a first narrow-field image, determining an object to be tracked from the images, and selecting one of them as a first output image; taking the area of the object to be tracked in the first output image as a reference area; and comparing the areas of the object to be tracked in a second wide-field image and in a second narrow-field image against the reference area to select one of the two as a main image, scaling and deforming the main image and the corresponding region of the other image, and fusing the two processed images into a second output image. With the method and system, a user can keep the size of the main subject in the frame unchanged while the background continuously changes during filming, making the footage richer and more interesting.
Description
Technical field
The invention relates to an image processing method and system, and particularly to an object tracking image processing method and system using a dual-lens camera device.
Background
When an electronic device with a camera lens, such as a camera or a smartphone, is used to film a scene, the captured image may contain a main subject (e.g., a person) and a background (e.g., a street view). If the user wants the main subject to remain the same size while the background visibly changes, this effect cannot be achieved with the functions available on current cameras or smartphones alone.
This shooting technique, in which the main subject stays the same size, is common in movies: a zoom lens zooms while the distance to the subject is changed at the same time, keeping the subject's size in the frame constant while the background behind it changes.
However, shooting a film with this effect on a mobile phone requires the photographer to move while simultaneously operating the zoom, which is very difficult to do.
Summary of the invention
In view of the above problems, an object of the invention is to provide an object tracking image processing method and system that use a dual-lens device.
In view of the above problems, another object of the invention is to provide an object tracking image processing method and system that allow a user, while filming, to keep the size of the main subject in the frame unchanged while the background continuously changes, making the footage richer and more interesting.
Based on the above objects, the invention provides an object tracking image processing method, comprising: (a) receiving a first wide-field image and a first narrow-field image, and determining an object to be tracked from the first wide-field image and the first narrow-field image; (b) selecting a first output image from the first wide-field image and the first narrow-field image, and taking the area of the object to be tracked in the first output image as a reference area; (c) receiving a second wide-field image and a second narrow-field image, and comparing the wide-field area of the object to be tracked in the second wide-field image and its narrow-field area in the second narrow-field image against the reference area; (d) when the wide-field area is closer to the reference area, scaling and deforming the second wide-field image according to the ratio of the wide-field area to the reference area, and scaling and deforming the narrow-field corresponding region of the second narrow-field image that maps to the wide-field area; (e) when the narrow-field area is closer to the reference area, scaling and deforming the second narrow-field image according to the ratio of the narrow-field area to the reference area, and scaling and deforming the wide-field corresponding region of the second wide-field image that maps to the narrow-field area; and (f) fusing the scaled and deformed second wide-field image with the narrow-field corresponding region, or the scaled and deformed second narrow-field image with the wide-field corresponding region, to produce a second output image.
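The control flow of steps (a) through (f) can be sketched in pure Python. This is an illustrative skeleton, not the patented implementation: region sizes are reduced to (width, height) tuples, the function names are hypothetical, and the linear scale factor is derived as the square root of the area ratio.

```python
def area(region):
    """Pixel area of a hypothetical (width, height) region."""
    w, h = region
    return w * h

def choose_main(wide_region, narrow_region, reference_region):
    """Step (c): pick the image whose tracked-object area is closer to
    the reference area, i.e., whose area ratio is closest to 1."""
    ref = area(reference_region)
    wide_ratio = area(wide_region) / ref
    narrow_ratio = area(narrow_region) / ref
    if abs(wide_ratio - 1) <= abs(narrow_ratio - 1):
        return "wide", wide_ratio
    return "narrow", narrow_ratio

def process_frame(wide_region, narrow_region, reference_region):
    """Steps (d)-(f), reduced to their arithmetic: scale the main image
    so that the tracked object's area returns to the reference area."""
    main, ratio = choose_main(wide_region, narrow_region, reference_region)
    scale = 1 / ratio ** 0.5  # linear scale restoring the reference area
    return {"main": main, "scale": round(scale, 3)}
```

In a real pipeline the returned scale would drive the scaling and deformation of both the main image and the other image's corresponding region before fusion.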
Preferably, the first and second wide-field images are captured continuously by a wide-angle camera, and the first and second narrow-field images are captured continuously by a non-wide-angle camera; the wide-angle and non-wide-angle cameras are mounted on a camera device, and the first and second output images are the continuous images stored when the camera device records video, or the continuous images shown on a screen of the camera device during preview.
Preferably, in the continuous images, the size of the object to be tracked remains substantially unchanged, while the scenery other than the object appears to zoom in or out.
Preferably, whichever of the narrow-field area and the wide-field area has a ratio to the reference area closest to 1 is regarded as closer to the reference area.
Preferably, whichever of the second narrow-field image and the second wide-field image contains more image features of the object to be tracked is regarded as closer to the reference area.
Based on the above objects, the invention further provides an object tracking image processing system, comprising: a first lens module, which captures a first wide-field image and a second wide-field image; a second lens module, which captures a first narrow-field image and a second narrow-field image; a tracking-object selection unit, which determines an object to be tracked from the first wide-field image and the first narrow-field image; a tracking-feature capture unit, which captures a plurality of first image features of the object to be tracked; an object tracking unit, which finds the object to be tracked in the first wide-field image, the second wide-field image, the first narrow-field image and the second narrow-field image according to the first image features, and locates the wide-field area and the narrow-field area of the object in the second wide-field image and the second narrow-field image; a feature capture unit, which finds a plurality of image features shared by the second wide-field image and the second narrow-field image in order to locate the corresponding regions of the two images; an image scaling and deformation unit, which selects a first output image from the first wide-field image and the first narrow-field image, takes the area of the object to be tracked in the first output image as a reference area, and compares the wide-field area and the narrow-field area against the reference area; when the image scaling and deformation unit judges that the wide-field area is closer to the reference area, it scales and deforms the second wide-field image according to the ratio of the wide-field area to the reference area, and scales and deforms the narrow-field corresponding region of the second narrow-field image that maps to the wide-field area; when it judges that the narrow-field area is closer to the reference area, it scales and deforms the second narrow-field image according to the ratio of the narrow-field area to the reference area, and scales and deforms the wide-field corresponding region of the second wide-field image that maps to the narrow-field area; and an image fusion unit, which fuses the scaled and deformed second wide-field image with the narrow-field corresponding region, or the scaled and deformed second narrow-field image with the wide-field corresponding region, to produce a second output image.
Preferably, whichever of the narrow-field area and the wide-field area has a ratio to the reference area closest to 1 is regarded as closer to the reference area.
Preferably, whichever of the second narrow-field image and the second wide-field image contains more image features of the object to be tracked is regarded as closer to the reference area.
Preferably, the image scaling and deformation unit has a default ratio threshold; when it judges that the wide-field area is closer to the reference area and the proportional difference between the wide-field area and the reference area is greater than the ratio threshold, it enlarges and deforms the second wide-field image.
Preferably, the image scaling and deformation unit has a default ratio threshold; when it judges that the narrow-field area is closer to the reference area and the proportional difference between the narrow-field area and the reference area is greater than the ratio threshold, it reduces and deforms the second narrow-field image.
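The ratio-threshold test described in the two preferable embodiments above can be expressed as a small predicate. This is an illustrative sketch; the `threshold` default of 0.1 is an arbitrary assumption, not a value taken from the patent.

```python
def needs_rescale(region_area, reference_area, threshold=0.1):
    """True when the proportional difference between the tracked-object
    area and the reference area exceeds the threshold, i.e., the frame
    has drifted far enough to warrant scaling and deformation."""
    return abs(region_area / reference_area - 1) > threshold
```

A small hysteresis band like this keeps the output stable when the subject's size barely changes between frames.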
Description of the drawings
The above and other features and advantages of the invention will become more apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of an embodiment of the object tracking image processing system of the invention;
Fig. 2 is a first schematic diagram of an embodiment of the object tracking image processing system of the invention;
Fig. 3 is a second schematic diagram of an embodiment of the object tracking image processing system of the invention;
Fig. 4 is a third schematic diagram of an embodiment of the object tracking image processing system of the invention; and
Fig. 5 is a flow chart of an embodiment of the object tracking image processing method of the invention.
Symbol description
10 first lens module
11 first wide-field image
12, 13, 14 second wide-field images
20 second lens module
21 first narrow-field image
22, 23, 24 second narrow-field images
25, 42, 43, 44 second output images
30 object to be tracked
32 first image features
40 tracking-object selection unit
41 first output image
50 tracking-feature capture unit
51 object tracking unit
60 camera device
70 feature capture unit
71 wide-field area
72 narrow-field area
73 wide-field corresponding region
74 narrow-field corresponding region
80 image scaling and deformation unit
81 input unit
82 reference area
90 image fusion unit
91 supplement region
92 dotted frame
93 display unit
S1~S6 steps
Embodiment
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. When "at least one" precedes a list of elements, it modifies the entire list rather than the individual elements of the list.
Refer to Fig. 1, Fig. 2 and Fig. 3, which are a block diagram, a first schematic diagram and a second schematic diagram of an embodiment of the object tracking image processing system of the invention. In Fig. 1, the object tracking image processing system comprises a first lens module 10, a second lens module 20, a tracking-object selection unit 40, a tracking-feature capture unit 50, an object tracking unit 51, a feature capture unit 70, an image scaling and deformation unit 80, an image fusion unit 90 and a display unit 93. The system operates on a camera device 60; the first lens module 10 and the second lens module 20 are mounted on the same side of the camera device 60 and face the same direction.
The tracking-object selection unit 40, tracking-feature capture unit 50, object tracking unit 51, feature capture unit 70, image scaling and deformation unit 80 and image fusion unit 90 may be implemented in software executed by a processing unit of the camera device 60, for example as program code executable by the processing unit; or they may be implemented in hardware, as dedicated circuits within the processing unit.
The first lens module 10 and the second lens module 20 may have different focal lengths; for example, in this embodiment the first lens module 10 is a wide-angle lens (Wide) and the second lens module 20 is a telephoto lens (Tele). The first lens module 10 captures a plurality of wide-field images, such as 11, 12, 13 and 14 in Fig. 2, and the second lens module 20 captures a plurality of narrow-field images, such as 21, 22, 23 and 24 in Fig. 3. As shown in Fig. 2, the field of view captured by the first lens module 10 is larger than that captured by the second lens module 20, so the same object appears smaller in the wide-field images 11, 12, 13 and 14 of Fig. 2 than in the narrow-field images 21, 22, 23 and 24 of Fig. 3.
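As a rough illustration of why the same object appears larger through the telephoto lens, a simple pinhole-camera model (an assumption of this sketch, not part of the patent) predicts that the projected size scales linearly with focal length:

```python
def apparent_size(object_height_m, distance_m, focal_mm):
    """Projected image height in millimetres under a pinhole model:
    h_image = h_object * focal_length / distance."""
    return object_height_m * focal_mm / distance_m

# Hypothetical focal lengths: the same 1.7 m subject at 5 m away.
wide_mm = apparent_size(1.7, 5.0, focal_mm=28)  # wide-angle lens
tele_mm = apparent_size(1.7, 5.0, focal_mm=56)  # telephoto lens
```

Doubling the focal length doubles the subject's projected size, which is why the tracked object occupies a larger region of the narrow-field frame.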
The tracking-object selection unit 40 determines the object to be tracked 30 from the first wide-field image 11 and/or the first narrow-field image 21. In practice, the tracking-object selection unit 40 may first identify, according to at least one image feature, objects of interest that appear in both the first wide-field image 11 and the first narrow-field image 21, such as faces, human figures or specific articles. The tracking-object selection unit 40 may then automatically choose the object to be tracked 30 from these objects of interest; or it may choose according to an instruction sent from an input unit 81, i.e., the user may operate the input unit 81 to select the object to be tracked 30.
In addition, the user may use the input unit 81 to set the display size of the object to be tracked 30 on the display unit 93.
After the object to be tracked 30 is determined, the tracking-feature capture unit 50 captures a plurality of first image features 32 of the object to be tracked 30. These first image features 32 are used to find the object to be tracked 30 in subsequent images.
According to the first image features 32 provided by the tracking-feature capture unit 50, the object tracking unit 51 finds the object to be tracked 30 in the images input by the first lens module 10 and the second lens module 20. For example, the object tracking unit 51 may use the first image features 32 to locate the wide-field area 71, i.e., the region occupied by the object to be tracked 30, in each of the second wide-field images, and the narrow-field area 72, i.e., the region occupied by the object, in each of the narrow-field images.
The feature capture unit 70 finds a plurality of image features in the images input by the first lens module 10 and the second lens module 20. These are features contained in both inputs, and they are used to match the corresponding regions between the wide-field images of the first lens module 10 and the narrow-field images of the second lens module 20.
The image scaling and deformation unit 80 selects either the wide-field image of the first lens module 10 or the narrow-field image of the second lens module 20 as a first output image 41, according to the display size of the object to be tracked 30 provided by the input unit 81 or a default value, and takes the area of the object to be tracked 30 in this first output image 41 as a reference area 82. In subsequent frames, the image scaling and deformation unit 80 compares the wide-field area 71 and the narrow-field area 72 against the reference area 82. When it judges that the wide-field area 71 is closer to the reference area 82, it scales and deforms the second wide-field images 12, 13, 14 according to the relative scale of the wide-field area 71 to the reference area 82, and scales and deforms the narrow-field corresponding regions 74 of the second narrow-field images 22, 23, 24 that map to the wide-field area 71; the scaled and deformed second wide-field images 12, 13, 14 are then fused with the narrow-field corresponding regions 74 in the image fusion unit 90 to produce three consecutive second output images 25. Conversely, when it judges that the narrow-field area 72 is closer to the reference area 82, it scales and deforms the second narrow-field images 22, 23, 24 according to the relative scale of the narrow-field area 72 to the reference area 82, and scales and deforms the wide-field corresponding regions 73 of the second wide-field images 12, 13, 14 that map to the narrow-field area 72; similarly, the scaled and deformed second narrow-field images 22, 23, 24 are fused with the wide-field corresponding regions 73 in the image fusion unit 90 to produce three consecutive second output images 25.
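The mapping between the two views that the corresponding regions rely on can be illustrated with simple coordinate arithmetic. The sketch below assumes an idealized setup the patent does not spell out: aligned optical axes and a narrow-field view that is a pure central crop of the wide field, with `fov_ratio` being the ratio of the two fields of view.

```python
def wide_rect_to_narrow(rect, wide_size, narrow_size, fov_ratio):
    """Map an (x, y, w, h) rectangle from wide-image pixel coordinates
    to narrow-image pixel coordinates. fov_ratio > 1 means the wide
    lens covers fov_ratio times the narrow lens's field of view."""
    wx, wy = wide_size
    nx, ny = narrow_size
    # Top-left corner of the narrow view inside the wide frame.
    ox = wx * (1 - 1 / fov_ratio) / 2
    oy = wy * (1 - 1 / fov_ratio) / 2
    # Scale from wide pixels to narrow pixels.
    sx = nx * fov_ratio / wx
    sy = ny * fov_ratio / wy
    x, y, w, h = rect
    return ((x - ox) * sx, (y - oy) * sy, w * sx, h * sy)
```

With a 2x field-of-view ratio, the central half of the wide frame maps onto the entire narrow frame; a real system would refine this alignment with the matched image features rather than trust geometry alone.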
Note that the first lens module 10 and the second lens module 20 capture multiple images continuously, so the images processed as above and output to the display unit 93 form a video. The camera device 60 of the invention can therefore output a video in which the size of the main subject (the object to be tracked 30) remains constant.
The embodiment is now described with reference to the first to third schematic diagrams of Figs. 2, 3 and 4, together with Fig. 1. First, the tracking-object selection unit 40 determines the object to be tracked 30 in the first wide-field image 11 of Fig. 2 and the first narrow-field image 21 of Fig. 3, and the image scaling and deformation unit 80 selects either the first wide-field image 11 or the first narrow-field image 21 as the first output image 41, taking the area of the object to be tracked 30 in this first output image 41 as the reference area 82. In this embodiment, the first narrow-field image 21 is used as the first output image 41.
Next, the tracking-feature capture unit 50 captures a plurality of first image features 32 of the object to be tracked 30 from the first wide-field image 11 of Fig. 2 and the first narrow-field image 21 of Fig. 3. The object tracking unit 51 then uses these first image features 32 to find the object to be tracked 30 in the second wide-field images 12, 13 and 14 of Fig. 2 and the second narrow-field images 22, 23 and 24 of Fig. 3. For example, the object tracking unit 51 locates the wide-field area 71, i.e., the region occupied by the object to be tracked 30, in the second wide-field images 12, 13, 14, and the narrow-field area 72 in the narrow-field images 22, 23, 24.
Then, the feature capture unit 70 finds a plurality of image features in the second wide-field images 12, 13, 14 and the narrow-field images 22, 23, 24. These features are contained in the inputs of both lens modules and are used to match the corresponding regions 73, 74 between the wide-field and narrow-field images, i.e., to determine which regions of the second wide-field images 12, 13, 14 correspond to the second narrow-field images 22, 23, 24.
Next, the image scaling and deformation unit 80 compares the wide-field area 71 and the narrow-field area 72 against the reference area 82 to determine which is closer to the reference area 82. In practice, the image scaling and deformation unit 80 may compare the pixel areas of these regions with that of the reference area 82: whichever of the wide-field area 71 and the narrow-field area 72 has an area ratio to the reference area 82 closest to 1 is regarded as closer to the reference area 82. For example, suppose the area of the object to be tracked 30 in the narrow-field area 72 is 1.2 times the reference area 82, while its area in the wide-field area 71 is 0.95 times the reference area 82. Since 0.95 is closer to 1 than 1.2, the image scaling and deformation unit 80 takes the second wide-field image 12 as the main image for subsequent fusion processing.
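The worked example above (0.95 versus 1.2) reduces to picking the candidate whose area ratio is closest to 1; a hypothetical one-line helper:

```python
def closest_to_reference(ratios):
    """ratios: dict mapping a view name to the ratio of the tracked
    object's area in that view to the reference area. Returns the view
    whose ratio is closest to 1."""
    return min(ratios, key=lambda name: abs(ratios[name] - 1))

main = closest_to_reference({"wide": 0.95, "narrow": 1.2})
```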
Alternatively, the image scaling and deformation unit 80 may compare the number of image features in these regions with that of the reference area 82. For example, when the first lens module 10 and the second lens module 20 are close to the photographed object, part of the object may fall outside the second narrow-field image 22 and not be captured, meaning that some image features do not appear in the image; such an image is not suitable for subsequent processing. The image scaling and deformation unit 80 therefore selects the image whose feature count is closest to that of the reference area 82.
The above uses region pixel area or image-feature count as examples, but the invention is not limited to these; the two criteria may also be combined.
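A mixed criterion, as the paragraph above permits, might first discard a view that has lost too many matched features and only then compare areas. The helper below, including its 0.8 feature-fraction cutoff, is an illustrative assumption rather than a rule from the patent:

```python
def select_main(candidates, reference_area, min_feature_frac=0.8):
    """candidates: dict mapping view name -> (object_area, matched_features).
    A view keeping fewer than min_feature_frac of the best feature count
    is assumed to have lost part of the object and is excluded; the
    remaining view with area ratio closest to 1 wins."""
    max_feats = max(f for _, f in candidates.values())
    usable = {n: a for n, (a, f) in candidates.items()
              if f >= min_feature_frac * max_feats}
    return min(usable, key=lambda n: abs(usable[n] / reference_area - 1))
```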
When the image scaling and deformation unit 80 judges that the narrow-field area 72 of the object to be tracked 30 in the second narrow-field image 22 is closer to the reference area 82, it first scales and deforms the second narrow-field image 22 according to the relative scale of the narrow-field area 72 to the reference area 82, and scales and deforms the wide-field corresponding region 73 of the second wide-field image 12 that corresponds to the narrow-field area 72. Conversely, if the wide-field area 71 of the object to be tracked 30 in the second wide-field image 12 is closer to the reference area 82, the image scaling and deformation unit 80 scales and deforms the second wide-field image 12 according to the relative scale of the wide-field area 71 to the reference area 82, and scales and deforms the narrow-field corresponding region 74 of the second narrow-field image 22 that corresponds to the wide-field area 71.
Finally, the image fusion unit 90 fuses the scaled and deformed second narrow-field image 22 with the wide-field corresponding region 73 to produce a second output image 25, in which the size of the object to be tracked 30 remains substantially unchanged while the scenery around it appears to zoom.
To elaborate, Figs. 2 and 3 simulate the camera device 60 gradually approaching the object to be tracked 30: in the original first narrow-field image 21 and second narrow-field images 22, 23 and 24, and in the first wide-field image 11 and second wide-field images 12, 13 and 14, the object to be tracked 30 gradually becomes larger, and every object in the narrow-field images also appears in the wide-field images. In Fig. 4A, the first output image 41 is produced from the first narrow-field image 21. Figs. 4B and 4C show different second output images 42 and 43. The second output image 42 combines the reduced second narrow-field image 22 (the dotted frame 92 marks the extent of the reduced second narrow-field image 22, and the supplement region 91 fills the gap between the second output image 42 and the reduced image) with the wide-field corresponding region 73 of the second wide-field image 12. In Fig. 4C, the second output image 43 is produced by fusing the scaled and deformed second wide-field image 13 with the narrow-field corresponding region (not shown) of the second narrow-field image 23; besides the object to be tracked 30, this second output image 43 contains a triangular object in its background. As can be seen from Figs. 4B and 4C, the size of the object to be tracked 30 changes little, while the surrounding scenery changes considerably.
Further, as shown in Fig. 4D, if the wide-field area 71 in the second wide-field image 13 is closer in size to the reference area 82, the second wide-field image 13 is first scaled and deformed according to the ratio of the wide-field area 71 to the reference area 82. If the user wants to magnify the second output image 44 by 1.5 times, the narrow-field corresponding region (not shown) of the second narrow-field image 23 that maps to the wide-field area 71 is simultaneously enlarged to the corresponding size, i.e., 1.5 times, and this enlarged narrow-field corresponding region is fused with the second wide-field image 13, improving the overall image quality of the second output image 44 after the 1.5x magnification.
In practice, the image scaling and deformation unit 80 may have a default ratio threshold. When the image scaling and deformation unit 80 judges that the wide-field area 71 of the object to be tracked 30 in the second wide-field image 12 is closer to the reference area 82, and the proportional difference between the wide-field area 71 and the reference area 82 is less than the ratio threshold, the second wide-field image 12 and the second narrow-field image 22 are fused directly to produce the first output image 41, after which the subsequent image processing proceeds.
Refer to Fig. 5, which is a flow chart of an embodiment of the object tracking image processing method of the invention. With reference also to Fig. 1, the method comprises the following steps.
In step S1, a first wide-field image 11 and a first narrow-field image 21 are received, and an object to be tracked 30 is determined from the first wide-field image 11 and the first narrow-field image 21.
In step S2, a first output image 41 is selected from the first wide-field image 11 and the first narrow-field image 21, and the area of the object to be tracked 30 in the first output image 41 is taken as a reference area 82.
In step S3, a second wide-field image 12 and a second narrow-field image 22 are received, and the wide-field area 71 of the object to be tracked 30 in the second wide-field image 12 and its narrow-field area 72 in the second narrow-field image 22 are compared against the reference area 82.
In step S4, when the wide-field area 71 is closer to the reference area 82, the second wide-field image 12 is scaled and deformed according to the ratio of the wide-field area 71 to the reference area 82, and the narrow-field corresponding region 74 of the second narrow-field image 22 that corresponds to the wide-field area 71 is scaled and deformed.
In step S5, when the narrow-field area 72 is closer to the reference area 82, the second narrow-field image 22 is scaled and deformed according to the ratio of the narrow-field area 72 to the reference area 82, and the wide-field corresponding region 73 of the second wide-field image 12 that corresponds to the narrow-field area 72 is scaled and deformed.
In step S6, merge the second wide field image 12 after convergent-divergent and deformation process and Narrow Field Of Vision corresponding region 74 or the second Narrow Field Of Vision image 22 after convergent-divergent and deformation process with corresponding region, the wide visual field 73 to export one second image output 25.
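The main-image selection and scaling in steps S3 to S5 can be sketched as follows. This is a minimal illustration only, assuming the object's pixel area in each frame has already been measured by the tracker; the function and variable names are hypothetical and do not come from the patent:

```python
import math

def choose_main_and_scale(wide_area, narrow_area, ref_area):
    """Pick the frame whose object-area ratio to the reference area is
    closest to 1 (step S3), then derive the linear zoom factor that
    restores the object to its reference size (steps S4/S5)."""
    wide_ratio = wide_area / ref_area
    narrow_ratio = narrow_area / ref_area
    # The image whose area ratio is closest to 1 becomes the main image.
    if abs(wide_ratio - 1.0) <= abs(narrow_ratio - 1.0):
        main, ratio = "wide", wide_ratio
    else:
        main, ratio = "narrow", narrow_ratio
    # Area scales with the square of linear dimensions, so the linear
    # scale factor applied to the main image is sqrt(ref_area / area).
    scale = math.sqrt(1.0 / ratio)
    return main, scale
```

The same scale factor would then be applied to the corresponding region of the other image before fusion in step S6.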
The first wide-field image 11 and the second wide-field image 12 are captured continuously by a wide-angle camera, and the first narrow-field image 21 and the second narrow-field image 22 are captured continuously by a non-wide-angle camera; the wide-angle camera and the non-wide-angle camera are mounted on a camera device 60. The second output image 25 may be one frame of the continuous video stored when the camera device 60 is recording, or one frame of the continuous video shown on a screen of the camera device 60 during preview.
In the resulting continuous video, the size of the object to be tracked 30 remains substantially unchanged, while the scenery other than the object to be tracked 30 exhibits a zooming effect.
Here, whichever of the wide-field area size 71 and the narrow-field area size 72 of the object to be tracked 30 has a ratio to the reference area 82 closest to 1 is regarded as the one closer to the reference area 82.
Alternatively, whichever of the wide-field area size 71 and the narrow-field area size 72 contains more matched image features of the object to be tracked 30 may be regarded as the one closer to the reference area 82.
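The feature-based selection rule just described amounts to counting how many of the tracked object's image features reappear in each frame. A hedged sketch with hypothetical names, representing features simply as set elements:

```python
def pick_by_feature_matches(object_features, wide_features, narrow_features):
    """Return which frame re-finds more of the object's image features;
    that frame is treated as closer to the reference area."""
    wide_hits = len(object_features & wide_features)
    narrow_hits = len(object_features & narrow_features)
    return "wide" if wide_hits >= narrow_hits else "narrow"
```

In a real tracker the set intersections would be replaced by descriptor matching, but the selection logic is the same.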
In addition, in implementation, a ratio threshold may be set in advance. In step S4, when the wide-field area size 71 of the object to be tracked 30 is closer to the reference area 82 and the ratio difference between the wide-field area size 71 and the reference area 82 is greater than the ratio threshold, the second wide-field image 12 is first subjected to enlarging and deformation processing.
In step S5, when the area size of the object to be tracked 30 in the second narrow-field image 22 is closer to the reference area 82 and the ratio difference between that area size and the reference area 82 is greater than the ratio threshold, the second narrow-field image 22 is subjected to reduction and deformation processing.
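The threshold check in the two preceding paragraphs gates the enlarge/reduce pass on the relative area change. A minimal sketch, with the threshold value chosen arbitrarily for illustration (the patent does not specify one):

```python
def needs_rescale(area, ref_area, ratio_threshold=0.1):
    """True when the ratio difference between the object's current area
    and the reference area exceeds the preset ratio threshold, i.e. when
    the scale-and-deform processing should actually run."""
    return abs(area / ref_area - 1.0) > ratio_threshold
```

Gating on a threshold avoids rescaling on every frame when the object's apparent size has barely changed.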
The specific embodiments set forth in the detailed description of the preferred embodiments are intended only to illustrate the technical content of the present invention, not to limit the present invention narrowly to the embodiments above. Variations implemented without departing from the spirit of the present invention and the scope of the appended claims all fall within the protection scope of the present invention.
Claims (9)
1. An object tracking image processing method, characterized in that it comprises:
(a) receiving a first wide-field image and a first narrow-field image, and determining an object to be tracked from the first wide-field image and the first narrow-field image;
(b) selecting a first output image from the first wide-field image and the first narrow-field image, and taking the area size of the object to be tracked in the first output image as a reference area;
(c) receiving a second wide-field image and a second narrow-field image, and comparing a wide-field area size of the object to be tracked in the second wide-field image and a narrow-field area size of the object to be tracked in the second narrow-field image with the reference area;
(d) when the wide-field area size is closer to the reference area, scaling and deforming the second wide-field image according to the ratio of the wide-field area size to the reference area, and scaling and deforming a narrow-field corresponding region of the second narrow-field image that maps to the wide-field area size;
(e) when the narrow-field area size is closer to the reference area, scaling and deforming the second narrow-field image according to the ratio of the narrow-field area size to the reference area, and scaling and deforming a wide-field corresponding region of the second wide-field image that maps to the narrow-field area size; and
(f) fusing the scaled and deformed second wide-field image and the narrow-field corresponding region, or the scaled and deformed second narrow-field image and the wide-field corresponding region, to output a second output image.
2. The object tracking image processing method as claimed in claim 1, characterized in that the first wide-field image and the second wide-field image are captured continuously by a wide-angle camera, the first narrow-field image and the second narrow-field image are captured continuously by a non-wide-angle camera, the wide-angle camera and the non-wide-angle camera are mounted on a camera device, and the first output image and the second output image are continuous images stored when the camera device records video, or continuous images displayed on a screen of the camera device during preview.
3. The object tracking image processing method as claimed in claim 1, characterized in that whichever of the narrow-field area size and the wide-field area size has a ratio to the reference area closest to 1 is regarded as closer to the reference area.
4. The object tracking image processing method as claimed in claim 1, characterized in that whichever of the second narrow-field image and the second wide-field image contains more matched image features of the object to be tracked is regarded as closer to the reference area.
5. An object tracking image processing system, characterized in that it comprises:
a first lens module, capturing a first wide-field image and a second wide-field image;
a second lens module, capturing a first narrow-field image and a second narrow-field image;
a tracking object selection unit, determining an object to be tracked from the first wide-field image and the first narrow-field image;
a tracking feature capture unit, capturing a plurality of first image features of the object to be tracked;
an object tracking unit, finding the object to be tracked in the first wide-field image, the second wide-field image, the first narrow-field image and the second narrow-field image according to the plurality of first image features, and finding the wide-field area size and the narrow-field area size of the object to be tracked in the second wide-field image and the second narrow-field image respectively;
a feature extraction unit, finding a plurality of image features in the second wide-field image and the second narrow-field image respectively, so as to find the corresponding regions of the second wide-field image and the second narrow-field image;
an image scaling and deformation unit, selecting a first output image from the first wide-field image and the first narrow-field image, taking the area size of the object to be tracked in the first output image as a reference area, and comparing the wide-field area size and the narrow-field area size with the reference area; when the image scaling and deformation unit judges that the wide-field area size is closer to the reference area, the image scaling and deformation unit scales and deforms the second wide-field image according to the ratio of the wide-field area size to the reference area, and scales and deforms a narrow-field corresponding region of the second narrow-field image that maps to the wide-field area size; when the image scaling and deformation unit judges that the narrow-field area size is closer to the reference area, the image scaling and deformation unit scales and deforms the second narrow-field image according to the ratio of the narrow-field area size to the reference area, and scales and deforms a wide-field corresponding region of the second wide-field image that maps to the narrow-field area size; and
an image fusion unit, fusing the scaled and deformed second wide-field image and the narrow-field corresponding region, or the scaled and deformed second narrow-field image and the wide-field corresponding region, to output a second output image.
6. The object tracking image processing system as claimed in claim 5, characterized in that whichever of the narrow-field area size and the wide-field area size has a ratio to the reference area closest to 1 is regarded as closer to the reference area.
7. The object tracking image processing system as claimed in claim 5, characterized in that whichever of the second narrow-field image and the second wide-field image contains more matched image features of the object to be tracked is regarded as closer to the reference area.
8. The object tracking image processing system as claimed in claim 5, characterized in that the image scaling and deformation unit presets a ratio threshold; when the image scaling and deformation unit judges that the wide-field area size is closer to the reference area and the ratio difference between the wide-field area size and the reference area is greater than the ratio threshold, the second wide-field image is subjected to enlarging and deformation processing.
9. The object tracking image processing system as claimed in claim 5, characterized in that the image scaling and deformation unit presets a ratio threshold; when the image scaling and deformation unit judges that the narrow-field area size is closer to the reference area and the ratio difference between the narrow-field area size and the reference area is greater than the ratio threshold, the second narrow-field image is subjected to reduction and deformation processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410526672.6A CN105578015B (en) | 2014-10-09 | 2014-10-09 | Object tracing image processing method and its system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105578015A true CN105578015A (en) | 2016-05-11 |
CN105578015B CN105578015B (en) | 2018-12-21 |