CN104103062A - Image processing device and image processing method - Google Patents
Image processing device and image processing method
- Publication number
- CN104103062A (application CN201310119788.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- frame
- imaging plane
- depth
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides an image processing device and an image processing method. The image processing device comprises an object extraction unit, a registration unit, a feature extraction unit and a depth determination unit. The object extraction unit extracts an image of a target object from a frame of a monocular video sequence to serve as an object image. The registration unit registers an observed object image against a reference object image to obtain registration parameters, where the reference object image and the observed object image are the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively. The feature extraction unit uses the registration parameters to extract a feature reflecting the depth change of the observed object image, compared with the reference object image, relative to the imaging plane. The depth determination unit determines the depth change on the basis of the extracted feature.
Description
Technical field
The present disclosure relates generally to video image processing, and more particularly to an image processing device and an image processing method capable of determining, from a monocular video sequence, the depth change of a target object relative to an imaging plane.
Background technology
Detecting a target object in an image and estimating its distance from the camera are widely used in many fields, for example vision-based monitoring, obstacle detection and human-machine interaction. Most traditional target-depth detection techniques estimate the target depth based on the principle of stereoscopic vision. Such techniques require two calibrated cameras, and detect the depth of the target object by analyzing the correspondence between the images from the two cameras.
Conventional techniques seldom address monocular depth estimation. One known method of target-depth estimation from monocular images discloses the following scheme: moving targets are detected based on assumptions about the motion properties of the moving target objects, such as maximum speed, minimum speed variation, consistency and continuous motion; this detection outputs the position of each moving target object in two consecutive images of the monocular image sequence; then, from the output positions, the distance of each moving target is estimated by an over-constrained method.
Summary of the invention
In prior-art schemes such as the above, depth estimation relies on the motion features of the object; the algorithm is relatively complex and the accuracy is not high.
An object of the present invention is to provide an image processing device and an image processing method that perform depth detection without motion prediction, by considering the variation, across frames, of a feature of the object image that is associated with the depth change of the target object, thereby replacing the analysis of image motion features.
According to an aspect of the present disclosure, an image processing device is provided, comprising: an object extraction unit for extracting an image of a target object from a frame of a monocular video sequence as an object image; a registration unit for registering an observed object image against a reference object image to obtain registration parameters, the reference object image and the observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively; a feature extraction unit for extracting, by using the registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and a depth determination unit for determining the depth change based on the extracted feature.
According to an embodiment of the present disclosure, the feature extraction unit may extract, as the feature, the zooming parameter among the registration parameters that reflects the change in object image size.
According to another embodiment of the present disclosure, let S denote the zooming parameter of an observation frame relative to the reference frame: when S > 1 + ε_a, the depth determination unit determines that the target object becomes shallower relative to the imaging plane; when S < 1 − ε_b, the depth determination unit determines that the target object becomes deeper relative to the imaging plane; otherwise, the depth determination unit determines that the depth change of the target object relative to the imaging plane is uncertain. Here, ε_a and ε_b are small positive numbers serving as decision margins.
According to another embodiment of the present disclosure, the image processing device may further comprise a motion direction recognition unit for recognizing the motion direction of the target object relative to the imaging plane. When the image processing device is made to perform depth determination sequentially on the observation frames within a predetermined period, the motion direction recognition unit may recognize the motion direction from the sequence of results produced by the depth determination unit.
According to another embodiment of the present disclosure, when a first predetermined number or more of consecutive "deeper" results appear in the result sequence, the motion direction recognition unit may recognize that the target object moves in the direction away from the imaging plane; and when a second predetermined number or more of consecutive "shallower" results appear in the result sequence, the motion direction recognition unit may recognize that the target object moves in the direction toward the imaging plane.
According to another embodiment of the present disclosure, when a third predetermined number or more of consecutive "deeper" results appear in the result sequence and the continued product of the corresponding zooming parameters S is less than a first threshold, the motion direction recognition unit may recognize that the target object moves in the direction away from the imaging plane; and when a fourth predetermined number or more of consecutive "shallower" results appear in the result sequence and the continued product of the corresponding zooming parameters S is greater than a second threshold, the motion direction recognition unit may recognize that the target object moves in the direction toward the imaging plane.
According to another embodiment of the present disclosure, the feature extraction unit may comprise: an alignment unit for aligning the observed object image with the reference object image according to the translation parameters among the registration parameters; and a histogram generation unit for generating a histogram of oriented gradients for the edge portion of the aligned object images, as the feature capable of reflecting the depth change.
According to another embodiment of the present disclosure, the object extraction unit may comprise a detection unit. The detection unit detects the object image using sliding windows of multiple sizes, so as to determine the object image region.
According to another embodiment of the present disclosure, the extraction unit may comprise a segmentation unit. The segmentation unit performs segmentation on the object image region, so as to extract the object image.
According to another embodiment of the present disclosure, the detection unit may detect the object image using a trained target object detector.
According to another aspect of the present disclosure, an image processing method is provided, comprising: extracting an image of a target object from a frame of a monocular video sequence as an object image; registering an observed object image against a reference object image to obtain registration parameters, the reference object image and the observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively; extracting, by using the registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and determining the depth change based on the extracted feature.
By implementing the image processing device and image processing method according to the present invention, the algorithm for target depth detection can be simplified, the relative depth change of the target object can be detected in real time, and the detection accuracy can be improved.
Brief description of the drawings
Embodiments of the invention are described below with reference to the accompanying drawings, from which the above and other objects, features and advantages of the present invention can be understood more easily. In the drawings, identical or corresponding technical features or components are denoted by identical or corresponding reference numerals. The sizes and relative positions of units in the drawings are not necessarily drawn to scale.
Fig. 1 is a schematic diagram illustrating the principle on which the present invention is based.
Fig. 2 is a block diagram illustrating the structure of an image processing device 200 for depth detection according to an embodiment of the disclosure.
Fig. 3 illustrates a window in which a hand is detected, and the image of the hand after segmentation of that window.
Fig. 4 is a block diagram illustrating the structure of a feature extraction unit 400 according to an embodiment of the disclosure.
Fig. 5 is a schematic diagram illustrating the alignment of images and the generation of edge-feature histograms by the feature extraction unit 400.
Fig. 6 is a block diagram illustrating the structure of an image processing device 600 for depth detection according to an embodiment of the disclosure.
Fig. 7 is a flowchart illustrating an image processing method for depth detection according to an embodiment of the disclosure.
Fig. 8 is a flowchart illustrating an image processing method for depth detection according to an embodiment of the disclosure.
Fig. 9 is a block diagram illustrating an exemplary structure of a computer implementing the present disclosure.
Embodiment
Embodiments of the invention are described below with reference to the accompanying drawings. It should be noted that, for the sake of clarity, representations and descriptions of components and processes that are unrelated to the invention or well known to those skilled in the art are omitted from the drawings and the description.
For convenience, the following description uses a hand as an example of the target object. In a human-machine interaction system, events can be triggered by pushing the hand toward, or pulling it back from, the camera. Whether the hand is pushed forward or pulled back can be determined by detecting the change in the distance (depth) of the hand relative to the imaging plane of the camera. It will be appreciated that the solution of the present invention can be applied to any other target object, such as a vehicle, a person (whole body or part), or any pointing device such as a pointing rod, and to any other application scenario, such as vision-based monitoring or obstacle detection.
Fig. 1 is a schematic diagram illustrating the principle on which the present invention is based. Given an image sequence captured by a monocular camera, suppose the object of depth detection along the optical axis is a hand appearing in the image sequence. Fig. 1 illustrates, for a "push forward" action, the depth change between two adjacent frames (frame t−1 and frame t) and the corresponding change in imaging size on the image plane. In Fig. 1, d denotes the distance from the hand to the camera; f denotes the focal length of the camera; h denotes the typical size of the hand; s denotes the size of the image formed on the imaging plane by a hand of size h at distance d; Δd denotes the distance the hand moves between the two frames, more precisely the distance the hand moves in the depth direction (the optical axis direction) relative to the imaging plane; and Δs denotes the change in imaging size on the image plane in response to the hand's movement in the depth direction.
From the geometric proportions in the figure, the relation between the depth change Δd of the hand and the change Δs of its imaging size on the image plane is given by formula (1):

Δs/s = Δd/(d − Δd) ≈ Δd/d    (1)

where the approximation "≈" is justified because Δd << d when comparing two successive frames or frames that are close together. It can be seen from formula (1) that the rate of change of the depth of the hand can be estimated by detecting the rate of change of its imaging size.
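For reference, the pinhole-model derivation behind formula (1) can be written out as follows (a sketch assuming the ideal pinhole geometry of Fig. 1):

```latex
% Pinhole imaging: an object of size h at depth d forms an image of size s.
s = \frac{f h}{d}, \qquad s + \Delta s = \frac{f h}{d - \Delta d}
% Subtracting the first equation from the second and dividing by s gives:
\frac{\Delta s}{s} = \frac{\Delta d}{d - \Delta d} \approx \frac{\Delta d}{d}
\quad \text{when } \Delta d \ll d. \tag{1}
```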
In addition, a depth change of an object can be associated with the resulting change in the size of the object image as follows. Suppose the object images corresponding to different depths are aligned so that corresponding parts coincide. When the observed object image (the object image extracted from the observation frame) becomes shallower relative to the imaging plane compared with the reference object image (the object image extracted from the reference frame), the observed object image will completely cover the reference object image in the aligned image. Conversely, when the observed object image becomes deeper relative to the imaging plane compared with the reference object image, the observed object image will be smaller than the reference object image and thus cannot completely cover it. In these two cases, the edge portions of the aligned object images exhibit different characteristics; extracting these characteristics yields clearly different histograms. In other words, the depth change of the observed object image relative to the reference object image can be determined from the histogram of the edge features of the aligned object images.
The present invention has been made in view of the above considerations.
Fig. 2 shows a block diagram of the structure of an image processing device 200 for determining the depth change of a target object relative to the imaging plane according to an embodiment of the present invention.
As shown in Fig. 2, the image processing device 200 comprises an object extraction unit 201, a registration unit 202, a feature extraction unit 203 and a depth determination unit 204.
The object extraction unit 201 extracts the image of the target object from a frame of the monocular video sequence as the object image for subsequent processing. Various image extraction methods well known in the art can be adopted in the extraction unit 201, as long as the image of the target object can be identified and isolated from the video frame so as to meet the needs of subsequent processing.
In one embodiment, the extraction unit 201 may comprise a detection unit (not shown) for detecting the object image from the video frame. For example, the detection unit may detect the object image using sliding windows of multiple sizes, so as to determine the region in which the object image lies. Specifically, the detection unit may scan the video frame with a sliding window of a specific size, extract features from the image content inside the window, and feed the extracted features into a classifier to decide whether the window contains the object image. After the scan of the whole frame with that window is completed, the window size is adjusted and the scanning, extraction and decision steps are repeated until the object image region has been determined.
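By way of illustration, a minimal Python sketch of such a multi-size sliding-window scan is given below. The `classifier` (an sklearn-style object with `decision_function`) and the `extract_features` callable are hypothetical placeholders, not part of the disclosure.

```python
import numpy as np

def detect_object(frame, classifier, extract_features,
                  window_sizes=((32, 32), (48, 48), (64, 64)), stride=8):
    """Scan the frame with sliding windows of several sizes and return the
    window judged most likely to contain the target object image."""
    best_score, best_box = -np.inf, None
    h, w = frame.shape[:2]
    for win_h, win_w in window_sizes:
        for y in range(0, h - win_h + 1, stride):
            for x in range(0, w - win_w + 1, stride):
                patch = frame[y:y + win_h, x:x + win_w]
                feat = extract_features(patch).reshape(1, -1)
                score = classifier.decision_function(feat)[0]
                if score > best_score:
                    best_score, best_box = score, (x, y, win_w, win_h)
    return best_box, best_score
```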
The classifier used by the detection unit can be constructed and trained with any customary features. In some embodiments, in order to detect the target object more accurately, a trained target object detector may be used to detect the object image. Trained with standard machine-learning techniques such as support vector machines, the target object detector can detect the target object image more accurately. This is particularly suitable for target objects with little texture or unclear edges, such as a hand, and for scenes with complex backgrounds or varying lighting. Moreover, the use of a trained target object detector also makes it possible to detect non-rigid target objects. An example of a non-rigid object is a hand that changes posture during motion, or that rotates out of the image plane.
In some embodiments, in order to improve the accuracy of the subsequent depth change detection, the extraction unit 201 may further comprise a segmentation unit (not shown). The segmentation unit performs segmentation on the detected object image region (for example, the window in which the object image was detected) so as to separate the object image, as foreground, from the background of that region. The segmentation can be carried out in any of the customary ways in the art. For example, in the embodiment that takes a hand as the target object, a skin-color model may be constructed to separate foreground from background in the detected window containing the hand. Fig. 3 illustrates a sliding window in which a hand is detected, and the image of the hand obtained as the object image after segmentation of that window region. Segmenting the hand image effectively reduces the noise introduced into the subsequent depth detection process, and thus improves the accuracy of the depth detection result.
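For the skin-color segmentation mentioned above, a minimal OpenCV sketch follows; the HSV thresholds are assumed values for illustration, not taken from the disclosure.

```python
import cv2
import numpy as np

def segment_hand(window_bgr):
    """Separate the hand (foreground) from the background inside a detected
    window using a simple HSV skin-color model."""
    hsv = cv2.cvtColor(window_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed skin-tone range
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean the mask with a small morphological opening and closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(window_bgr, window_bgr, mask=mask), mask
```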
Returning to Fig. 2: a frame of the monocular video sequence is taken as the reference frame, a frame other than the reference frame is taken as an observation frame, the object image extracted from the reference frame is called the reference object image, and the object image extracted from the observation frame is called the observed object image. The registration unit 202 registers the observed object image against the reference object image, thereby obtaining the corresponding registration parameters. The registration parameters generally comprise: translation parameters indicating the translation between the object images, a rotation parameter indicating in-plane rotation of the object image, and a zooming parameter indicating the change in size of the object image.
The registration unit 202 can perform the registration processing according to various methods known in the art; see, for example, J. Lee, S. S. Young, R. Gutierrez-Osuna, "An Iterative Image Registration Technique Using a Scale-Space Model", Technical Report, CSE Department, Texas A&M University, 2011. In the embodiment that detects a hand, considering the low resolution of the hand image, the region-based image registration method described in the above document may be adopted to match the pixel intensities of the two images directly as a whole; this is achieved by embedding a scale-space model into a nonlinear least-squares framework.
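The disclosure gives no code for this step; the sketch below uses OpenCV's ECC alignment (`cv2.findTransformECC`) as a stand-in region-based registration, and recovers the translation parameters and a zooming parameter S from the estimated affine warp.

```python
import cv2
import numpy as np

def register_and_get_scale(ref_gray, obs_gray):
    """Register the observed object image against the reference object image
    and return the warp, the zooming parameter S and the translation."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, obs_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    A = warp[:, :2]                             # linear part: rotation + scale
    S = float(np.sqrt(abs(np.linalg.det(A))))   # isotropic scale estimate
    tx, ty = float(warp[0, 2]), float(warp[1, 2])
    return warp, S, (tx, ty)
```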
After the registration unit 202 has obtained the registration parameters between the observed object image and the reference object image, the feature extraction unit 203 uses the registration parameters from the registration unit 202 to extract a feature that can reflect the depth change of the observed object image relative to the imaging plane compared with the reference object image.
In one embodiment, the feature extraction unit 203 may directly extract the zooming parameter S, which reflects the change in object image size among the registration parameters, as the feature reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image.
In the case where the zooming parameter S is used as the feature reflecting the depth change, the depth determination unit 204 may determine the depth change of the observed object image compared with the reference object image as follows: when S > 1 + ε_a, it determines that the target object becomes shallower relative to the imaging plane; when S < 1 − ε_b, it determines that the target object becomes deeper relative to the imaging plane; otherwise, it determines that the depth change of the target object relative to the imaging plane is uncertain. Here, ε_a and ε_b are small positive numbers serving as decision margins and may take identical or different values; for example, ε_a and ε_b may be set to 0.05 or 0.1, as determined by the specific design requirements.
As an example, with ε_a and ε_b both set to 0.1: when the zooming parameter S = 1.15, the depth determination unit 204 determines the depth change as "shallower" (S > 1 + ε_a = 1.1); when S = 0.8, it determines the depth change as "deeper" (S < 1 − ε_b = 0.9); and when S = 1.05, it determines the depth change as "uncertain" (0.9 < S < 1.1).
In a further embodiment, instead of directly extracting the zooming parameter S as the feature for determining the depth change, the feature extraction unit 203 may align the object images to be compared according to the translation parameters among the registration parameters from the registration unit 202, and then analyze the edge portion of the aligned image to extract the corresponding feature.
Fig. 4 is a block diagram illustrating the structure of a feature extraction unit 400 as an example of the feature extraction unit 203. The feature extraction unit 400 may comprise an alignment unit 401 and a histogram generation unit 402. The alignment unit 401 aligns the observed object image with the reference object image according to the translation parameters among the registration parameters. The histogram generation unit 402 generates a histogram of oriented gradients for the edge portion of the aligned object images, as the feature reflecting the depth change. The histogram generation unit 402 may adopt any method well known in the art to extract the features of the edge portion of the aligned object images and generate the corresponding histogram.
Refer to Fig. 5. The left side of Fig. 5 shows the image obtained after the alignment unit 401 aligns the observed object image with the reference object image according to the translation parameters. Because the reference frame and the observation frame in this example are adjacent frames (or frames close together), the individual edges of the observed and reference object images cannot be distinguished by eye in the aligned image. However, by having the histogram generation unit 402 extract features from the edge portion of the aligned image and generate the corresponding histogram of oriented gradients, the size relationship between the observed and reference object images can be clearly told apart.
For example, histogram (a) in Fig. 5 schematically shows the histogram for an observed object image that is closer to the imaging plane ("shallower") than the reference object image, while histogram (b) in Fig. 5 schematically shows the histogram for an observed object image that is farther from the imaging plane ("deeper") than the reference object image. Comparing (a) and (b), the generated feature histograms differ markedly between the cases where the observed object image becomes shallower or deeper relative to the imaging plane compared with the reference object image. Note that the histograms shown in Fig. 5 are schematic; depending on the method used to extract the edge-portion features, histograms of different forms may be obtained, as long as they can reflect the different depth changes.
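A sketch of this alignment-and-histogram feature is given below, assuming (tx, ty) is the estimated translation of the observed image relative to the reference; the Canny thresholds and HOG parameters are illustrative.

```python
import cv2
import numpy as np
from skimage.feature import hog

def edge_hog_feature(ref_gray, obs_gray, tx, ty):
    """Align the observed object image to the reference by the translation
    parameters, overlay the edge maps of the two images, and describe the
    edge portion with a histogram of oriented gradients."""
    h, w = ref_gray.shape
    M = np.float32([[1, 0, -tx], [0, 1, -ty]])   # undo the estimated translation
    aligned = cv2.warpAffine(obs_gray, M, (w, h))
    edges = cv2.Canny(ref_gray, 50, 150) | cv2.Canny(aligned, 50, 150)
    return hog(edges, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```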
The depth determination unit 204 can then determine the inter-frame depth change of the object image from the histogram provided by the histogram generation unit 402.
In application scenarios such as human-machine interaction, determining the depth change between only two frames of the video sequence is not sufficient; rather, the motion direction of the target object relative to the imaging plane must be determined over the frames of a particular sequence within a specific period, and a corresponding operation is then triggered according to the motion direction of the target object. For example, taking a hand as the target object, a specific function may be turned on when the hand is determined to move toward the imaging plane of the camera (the hand is "pushed forward"), and turned off when the hand is determined to move away from the imaging plane of the camera (the hand is "pulled back"). To realize such an interactive function, in some embodiments the image processing device further comprises a motion direction recognition unit for recognizing the motion direction of the target object relative to the imaging plane.
Fig. 6 is a block diagram illustrating the structure of an image processing device 600 according to an embodiment of the present invention. The image processing device 600 comprises an object extraction unit 601, a registration unit 602, a feature extraction unit 603, a depth determination unit 604 and a motion direction recognition unit 605. Since the structures and functions of the object extraction unit 601, registration unit 602, feature extraction unit 603 and depth determination unit 604 are respectively the same as those of the object extraction unit 201, registration unit 202, feature extraction unit 203 and depth determination unit 204 described in connection with Fig. 2, their repeated description is omitted here and only the motion direction recognition unit 605 is described.
The motion direction recognition unit 605 can recognize the motion direction of the target object relative to the imaging plane. For example, the image processing device 600 is made to perform depth determination sequentially on the observation frames within a predetermined period, thereby obtaining a depth determination result for each observation frame. The motion direction recognition unit 605 can then recognize the motion direction from the sequence of depth determination results of the observation frames received from the depth determination unit 604.
It should be noted that, for a predetermined period, the reference frame may be taken from among the video frames within that period; for example, but without limitation, the first or the last frame of the period may be taken as the reference frame, and the frames in the period other than the reference frame then serve as the observation frames. Alternatively, a frame outside the period, for example the frame immediately preceding the period, may be taken as the reference frame, and all frames within the period then serve as observation frames. This can be decided according to design needs. Although a frame in the middle of the video sequence corresponding to the period could also be taken as the reference frame, for convenience a video frame located at one end of the observation frame sequence is usually taken as the reference frame. Embodiments of the recognition performed by the motion direction recognition unit 605 are illustrated below.
In one embodiment, the earliest frame in the predetermined period is taken as the reference frame, the subsequent frames serve as observation frames, and the image processing device 600 performs depth determination processing on each observation frame in order. Since the object extraction, registration, feature extraction and depth determination processing applied to an observation frame have been fully described above, their description is omitted here. The depth determination unit 604 provides the depth determination result obtained for each observation frame to the motion direction recognition unit 605, and the motion direction recognition unit 605 recognizes the motion direction from the resulting sequence of depth determination results.
For example, when n or more consecutive "deeper" results appear in the depth determination result sequence, the motion direction recognition unit recognizes that the target object moves in the direction away from the imaging plane; for example, it recognizes that the hand performs a pull-back motion. When m or more consecutive "shallower" results appear in the depth determination result sequence, the motion direction recognition unit recognizes that the target object moves in the direction toward the imaging plane; for example, it recognizes that the hand performs a push-forward motion. Here, n and m are predefined positive integers, which may take identical or different values; their sizes can be decided according to design needs.
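A sketch of this run-length rule over the result sequence (the strings and default run lengths are illustrative):

```python
def recognize_direction(results, n=3, m=3):
    """Recognize the motion direction from a sequence of per-frame depth
    determinations ('deeper' / 'shallower' / 'uncertain')."""
    run_label, run_len = None, 0
    for r in results:
        run_len = run_len + 1 if r == run_label else 1
        run_label = r
        if run_label == "deeper" and run_len >= n:
            return "away from imaging plane"    # e.g. hand pulled back
        if run_label == "shallower" and run_len >= m:
            return "toward imaging plane"       # e.g. hand pushed forward
    return "no decision"
```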
In some embodiments in which the depth determination is made with the zooming parameter S, the motion direction of the target object may also be recognized from both the depth determination results and the zooming parameter S. For example, when K or more consecutive "deeper" results appear in the result sequence from the depth determination unit 604 and the continued product of the corresponding zooming parameters S_i (i = 1, ..., K) of those K observed object images is less than a predetermined threshold TH1, the motion direction recognition unit 605 can recognize that the target object moves in the direction away from the imaging plane. Likewise, when L or more consecutive "shallower" results appear in the result sequence from the depth determination unit 604 and the continued product of the corresponding zooming parameters S_j (j = 1, ..., L) of those L observed object images is greater than a predetermined threshold TH2, the motion direction recognition unit 605 can recognize that the target object moves in the direction toward the imaging plane. Here, K and L are predefined positive integers and may take identical or different values; the sizes of K, L, TH1 and TH2 can be decided according to design needs.
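The combined rule can be sketched as follows (thresholds TH1 and TH2 and the run lengths K and L are illustrative):

```python
import numpy as np

def recognize_direction_with_scale(results, scales, K=3, L=3,
                                   th1=0.8, th2=1.25):
    """Combine consecutive determination results with the continued product
    of the corresponding zooming parameters S; results[i] is the decision for
    observation frame i and scales[i] its zooming parameter."""
    run_label, start = None, 0
    for i, r in enumerate(results):
        if r != run_label:
            run_label, start = r, i
        run_len = i - start + 1
        prod = float(np.prod(scales[start:i + 1]))  # continued product of S
        if run_label == "deeper" and run_len >= K and prod < th1:
            return "away from imaging plane"
        if run_label == "shallower" and run_len >= L and prod > th2:
            return "toward imaging plane"
    return "no decision"
```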
Combining the number of consecutive occurrences of a specific determination result with the zooming parameters corresponding to those consecutive results to recognize the motion direction of the target object relative to the imaging plane improves the accuracy of the recognition result. In addition, by controlling the sizes of the predetermined thresholds TH1 and TH2, relatively slow or insignificant motions of the moving object relative to the imaging plane can be excluded.
Image processing methods used by image processing devices according to embodiments of the disclosure are described below in conjunction with Fig. 7 and Fig. 8.
Fig. 7 is a flowchart illustrating an image processing method for detecting target depth from a monocular video sequence according to an embodiment of the disclosure.
In step S701, the image of the target object is extracted from a frame of the monocular video sequence as the object image. Various image extraction methods well known in the art can be adopted, as long as the image of the target object can be identified and isolated from the video frame so as to meet the needs of subsequent processing.
In one embodiment, sliding windows of multiple sizes may be used to detect the object image, so as to determine the region in which the object image lies. Specifically, the video frame may be scanned with a sliding window of a specific size, features extracted from the image content inside the window, and the extracted features fed into a classifier to decide whether the window contains the object image. After the scan of the whole frame with that window is completed, the window size is adjusted and the scanning, extraction and decision steps are repeated until the object image region has been determined.
The classifier can be constructed and trained with any customary features. In some embodiments, in order to detect the target object more accurately, a trained target object detector may be used to detect the object image. Trained with standard machine-learning techniques such as support vector machines, the target object detector can detect the target object image more accurately.
In some embodiments, in order to improve the accuracy of the subsequent depth change detection, segmentation may further be performed on the detected object image region (for example, the window in which the object image was detected) so as to separate the object image, as foreground, from the background of that region. The segmentation can be carried out in any of the customary ways in the art. For example, in the embodiment that takes a hand as the target object, a skin-color model may be constructed to separate foreground from background in the detected window containing the hand. Segmenting the hand image effectively reduces the noise introduced into the subsequent depth detection process, and thus improves the accuracy of the depth detection result.
A frame of the video frame sequence is taken as the reference frame, and a frame other than the reference frame is taken as an observation frame. The object images extracted from the reference frame and the observation frame are called the reference object image and the observed object image, respectively. In step S702, the observed object image is registered against the reference object image, thereby obtaining the corresponding registration parameters. The registration processing can be performed according to various methods known in the art, and is not repeated here.
After the registration parameters between the observed object image and the reference object image have been obtained, in step S703 a feature that can reflect the depth change of the observed object image relative to the imaging plane compared with the reference object image is extracted by using the obtained registration parameters.
In one embodiment, the zooming parameter S, which reflects the change in object image size among the registration parameters, may be directly extracted as the feature reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image.
In the case where the zooming parameter S is used as the feature reflecting the depth change, the depth change of the observed object image compared with the reference object image may be determined as follows: when S > 1 + ε_a, it may be determined that the target object becomes shallower relative to the imaging plane; when S < 1 − ε_b, it may be determined that the target object becomes deeper relative to the imaging plane; otherwise, it may be determined that the depth change of the target object relative to the imaging plane is uncertain. Here, ε_a and ε_b are small positive numbers serving as decision margins and may take identical or different values; for example, ε_a and ε_b may be set to 0.05 or 0.1, as determined by the specific design requirements.
In a further embodiment, instead of directly extracting the zooming parameter S as the feature for determining the depth change, the object images to be compared may be aligned according to the translation parameters among the registration parameters, and the edge portion of the aligned image then analyzed to extract the corresponding feature.
For example, the observed object image may be aligned with the reference object image according to the translation parameters among the registration parameters; then a histogram of oriented gradients is generated for the edge portion of the aligned object images, as the feature reflecting the depth change. Any method well known in the art may be adopted to extract the features of the edge portion of the aligned object images and generate the corresponding histogram. In general, by extracting features from the edge portion of the aligned image and generating the corresponding histogram of oriented gradients, the size relationship between the observed and reference object images can be clearly told apart.
In step S704, the inter-frame depth change of the object image can be determined from the histogram obtained in step S703. The processing then ends.
According to the depth detection method of the embodiment illustrated in Fig. 7, the object depth can be detected in real time while keeping the computational load of video processing low.
In application scenarios such as human-machine interaction, it is sometimes necessary to determine the motion direction of the target object relative to the imaging plane over the frames of a particular sequence, so as to trigger a corresponding operation according to the motion direction of the target object. For example, taking a hand as the target object, a specific function may be turned on when the hand is determined to move toward the imaging plane of the camera (the hand is "pushed forward"), and turned off when the hand is determined to move away from the imaging plane of the camera (the hand is "pulled back").
Fig. 8 is a flowchart illustrating a method of recognizing the motion direction of the target object relative to the imaging plane according to an embodiment of the disclosure. Since the processing of steps S801-S804 in Fig. 8 is the same as that of steps S701-S704 described in conjunction with Fig. 7, it is not repeated here and only step S805 is described.
In step S805, the motion direction of the target object relative to the imaging plane is recognized. For example, the processing of steps S801-S804 may be performed sequentially on the observation frames within a predetermined period, thereby obtaining a depth determination result for each observation frame. Then, in step S805, the motion direction can be recognized from the sequence of depth determination results of the observation frames thus obtained.
It should be noted that, for a predetermined period, the reference frame may be taken from among the video frames within that period; for example, but without limitation, the first or the last frame of the period may be taken as the reference frame, and the frames in the period other than the reference frame then serve as the observation frames. Alternatively, a frame outside the period, for example the frame immediately preceding the period, may be taken as the reference frame, and all frames within the period then serve as observation frames. This can be decided according to design needs. Although a frame in the middle of the video sequence corresponding to the period could also be taken as the reference frame, for convenience a video frame located at one end of the observation frame sequence is usually taken as the reference frame. Embodiments of the motion direction recognition are illustrated below.
For example, when n or more consecutive "deeper" results appear in the depth determination result sequence, it can be recognized that the target object moves in the direction away from the imaging plane; for example, the hand is recognized as performing a pull-back motion. When m or more consecutive "shallower" results appear in the depth determination result sequence, it can be recognized that the target object moves in the direction toward the imaging plane; for example, the hand is recognized as performing a push-forward motion. Here, n and m are predefined positive integers, which may take identical or different values; their sizes can be decided according to design needs.
In some embodiments in which the depth determination is made with the zooming parameter S, the motion direction of the target object may also be recognized from both the depth determination results and the zooming parameter S. For example, when K or more consecutive "deeper" results appear in the determination result sequence obtained in step S804 and the continued product of the corresponding zooming parameters S_i (i = 1, ..., K) of those K observed object images is less than a predetermined threshold TH1, it can be recognized that the target object moves in the direction away from the imaging plane. Likewise, when L or more consecutive "shallower" results appear in the depth determination result sequence and the continued product of the corresponding zooming parameters S_j (j = 1, ..., L) of those L observed object images is greater than a predetermined threshold TH2, it can be recognized that the target object moves in the direction toward the imaging plane. Here, K and L are predefined positive integers and may take identical or different values; the sizes of K, L, TH1 and TH2 can be decided according to design needs.
Note that although in the above embodiments one frame of the video sequence is taken as the reference frame and the other frames are registered against it as observation frames, another scheme may also be adopted: the preceding frame of each pair in the video sequence serves as the reference frame and the following frame as the observation frame, and so on. This can be decided according to design needs.
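Tying the preceding sketches together, the per-frame loop under the frame-pair scheme just described might look as follows; all helper functions are the illustrative sketches above, not the disclosed implementation.

```python
import cv2

def process_sequence(frames, classifier, extract_features):
    """Detect the target in each frame, register it against the object image
    of the previous frame, and accumulate depth determinations and zooming
    parameters for motion direction recognition."""
    results, scales, prev_obj = [], [], None
    for frame in frames:
        box, _ = detect_object(frame, classifier, extract_features)
        if box is None:
            continue
        x, y, w, h = box
        obj_bgr, _ = segment_hand(frame[y:y + h, x:x + w])
        obj_gray = cv2.cvtColor(obj_bgr, cv2.COLOR_BGR2GRAY)
        if prev_obj is not None:
            _, S, _ = register_and_get_scale(prev_obj, obj_gray)
            results.append(depth_change(S))
            scales.append(S)
        prev_obj = obj_gray
    return recognize_direction_with_scale(results, scales)
```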
An exemplary structure of a computer implementing the data processing device of the present invention is described below with reference to Fig. 9. Fig. 9 is a block diagram illustrating an exemplary structure of such a computer.
In Fig. 9, a central processing unit (CPU) 901 performs various processing according to programs stored in a read-only memory (ROM) 902 or loaded from a storage section 908 into a random access memory (RAM) 903. Data required when the CPU 901 performs the various processing are also stored in the RAM 903 as needed.
The CPU 901, the ROM 902 and the RAM 903 are connected to one another via a bus 904. An input/output interface 905 is also connected to the bus 904.
The following components are connected to the input/output interface 905: an input section 906 comprising a keyboard, a mouse and the like; an output section 907 comprising a display, such as a cathode-ray tube (CRT) or a liquid crystal display (LCD), and a loudspeaker and the like; the storage section 908 comprising a hard disk and the like; and a communication section 909 comprising a network interface card such as a LAN card, a modem and the like. The communication section 909 performs communication processing via a network such as the Internet.
A drive 910 is also connected to the input/output interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the drive 910 as needed, so that the computer program read therefrom is installed into the storage section 908 as needed.
In the case where the above steps and processing are implemented by software, the program constituting the software is installed from a network such as the Internet, or from a storage medium such as the removable medium 911.
It will be understood by those skilled in the art that the storage medium is not limited to the removable medium 911 shown in Fig. 9, which stores the program and is distributed separately to provide the program to the user. Examples of the removable medium 911 include a magnetic disk, an optical disc (including a compact disc read-only memory (CD-ROM) and a digital versatile disc (DVD)), a magneto-optical disc (including a mini-disc (MD)) and a semiconductor memory. Alternatively, the storage medium may be the ROM 902, a hard disk contained in the storage section 908, or the like, in which the program is stored and which is distributed to the user together with the device containing it.
In the foregoing specification, the present invention has been described with reference to specific embodiments. However, those of ordinary skill in the art will understand that various modifications and changes can be made without departing from the scope of the present invention as defined by the appended claims.
The present invention can also be implemented as in the following embodiments:
1. An image processing device, comprising:
an object extraction unit for extracting an image of a target object from a frame of a monocular video sequence as an object image;
a registration unit for registering an observed object image against a reference object image to obtain registration parameters, the reference object image and the observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively;
a feature extraction unit for extracting, by using the registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and
a depth determination unit for determining the depth change based on the extracted feature.
2. The image processing device according to item 1, wherein the feature extraction unit extracts, as the feature, the zooming parameter among the registration parameters that reflects the change in object image size.
3. The image processing device according to item 2, wherein, with S denoting the zooming parameter of the observation frame relative to the reference frame, the depth determination unit:
when S > 1 + ε_a, determines that the target object becomes shallower relative to the imaging plane;
when S < 1 − ε_b, determines that the target object becomes deeper relative to the imaging plane; and
otherwise, determines that the depth change of the target object relative to the imaging plane is uncertain;
wherein ε_a and ε_b are small positive numbers serving as decision margins.
4. The image processing device according to any one of items 1 to 3, further comprising a motion direction recognition unit for recognizing the motion direction of the target object relative to the imaging plane;
wherein, when the image processing device is made to perform depth determination sequentially on the observation frames within a predetermined period, the motion direction recognition unit recognizes the motion direction from the sequence of results determined by the depth determination unit.
5. The image processing device according to item 4, wherein the motion direction recognition unit:
when a first predetermined number or more of consecutive "deeper" results appear in the result sequence, recognizes that the target object moves in the direction away from the imaging plane; and
when a second predetermined number or more of consecutive "shallower" results appear in the result sequence, recognizes that the target object moves in the direction toward the imaging plane.
6. The image processing device according to item 4, wherein the motion direction recognition unit:
when a third predetermined number or more of consecutive "deeper" results appear in the result sequence and the continued product of the corresponding zooming parameters S is less than a first threshold, recognizes that the target object moves in the direction away from the imaging plane; and
when a fourth predetermined number or more of consecutive "shallower" results appear in the result sequence and the continued product of the corresponding zooming parameters S is greater than a second threshold, recognizes that the target object moves in the direction toward the imaging plane.
7. The image processing device according to item 1, wherein the feature extraction unit comprises:
an alignment unit for aligning the observed object image with the reference object image according to the translation parameters among the registration parameters; and
a histogram generation unit for generating a histogram of oriented gradients for the edge portion of the aligned object images, as the feature capable of reflecting the depth change.
8. The image processing device according to any one of items 1 to 7, wherein the object extraction unit comprises a detection unit for detecting the object image using sliding windows of multiple sizes, so as to determine the object image region.
9. The image processing device according to item 8, wherein the extraction unit comprises a segmentation unit for performing segmentation on the object image region so as to extract the object image.
10. The image processing device according to item 8 or 9, wherein the detection unit detects the object image using a trained target object detector.
11. An image processing method, comprising:
extracting an image of a target object from a frame of a monocular video sequence as an object image;
registering an observed object image against a reference object image to obtain registration parameters, the reference object image and the observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively;
extracting, by using the registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and
determining the depth change based on the extracted feature.
12. The image processing method according to item 11, wherein the zooming parameter among the registration parameters that reflects the change in object image size is extracted as the feature.
13. The image processing method according to item 12, wherein, with S denoting the zooming parameter of the observation frame relative to the reference frame:
when S > 1 + ε_a, it is determined that the target object becomes shallower relative to the imaging plane;
when S < 1 − ε_b, it is determined that the target object becomes deeper relative to the imaging plane; and
otherwise, it is determined that the depth change of the target object relative to the imaging plane is uncertain;
wherein ε_a and ε_b are small positive numbers serving as decision margins.
14. The image processing method according to any one of items 11 to 13, further comprising: recognizing the motion direction of the target object relative to the imaging plane from the sequence of depth determination results obtained sequentially for the observation frames within a predetermined period.
15. The image processing method according to item 14, wherein:
when a first predetermined number or more of consecutive "deeper" results appear in the result sequence, it is recognized that the target object moves in the direction away from the imaging plane; and
when a second predetermined number or more of consecutive "shallower" results appear in the result sequence, it is recognized that the target object moves in the direction toward the imaging plane.
16. The image processing method according to item 14, wherein:
when a third predetermined number or more of consecutive "deeper" results appear in the result sequence and the continued product of the corresponding zooming parameters S is less than a first threshold, it is recognized that the target object moves in the direction away from the imaging plane; and
when a fourth predetermined number or more of consecutive "shallower" results appear in the result sequence and the continued product of the corresponding zooming parameters S is greater than a second threshold, it is recognized that the target object moves in the direction toward the imaging plane.
17. The image processing method according to item 11, wherein extracting the feature reflecting the depth change comprises:
aligning the observed object image with the reference object image according to the translation parameters among the registration parameters; and
generating a histogram of oriented gradients for the edge portion of the aligned object images, as the feature capable of reflecting the depth change.
18. The image processing method according to any one of items 11 to 17, wherein extracting the object image comprises: detecting the object image using sliding windows of multiple sizes, so as to determine the object image region.
19. The image processing method according to item 18, wherein extracting the object image comprises: performing segmentation on the object image region so as to extract the object image.
20. The image processing method according to item 18 or 19, wherein extracting the object image comprises: detecting the object image using a trained target object detector.
Claims (10)
1. An image processing device, comprising:
an object extraction unit for extracting an image of a target object from a frame of a monocular video sequence as an object image;
a registration unit for registering an observed object image against a reference object image to obtain registration parameters, said reference object image and said observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively;
a feature extraction unit for extracting, by using said registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and
a depth determination unit for determining said depth change based on the extracted feature.
2. The image processing device according to claim 1, wherein said feature extraction unit extracts, as said feature, the zooming parameter among said registration parameters that reflects the change in object image size.
3. The image processing device according to claim 2, wherein, with S denoting said zooming parameter of the observation frame relative to the reference frame, said depth determination unit:
when S > 1 + ε_a, determines that said target object becomes shallower relative to the imaging plane;
when S < 1 − ε_b, determines that said target object becomes deeper relative to the imaging plane; and
otherwise, determines that the depth change of said target object relative to the imaging plane is uncertain;
wherein ε_a and ε_b are small positive numbers serving as decision margins.
4. The image processing device according to any one of claims 1 to 3, further comprising a motion direction recognition unit for recognizing the motion direction of said target object relative to said imaging plane;
wherein, when said image processing device is made to perform depth determination sequentially on the observation frames within a predetermined period, said motion direction recognition unit recognizes the motion direction from the sequence of results determined by the depth determination unit.
5. The image processing device according to claim 4, wherein said motion direction recognition unit:
when a first predetermined number or more of consecutive "deeper" results appear in said result sequence, recognizes that said target object moves in the direction away from the imaging plane; and
when a second predetermined number or more of consecutive "shallower" results appear in said result sequence, recognizes that said target object moves in the direction toward the imaging plane.
6. An image processing method, comprising:
extracting an image of a target object from a frame of a monocular video sequence as an object image;
registering an observed object image against a reference object image to obtain registration parameters, said reference object image and said observed object image being the object images extracted from a reference frame and from an observation frame other than the reference frame, respectively;
extracting, by using said registration parameters, a feature capable of reflecting the depth change of the observed object image relative to the imaging plane compared with the reference object image; and
determining said depth change based on the extracted feature.
7. The image processing method according to claim 6, wherein the zooming parameter among said registration parameters that reflects the change in object image size is extracted as said feature.
8. The image processing method according to claim 7, wherein, with S denoting said zooming parameter of the observation frame relative to the reference frame:
when S > 1 + ε_a, it is determined that said target object becomes shallower relative to the imaging plane;
when S < 1 − ε_b, it is determined that said target object becomes deeper relative to the imaging plane; and
otherwise, it is determined that the depth change of said target object relative to the imaging plane is uncertain;
wherein ε_a and ε_b are small positive numbers serving as decision margins.
9. The image processing method according to any one of claims 6 to 8, further comprising: identifying the motion direction of the target object with respect to the imaging plane from the sequence of depth determination results obtained for the time-ordered observation frames within a predetermined period.
10. The image processing method according to claim 9, wherein:
the target object is identified as moving away from the imaging plane when more than a first predetermined number of consecutive "deeper" results occur in the result sequence; and
the target object is identified as moving toward the imaging plane when more than a second predetermined number of consecutive "shallower" results occur in the result sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310119788.3A | 2013-04-08 | 2013-04-08 | Image processing device and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104103062A (en) | 2014-10-15 |
Family
ID=51671186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310119788.3A (pending) | Image processing device and image processing method | 2013-04-08 | 2013-04-08 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104103062A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101815225A (en) * | 2009-02-25 | 2010-08-25 | Samsung Electronics Co., Ltd. | Method for generating depth map and device thereof |
CN102413756A (en) * | 2009-04-29 | 2012-04-11 | Koninklijke Philips Electronics N.V. | Real-time depth estimation from monocular endoscope images |
CN102609942A (en) * | 2011-01-31 | 2012-07-25 | Microsoft Corporation | Mobile camera localization using depth maps |
US20120306876A1 (en) * | 2011-06-06 | 2012-12-06 | Microsoft Corporation | Generating computer models of 3D objects |
CN102662334A (en) * | 2012-04-18 | 2012-09-12 | Shenzhen Zhaobo Electronic Technology Co., Ltd. | Method for controlling distance between user and electronic equipment screen and electronic equipment |
Non-Patent Citations (1)
Title |
---|
SUN Lijuan: "Chinese Static Sign Language Recognition Based on Edge Gradient Histograms", China Master's Theses Full-text Database, Information Science and Technology |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292907A (en) * | 2017-07-14 | 2017-10-24 | Lingdong Technology (Beijing) Co., Ltd. | Method for positioning following target and following equipment |
CN107292907B (en) * | 2017-07-14 | 2020-08-21 | Lingdong Technology (Beijing) Co., Ltd. | Method for positioning following target and following equipment |
CN110516517A (en) * | 2018-05-22 | 2019-11-29 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target identification method, device and equipment based on multi-frame images |
CN110516517B (en) * | 2018-05-22 | 2022-05-06 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target identification method, device and equipment based on multi-frame images |
CN109165645A (en) * | 2018-08-01 | 2019-01-08 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and device, and related equipment |
CN109165645B (en) * | 2018-08-01 | 2023-04-07 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and device, and related equipment |
CN113243016A (en) * | 2018-12-10 | 2021-08-10 | Koito Manufacturing Co., Ltd. | Object recognition system, arithmetic processing device, automobile, vehicle lamp, and method for learning classifier |
CN112883769A (en) * | 2020-01-15 | 2021-06-01 | Sayed Pirast | Method for identifying human interaction behavior in aerial video of unmanned aerial vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11643076B2 (en) | Forward collision control method and apparatus, electronic device, program, and medium | |
Mei et al. | Robust visual tracking and vehicle classification via sparse representation | |
US9355320B2 (en) | Blur object tracker using group lasso method and apparatus | |
US10554957B2 (en) | Learning-based matching for active stereo systems | |
CN104156693B (en) | A kind of action identification method based on the fusion of multi-modal sequence | |
CN108596128A (en) | Object identifying method, device and storage medium | |
EP3070676A1 (en) | A system and a method for estimation of motion | |
KR102399025B1 (en) | Improved data comparison method | |
CN111488873B (en) | Character level scene text detection method and device based on weak supervision learning | |
Jepson et al. | A layered motion representation with occlusion and compact spatial support | |
CN104103062A (en) | Image processing device and image processing method | |
EP3665651B1 (en) | Hierarchical disparity hypothesis generation with slanted support windows | |
Hu et al. | Robust object tracking via multi-cue fusion | |
EP3001339A1 (en) | Apparatus and method for supporting computer aided diagnosis | |
CN113255779B (en) | Multi-source perception data fusion identification method, system and computer readable storage medium | |
CN106462975A (en) | Method and apparatus for object tracking and segmentation via background tracking | |
KR20080069878A (en) | Face view determining apparatus and method and face detection apparatus and method employing the same | |
Ali et al. | Vehicle detection and tracking in UAV imagery via YOLOv3 and Kalman filter | |
Meus et al. | Embedded vision system for pedestrian detection based on HOG+SVM and use of motion information implemented in Zynq heterogeneous device | |
Gu et al. | Embedded and real-time vehicle detection system for challenging on-road scenes | |
Wang et al. | Combining semantic scene priors and haze removal for single image depth estimation | |
Chang et al. | Topology-constrained layered tracking with latent flow | |
Hou et al. | Multi-modal feature fusion for 3D object detection in the production workshop | |
Zhang et al. | Infrared detection of small moving target using spatial–temporal local vector difference measure | |
Pece et al. | Tracking with the EM contour algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20141015 |