CN109074661A - Image processing method and equipment - Google Patents
- Publication number
- CN109074661A, application CN201780022779.9A
- Authority
- CN
- China
- Prior art keywords
- joint
- speed
- parallax
- target point
- vital body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The embodiments of the present application provide an image processing method and device that, in a high-dynamic scene containing a living body, take full account of features such as the limbs and joints of the living body, so that the depth map of the living body is computed more accurately. The method comprises: determining a direction vector from a target point on a target living body in an image to at least one joint, and determining the positional relationship between the target point and at least one pixel; adjusting the penalty coefficients of the global energy function of the semi-global matching (SGM) algorithm according to the direction vector and the positional relationship; and computing the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
Description
Copyright notice
This patent document contains material that is subject to copyright protection. All copyright belongs to the copyright holder. The copyright holder does not object to the reproduction by anyone of the patent document as it appears in the records and files of the Patent and Trademark Office, but otherwise reserves all copyright rights.
Technical field
The embodiments of the present application relate to the field of image processing, and more particularly, to an image processing method and device.
Background
Humankind is entering the information age, and computers are being applied ever more widely in nearly every field. As a key area of intelligent computing, computer vision has seen great development and application. Computer vision replaces the visual organs with an imaging system as the input sensing means; the most common such sensor is the camera, and a pair of cameras constitutes a basic vision system.
A binocular camera system uses two cameras to capture two photographs of the same moment from different angles. From the differences between the two photographs, together with the position and angle relationship between the two cameras, the distances between the scene and the cameras can be computed by triangulation, yielding a depth map. In short, a binocular camera system obtains the depth information of a scene from the differences between two photographs taken at the same moment from different angles.
However, in high-dynamic scenes some depth maps are invalid; even with a dynamic exposure strategy targeted at the foreground, there are still cases in which it does not work well.
Summary of the invention
The embodiments of the present application provide an image processing method and device that, in a high-dynamic scene containing a living body, take full account of features such as the limbs and joints of the living body, so that the depth map of the living body is computed more accurately.
In one aspect, an image processing method is provided, comprising: determining a direction vector from a target point on a target living body in an image to at least one joint, and determining the positional relationship between the target point and at least one pixel; adjusting the penalty coefficients of the global energy function of the semi-global matching (SGM) algorithm according to the direction vector and the positional relationship; and computing the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
In another aspect, an image processing device is provided, comprising a determination unit and a computing unit. The determination unit is configured to: determine a direction vector from a target point on a target living body in an image to at least one joint, and determine the positional relationship between the target point and at least one pixel. The computing unit is configured to: adjust the penalty coefficients of the global energy function of the SGM algorithm according to the direction vector and the positional relationship; and compute the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
In another aspect, an image processing device is provided, comprising a memory and a processor. The memory stores code, and the processor can call the code in the memory to execute the following method: determining a direction vector from a target point on a target living body in an image to at least one joint, and determining the positional relationship between the target point and at least one pixel; adjusting the penalty coefficients of the global energy function of the SGM algorithm according to the direction vector and the positional relationship; and computing the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
In another aspect, a computer storage medium is provided, in which code is stored. The code can be used to: determine a direction vector from a target point on a target living body in an image to at least one joint, and determine the positional relationship between the target point and at least one pixel; adjust the penalty coefficients of the global energy function of the SGM algorithm according to the direction vector and the positional relationship; and compute the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
In another aspect, a computer program product is provided, which includes code. The code can be used to: determine a direction vector from a target point on a target living body in an image to at least one joint, and determine the positional relationship between the target point and at least one pixel; adjust the penalty coefficients of the global energy function of the SGM algorithm according to the direction vector and the positional relationship; and compute the disparity of the target point based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
Therefore, in the embodiments of the present application, a direction vector from a target point on a target living body in an image to at least one joint is determined, together with the positional relationship between the target point and at least one pixel; the penalty coefficients of the global energy function of the SGM algorithm are adjusted according to the direction vector and the positional relationship; and the disparity of the target point is computed based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients. In a high-dynamic scene containing a living body, this takes full account of features such as the limbs and joints of the living body in order to adjust the penalty coefficients of the semi-global matching algorithm, avoiding fixed penalty coefficients, so that the depth map of the living body is computed more accurately.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings may be obtained from them without creative effort.
Fig. 1 is a schematic diagram of depth information breaking up in a high-dynamic scene.
Fig. 2 is a schematic diagram of the image processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of segmenting a living body using the PAF algorithm according to an embodiment of the present application.
Fig. 4 is a schematic diagram of the ground.
Fig. 5 is a schematic diagram of a limb vector field.
Fig. 6 is a schematic diagram of a limb vector field.
Fig. 7 is a thermal image.
Fig. 8 is a schematic diagram of the image processing device according to an embodiment of the present application.
Fig. 9 is a schematic diagram of the image processing device according to an embodiment of the present application.
Fig. 10 is a schematic diagram of the unmanned aerial vehicle according to an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
It should be noted that, in the embodiments of the present application, when one component is "fixedly connected" or "connected" to another component, or when one component is "fixed on" another component, it may be directly on the other component, or intermediate components may also be present.
Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used in the present application are intended only to describe specific embodiments, not to limit the scope of the present application. The term "and/or" used in the present application includes any and all combinations of one or more of the associated listed items.
For a living body (for example, a person) in a high-dynamic environment, some depth maps will be invalid when depth information is acquired. Even with a dynamic exposure strategy targeted at the foreground, there are still cases in which the depth map is invalid: for example, the dynamic exposure strategy needs time to converge, and the depth map is invalid while the scene is switching. For example, in Fig. 1 the arm of the person contains much invalid depth information, causing limbs to appear broken in the 3D depth map; in Fig. 1, the left side is the captured photograph and the right side is the depth information.
For unmanned aerial vehicles, flight path planning and obstacle avoidance require 3D depth maps, so the quality of the depth map directly affects the success and effectiveness of obstacle avoidance.
Therefore, the embodiments of the present application provide the following solutions, from which better depth information can be obtained.
It should be understood that the depth information obtained in the embodiments of the present application can be used for obstacle avoidance by an unmanned aerial vehicle, and can also be used in other scenarios; the embodiments of the present application place no particular limitation on this.
It should also be understood that the embodiments of the present application may compute depth information using images captured by a binocular camera, and may also compute depth information using images captured by a monocular camera; the embodiments of the present application place no particular limitation on this.
The embodiments of the present application can be applied to aerial photography aircraft or other carriers with multiple cameras, such as driverless cars, autonomously flying drones, VR/AR glasses, dual-camera mobile phones, smart carts, and other devices with a vision system.
Fig. 2 is a schematic flow chart of the image processing method 100 according to an embodiment of the present application. The method 100 includes at least part of the following content.
In 110, a direction vector from a target point on a target living body in an image to at least one joint is determined, and the positional relationship between the target point and at least one pixel is determined.
Optionally, the limbs mentioned in the embodiments of the present application may be separated by joints; for example, the limbs mentioned in the embodiments of the present application may include the head, hand, upper arm, lower arm, thigh, calf, and so on.
Optionally, the living body mentioned in the embodiments of the present application may be a person; of course, it may also be another living body, for example, a cat, a dog, an elephant, a bird, and so on.
Optionally, in the embodiments of the present application, the target living body may be segmented from the image in advance. Then the direction vector from the target point on the target living body to the at least one joint is determined, and the positional relationship between the target point and the at least one pixel is determined. Here, the target point refers to a target pixel, and the at least one pixel may be a neighboring pixel of that target pixel.
Specifically, the limb joints of the living body can be determined on the image; the connection relationships of the limb joints of the living body are determined according to the vector field of the limb joints; and the target living body is segmented from the image according to the connection relationships of the limb joints.
Specifically, part affinity fields (PAFs) can be used to segment the living body.
Specifically, image a in Fig. 3 can be converted into the part confidence maps shown in b of Fig. 3, and further into the part affinity fields (PAF) shown in c of Fig. 3, so that the human body can be segmented according to the part affinity fields shown in c of Fig. 3, as shown in d of Fig. 3.
In constructing the part confidence maps shown in b of Fig. 3, a convolutional neural network (CNN) can be used to find the joint parts of the human body, for example, the wrist, elbow, and shoulder joints. The value of the part confidence map for joint j of person k at position p can be constructed according to the following Equation 1:

S*_{j,k}(p) = exp(−‖p − x_{j,k}‖₂² / σ²)    (Equation 1)

where x_{j,k} is the actual (ground-truth) position of part j of person k in the image, and σ controls the spread of the peak; each peak corresponds to a visible part of each person.
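As an illustrative sketch, Equation 1 can be evaluated over a whole image grid with NumPy; the function name and the (x, y) coordinate convention are assumptions, not part of the embodiments:

```python
import numpy as np

def confidence_map(shape, joint_xy, sigma):
    """Per-joint confidence map S*_{j,k}: a Gaussian peak at the
    annotated joint position x_{j,k} (Equation 1)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - joint_xy[0]) ** 2 + (ys - joint_xy[1]) ** 2
    return np.exp(-d2 / sigma ** 2)

# Peak value 1.0 exactly at the joint position, decaying with distance.
m = confidence_map((32, 32), joint_xy=(10, 20), sigma=3.0)
```

In the full PAF pipeline one such map is predicted per joint type, and the per-person maps are combined with a pixel-wise maximum so nearby peaks stay distinct.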
A part affinity field describes the pointing relationship between limb joints; for example, the shoulder joint points to the elbow joint, and the elbow joint points to the wrist joint. As shown in Fig. 4, let x_{j1,k} and x_{j2,k} be the ground-truth positions of the joints j1 and j2 connecting limb c of person k. The part affinity field at a point p can be defined according to the following Equation 2:

L*_{c,k}(p) = v, if p is on limb (c, k); otherwise 0    (Equation 2)

As Equation 2 shows, if the point p lies on limb c, the value of L*_{c,k}(p) is the unit vector v pointing from j1 to j2; if p is not on limb c, the value of L*_{c,k}(p) is 0. Here, v = (x_{j2,k} − x_{j1,k}) / ‖x_{j2,k} − x_{j1,k}‖₂.
The set of points on limb c can be defined as the points within a distance threshold of the line segment; these points p satisfy the following Equations 3 and 4:

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k}    (Equation 3)
|v⊥ · (p − x_{j1,k})| ≤ σ_l    (Equation 4)

where the limb width σ_l is a distance in pixels, l_{c,k} = ‖x_{j2,k} − x_{j1,k}‖₂ is the limb length, and v⊥ is the vector perpendicular to v.
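The field value of Equation 2, gated by the membership test of Equations 3 and 4, can be sketched as follows (a minimal 2D illustration; the function name and argument names are assumptions):

```python
import numpy as np

def paf_at(p, x_j1, x_j2, sigma_l):
    """PAF value L*_{c,k}(p) (Equation 2): the unit vector v from joint
    j1 to joint j2 if p lies on the limb (Equations 3 and 4), else 0."""
    p = np.asarray(p, float)
    x_j1 = np.asarray(x_j1, float)
    x_j2 = np.asarray(x_j2, float)
    seg = x_j2 - x_j1
    length = np.linalg.norm(seg)       # limb length l_{c,k}
    v = seg / length                   # unit direction vector from j1 to j2
    v_perp = np.array([-v[1], v[0]])   # vector perpendicular to v
    rel = p - x_j1
    along = float(v @ rel)             # Equation 3: position along the limb axis
    across = abs(float(v_perp @ rel))  # Equation 4: offset from the limb axis
    if 0.0 <= along <= length and across <= sigma_l:
        return v
    return np.zeros(2)
```

For example, with joints at (0, 0) and (10, 0) and σ_l = 1, a point half a pixel off the segment returns the unit vector (1, 0), while a point five pixels away returns zero.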
Therefore, after the vector field of the limb joints of the living body has been determined according to the scheme introduced above or a similar scheme, the connection relationships of the limb joints of the living body can be determined, and the target living body can be segmented from the image according to those connection relationships.
Optionally, in the embodiments of the present application, the direction vector from the target point to the at least one joint can be determined according to the pointing relationships of the limb joints of the target living body.
Specifically, as shown in Fig. 4, once the pointing relationship from the elbow joint to the wrist has been defined, the direction vector from any point in the lower arm to the joint can be obtained.
Optionally, in the embodiments of the present application, before the target living body is segmented from the image according to the connection relationships of the limb joints of the living body, the target living body may be initially segmented from the image by means of thermal imaging. Fig. 7 shows a human body image obtained by thermal imaging. Of course, in the embodiments of the present application the initial segmentation of the target living body may also be done without thermal imaging.
Optionally, the living body can be segmented based on a minimum-spanning-tree graph cut. This may specifically include the following operations.
Step 1: convert the image into a graph G = (V, E), with n vertices v and m edges e.
The following steps produce a set of segmented regions S = (C_1, ..., C_r), where C_1, ..., C_r are subsets of the vertices.
Step 2: sort the edges of E by non-decreasing weight w, obtaining the ordering π = (o_1, ..., o_m).
Step 3: start from the segmentation S^0, in which each vertex is its own subset (each pixel is a separate region).
Step 4: repeat step 5 for S^q, q = 1, ..., m.
Step 5: compute S^q from S^{q−1} as follows. Let v_i and v_j denote the two vertices connected by the q-th edge, written o_q = (v_i, v_j). If v_i and v_j are in two separate, not yet connected subsets of S^{q−1}, and the edge weight w(o_q) is less than a certain threshold, then connect and merge the subsets containing v_i and v_j; otherwise leave the segmentation unchanged. Specifically, let C_i^{q−1} denote the region subset of S^{q−1} containing v_i, and C_j^{q−1} the one containing v_j. If C_i^{q−1} ≠ C_j^{q−1} and w(o_q) ≤ MInt(C_i^{q−1}, C_j^{q−1}), then merge the subsets C_i^{q−1} and C_j^{q−1} to obtain the new set S^q; otherwise S^q = S^{q−1} remains unchanged.
Here,

MInt(C_1, C_2) = min(Int(C_1) + τ(C_1), Int(C_2) + τ(C_2)),  τ(C) = k / |C|

where the internal difference Int(C) is defined as the maximum weight in the minimum spanning tree (MST) of subset C, and the edge weight is defined as w(e) = w(v_i, v_j) = |I(p_i) − I(p_j)|.
Step 6: after all m connecting edges have been traversed, return the final result S^m, which is S.
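Steps 1 through 6 can be sketched with a union-find structure over a 4-connected pixel grid; the grid connectivity and intensity-difference weights are illustrative assumptions, since the embodiments do not fix the graph construction:

```python
import numpy as np

def felzenszwalb_segment(img, k):
    """Graph-based segmentation (steps 1-6): sort edges by weight and
    merge components whose connecting edge weight is at most MInt."""
    h, w = img.shape
    idx = lambda y, x: y * w + x
    # Step 1: build the graph; edge weight w(e) = |I(p_i) - I(p_j)|.
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(float(img[y, x]) - float(img[y, x + 1])),
                              idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(float(img[y, x]) - float(img[y + 1, x])),
                              idx(y, x), idx(y + 1, x)))
    edges.sort()                      # Step 2: non-decreasing weight order.
    parent = list(range(h * w))       # Step 3: each pixel its own region.
    size = [1] * (h * w)
    internal = [0.0] * (h * w)        # Int(C): max MST edge weight inside C.

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Steps 4-5: scan edges, merging when w(o_q) <= MInt(C_i, C_j).
    for wgt, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            mint = min(internal[ra] + k / size[ra],
                       internal[rb] + k / size[rb])
            if wgt <= mint:
                parent[rb] = ra
                size[ra] += size[rb]
                internal[ra] = wgt    # valid because edges come sorted.
    # Step 6: return the component label of each pixel.
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```

On a toy image whose left and right halves differ strongly in intensity, a small k keeps the two halves as separate regions while each half merges into one label.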
In 120, the penalty coefficients of the global energy function of the semi-global matching (SGM) algorithm are adjusted according to the direction vector and the positional relationship.
In 130, the disparity of the target point is computed based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients.
It should be understood that, in the embodiments of the present application, the disparity of the target point may also be computed with algorithms other than SGM. Below, the embodiments of the present application take SGM as an example to illustrate how the disparity of the target point is computed with SGM.
To make the present application easier to understand, the SGM algorithm is introduced below.
A disparity map can be formed by choosing a disparity for each pixel. A global energy function related to the disparity map is set up and minimized, so as to solve for the optimal disparity of each pixel. The energy function can be written as the following Equation 6:

E(D) = Σ_p ( C(p, D_p) + Σ_{q∈N_p} P1 · T[|D_p − D_q| = 1] + Σ_{q∈N_p} P2 · T[|D_p − D_q| > 1] )    (Equation 6)

where D is the disparity map and E(D) is the energy function corresponding to it; p and q are pixels in the image; N_p is the set of neighboring pixels of pixel p; C(p, D_p) is the cost of pixel p when its disparity is D_p; T[·] equals 1 when its argument holds and 0 otherwise; P1 is a penalty coefficient applied to those neighbors of pixel p whose disparity differs from that of p by exactly 1; and P2 is a penalty coefficient applied to those neighbors of pixel p whose disparity differs from that of p by more than 1.
Finding the optimal solution of Equation 6 over the two-dimensional image is time-consuming, so the problem is approximately decomposed into multiple one-dimensional, i.e. linear, problems, each of which can be solved by dynamic programming. Since a pixel usually has 8 neighboring pixels (other numbers of neighbors are of course possible), the problem is generally decomposed into 8 one-dimensional problems.
Each one-dimensional problem can be solved using the following Equation 7:

L_r(p, d) = C(p, d) + min( L_r(p − r, d), L_r(p − r, d − 1) + P1, L_r(p − r, d + 1) + P1, min_i L_r(p − r, i) + P2 ) − min_k L_r(p − r, k)    (Equation 7)

where r is a direction pointing toward the current pixel p, which can be understood here as the neighboring pixel of p in that direction.
L_r(p, d) denotes the minimum cost value along the current direction when the disparity of the current pixel p is d.
This minimum is chosen from 4 possible candidate values:
1. The minimum cost value when the current pixel and the previous pixel have equal disparities.
2–3. The minimum cost value + the penalty coefficient P1, when the disparities of the current pixel and the previous pixel differ by 1 (1 more or 1 less).
4. The minimum cost value + the penalty coefficient P2, when the disparities of the current pixel and the previous pixel differ by more than 1.
In addition, the minimum cost value of the previous pixel over all its disparities is subtracted from the cost of the current pixel. This is because L_r(p, d) would keep growing as the current pixel moves along the path; subtracting this minimum keeps the value small and prevents numerical overflow.
Here, C(p, d) can be computed using the following Equations 8 and 9:

C(p, d) = min( d(p, p − d, I_L, I_R), d(p − d, p, I_R, I_L) )    (Equation 8)
Once the cost values in each direction have been computed separately, the cost values over the multiple directions, for example the 8 directions, can be accumulated, and the disparity with the smallest accumulated cost is chosen as the final disparity of the pixel. For example, the accumulation can be done with the following Equation 10:

S(p, d) = Σ_r L_r(p, d)    (Equation 10)
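The one-dimensional recursion of Equation 7, and how P1 and P2 enter it, can be sketched along a single horizontal path (a minimal illustration; the array shapes and function name are assumptions):

```python
import numpy as np

def path_cost(C, P1, P2):
    """L_r(p, d) (Equation 7) along one horizontal path, left to right.
    C has shape (width, ndisp): the matching cost C(p, d) per pixel."""
    width, ndisp = C.shape
    L = np.zeros_like(C, dtype=float)
    L[0] = C[0]
    for x in range(1, width):
        prev = L[x - 1]
        best_prev = prev.min()
        # Four candidates: same d; d +/- 1 (plus P1); any d (plus P2).
        cand = np.full((4, ndisp), np.inf)
        cand[0] = prev
        cand[1, 1:] = prev[:-1] + P1
        cand[2, :-1] = prev[1:] + P1
        cand[3] = best_prev + P2
        # Subtract min_k L_r(p - r, k) to keep values bounded.
        L[x] = C[x] + cand.min(axis=0) - best_prev
    return L

# Equation 10 then sums such path costs over all directions and takes
# the argmin over d; with a single path this reduces to argmin of L.
```

With a cost volume in which one disparity is uniformly cheapest, the winning disparity at the end of the path is that same disparity, as expected.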
It can be seen from Equation 7 that if the target pixel and its neighboring pixels are expected to take the same disparity, the penalty coefficients P1 and P2 can be set large, which increases the probability that the target pixel and its neighbors take the same disparity. If the disparities of the target pixel and its neighbors are expected to differ greatly, the penalty coefficient P2 can be set small and the penalty coefficient P1 set large, which increases the probability that the target pixel and its neighbors take disparities with a large difference. If the disparities of the target pixel and its neighbors are expected to differ only slightly, the penalty coefficient P1 can be set small and the penalty coefficient P2 set large, which increases the probability that the target pixel and its neighbors take disparities with a small difference.
This can be illustrated with the ground shown in Fig. 5. In the 2D image, the depth of the ground changes progressively along the up-down direction, while in the left-right direction the depth is essentially uniform.
So in Fig. 5, considering the four directions (paths) up, down, left, and right: since the depth is uniform in the left-right direction, the penalty parameters P1 and P2 are made large in the left-right direction, and the algorithm will tend to select identical disparities in that direction; in the up-down direction, smaller penalty parameters P1 and P2 are given, and the algorithm will tend to select different disparities in that direction.
Optionally, the penalty coefficients are adjusted according to the angle between the direction vector and the vector corresponding to the positional relationship.
In one implementation, when the absolute value of the disparity difference is greater than or equal to a predetermined disparity, the modulus of the difference between the angle and 90 degrees is positively correlated with the penalty coefficient.
For example, in the image shown in Fig. 6, along the extension direction of the arm, i.e. in the direction from the elbow joint to the wrist, the arm is an inclined surface whose distance from the lens changes gradually, so the depth varies, analogous to the up-down direction of the ground portion in Fig. 5; smaller penalty parameters P1 and P2 are given, and the algorithm will tend to select different disparities in the elbow-to-wrist direction. In the direction perpendicular to the arm, i.e. perpendicular to the elbow-to-wrist direction, the arm is essentially at a single distance, i.e. the depth is essentially the same, analogous to the left-right direction of the ground in Fig. 5; the penalty parameters P1 and P2 are dynamically increased, and the algorithm will tend to select close or even identical disparities.
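One way such an angle-dependent adjustment could look is sketched below. This is purely illustrative: the embodiments do not fix the scaling function, and both the linear form and the `gain` constant are assumptions. The geometric idea follows the arm example: across the limb the depth is nearly constant, so penalties grow; along the limb the depth changes gradually, so penalties stay at their base values.

```python
import numpy as np

def adjust_penalties(limb_dir, path_dir, p1_base, p2_base, gain=4.0):
    """Scale the SGM penalties by how close the aggregation path
    direction is to perpendicular to the limb direction vector.
    The linear scaling and `gain` are illustrative assumptions."""
    u = np.asarray(limb_dir, float)
    r = np.asarray(path_dir, float)
    u = u / np.linalg.norm(u)
    r = r / np.linalg.norm(r)
    cos_a = np.clip(abs(float(u @ r)), 0.0, 1.0)  # 1 along the limb, 0 across it
    sin_a = np.sqrt(1.0 - cos_a ** 2)
    # Across the limb (sin_a -> 1) the penalties grow, favouring
    # identical disparities; along it (sin_a -> 0) they stay small.
    scale = 1.0 + gain * sin_a
    return p1_base * scale, p2_base * scale
```

For a path running along the limb the base penalties are returned unchanged; for a perpendicular path they are multiplied by 1 + gain.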
Optionally, it is determined according to the limb edges of the target living body that the at least one pixel is on the target living body.
Specifically, using the positional relationship between a neighboring pixel and the target point, together with the direction vector from the target point to the at least one joint, to determine the penalty coefficients is meaningful when the neighboring pixel lies on the living body. Therefore, when the limb edges of the target living body show that the neighboring points are on the target living body, the disparity of the target point can be computed using the method 100 of the embodiments of the present application.
Of course, if the target point is a pixel at a limb edge, and the pixel whose depth information is being computed is a pixel outside the target living body, then a smaller penalty P2 and, further, a larger penalty coefficient P1 can be set, so that the computed disparity is allowed to jump.
Optionally, in the embodiments of the present application, it is also possible to refer only to the target living body segmented out by the PAF approach of the embodiments of the present application: determine the pixels at the edge of the target living body, and adjust the penalty coefficients based on the positional relationships between an edge pixel and its neighboring pixels, without considering the direction vector from the target point to the at least one joint.
Optionally, in the embodiments of the present application, the depth information of the target living body can be computed from the disparity of at least one target point.
Specifically, after the disparity of each pixel of the target living body has been computed, the depth information of the living body can be computed, with disparity inversely proportional to depth.
Optionally, the depth can be computed with the following Equation 11:

d = b · f / d_p    (Equation 11)

where d is the depth, b is the distance between the left and right cameras, f is the focal length of the cameras, and d_p is the disparity. As Equation 11 shows, since b and f are physical attributes that generally remain unchanged, d and d_p are inversely proportional. For a nearby object, the depth is smaller and the disparity larger; for a distant object, the depth is larger and the corresponding disparity smaller.
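Equation 11 is a one-line conversion; the function name and the example baseline and focal-length values are assumptions for illustration:

```python
def disparity_to_depth(disparity, baseline, focal_length):
    """Equation 11: depth d = b * f / d_p. Depth and disparity are
    inversely proportional for fixed baseline b and focal length f."""
    return baseline * focal_length / disparity

# A nearer object has larger disparity: halving the depth doubles d_p.
near = disparity_to_depth(20.0, baseline=0.1, focal_length=400.0)  # 2.0 m
far = disparity_to_depth(10.0, baseline=0.1, focal_length=400.0)   # 4.0 m
```

Here baseline is in metres and focal length in pixels, so the result is in metres; in practice both come from stereo calibration.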
Optionally, in the embodiments of the present application, a first speed may be determined according to the depth of the target living body, the direction of the first speed being the direction from the target living body to the unmanned device; a control speed for controlling the unmanned device is determined according to the first speed and a second speed, the second speed being the speed input by the controller; and the flight of the unmanned device is controlled according to the control speed. Optionally, the magnitude of the first speed is inversely proportional to the depth. The unmanned device may be an unmanned aerial vehicle or a driverless car, etc.; an unmanned aerial vehicle is taken as the example below.
Specifically, when the unmanned aerial vehicle flies too close to the living body, a repulsive force field is used to "flick" the unmanned aerial vehicle away, achieving a detour around the obstacle.
Here reference can be made to the law of universal gravitation, Equation 12, to construct the repulsion field, whose concrete form can refer to Equation 13:

F = G · m_1 · m_2 / r²    (Equation 12)
F = G · m_obstacle · m_drone / D_x²    (Equation 13)

Here the mass of the living body m_obstacle can be taken as a large constant value, m_drone is the mass of the unmanned aerial vehicle, and G is also a constant value, so the constant k = G · m_obstacle can be defined, which yields the following Equation 14:

F = k · m_drone / D_x²    (Equation 14)

where D_x is the depth information of the living body, which can be taken as the average of the depths of the pixels of the living body.
Then the speed planned in the repulsion field can be obtained by the constant-acceleration formula in Equation 15. The speed corresponding to the repulsion field points in the direction away from the living body, and the user-controlled unmanned aerial vehicle itself has a speed; the two speeds are vector-summed to produce a new speed, which serves as the finally planned speed, i.e. the speed command of the control system, and a detour around the obstacle is eventually achieved.
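The velocity superposition described above can be sketched as follows. Since the concrete form of Equation 15 is not reproduced in the text, the single constant-acceleration control step v_rep = a · dt, and the constants `k` and `dt`, are illustrative assumptions:

```python
import numpy as np

def planned_velocity(depth, direction_away, v_user, k=2.0, dt=0.1):
    """Superpose the pilot's commanded velocity with a repulsion-field
    velocity (Equations 14 and 15): the acceleration a = k / D_x**2
    points away from the living body, and v_rep = a * dt assumes one
    control step of constant acceleration."""
    d = np.asarray(direction_away, float)
    d = d / np.linalg.norm(d)
    a = k / depth ** 2        # Equation 14 divided by m_drone
    v_rep = a * dt * d        # Equation 15: one constant-acceleration step
    return np.asarray(v_user, float) + v_rep

# Closer obstacles push back harder: the 1/D_x**2 term makes the
# repulsion at 1 m a hundred times stronger than at 10 m.
```

At close range the repulsion term dominates a slow approach and reverses the commanded motion, while far from the obstacle the pilot's input passes through almost unchanged.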
Therefore, in the embodiments of the present application, a direction vector from a target point on a target living body in an image to at least one joint is determined, together with the positional relationship between the target point and at least one pixel; the penalty coefficients of the global energy function of the SGM algorithm are adjusted according to the direction vector and the positional relationship; and the disparity of the target point is computed based on the disparity of the at least one pixel, using the global energy function with the adjusted penalty coefficients. In a high-dynamic scene containing a living body, this takes full account of features such as the limbs and joints of the living body in order to adjust the penalty coefficients of the semi-global matching algorithm, avoiding fixed penalty coefficients, so that the depth map of the living body is computed more accurately.
Moreover, the embodiments of the present application propose a complete strategy for unmanned aerial vehicles in low-altitude flight: humans and other animals are detected, and the computation of the depth map is optimized, so that more accurate observations of obstacles are obtained. This can ensure safety and better realize obstacle-avoidance path planning and detours for the unmanned aerial vehicle.
Fig. 8 is a schematic block diagram of an image processing device 200 according to an embodiment of the present application. As shown in Fig. 8, the device 200 includes a determination unit 210 and a computing unit 220, wherein:
The determination unit 210 is configured to: determine the direction vector of a target point on the target living body in an image with respect to at least one joint, and determine the positional relationship between the target point and at least one pixel.
The computing unit 220 is configured to: adjust, according to the direction vector and the positional relationship, the penalty coefficient of the global energy function of the semi-global matching (SGM) algorithm; and calculate, based on the disparity of the at least one pixel, the disparity of the target point using the global energy function with the adjusted penalty coefficient.
Optionally, the computing unit 220 is further configured to:
adjust the penalty coefficient according to the angle between the direction vector and the vector corresponding to the positional relationship.
Optionally, when the absolute value of the disparity difference is greater than or equal to a predetermined disparity, the modulus of the difference between the angle and 90 degrees is positively correlated with the penalty coefficient.
Optionally, as shown in Fig. 8, the device 200 further includes a first segmentation unit 230, configured to:
determine the limb joints of living bodies in the image;
determine the connection relationships of the limb joints of a living body according to the vector field of those limb joints; and
segment the target living body out of the image according to the connection relationships of the limb joints of the living body.
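The joint-connection step performed by the first segmentation unit resembles part-affinity grouping: a candidate connection between two detected joints is scored by how well the limb vector field sampled along the segment between them aligns with the joint-to-joint direction. The following is a simplified sketch under that assumption; the names and sampling scheme are hypothetical, not the application's exact procedure.

```python
import numpy as np

def connection_score(joint_a, joint_b, vector_field, samples=10):
    """Score a candidate limb connection by averaging the dot product between
    the unit vector from joint_a to joint_b and the vector field sampled at
    points along the segment joining them.

    joint_a, joint_b: (x, y) pixel coordinates.
    vector_field: H x W x 2 array of limb-direction vectors.
    """
    a, b = np.asarray(joint_a, float), np.asarray(joint_b, float)
    seg = b - a
    unit = seg / (np.linalg.norm(seg) + 1e-9)
    pts = [a + t * seg for t in np.linspace(0.0, 1.0, samples)]
    scores = [np.dot(vector_field[int(y), int(x)], unit) for x, y in pts]
    return float(np.mean(scores))  # high score -> the two joints belong to one limb
```

Joint pairs whose score exceeds a threshold can then be linked, and the linked joints of one skeleton delimit the target living body for segmentation.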
Optionally, the determination unit 210 is further configured to:
determine the direction vector of the target point with respect to the at least one joint according to the pointing relationships of the limb joints of the target living body.
Optionally, the determination unit 210 is further configured to:
determine, according to the limb edges of the target living body, that the at least one pixel is on the target living body.
Optionally, as shown in Fig. 8, the device 200 further includes a second segmentation unit 240, configured to:
initially segment the target living body out of the image by means of thermal imaging.
Optionally, as shown in Fig. 8, the device 200 further includes a control unit 250, configured to:
calculate the depth of the target living body according to the disparity of at least one said target point;
determine a first speed according to the depth of the target living body, the direction of the first speed pointing from the target living body toward the unmanned device;
determine, according to the first speed and a second speed, the control speed for controlling the unmanned device, where the second speed is the speed input by the controller; and
control the unmanned device according to the control speed.
Optionally, the magnitude of the first speed is inversely proportional to the depth.
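For context, the depth in the control unit's first step follows the standard stereo relation Z = f·B/d, and a first-speed magnitude inversely proportional to depth can be written as c/Z. A minimal sketch with hypothetical constants (the application does not fix f, B, or c):

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Standard stereo triangulation: Z = f * B / d (metres)."""
    return focal_px * baseline_m / disparity

def first_speed(depth, c=2.0):
    """Avoidance-speed magnitude, inversely proportional to the depth."""
    return c / depth

z = depth_from_disparity(20.0, focal_px=400.0, baseline_m=0.1)  # approx. 2.0 m
```

A closer living body (small Z) thus produces a larger avoidance speed, consistent with the repulsion-field behaviour described earlier.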
It should be understood that the image processing device can implement the corresponding operations of the method 100; for brevity, details are not repeated here.
Fig. 9 is a schematic block diagram of an image processing device 400 according to an embodiment of the present application.
Optionally, the device 400 may include a number of different components, which may serve as integrated circuits (ICs) or parts thereof, discrete electronic devices, or other modules suitable for circuit boards (such as a motherboard or add-in card), or as components incorporated into a computer system.
Optionally, the device 400 may include a processor 410 and a storage medium 420 coupled to the processor 410.
The processor 410 may include one or more general-purpose processors, such as a central processing unit (CPU) or another processing device. Specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a microprocessor implementing a combination of multiple instruction sets. The processor may also be one or more special-purpose processors, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a digital signal processor (DSP).
The processor 410 may communicate with the storage medium 420. The storage medium 420 may be a magnetic disk, an optical disc, a read-only memory (ROM), flash memory, or phase-change memory. The storage medium 420 may store instructions for the processor and/or cache information stored in an external storage device.
Optionally, in addition to the processor 410 and the storage medium 420, the image processing device may include a display controller and/or display unit 430, a transceiver 440, a video input/output unit 450, an audio input/output unit 460, and other input/output units 470. The components included in the image processing device 400 may be interconnected by a bus or internal connections.
Optionally, the transceiver 440 may be a wired or wireless transceiver, such as a Wi-Fi transceiver, a satellite transceiver, a Bluetooth transceiver, a wireless cellular telephony transceiver, or a combination thereof.
Optionally, the video input/output unit 450 may include an image processing subsystem such as a camera, comprising an optical sensor, e.g. a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) optical sensor, for implementing the shooting function.
Optionally, the audio input/output unit 460 may include a loudspeaker, a microphone, an earpiece, and the like.
Optionally, the other input/output units 470 may include a storage device, a universal serial bus (USB) port, a serial port, a parallel port, a printer, a network interface, and the like.
Optionally, the image processing device 400 can perform the operations shown in the method 100; for brevity, details are not repeated here.
Optionally, the image processing device 200 or 400 may be located in a movable device. The movable device can move in any suitable environment, for example in the air (e.g. a fixed-wing aircraft, a rotorcraft, or an aircraft with neither fixed wings nor rotors), in water (e.g. a ship or submarine), on land (e.g. a car or train), in space (e.g. a spaceplane, satellite, or probe), or in any combination of these environments. The movable device may be any device with a vision system, such as an unmanned automobile, an autonomously flying UAV, VR/AR glasses, a dual-camera mobile phone, or an intelligent cart.
Figure 10 is a schematic block diagram of a movable device 500 according to an embodiment of the present application. As shown in Figure 10, the movable device 500 includes a carrier 510 and a load 520; the movable device is depicted as a UAV merely for ease of description. The load 520 may be connected to the movable device without going through the carrier 510. The movable device 500 may further include a power system 530, a sensing system 540, a communication system 550, an image processing device 562, and a camera system 564.
The power system 530 may include an electronic speed controller (ESC), one or more propellers, and one or more motors corresponding to the one or more propellers. The motors and propellers are arranged on corresponding arms. The ESC receives the driving signals generated by the flight controller and, according to those signals, supplies driving current to the motors to control their rotational speed and/or steering. The motors drive the propellers to rotate, providing power for the flight of the UAV and enabling movement with one or more degrees of freedom. In certain embodiments, the UAV can rotate about one or more rotational axes, for example a roll axis, a yaw axis, and a pitch axis. It should be understood that the motors may be DC or AC motors, and may be brushless or brushed.
The sensing system 540 is used to measure the attitude information of the UAV, i.e. the position and state information of the UAV in space, for example its three-dimensional position, three-dimensional attitude, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity. The sensing system may include, for example, at least one of a gyroscope, an electronic compass, an inertial measurement unit (IMU), a visual sensor, a global positioning system (GPS) receiver, a barometer, and the like. The flight controller is used to control the flight of the UAV; for example, it can control the flight according to the attitude information measured by the sensing system. It should be understood that the flight controller can control the UAV according to pre-programmed instructions, or by responding to one or more control instructions from a commanding device.
The communication system 550 can communicate via wireless signals 590 with a terminal device 580 that has a communication system 570. The communication systems 550 and 570 may include a plurality of transmitters, receivers, and/or transceivers for wireless communication. The wireless communication may be one-way, e.g. only the movable device 500 sends data to the terminal device 580; or it may be two-way, in which case data can be sent from the movable device 500 to the terminal device 580 and also from the terminal device 580 to the movable device 500.
Optionally, the terminal device 580 can provide control data for one or more of the movable device 500, the carrier 510, and the load 520, and can receive information sent by them. The control data provided by the terminal device 580 can be used to control the state of one or more of the movable device 500, the carrier 510, and the load 520. Optionally, the carrier 510 and the load 520 include a communication module for communicating with the terminal device 580.
It can be understood that the image processing device 562 included in the movable device illustrated in Fig. 10 is able to execute the method 100; for brevity, details are not repeated here.
The above are merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and all such changes or substitutions shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (18)
1. An image processing method, characterized by comprising:
determining a direction vector of a target point on a target living body in an image with respect to at least one joint, and determining a positional relationship between the target point and at least one pixel;
adjusting, according to the direction vector and the positional relationship, a penalty coefficient of a global energy function of a semi-global matching (SGM) algorithm; and
calculating, based on a disparity of the at least one pixel, a disparity of the target point using the global energy function with the adjusted penalty coefficient.
2. The method according to claim 1, characterized in that adjusting, according to the direction vector and the positional relationship, the penalty coefficient of the global energy function of the semi-global matching (SGM) algorithm comprises:
adjusting the penalty coefficient according to an angle between the direction vector and a vector corresponding to the positional relationship.
3. The method according to claim 2, characterized in that, when an absolute value of a disparity difference is greater than or equal to a predetermined disparity, a modulus of a difference between the angle and 90 degrees is positively correlated with the penalty coefficient.
4. The method according to any one of claims 1 to 3, characterized in that, before determining the direction vector of the target point on the target living body in the image with respect to the at least one joint, the method further comprises:
determining limb joints of living bodies in the image;
determining connection relationships of the limb joints of a living body according to a vector field of the limb joints of the living body; and
segmenting the target living body out of the image according to the connection relationships of the limb joints of the living body.
5. The method according to claim 4, characterized in that determining the direction vector of the target point on the target living body in the image with respect to the at least one joint comprises:
determining the direction vector of the target point with respect to the at least one joint according to pointing relationships of the limb joints of the target living body.
6. The method according to claim 4 or 5, characterized in that, before determining the positional relationship between the target point and the at least one pixel, the method further comprises:
determining, according to limb edges of the target living body, that the at least one pixel is on the target living body.
7. The method according to any one of claims 4 to 6, characterized in that, before segmenting the target living body out of the image according to the connection relationships of the limb joints of the living body, the method further comprises:
initially segmenting the target living body out of the image by means of thermal imaging.
8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
calculating a depth of the target living body according to the disparity of at least one said target point;
determining a first speed according to the depth of the target living body, a direction of the first speed being the direction pointing from the target living body toward an unmanned device;
determining, according to the first speed and a second speed, a control speed for controlling the unmanned device, wherein the second speed is a speed input by a controller; and
controlling the unmanned device according to the control speed.
9. The method according to claim 8, characterized in that a magnitude of the first speed is inversely proportional to the depth.
10. An image processing device, characterized by comprising a determination unit and a computing unit, wherein:
the determination unit is configured to: determine a direction vector of a target point on a target living body in an image with respect to at least one joint, and determine a positional relationship between the target point and at least one pixel; and
the computing unit is configured to: adjust, according to the direction vector and the positional relationship, a penalty coefficient of a global energy function of a semi-global matching (SGM) algorithm; and calculate, based on a disparity of the at least one pixel, a disparity of the target point using the global energy function with the adjusted penalty coefficient.
11. The device according to claim 10, characterized in that the computing unit is further configured to:
adjust the penalty coefficient according to an angle between the direction vector and a vector corresponding to the positional relationship.
12. The device according to claim 11, characterized in that, when an absolute value of a disparity difference is greater than or equal to a predetermined disparity, a modulus of a difference between the angle and 90 degrees is positively correlated with the penalty coefficient.
13. The device according to any one of claims 10 to 12, characterized by further comprising a first segmentation unit, configured to:
determine limb joints of living bodies in the image;
determine connection relationships of the limb joints of a living body according to a vector field of the limb joints of the living body; and
segment the target living body out of the image according to the connection relationships of the limb joints of the living body.
14. The device according to claim 13, characterized in that the determination unit is further configured to:
determine the direction vector of the target point with respect to the at least one joint according to pointing relationships of the limb joints of the target living body.
15. The device according to claim 13 or 14, characterized in that the determination unit is further configured to:
determine, according to limb edges of the target living body, that the at least one pixel is on the target living body.
16. The device according to any one of claims 13 to 15, characterized by further comprising a second segmentation unit, configured to:
initially segment the target living body out of the image by means of thermal imaging.
17. The device according to any one of claims 10 to 16, characterized by further comprising a control unit, configured to:
calculate a depth of the target living body according to the disparity of at least one said target point;
determine a first speed according to the depth of the target living body, a direction of the first speed being the direction pointing from the target living body toward an unmanned device;
determine, according to the first speed and a second speed, a control speed for controlling the unmanned device, wherein the second speed is a speed input by a controller; and
control the unmanned device according to the control speed.
18. The device according to claim 17, characterized in that a magnitude of the first speed is inversely proportional to the depth.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/119291 WO2019127192A1 (en) | 2017-12-28 | 2017-12-28 | Image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109074661A true CN109074661A (en) | 2018-12-21 |
Family
ID=64812376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780022779.9A Pending CN109074661A (en) | 2017-12-28 | 2017-12-28 | Image processing method and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109074661A (en) |
WO (1) | WO2019127192A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020244273A1 (en) * | 2019-06-04 | 2020-12-10 | 万维科研有限公司 | Dual camera three-dimensional stereoscopic imaging system and processing method |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120314031A1 (en) * | 2011-06-07 | 2012-12-13 | Microsoft Corporation | Invariant features for computer vision |
CN104835165A (en) * | 2015-05-12 | 2015-08-12 | 努比亚技术有限公司 | Image processing method and image processing device |
US20150279045A1 (en) * | 2014-03-27 | 2015-10-01 | Wei Zhong | Disparity deriving apparatus, movable apparatus, robot, method of deriving disparity, method of producing disparity, and storage medium |
CN105222760A (en) * | 2015-10-22 | 2016-01-06 | 一飞智控(天津)科技有限公司 | The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method |
CN106022304A (en) * | 2016-06-03 | 2016-10-12 | 浙江大学 | Binocular camera-based real time human sitting posture condition detection method |
CN106796728A (en) * | 2016-11-16 | 2017-05-31 | 深圳市大疆创新科技有限公司 | Generate method, device, computer system and the mobile device of three-dimensional point cloud |
CN106815594A (en) * | 2015-11-30 | 2017-06-09 | 展讯通信(上海)有限公司 | Solid matching method and device |
CN106931961A (en) * | 2017-03-20 | 2017-07-07 | 成都通甲优博科技有限责任公司 | A kind of automatic navigation method and device |
CN106981075A (en) * | 2017-05-31 | 2017-07-25 | 江西制造职业技术学院 | The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods |
CN107214703A (en) * | 2017-07-11 | 2017-09-29 | 江南大学 | A kind of robot self-calibrating method of view-based access control model auxiliary positioning |
CN107392898A (en) * | 2017-07-20 | 2017-11-24 | 海信集团有限公司 | Applied to the pixel parallax value calculating method and device in binocular stereo vision |
- 2017-12-28 CN CN201780022779.9A patent/CN109074661A/en active Pending
- 2017-12-28 WO PCT/CN2017/119291 patent/WO2019127192A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
SHIYAN PANG et al.: "SGM-based seamline determination for urban orthophoto mosaicking", ISPRS Journal of Photogrammetry and Remote Sensing *
SONG Xiuyang et al.: "Establishing a kinematic model of a polishing robot based on the D-H method", Electronic Science and Technology *
ZHU Qing et al.: "Adaptive dense matching method for aerial images considering texture features", Acta Geodaetica et Cartographica Sinica *
Also Published As
Publication number | Publication date |
---|---|
WO2019127192A1 (en) | 2019-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11460844B2 (en) | Unmanned aerial image capture platform | |
US11644832B2 (en) | User interaction paradigms for a flying digital assistant | |
US11649052B2 (en) | System and method for providing autonomous photography and videography | |
CN108351574B (en) | System, method and apparatus for setting camera parameters | |
WO2018086130A1 (en) | Flight trajectory generation method, control device, and unmanned aerial vehicle | |
CN108886572B (en) | Method and system for adjusting image focus | |
US20180098052A1 (en) | Translation of physical object viewed by unmanned aerial vehicle into virtual world object | |
CN109983468A (en) | Use the method and system of characteristic point detection and tracking object | |
CN106742003A (en) | Unmanned plane cloud platform rotation control method based on intelligent display device | |
CN105959625A (en) | Method and device of controlling unmanned plane tracking shooting | |
US11353891B2 (en) | Target tracking method and apparatus | |
US20210112194A1 (en) | Method and device for taking group photo | |
CN108496201A (en) | Image processing method and equipment | |
CN108780577A (en) | Image processing method and equipment | |
CN116830057A (en) | Unmanned Aerial Vehicle (UAV) cluster control | |
CN113228103A (en) | Target tracking method, device, unmanned aerial vehicle, system and readable storage medium | |
CN109074661A (en) | Image processing method and equipment | |
CN116745725A (en) | System and method for determining object position using unmanned aerial vehicle | |
CN109885100A (en) | A kind of unmanned plane target tracking searching system | |
Hattori et al. | Generalized measuring-worm algorithm: High-accuracy mapping and movement via cooperating swarm robots | |
CN116648725A (en) | Target tracking method, device, movable platform and computer readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181221