CN109151334A - Unmanned vehicle camera system - Google Patents

Unmanned vehicle camera system

Info

Publication number
CN109151334A
CN109151334A
Authority
CN
China
Prior art keywords
camera
unmanned vehicle
gimbal
camera array
array
Prior art date
Legal status
Granted
Application number
CN201811107368.2A
Other languages
Chinese (zh)
Other versions
CN109151334B (en)
Inventor
方维
金尚忠
严永强
李雅兰
张益溢
Current Assignee
China Jiliang University
Original Assignee
China Jiliang University
Priority date
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201811107368.2A priority Critical patent/CN109151334B/en
Publication of CN109151334A publication Critical patent/CN109151334A/en
Application granted granted Critical
Publication of CN109151334B publication Critical patent/CN109151334B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 3/00 Control of position or direction
    • G05D 3/12 Control of position or direction using feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H04N 25/58 Control of the dynamic range involving two or more exposures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an unmanned vehicle camera system comprising monocular cameras, a camera array, gimbals, a data processing system, and the unmanned vehicle itself. Gimbals are mounted at three positions on the vehicle, on both sides and in the middle, to adjust the shooting direction of the corresponding cameras and improve their stability. The two side gimbals each carry a monocular camera, together forming a wide-baseline camera array; the middle gimbal carries a camera array that captures images of the vehicle's surroundings and of tracked targets. The data processing system, mounted inside the vehicle and connected to the monocular cameras, the camera array, and the gimbals, controls the gimbals, acquires the captured image data, and processes it. The combination of the narrow-baseline camera array and the wide-baseline camera array produces multiple disparities and depth maps, from which accurate depth information about the vehicle's surroundings can be obtained. The narrow-baseline camera array also enables synthetic aperture imaging of occluded tracked targets, i.e., de-occlusion, and helps produce high-resolution, high-dynamic-range fused images.

Description

Unmanned vehicle camera system
Technical field
The present invention relates to an unmanned vehicle camera system.
Background technique
Environmental perception is crucial for an unmanned vehicle to operate without a driver, and cameras are one of the principal means of environmental perception. A traditional fixed camera has a limited field of view while the unmanned vehicle is tracking, and a single monocular camera can hardly recover the depth of the environment. In a traditional camera system, occlusion of the target and camera motion during tracking cause serious problems for the tracking process.
Summary of the invention
The technical problem to be solved by the present invention is to provide an unmanned vehicle camera system in which the combination of a narrow-baseline camera array and a wide-baseline camera array produces multiple disparities and depth maps, so that accurate depth information about the vehicle's surroundings can be obtained. The narrow-baseline camera array additionally enables synthetic aperture imaging of occluded tracked targets, i.e., de-occlusion, helps capture high-resolution images, and, with a multi-exposure mode, helps produce high-dynamic-range fused images.
The technical solution of the present invention is an unmanned vehicle camera system, characterized by comprising: two monocular cameras (01), a camera array (02), an unmanned vehicle (03), three gimbals (04), and a data processing system (05). The gimbals (04) are mounted at three positions on the unmanned vehicle (03), on both sides and in the middle, to adjust the shooting direction of the corresponding cameras. The two monocular cameras (01) are respectively mounted on the gimbals (04) on both sides of the unmanned vehicle (03) and acquire images of the vehicle's surroundings. The camera array (02) is mounted on the middle gimbal (04) of the unmanned vehicle (03) and acquires images of the surroundings and of tracked targets. The data processing system (05) is mounted inside the vehicle, is connected to the monocular cameras (01), the camera array (02), and the gimbals (04), controls the gimbals, acquires the captured image data, and processes the images.
The gimbal (04) is a three-axis, dynamically self-stabilizing gimbal. It is not only controlled by data transmitted from the data processing system (05), but also compensates in real time for the motion of the vehicle, giving excellent stability.
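As a rough structural illustration only, the following is a minimal Python sketch of the three-gimbal rig just described; all names (Gimbal, CameraRig, point) are hypothetical, and the IMU-based stabilization is only indicated in a comment.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Gimbal:
    """Three-axis, dynamically self-stabilizing gimbal (04)."""
    position: str          # "left", "center", or "right" on the vehicle (03)
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

    def point(self, yaw: float, pitch: float) -> None:
        # Commanded by the data processing system (05); a real gimbal would
        # blend this with IMU feedback to compensate vehicle motion.
        self.yaw, self.pitch = yaw, pitch

@dataclass
class CameraRig:
    """Layout of the system: two side monocular cameras + center m x n array."""
    side_gimbals: Tuple[Gimbal, Gimbal]   # carry the wide-baseline pair (01)
    center_gimbal: Gimbal                 # carries the camera array (02)
    array_rows: int = 3                   # m, with m, n in 3..5
    array_cols: int = 3                   # n

rig = CameraRig(side_gimbals=(Gimbal("left"), Gimbal("right")),
                center_gimbal=Gimbal("center"))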
The camera array (02) is a narrow-baseline camera array of m rows and n columns, m×n cameras in total, with m, n = 3-5. The array helps capture high-resolution images. The data processing system (05) can also set the cameras in the array (02) to alternate rapidly through different exposures in time order, realizing a multi-exposure mode in which the camera array produces high-dynamic-range fused images.
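A minimal sketch of how this multi-exposure mode could be staged and fused, assuming the frames are already registered; the exposure schedule and the Gaussian well-exposedness weights are illustrative choices, not the method fixed by the patent.

import numpy as np

def exposure_schedule(num_cameras: int, exposures_ms=(1.0, 4.0, 16.0)):
    """Cycle the array cameras through the exposure set in time order."""
    return [exposures_ms[i % len(exposures_ms)] for i in range(num_cameras)]

def fuse_exposures(frames):
    """Blend aligned float32 frames in [0, 1]: weight each pixel by how far
    it sits from under/over-exposure, then normalize and sum."""
    stack = np.stack(frames).astype(np.float32)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0) + 1e-8
    return (weights * stack).sum(axis=0)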
To realize the synthetic aperture imaging of occluded tracked targets, the camera array is combined with a de-occlusion high-resolution imaging algorithm (algorithm (1)), as follows:
Assume there is only one occluder in front of the target. Each image captured by each camera is regarded as a superposition of a background layer b and an occlusion layer o:
y_i = K_i · o_i + (1 - K_i) · b_i (1)
where i ∈ {1, 2, ..., m×n} indexes the cameras in the camera array (02); K is a mask that equals 1 at pixels where the occluder appears and 0 elsewhere; '·' denotes element-wise multiplication, and 1 denotes an all-ones vector. Although this model has only one occlusion layer, it extends to multiple occlusion layers.
Let x_b denote the desired high-resolution background vector. x_b is related to the captured low-resolution background vectors b_i by
b_i = M_i x_b (2)
where
M_i = D R W_{b,i} (3)
in which W_{b,i} is the background warping matrix for camera i in the camera array (02), and D and R are the decimation and blur operators, respectively. Substituting (2) into (1), and noting that K_i · y_i = K_i · o_i, gives
(1 - K_i) · y_i + K_i · M_i x_b = b_i (4)
Stacking K_i, y_i, M_i, and b_i over the cameras i ∈ {1, 2, ..., m×n} into K, y, M, and b, the above can be written as
(1 - K) · y + K · M x_b = b (5)
K, x_b, and b are computed with the following algorithm (1):
Initialize K^0 and x_b^0.
Step 1: update b from (5) using the current estimates of K and x_b.
Step 2: update x_b from b with the super-resolution operator.
Step 3: update K with the seed-growing method.
Repeat steps 1-3 until the values of K, x_b, and b converge.
In algorithm (1), K^t, x_b^t, and b^t are the estimates of K, x_b, and b at iteration t. The super-resolution operator generates a high-resolution estimate from a series of low-resolution images; here it is realized by Lucy-Richardson (LR) deconvolution with a Huber prior. The mask update is a "seed-growing method". Because different cameras may capture different sides of the occluding object, their mask values are computed one by one.
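A schematic NumPy rendering of this alternating scheme under strong simplifications: the Warp stand-in for M_i is the identity (a real implementation would warp by the homography, blur, and decimate), and super_resolve and grow_mask are crude placeholders for the LR-deconvolution and seed-growing steps detailed below.

import numpy as np

class Warp:
    """Stand-in for M_i = D R W_{b,i}; identity so the sketch runs."""
    def forward(self, x):   return x   # high-res background -> camera i view
    def transpose(self, b): return b   # rough back-projection

def super_resolve(bs, warps, xb):
    # Placeholder for the Huber-regularized LR deconvolution:
    # back-project the background estimates and average them.
    return np.mean([w.transpose(b) for b, w in zip(bs, warps)], axis=0)

def grow_mask(y, pred, thresh=0.15):
    # Placeholder for seed growing: flag pixels disagreeing with the
    # predicted background as occluded (K = 1 there, else 0).
    return (np.abs(y - pred) > thresh).astype(np.float32)

def algorithm_1(ys, warps, n_iter=10):
    """Alternate the three updates built around eq. (5)."""
    K = [np.zeros_like(y) for y in ys]                      # initial masks
    xb = np.mean([w.transpose(y) for y, w in zip(ys, warps)], axis=0)
    for _ in range(n_iter):
        bs = [(1 - k) * y + k * w.forward(xb)               # eq. (4): fill
              for y, k, w in zip(ys, K, warps)]             # occluded pixels
        xb = super_resolve(bs, warps, xb)                   # update x_b
        K = [grow_mask(y, w.forward(xb))                    # update each K_i
             for y, w in zip(ys, warps)]
    return xb, K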
The super-resolution operator is implemented as follows:
The method blurs out the occlusion in the scene while keeping the background recognizable; it is a super-resolution version of the synthetic aperture method. This is because the mismatched homographies encoded in M preserve the background, whereas other methods may destroy it. Directly applying LR deconvolution, however, introduces high-frequency noise, so a Huber prior is added during the iterations. The Huber function penalizes quadratically near zero and linearly in the tails, with a free parameter α controlling the transition. It is used in the prior function of the iterative process, p(x) ∝ (1/z) exp(-ν Σ_c ρ(d_c x)), where z is a normalization constant, ν is the prior strength, usually chosen empirically, and d_c measures the image gradient at the direction and position defined by the parameter set c, as follows:
d_{m,n,1} x = x_{m,n-1} - 2x_{m,n} + x_{m,n+1} (9)
d_{m,n,2} x = 0.5x_{m+1,n-1} - x_{m,n} + 0.5x_{m-1,n+1} (10)
d_{m,n,3} x = x_{m-1,n} - 2x_{m,n} + x_{m+1,n} (11)
d_{m,n,4} x = 0.5x_{m-1,n-1} - x_{m,n} + 0.5x_{m+1,n+1} (12)
Combining the Huber prior with LR deconvolution gives the update expression used in the iterations.
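The directional operators (9)-(12) and the Huber penalty translate directly into NumPy; note that the quadratic-then-linear Huber form below is the standard one (the patent reproduces its definition only as an image), and the circular boundary handling via np.roll is an implementation convenience.

import numpy as np

def huber(z, alpha):
    """Standard Huber penalty: quadratic for |z| <= alpha, linear beyond."""
    a = np.abs(z)
    return np.where(a <= alpha, a ** 2, 2 * alpha * a - alpha ** 2)

def directional_diffs(x):
    """Second differences of eqs. (9)-(12): horizontal, anti-diagonal,
    vertical, and diagonal. np.roll gives circular boundaries."""
    d1 = np.roll(x, 1, 1) - 2 * x + np.roll(x, -1, 1)                                # (9)
    d2 = 0.5 * np.roll(x, (-1, 1), (0, 1)) - x + 0.5 * np.roll(x, (1, -1), (0, 1))   # (10)
    d3 = np.roll(x, 1, 0) - 2 * x + np.roll(x, -1, 0)                                # (11)
    d4 = 0.5 * np.roll(x, (1, 1), (0, 1)) - x + 0.5 * np.roll(x, (-1, -1), (0, 1))   # (12)
    return d1, d2, d3, d4

def prior_energy(x, alpha=0.05, nu=0.1):
    """Negative log of the Huber prior, nu * sum_c huber(d_c x), up to z."""
    return nu * sum(huber(d, alpha).sum() for d in directional_diffs(x))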
Algorithm (1) can then be written in the following form:
Initialize K^0 and x_b^0.
Step 1: update x_b with the Huber-regularized LR deconvolution.
Step 2: update K with the seed-growing method.
Repeat steps 1 and 2 until the values of K, x_b, and b converge.
The "seed-growing method" is implemented as follows:
SIFT is first used to detect the feature points in the scene; a homography is then estimated from these feature points with the RANSAC algorithm. This homography is denoted H_b, under the assumption that background feature points outnumber occlusion feature points, so the estimated H_b fits the background but not the occlusion. The points fitting H_b are then excluded, and RANSAC is run again to find a homography H_o; the remaining feature points, which fit H_o, belong to the occlusion. The background and occlusion feature points can thus be separated.
The occlusion feature points are treated as "seeds" for finding the mask. For each seed, a small window centered on it is initialized; the four boundaries of each window then grow iteratively and stop when they reach the occlusion boundary, after which all windows are merged to form the mask. More specifically, a window grows according to a probability function of its boundary pixels, where l_{q,i} denotes the pixels on the q-th boundary of the window in the image shot by camera i of the array, and l̂_{q,i} denotes the pixels at the same image positions converted from the estimated background via the homography for camera i's position; length(l_{q,i}) gives the number of pixels in l_{q,i}, and ‖l_{q,i} - l̂_{q,i}‖_1 is the l1 norm of their difference. TH_t is a threshold determined with the K-Nearest-Neighbors method from the occlusion-edge difference between the estimated background and y_i at the feature locations. The probability measures, from the difference between the estimated background and the observed image, how likely the window boundary is to have passed beyond the estimated occlusion region. The growth rate is inversely proportional to this probability: in each iteration, the q-th boundary of the window moves outward along its normal by a step S_q of at most S_max pixels, and the lengths of the two adjacent boundaries are adjusted accordingly. When all windows stop growing, they are merged to form the mask. For objects with rectangular shapes, the mask fits the occlusion very well; for objects with irregular shapes, the method still obtains a mask of the occluding object.
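A simplified sketch of the window growing: each of a seed window's four edges keeps stepping outward while the background/observation disagreement along that edge stays high. It uses a fixed threshold and step, whereas the description scales the step inversely with the boundary probability and sets the threshold TH_t by K-Nearest Neighbors.

import numpy as np

def grow_occlusion_mask(diff, seeds, step=4, thresh=0.1, max_iter=100):
    """diff: |y_i - warped background estimate| per pixel (2-D array).
    seeds: (row, col) occlusion feature points. Returns the merged mask K_i."""
    H, W = diff.shape
    mask = np.zeros((H, W), dtype=bool)
    for r, c in seeds:
        top, bot = max(r - 1, 0), min(r + 1, H - 1)      # initial small window
        left, right = max(c - 1, 0), min(c + 1, W - 1)
        for _ in range(max_iter):
            grew = False
            if top > 0 and diff[top, left:right + 1].mean() > thresh:
                top = max(top - step, 0); grew = True    # grow upward
            if bot < H - 1 and diff[bot, left:right + 1].mean() > thresh:
                bot = min(bot + step, H - 1); grew = True
            if left > 0 and diff[top:bot + 1, left].mean() > thresh:
                left = max(left - step, 0); grew = True
            if right < W - 1 and diff[top:bot + 1, right].mean() > thresh:
                right = min(right + step, W - 1); grew = True
            if not grew:                                  # all edges stopped
                break
        mask[top:bot + 1, left:right + 1] = True          # merge windows
    return mask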
Compared with the prior art, the advantages of the present invention are as follows:
1. The combination of the narrow-baseline camera array and the wide-baseline camera array produces multiple disparities and depth maps, from which accurate depth information about the vehicle's surroundings can be obtained.
2. The camera array enables synthetic aperture imaging of occluded tracked targets, i.e., de-occlusion.
3. The camera array helps capture high-resolution images, and its multi-exposure mode helps produce high-dynamic-range fused images.
4. The invention uses three-axis, dynamically self-stabilizing gimbals, which are not only controlled by data transmitted from the data processing system (05) but also compensate in real time for vehicle motion, giving excellent stability.
Description of the drawings:
Fig. 1 is a structural schematic diagram of the invention.
In the figure: 01, monocular camera; 02, camera array; 03, unmanned vehicle; 04, gimbal; 05, data processing system.
Detailed description of the embodiments:
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawing.
With reference to Fig. 1, an unmanned vehicle camera system comprises: two monocular cameras (01), a camera array (02), an unmanned vehicle (03), three gimbals (04), and a data processing system (05). The gimbals (04) are mounted at three positions on the unmanned vehicle (03), on both sides and in the middle, to adjust the shooting direction of the corresponding cameras. The two monocular cameras (01) are respectively mounted on the gimbals (04) on both sides of the unmanned vehicle (03) and acquire images of the vehicle's surroundings. The camera array (02) is mounted on the middle gimbal (04) of the unmanned vehicle (03) and acquires images of the surroundings and of tracked targets. The data processing system (05) is mounted inside the vehicle, is connected to the monocular cameras (01), the camera array (02), and the gimbals (04), controls the gimbals, acquires the captured image data, and processes the images.
The gimbal (04) is a three-axis, dynamically self-stabilizing gimbal. It is not only controlled by data transmitted from the data processing system (05), but also compensates in real time for the motion of the vehicle, giving excellent stability.
The camera array (02) is a narrow-baseline camera array of m rows and n columns, m×n cameras in total, with m, n = 3-5. The array helps capture high-resolution images, and the data processing system (05) can set the cameras in the array (02) to alternate rapidly through different exposures in time order, realizing a multi-exposure mode in which the camera array produces high-dynamic-range fused images.
The synthetic aperture imaging of occluded tracked targets is realized by combining the camera array with the de-occlusion high-resolution imaging algorithm (1); the occlusion model (equations (1)-(5)), the Huber-regularized super-resolution operator, and the seed-growing mask estimation are applied exactly as described in the summary above.

Claims (8)

1. An unmanned vehicle camera system, characterized by comprising: two monocular cameras (01), a camera array (02), an unmanned vehicle (03), three gimbals (04), and a data processing system (05), wherein the gimbals (04) are mounted at three positions on the unmanned vehicle (03), on both sides and in the middle, to adjust the shooting direction of the corresponding cameras; the two monocular cameras (01) are respectively mounted on the gimbals (04) on both sides of the unmanned vehicle (03) and acquire images of the vehicle's surroundings; the camera array (02) is mounted on the middle gimbal (04) of the unmanned vehicle (03) and acquires images of the surroundings and of tracked targets; and the data processing system (05) is mounted inside the vehicle, is connected to the monocular cameras (01), the camera array (02), and the gimbals (04), controls the gimbals, acquires the captured image data, and processes the images.
2. The unmanned vehicle camera system according to claim 1, characterized in that: the gimbal (04) is a three-axis, dynamically self-stabilizing gimbal that is not only controlled by data transmitted from the data processing system (05) but also compensates in real time for the motion of the vehicle, giving excellent stability.
3. The unmanned vehicle camera system according to claim 1, characterized in that: the monocular cameras (01) mounted on the gimbals (04) on both sides of the unmanned vehicle (03) form a wide-baseline binocular camera array, and the camera array (02) mounted on the middle gimbal (04) of the unmanned vehicle (03) is a narrow-baseline camera array; the combination of the narrow-baseline and wide-baseline camera arrays produces multiple disparities and depth maps, from which accurate depth information about the vehicle's surroundings can be obtained.
4. The unmanned vehicle camera system according to claim 1, characterized in that: the camera array (02) is a narrow-baseline camera array of m rows and n columns, m×n cameras in total, with m, n = 3-5; the array helps capture high-resolution images, and the data processing system (05) can set the cameras in the array (02) to alternate rapidly through different exposures in time order, realizing a multi-exposure mode in which the camera array produces high-dynamic-range fused images.
5. The unmanned vehicle camera system according to claim 1, characterized in that: the camera array (02) is a narrow-baseline camera array with which synthetic aperture imaging of occluded tracked targets can be realized.
6. The synthetic aperture imaging of occluded tracked targets according to claim 5, characterized in that the camera array is combined with a de-occlusion high-resolution imaging algorithm (1), as follows:
Assume there is only one occluder in front of the target. Each image captured by each camera is regarded as a superposition of a background layer b and an occlusion layer o:
y_i = K_i · o_i + (1 - K_i) · b_i (1)
where i ∈ {1, 2, ..., m×n} indexes the cameras in the camera array (02); K is a mask that equals 1 at pixels where the occluder appears and 0 elsewhere; '·' denotes element-wise multiplication, and 1 denotes an all-ones vector. Although this model has only one occlusion layer, it extends to multiple occlusion layers.
Let x_b denote the desired high-resolution background vector. x_b is related to the captured low-resolution background vectors b_i by
b_i = M_i x_b (2)
where
M_i = D R W_{b,i} (3)
in which W_{b,i} is the background warping matrix for camera i in the camera array (02), and D and R are the decimation and blur operators, respectively. Substituting (2) into (1), and noting that K_i · y_i = K_i · o_i, gives
(1 - K_i) · y_i + K_i · M_i x_b = b_i (4)
Stacking K_i, y_i, M_i, and b_i over the cameras i ∈ {1, 2, ..., m×n} into K, y, M, and b, the above can be written as
(1 - K) · y + K · M x_b = b (5)
K, x_b, and b are computed with the following algorithm (1): initialize K^0 and x_b^0; step 1: update b from (5) using the current estimates of K and x_b; step 2: update x_b from b with the super-resolution operator; step 3: update K with the seed-growing method; repeat steps 1-3 until the values of K, x_b, and b converge.
In algorithm (1), K^t, x_b^t, and b^t are the estimates of K, x_b, and b at iteration t. The super-resolution operator generates a high-resolution estimate from a series of low-resolution images and is realized here by Lucy-Richardson (LR) deconvolution with a Huber prior; the mask update is a "seed-growing method". Because different cameras may capture different sides of the occluding object, their mask values are computed one by one.
7. The de-occlusion high-resolution imaging algorithm (1) according to claim 6, characterized in that the super-resolution operator is implemented as follows:
The method blurs out the occlusion in the scene while keeping the background recognizable, and is a super-resolution version of the synthetic aperture method; this is because the mismatched homographies encoded in M preserve the background, whereas other methods may destroy it. Directly applying LR deconvolution, however, introduces high-frequency noise, so a Huber prior is added during the iterations. The Huber function penalizes quadratically near zero and linearly in the tails, with a free parameter α controlling the transition, and is used in the prior function of the iterative process, p(x) ∝ (1/z) exp(-ν Σ_c ρ(d_c x)), where z is a normalization constant, ν is the prior strength, usually chosen empirically, and d_c measures the image gradient at the direction and position defined by the parameter set c:
d_{m,n,1} x = x_{m,n-1} - 2x_{m,n} + x_{m,n+1} (9)
d_{m,n,2} x = 0.5x_{m+1,n-1} - x_{m,n} + 0.5x_{m-1,n+1} (10)
d_{m,n,3} x = x_{m-1,n} - 2x_{m,n} + x_{m+1,n} (11)
d_{m,n,4} x = 0.5x_{m-1,n-1} - x_{m,n} + 0.5x_{m+1,n+1} (12)
Combining the Huber prior with LR deconvolution gives the update expression used in the iterations, and algorithm (1) can then be written as: initialize K^0 and x_b^0; step 1: update x_b with the Huber-regularized LR deconvolution; step 2: update K with the seed-growing method; repeat steps 1 and 2 until the values of K, x_b, and b converge.
8. The de-occlusion high-resolution imaging algorithm (1) according to claim 6, characterized in that the "seed-growing method" is implemented as follows:
SIFT is first used to detect the feature points in the scene; a homography is then estimated from these feature points with the RANSAC algorithm. This homography is denoted H_b, under the assumption that background feature points outnumber occlusion feature points, so the estimated H_b fits the background but not the occlusion. The points fitting H_b are then excluded, and RANSAC is run again to find a homography H_o; the remaining feature points, which fit H_o, belong to the occlusion. The background and occlusion feature points can thus be separated.
The occlusion feature points are treated as "seeds" for finding the mask. For each seed, a small window centered on it is initialized; the four boundaries of each window then grow iteratively and stop when they reach the occlusion boundary, after which all windows are merged to form the mask. More specifically, a window grows according to a probability function of its boundary pixels, where l_{q,i} denotes the pixels on the q-th boundary of the window in the image shot by camera i of the array, and l̂_{q,i} denotes the pixels at the same image positions converted from the estimated background via the homography for camera i's position; length(l_{q,i}) gives the number of pixels in l_{q,i}, and ‖l_{q,i} - l̂_{q,i}‖_1 is the l1 norm of their difference. TH_t is a threshold determined with the K-Nearest-Neighbors method from the occlusion-edge difference between the estimated background and y_i at the feature locations. The probability measures, from the difference between the estimated background and the observed image, how likely the window boundary is to have passed beyond the estimated occlusion region. The growth rate is inversely proportional to this probability: in each iteration, the q-th boundary of the window moves outward along its normal by a step S_q of at most S_max pixels, and the lengths of the two adjacent boundaries are adjusted accordingly. When all windows stop growing, they are merged to form the mask. For objects with rectangular shapes, the mask fits the occlusion very well; for objects with irregular shapes, the method still obtains a mask of the occluding object.
CN201811107368.2A 2018-09-21 2018-09-21 Unmanned vehicle camera system Active CN109151334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811107368.2A CN109151334B (en) 2018-09-21 2018-09-21 Unmanned vehicle camera system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811107368.2A CN109151334B (en) 2018-09-21 2018-09-21 Unmanned vehicle camera system

Publications (2)

Publication Number Publication Date
CN109151334A (en) 2019-01-04
CN109151334B (en) 2020-12-22

Family

ID=64823050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811107368.2A Active CN109151334B (en) 2018-09-21 2018-09-21 Unmanned vehicle camera system

Country Status (1)

Country Link
CN (1) CN109151334B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104662589A (en) * 2012-08-21 2015-05-27 派力肯影像公司 Systems and methods for parallax detection and correction in images captured using array cameras
CN106132783A (en) * 2014-04-08 2016-11-16 Tk控股公司 For night vision object detection and the system and method for driver assistance
CN106650708A (en) * 2017-01-19 2017-05-10 南京航空航天大学 Visual detection method and system for automatic driving obstacles
CN106960414A (en) * 2016-12-12 2017-07-18 天津大学 A kind of method that various visual angles LDR image generates high-resolution HDR image
CN107483911A (en) * 2017-08-25 2017-12-15 秦山 A kind of signal processing method and system based on more mesh imaging sensors
CN107959805A (en) * 2017-12-04 2018-04-24 深圳市未来媒体技术研究院 Light field video imaging system and method for processing video frequency based on Hybrid camera array
CN108307675A (en) * 2015-04-19 2018-07-20 快图凯曼有限公司 More baseline camera array system architectures of depth enhancing in being applied for VR/AR
CN108332716A (en) * 2018-02-07 2018-07-27 徐州艾特卡电子科技有限公司 A kind of autonomous driving vehicle context aware systems
CN108427961A (en) * 2018-02-11 2018-08-21 陕西师范大学 Synthetic aperture focusing imaging depth appraisal procedure based on convolutional neural networks

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427383A (en) * 2020-03-18 2020-07-17 青岛联合创智科技有限公司 Control method for binocular holder variable base line
CN111427383B (en) * 2020-03-18 2023-04-25 青岛联合创智科技有限公司 Control method for variable base line of binocular cradle head
CN111300426A (en) * 2020-03-19 2020-06-19 深圳国信泰富科技有限公司 Control system of sensing head of highly intelligent humanoid robot
CN111300426B (en) * 2020-03-19 2022-05-31 深圳国信泰富科技有限公司 Control system of sensing head of highly intelligent humanoid robot
CN111970424A (en) * 2020-08-25 2020-11-20 武汉工程大学 Light field camera shielding removing system and method based on micro-lens array synthetic aperture

Also Published As

Publication number Publication date
CN109151334B (en) 2020-12-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant