CN109120900B - Unmanned aerial vehicle imaging processing system and processing method - Google Patents

Unmanned aerial vehicle imaging processing system and processing method

Info

Publication number
CN109120900B
CN109120900B (application CN201811084575.0A)
Authority
CN
China
Prior art keywords
module
channel
data
processing module
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811084575.0A
Other languages
Chinese (zh)
Other versions
CN109120900A (en
Inventor
徐强
欧阳星
李子轩
田云
张文祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Drow Unmanned Aerial Vehicle Manufacturing Co Ltd
Original Assignee
Wuhan Drow Unmanned Aerial Vehicle Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Drow Unmanned Aerial Vehicle Manufacturing Co Ltd filed Critical Wuhan Drow Unmanned Aerial Vehicle Manufacturing Co Ltd
Priority to CN201811084575.0A priority Critical patent/CN109120900B/en
Publication of CN109120900A publication Critical patent/CN109120900A/en
Application granted granted Critical
Publication of CN109120900B publication Critical patent/CN109120900B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an unmanned aerial vehicle imaging processing system and a processing method therefor, comprising the following modules: a photographing module 1, a control processor 2, a positioning and navigation module 3, a wireless transmission module 4, a wireless receiving module 5, a channel processing module 6, an image processing module 7 and a display module 8. The invention has the following advantages and beneficial effects: the video images captured by the unmanned aerial vehicle meet high-definition digital transmission requirements; in particular, the photographic target region and other regions of differing height remain clear while the unmanned aerial vehicle undergoes height displacement during shooting, so the true photographic target can be presented; the requirements of low latency and smooth transmission after capture are fully considered; two sets of processing modules are used for data processing, with pre-processing performed on board the unmanned aerial vehicle, which not only reduces latency and saves resources but also relieves the data processing load of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle imaging processing system and processing method
Technical field
The invention belongs to the technical field of unmanned aerial vehicles, and in particular relates to an unmanned aerial vehicle imaging processing system and a processing method therefor.
Background technique
In the prior art, the video image information captured by an unmanned aerial vehicle is typically transferred through an image transmission module to the receiving module of a ground base station, so that an observer can view the captured video images in real time through the display module of the ground control center. Owing to the limitations of the image transmission module and of the signal, the distance between the ground base station and the unmanned aerial vehicle must remain within the signal transmission and reception range, which places higher requirements on the processing of unmanned aerial vehicle imagery. Existing processing methods, however, are not clearly defined and are difficult to operate. In particular, the height displacement of the unmanned aerial vehicle during shooting introduces an asymmetry in the height information captured in the images, so that the photographic target region and other regions of differing height become blurred and the true photographic target cannot be shown clearly. The requirements of low latency and smooth transmission after capture are not fully considered: when the information available for channel selection is insufficient, the channels cannot be ranked according to the available channel resources to narrow the code-rate selection; data segments are not obtained dynamically from multiple channels, so channel diversity cannot be exploited efficiently, which increases the transmission delay and causes unnecessary re-buffering; and no balance can be found between the smoothness of the transmission and channel switching, leading to an uneven allocation of channel resources. As a result, the video images captured by the unmanned aerial vehicle cannot be viewed in real time, which imposes considerable limitations on the application of unmanned aerial vehicles and greatly degrades the user experience.
Summary of the invention
The purpose of the invention is to overcome the above deficiencies and provide an unmanned aerial vehicle imaging processing system and a processing method therefor.
An unmanned aerial vehicle imaging processing system comprises the following modules:
a photographing module 1, a control processor 2, a positioning and navigation module 3, a wireless transmission module 4, a wireless receiving module 5, a channel processing module 6, an image processing module 7 and a display module 8;
The photographing module 1, the control processor 2, the positioning and navigation module 3 and the wireless transmission module 4 are mounted on the unmanned aerial vehicle; the wireless receiving module 5, the channel processing module 6, the image processing module 7 and the display module 8 are installed at the ground control center;
The photographing module 1, the positioning and navigation module 3 and the wireless transmission module 4 are each connected to the control processor 2; the signal output of the control processor 2 is connected to the signal input of the wireless receiving module 5; the channel processing module 6 is connected to the wireless receiving module 5; the image processing module 7 is connected to the channel processing module 6; and the display module 8 is connected to the image processing module 7.
The photographing module 1 is used to acquire video images;
The positioning and navigation module 3 is used to position the unmanned aerial vehicle and transmit the positioning data to the control processor 2;
The control processor 2 is used to issue acquisition instructions to the photographing module 1 and receive the data acquired by the photographing module 1, to process the data acquired by the photographing module 1 and send it to the wireless transmission module 4, and also to evaluate the positioning data transmitted by the positioning and navigation module 3, judging whether the positioning data falls within the normal threshold range, and to send the received data acquired by the photographing module 1 together with the positioning data transmitted by the navigation module 3 to the wireless transmission module 4;
The wireless transmission module 4 is used to receive the data sent by the control processor 2 and forward it to the wireless receiving module 5;
The wireless receiving module 5 is used to receive the data sent by the wireless transmission module 4 and forward it to the channel processing module 6;
The channel processing module 6 is used to receive the data sent by the wireless receiving module 5 and select a suitable channel to transmit the data to the image processing module 7;
The image processing module 7 is used to process the received video signal data and forward it to the display module 8;
The display module 8 is used to display the processed video signal.
The processing method of the unmanned aerial vehicle imaging processing system comprises the following steps:
S1. The system is initialized; the positioning and navigation module 3 starts positioning the unmanned aerial vehicle and acquires the positioning data of the unmanned aerial vehicle;
S2. The control processor 2 issues acquisition instructions to the photographing module 1, and the photographing module 1 acquires video images according to the track specified in the instructions from the control processor 2;
S3. The control processor 2 pre-processes the acquired video images, which are then sent to the wireless receiving module 5 through the wireless transmission module 4;
S4. The channel processing module 6 receives the data sent by the wireless receiving module 5 and selects a suitable channel to transmit the data to the image processing module 7;
S5. The image processing module 7 processes the received video signal data and then sends it to the display module 8 for display. A sketch of this overall data flow is given below.
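The following is a minimal Python sketch of the data flow in steps S1 to S5, split into the on-board part (modules 1 to 4) and the ground part (modules 5 to 8). The function and parameter names are illustrative only; the radio link is reduced to a simple hand-off and is not part of the patent's disclosure.

    def onboard_pipeline(positioning_module, camera, preprocess, radio_tx):
        # S1: position the unmanned aerial vehicle and collect positioning data
        location = positioning_module.acquire_position()
        # S2: acquire video images along the commanded track
        frames = camera.acquire(track=location)
        # S3: pre-process on board, which reduces latency and ground-side load
        segments = preprocess(frames, location)
        radio_tx.send(segments)          # wireless transmission module 4

    def ground_pipeline(radio_rx, channel_processor, image_processor, display):
        segments = radio_rx.receive()    # wireless receiving module 5
        # S4: select a suitable channel for each data segment
        routed = channel_processor.route(segments)
        # S5: image processing, then display
        display.show(image_processor.process(routed))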
The present invention has the following advantages and beneficial effects: the video images captured by the unmanned aerial vehicle meet high-definition digital transmission requirements; in particular, the photographic target region and other regions of differing height remain clear while the unmanned aerial vehicle undergoes height displacement during shooting, so the true photographic target can be presented. The requirements of low latency and smooth transmission after capture are fully considered: two sets of processing modules are used for data processing, with pre-processing performed on board the unmanned aerial vehicle, which not only reduces latency and saves resources but also relieves the data processing load of the unmanned aerial vehicle; the channels are ranked according to the available channel resources to narrow the code-rate selection; data segments are obtained from multiple channels; and a balance is found between the smoothness of the transmission and channel switching. The video images captured by the unmanned aerial vehicle can therefore be viewed in real time, which opens up considerable scope for the application of unmanned aerial vehicles and greatly improves the user experience.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the unmanned aerial vehicle imaging processing system.
Specific embodiment
The present invention is further illustrated below in conjunction with a specific embodiment:
An unmanned aerial vehicle imaging processing system comprises the following modules:
a photographing module 1, a control processor 2, a positioning and navigation module 3, a wireless transmission module 4, a wireless receiving module 5, a channel processing module 6, an image processing module 7 and a display module 8;
The photographing module 1, the control processor 2, the positioning and navigation module 3 and the wireless transmission module 4 are mounted on the unmanned aerial vehicle; the wireless receiving module 5, the channel processing module 6, the image processing module 7 and the display module 8 are installed at the ground control center;
The photographing module 1, the positioning and navigation module 3 and the wireless transmission module 4 are each connected to the control processor 2; the signal output of the control processor 2 is connected to the signal input of the wireless receiving module 5; the channel processing module 6 is connected to the wireless receiving module 5; the image processing module 7 is connected to the channel processing module 6; and the display module 8 is connected to the image processing module 7.
The photographing module 1 is used to acquire video images;
The positioning and navigation module 3 is used to position the unmanned aerial vehicle and transmit the positioning data to the control processor 2;
The control processor 2 is used to issue acquisition instructions to the photographing module 1 and receive the data acquired by the photographing module 1, to process the data acquired by the photographing module 1 and send it to the wireless transmission module 4, and also to evaluate the positioning data transmitted by the positioning and navigation module 3, judging whether the positioning data falls within the normal threshold range, and to send the received data acquired by the photographing module 1 together with the positioning data transmitted by the navigation module 3 to the wireless transmission module 4;
The wireless transmission module 4 is used to receive the data sent by the control processor 2 and forward it to the wireless receiving module 5;
The wireless receiving module 5 is used to receive the data sent by the wireless transmission module 4 and forward it to the channel processing module 6;
The channel processing module 6 is used to receive the data sent by the wireless receiving module 5 and select a suitable channel to transmit the data to the image processing module 7;
The image processing module 7 is used to process the received video signal data and forward it to the display module 8;
The display module 8 is used to display the processed video signal.
The processing method of the unmanned aerial vehicle imaging processing system comprises the following steps:
S1. The system is initialized; the positioning and navigation module 3 starts positioning the unmanned aerial vehicle and acquires the positioning data of the unmanned aerial vehicle;
S2. The control processor 2 issues acquisition instructions to the photographing module 1, and the photographing module 1 acquires video images according to the track specified in the instructions from the control processor 2;
S3. The control processor 2 pre-processes the acquired video images, which are then sent to the wireless receiving module 5 through the wireless transmission module 4;
S4. The channel processing module 6 receives the data sent by the wireless receiving module 5 and selects a suitable channel to transmit the data to the image processing module 7;
S5. The image processing module 7 processes the received video signal data and then sends it to the display module 8 for display.
The step S3 specifically comprises the following steps:
S31. The photographing module 1 obtains the image pixel size and transmits the acquired image pixel size to the control processor 2;
S32. The control processor 2 generates a virtual image with the same pixel size;
S33. The positioning data corresponding to the image edges is obtained through the positioning and navigation module 3, and the control processor 2 determines the threshold range of the positioning data according to the positioning data corresponding to the image edges;
S34. The digital surface model data within the threshold range is obtained, and the digital surface model data within the threshold range is transmitted to the image processing module 7, after which step S5 is executed. A sketch of these pre-processing steps is given below.
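A minimal Python sketch of the pre-processing in steps S31 to S34 is given below, written under assumptions: the helpers corner_locations and load_dsm_within, and the use of latitude/longitude bounds for the threshold range, are illustrative and are not specified in the patent.

    import numpy as np

    def preprocess_s31_s34(image, corner_locations, load_dsm_within):
        """S31-S34: build a virtual image with the same pixel size and fetch
        the digital surface model data bounded by the image-edge positions.

        corner_locations : iterable of (lat, lon) pairs for the image corners,
                           assumed to come from positioning and navigation module 3.
        load_dsm_within  : callable returning DSM grid points (x, y, z) inside a
                           (lat_min, lat_max, lon_min, lon_max) box; a stand-in
                           for whatever DSM source the system actually uses.
        """
        # S31/S32: a virtual image with the same pixel dimensions as the frame
        height, width = image.shape[:2]
        virtual_image = np.zeros((height, width), dtype=np.float32)

        # S33: threshold range of the positioning data, from the image edges
        lats = [p[0] for p in corner_locations]
        lons = [p[1] for p in corner_locations]
        threshold_range = (min(lats), max(lats), min(lons), max(lons))

        # S34: DSM data within the threshold range, handed on to step S5
        dsm_points = load_dsm_within(*threshold_range)
        return virtual_image, threshold_range, dsm_points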
The step S5 specifically comprises the following steps:
S51. The image processing module 7 sorts the three-dimensional coordinates of each grid cell in the digital surface model from high to low, and re-projects each grid cell onto the virtual image in order of its three-dimensional coordinate, from high to low.
S52. The pixel objects in S31 are compared with adjacent pixel objects; if they are the same, the objects are merged, until all identical pixel objects have been merged together;
S53. The points with obvious brightness changes in the image are identified, and the points with obvious brightness changes are then connected using four-neighborhood connection;
S54. The labelled image is scanned to obtain the regions of interconnected edge pixels;
S55. The bitmap is converted into a vector graphic of the pixels in the current connected region that fit a straight-line model, and it is judged whether these pixels are interconnected; if they are, the detection is considered a straight line;
S56. The object most similar to the current image is found, and it is judged whether the most similar object should be merged with the current image; the above steps are repeated until there are no objects left to merge, where the judgment method is: a segmentation scale parameter is set; when the most similar object exceeds the scale parameter, it is not merged; when the most similar object forms a regular shape after being merged with it, it is merged. A sketch of the re-projection and merging in S51 and S52 is given below.
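A minimal Python sketch of the height-ordered re-projection in S51 and the neighbor merging in S52, continuing from the pre-processing sketch above. The top-down projection via project_to_pixel and the 4-neighbor flood fill are assumptions; the patent does not specify the projection model or the merging data structure.

    def reproject_and_merge(dsm_points, virtual_image, project_to_pixel):
        """S51: paint DSM grid cells onto the virtual image from highest to
        lowest elevation, so higher terrain is written first and not overwritten.
        S52: merge identical neighboring pixels into labelled objects.

        virtual_image    : 2-D numpy array from the pre-processing sketch.
        project_to_pixel : hypothetical mapping from DSM (x, y) to (row, col).
        """
        rows, cols = virtual_image.shape

        # S51: sort grid cells by elevation, highest first, and re-project
        for x, y, z in sorted(dsm_points, key=lambda p: p[2], reverse=True):
            r, c = project_to_pixel(x, y)
            if 0 <= r < rows and 0 <= c < cols and virtual_image[r, c] == 0:
                virtual_image[r, c] = z          # keep the first (highest) hit

        # S52: merge equal 4-neighbors into one labelled object (flood fill)
        labels = [[0] * cols for _ in range(rows)]
        next_label = 0
        for r0 in range(rows):
            for c0 in range(cols):
                if labels[r0][c0]:
                    continue
                next_label += 1
                stack = [(r0, c0)]
                while stack:
                    r, c = stack.pop()
                    if labels[r][c]:
                        continue
                    labels[r][c] = next_label
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and not labels[nr][nc]
                                and virtual_image[nr, nc] == virtual_image[r, c]):
                            stack.append((nr, nc))
        return virtual_image, labels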
The step S4 specifically comprises the following steps:
S4.1: The channel processing module 6 obtains the addresses of all channels, and the wireless receiving module 5 and the image processing module 7 each establish a connection with the channels;
S4.2: The channel processing module 6 receives the video signal data from the wireless receiving module 5 and divides it into data segments, then sends a data reception request to all channels; each channel receives the first data segment and separately records the reception start time, denoted t_k^s, where k is the channel identification code;
S4.3: The channel processing module 6 load-caches the received data segments and records the shortest time in which a channel finishes receiving; assuming channel k is the first to complete reception of the data segment, with reception duration t_k, the channel capacity of each channel is w_{1,j} = len_{1,j} / t_k, j ∈ {1, 2, ..., k, ..., m}, where len_{1,j} is the length of first-segment data received by the channel processing module 6 over channel j within the time interval t_k; for the channel that first completes reception of the first data segment, w_{1,k} = len_{1,k} / t_k = (v_1 × flen) / (t_k^e − t_k^s), where v_1 is the code rate of the 1st data segment, flen is the duration of a data segment, t_k^s is the time at which the channel processing module 6 starts receiving the 1st data segment, and t_k^e is the time at which the channel processing module 6 finishes receiving the 1st data segment;
S4.4: The channel processing module 6 ranks the channels in descending order of the channel capacity calculated for each channel, obtaining the candidate channel queue XD;
S4.5: The channel processing module 6 selects the head-of-queue node from the candidate channel queue XD, i.e. the channel with the largest channel capacity, as the channel for requesting the next data segment;
S4.6: The code rate of the next data segment is determined. Assume the code rate of the data segment just received is v_i = v_k, with v_1 ≤ v_k ≤ v_L, and that the head of queue XD is channel k_a, a ∈ [1, m]; the predicted channel capacity yw_{i,a} of the head-of-queue channel k_a is then compared with v_{k+1}, the preset code rate of the next data segment i+1;
if yw_{i,a} ≥ v_{k+1}, the code rate of the next data segment i+1 is set to v_{k+1};
The predicted capacity yw_{i,a} is calculated as follows: w_{i,j} denotes the channel capacity measured after data segment i has been received, and yw_{i,j} denotes the predicted capacity of channel k_j once segment i has been received; the prediction is updated from the previous prediction using the constant μ0, which is taken as 0.5, and μ, the error between the measured value and the predicted value.
S4.7: The channel processing module 6 sends a data request to the selected channel k_{j+1}, requesting that the target data segment be received at code rate v_{i+1};
S4.8: The image processing module 7 receives the target data segment at code rate v_{i+1} through channel k_{j+1}. A sketch of this channel ranking and rate-selection procedure is given below.
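A minimal Python sketch of steps S4.3 to S4.6 is given below, under stated assumptions: the prediction update is written as a simple exponential smoothing, because the patent only states that μ0 = 0.5 and that μ is the error between the measured and predicted capacity, so the exact update rule is an assumption; the code-rate ladder RATES and the fallback branch in select_rate are likewise illustrative.

    MU0 = 0.5                        # constant stated in the patent text
    RATES = [0.5, 1.0, 2.0, 4.0]     # illustrative code-rate ladder v_1 .. v_L (Mbit/s)

    def measure_capacity(seg_len, t_start, t_end):
        # S4.3: w = len / t, capacity observed while receiving one data segment
        return seg_len / (t_end - t_start)

    def update_prediction(prev_pred, measured, mu0=MU0):
        # Assumed update rule: previous prediction corrected by mu0 times the
        # error mu between the measured and the predicted capacity
        mu = measured - prev_pred
        return prev_pred + mu0 * mu

    def rank_channels(capacities):
        # S4.4: candidate channel queue XD, largest measured capacity first
        return sorted(capacities, key=capacities.get, reverse=True)

    def select_rate(predicted_head_capacity, current_rate, rates=RATES):
        # S4.6: step the code rate up one level only if the predicted capacity
        # of the head-of-queue channel covers the next level; otherwise keep the
        # highest level the prediction still supports (fallback is an assumption)
        k = rates.index(current_rate)
        if k + 1 < len(rates) and predicted_head_capacity >= rates[k + 1]:
            return rates[k + 1]
        supported = [r for r in rates if r <= predicted_head_capacity]
        return supported[-1] if supported else rates[0]

    # Example: three channels measured over the first data segment
    capacities = {"ch1": 3.1, "ch2": 1.2, "ch3": 2.4}
    queue_xd = rank_channels(capacities)                     # ['ch1', 'ch3', 'ch2']
    prediction = update_prediction(prev_pred=2.0, measured=capacities[queue_xd[0]])
    next_rate = select_rate(prediction, current_rate=2.0)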
The data is video data. The video images captured by the unmanned aerial vehicle meet high-definition digital transmission requirements; in particular, the photographic target region and other regions of differing height remain clear while the unmanned aerial vehicle undergoes height displacement during shooting, so the true photographic target can be presented. The requirements of low latency and smooth transmission after capture are fully considered: two sets of processing modules are used for data processing, with pre-processing performed on board the unmanned aerial vehicle, which not only reduces latency and saves resources but also relieves the data processing load of the unmanned aerial vehicle; the channels are ranked according to the available channel resources to narrow the code-rate selection; data segments are obtained from multiple channels; and a balance is found between the smoothness of the transmission and channel switching. The video images captured by the unmanned aerial vehicle can therefore be viewed in real time, which opens up considerable scope for the application of unmanned aerial vehicles and greatly improves the user experience.

Claims (3)

1. An unmanned aerial vehicle imaging processing system, characterised in that it comprises the following modules:
a photographing module 1, a control processor 2, a positioning and navigation module 3, a wireless transmission module 4, a wireless receiving module 5, a channel processing module 6, an image processing module 7 and a display module 8;
the photographing module 1, the control processor 2, the positioning and navigation module 3 and the wireless transmission module 4 are mounted on the unmanned aerial vehicle; the wireless receiving module 5, the channel processing module 6, the image processing module 7 and the display module 8 are installed at the ground control center;
the photographing module 1, the positioning and navigation module 3 and the wireless transmission module 4 are each connected to the control processor 2; the signal output of the control processor 2 is connected to the signal input of the wireless receiving module 5; the channel processing module 6 is connected to the wireless receiving module 5; the image processing module 7 is connected to the channel processing module 6; and the display module 8 is connected to the image processing module 7;
the photographing module 1 is used to acquire video images;
the positioning and navigation module 3 is used to position the unmanned aerial vehicle and transmit the positioning data to the control processor 2;
the control processor 2 is used to issue acquisition instructions to the photographing module 1 and receive the data acquired by the photographing module 1, to process the data acquired by the photographing module 1 and send it to the wireless transmission module 4, and also to evaluate the positioning data transmitted by the positioning and navigation module 3, judging whether the positioning data falls within the normal threshold range, and to send the received data acquired by the photographing module 1 together with the positioning data transmitted by the navigation module 3 to the wireless transmission module 4;
the wireless transmission module 4 is used to receive the data sent by the control processor 2 and forward it to the wireless receiving module 5;
the wireless receiving module 5 is used to receive the data sent by the wireless transmission module 4 and forward it to the channel processing module 6;
the channel processing module 6 is used to receive the data sent by the wireless receiving module 5 and select a suitable channel to transmit the data to the image processing module 7;
the image processing module 7 is used to process the received video signal data and forward it to the display module 8;
the display module 8 is used to display the processed video signal;
wherein the processing method of the unmanned aerial vehicle imaging processing system comprises the following steps:
S1. the system is initialized; the positioning and navigation module 3 starts positioning the unmanned aerial vehicle and acquires the positioning data of the unmanned aerial vehicle;
S2. the control processor 2 issues acquisition instructions to the photographing module 1, and the photographing module 1 acquires video images according to the track specified in the instructions from the control processor 2;
S3. the control processor 2 pre-processes the acquired video images, which are then sent to the wireless receiving module 5 through the wireless transmission module 4;
S4. the channel processing module 6 receives the data sent by the wireless receiving module 5 and selects a suitable channel to transmit the data to the image processing module 7;
S5. the image processing module 7 processes the received video signal data and then sends it to the display module 8 for display;
the step S3 specifically comprises the following steps:
S31. the photographing module 1 obtains the image pixel size and transmits the acquired image pixel size to the control processor 2;
S32. the control processor 2 generates a virtual image with the same pixel size;
S33. the positioning data corresponding to the image edges is obtained through the positioning and navigation module 3, and the control processor 2 determines the threshold range of the positioning data according to the positioning data corresponding to the image edges;
S34. the digital surface model data within the threshold range is obtained, and the digital surface model data within the threshold range is transmitted to the image processing module 7, after which step S5 is executed;
the step S5 specifically comprises the following steps:
S51. the image processing module 7 sorts the three-dimensional coordinates of each grid cell in the digital surface model from high to low, and re-projects each grid cell onto the virtual image in order of its three-dimensional coordinate, from high to low;
S52. the pixel objects in S31 are compared with adjacent pixel objects; if they are the same, the objects are merged, until all identical pixel objects have been merged together;
S53. the points with obvious brightness changes in the image are identified, and the points with obvious brightness changes are then connected using four-neighborhood connection;
S54. the labelled image is scanned to obtain the regions of interconnected edge pixels;
S55. the bitmap is converted into a vector graphic of the pixels in the current connected region that fit a straight-line model, and it is judged whether these pixels are interconnected; if they are, the detection is considered a straight line;
S56. the object most similar to the current image is found, and it is judged whether the most similar object should be merged with the current image; the above steps are repeated until there are no objects left to merge, where the judgment method is: a segmentation scale parameter is set; when the most similar object exceeds the scale parameter, it is not merged; when the most similar object forms a regular shape after being merged with it, it is merged.
2. The unmanned aerial vehicle imaging processing system according to claim 1, characterised in that the step S4 specifically comprises the following steps:
S4.1: the channel processing module 6 obtains the addresses of all channels, and the wireless receiving module 5 and the image processing module 7 each establish a connection with the channels;
S4.2: the channel processing module 6 receives the video signal data from the wireless receiving module 5 and divides it into data segments, then sends a data reception request to all channels; each channel receives the first data segment and separately records the reception start time, denoted t_k^s, where k is the channel identification code;
S4.3: the channel processing module 6 load-caches the received data segments and records the shortest time in which a channel finishes receiving; assuming channel k is the first to complete reception of the data segment, with reception duration t_k, the channel capacity of each channel is w_{1,j} = len_{1,j} / t_k, j ∈ {1, 2, ..., k, ..., m}, where len_{1,j} is the length of first-segment data received by the channel processing module 6 over channel j within the time interval t_k; for the channel that first completes reception of the first data segment, w_{1,k} = len_{1,k} / t_k = (v_1 × flen) / (t_k^e − t_k^s), where v_1 is the code rate of the 1st data segment, flen is the duration of a data segment, t_k^s is the time at which the channel processing module 6 starts receiving the 1st data segment, and t_k^e is the time at which the channel processing module 6 finishes receiving the 1st data segment;
S4.4: the channel processing module 6 ranks the channels in descending order of the channel capacity calculated for each channel, obtaining the candidate channel queue XD;
S4.5: the channel processing module 6 selects the head-of-queue node from the candidate channel queue XD, i.e. the channel with the largest channel capacity, as the channel for requesting the next data segment;
S4.6: the code rate of the next data segment is determined; assume the code rate of the data segment just received is v_i = v_k, with v_1 ≤ v_k ≤ v_L, and that the head of queue XD is channel k_a, a ∈ [1, m]; the predicted channel capacity yw_{i,a} of the head-of-queue channel k_a is then compared with v_{k+1}, the preset code rate of the next data segment i+1;
if yw_{i,a} ≥ v_{k+1}, the code rate of the next data segment i+1 is set to v_{k+1};
the predicted capacity yw_{i,a} is calculated as follows: w_{i,j} denotes the channel capacity measured after data segment i has been received, and yw_{i,j} denotes the predicted capacity of channel k_j once segment i has been received; the prediction is updated from the previous prediction using the constant μ0, which is taken as 0.5, and μ, the error between the measured value and the predicted value;
S4.7: the channel processing module 6 sends a data request to the selected channel k_{j+1}, requesting that the target data segment be received at code rate v_{i+1};
S4.8: the image processing module 7 receives the target data segment at code rate v_{i+1} through channel k_{j+1}.
3. The unmanned aerial vehicle imaging processing system according to claim 2, characterised in that the data is video data.
CN201811084575.0A 2018-09-17 2018-09-17 Unmanned vehicle images processing system and its processing method Active CN109120900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811084575.0A CN109120900B (en) 2018-09-17 2018-09-17 Unmanned vehicle images processing system and its processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811084575.0A CN109120900B (en) 2018-09-17 2018-09-17 Unmanned vehicle images processing system and its processing method

Publications (2)

Publication Number Publication Date
CN109120900A CN109120900A (en) 2019-01-01
CN109120900B true CN109120900B (en) 2019-05-24

Family

ID=64859557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811084575.0A Active CN109120900B (en) 2018-09-17 2018-09-17 Unmanned vehicle images processing system and its processing method

Country Status (1)

Country Link
CN (1) CN109120900B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102967297A (en) * 2012-11-23 2013-03-13 浙江大学 Space-movable visual sensor array system and image information fusion method
EP2386092B1 (en) * 2009-01-13 2014-03-05 Robert Bosch GmbH Device, method, and computer for image-based counting of objects passing through a counting section in a prescribed direction
CN105120237A (en) * 2015-09-17 2015-12-02 成都时代星光科技有限公司 Wireless image monitoring method based on 4G technology
CN105262989A (en) * 2015-10-08 2016-01-20 成都时代星光科技有限公司 Automatic inspection and real-time image acquisition transmission method of railway line unmanned aerial plane
CN106503248A (en) * 2016-11-08 2017-03-15 深圳市速腾聚创科技有限公司 Ground drawing generating method and map creation device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529404B2 (en) * 2007-06-20 2009-05-05 Ahdoot Ned M Digital video filter and image processing

Also Published As

Publication number Publication date
CN109120900A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
US9443350B2 (en) Real-time 3D reconstruction with power efficient depth sensor usage
Zhang et al. An efficient algorithm for pothole detection using stereo vision
US9378583B2 (en) Apparatus and method for bidirectionally inpainting occlusion area based on predicted volume
US20140368645A1 (en) Robust tracking using point and line features
CN108508916B (en) Control method, device and equipment for unmanned aerial vehicle formation and storage medium
CN111462503B (en) Vehicle speed measuring method and device and computer readable storage medium
CN112541426B (en) Communication bandwidth self-adaptive data processing method based on unmanned aerial vehicle cluster cooperative sensing
CN109949231B (en) Method and device for collecting and processing city management information
CN109410593A (en) A kind of whistle capturing system and method
US11172218B2 (en) Motion estimation
CN105069804A (en) Three-dimensional model scanning reconstruction method based on smartphone
CN102447917A (en) Three-dimensional image matching method and equipment thereof
JP2013185905A (en) Information processing apparatus, method, and program
CN105354813B (en) Holder is driven to generate the method and device of stitching image
CN209641070U (en) A kind of whistle capturing system
CN109120900B (en) Unmanned vehicle images processing system and its processing method
CN108600691B (en) Image acquisition method, device and system
CN116105721B (en) Loop optimization method, device and equipment for map construction and storage medium
CN110896469A (en) Resolution testing method for three-shot photography and application thereof
US20210183079A1 (en) Vertical disparity detection in stereoscopic images using a deep neural network
US11856284B2 (en) Method of controlling a portable device and a portable device
US10430971B2 (en) Parallax calculating apparatus
KR101804157B1 (en) Disparity map generating method based on enhanced semi global matching
EP3627817B1 (en) Image processing method and terminal
CN109328373B (en) Image processing method, related device and storage medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant