CN205693767U - UAV System - Google Patents

UAV System

Info

Publication number
CN205693767U
CN205693767U (application CN201620266759.9U)
Authority
CN
China
Prior art keywords
rgbd
unmanned plane
processor
target
uas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201620266759.9U
Other languages
Chinese (zh)
Inventor
黄源浩
肖振中
许宏淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Shenzhen Orbbec Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Orbbec Co Ltd
Priority to CN201620266759.9U
Application granted
Publication of CN205693767U
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model discloses a UAV system. The UAV system includes a UAV and a remote server. The UAV includes an RGBD camera, a flight controller, a wireless communication unit, and a processor; the processor is connected to the RGBD camera, the wireless communication unit, and the flight controller. During flight, the RGBD camera acquires RGBD images of a target in real time, where each pixel of an RGBD image contains R, G, B pixel information plus depth information. The remote server establishes a data connection with the wireless communication unit and receives the RGBD images it transmits for processing. In this way, the utility model achieves efficient data transmission.

Description

UAV System
Technical field
This utility model relates to the field of unmanned aerial vehicles (UAVs), and in particular to a UAV system.
Background
With the development of microelectronics and computer vision technology, target tracking has become feasible in real time. In particular, mounting a target tracker on a UAV enables flexible, dynamic tracking of a target, which has high practical value in both military and civilian fields.
Traditional UAV target tracking generally relies on active environment-sensing methods such as laser, radar, and ultrasound. Their drawbacks are that they cannot directly obtain unknown information about the target, that multiple UAVs interfere with one another during detection, and, worse still, that they are poorly concealed in battlefield environments, greatly increasing the probability of being discovered by the enemy.
Existing UAV development generally aims at longer endurance, higher speed, stealthier airframes, smaller size, higher intelligence, weapon payloads, and more reliable and versatile transmission, so that a UAV can complete a predetermined combat mission according to instructions or a pre-loaded program. However, the camera on an existing UAV is usually a 2D camera shooting 2D images, in which each pixel contains only red (Red, R), green (Green, G), and blue (Blue, B) components and no depth information D. Such a UAV therefore cannot automatically perform target-tracking photography and the like based on the 2D images it captures.
Summary of the Utility Model
Embodiments of this utility model provide a UAV system that can process RGBD images directly on the UAV, achieving efficient data transmission.
This utility model provides a UAV system including a UAV and a remote server. The UAV includes an RGBD camera, a flight controller, a wireless communication unit, a gimbal, and a processor. The processor is connected to the RGBD camera, the wireless communication unit, and the flight controller. The RGBD camera acquires RGBD images of a target in real time during flight, where each pixel of an RGBD image contains R, G, B pixel information and depth information. The remote server establishes a data connection with the wireless communication unit and receives the RGBD images it transmits for processing. A rotating rod is provided on the gimbal, and the RGBD camera is mounted on the rotating rod.
The UAV system further includes a gesture output end. The RGBD camera captures the gesture produced by the gesture output end, the processor generates a control instruction from the captured gesture, and the flight controller controls the UAV's flight attitude and/or shooting mode according to the control instruction.
The processor obtains the real-time distance from the target to the RGBD camera from the depth information of the pixels in the RGBD image, and the flight controller adjusts the UAV's flight attitude according to that real-time distance.
The RGBD camera also captures different gestures input by the user; the processor produces a corresponding control instruction for each gesture, and the flight controller selects a shooting mode according to the control instruction.
The UAV further includes a voice acquisition module connected to the processor for capturing voice input from the user; the processor also generates control instructions from the user's voice input, and the flight controller selects a shooting mode according to the control instruction.
The UAV further includes a speech sensor connected to the processor for capturing the target's voice information. The processor performs identity recognition from multiple frames of RGBD images together with the voice information, and analyzes the target's dynamic behavior.
With the above scheme, the beneficial effects of this utility model are: the RGBD camera acquires RGBD images of the target in real time during flight, each pixel containing R, G, B pixel information and depth information; the remote server establishes a data connection with the wireless communication unit and receives the transmitted RGBD images for processing, thereby achieving efficient data transmission.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of this utility model more clearly, the drawings needed to describe the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this utility model; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of this utility model;
Fig. 2a is a schematic structural diagram of the UAV of the second embodiment of this utility model;
Fig. 2b is a schematic structural diagram of the profile of the UAV in Fig. 2a;
Fig. 2c is a schematic structural diagram of a rotated RGBD camera of the UAV in Fig. 2a;
Fig. 3 is a schematic structural diagram of the UAV of the third embodiment of this utility model;
Fig. 4 is a schematic structural diagram of the 3D sensing chip of an embodiment of this utility model;
Fig. 5 is a schematic structural diagram of the UAV system of the first embodiment of this utility model;
Fig. 6 is a schematic structural diagram of the UAV system of the second embodiment of this utility model.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this utility model are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this utility model. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of this utility model.
Fig. 1 is a schematic structural diagram of the UAV of the first embodiment of this utility model. As shown in Fig. 1, the UAV (unmanned air vehicle, UAV) 10 includes an RGBD camera 11, a flight controller 12, and a processor 13. The processor 13 is connected to the RGBD camera 11 and the flight controller 12. The flight controller 12 controls the UAV's flight attitude and/or shooting mode. The RGBD camera 11 acquires RGBD images of the target in real time during the flight of the UAV 10. Each pixel of an RGBD image contains R, G, B pixel information and a corresponding depth value. The depth values of the pixels form a two-dimensional pixel matrix of the scene, called a depth map for short. Each pixel corresponds to a position in the scene and holds a pixel value representing the distance from a certain reference position to that scene position. In other words, a depth map has the form of an image whose pixel values indicate the topography of the objects in the scene rather than brightness and/or color. The processor 13 processes the R, G, B pixel information and/or the corresponding depth information in real time and obtains the target's contour in order to recognize the target.
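As an illustration of the per-pixel layout described above, an RGBD frame can be modeled as parallel RGB and depth arrays. This is a hypothetical sketch, not part of the patent; the class name and millimetre units are assumptions.

```python
# Hypothetical sketch of the RGBD frame described above: each pixel carries
# R, G, B color components plus a depth value D (distance from the camera,
# here assumed to be in millimetres). Plain lists keep it dependency-free.

class RGBDFrame:
    def __init__(self, rgb, depth):
        # rgb:   H x W list of (r, g, b) tuples
        # depth: H x W list of depth values, aligned pixel-for-pixel with rgb
        assert len(rgb) == len(depth) and len(rgb[0]) == len(depth[0])
        self.rgb = rgb
        self.depth = depth

    def pixel(self, row, col):
        """Return the full RGBD tuple (r, g, b, d) for one pixel."""
        r, g, b = self.rgb[row][col]
        return (r, g, b, self.depth[row][col])

# A tiny 2x2 frame: the top-left pixel is red and 1500 mm away.
frame = RGBDFrame(
    rgb=[[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (128, 128, 128)]],
    depth=[[1500, 1520],
           [2980, 3010]],
)
print(frame.pixel(0, 0))  # -> (255, 0, 0, 1500)
```

The depth channel is just a second image aligned with the color image, which is why the patent can speak of a one-to-one correspondence between RGB and depth information.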
In this embodiment, the processor 13 obtains the real-time distance from the target to the RGBD camera from the depth information of the pixels in the RGBD image, and the flight controller adjusts the flight attitude of the UAV 10 according to that real-time distance. Specifically, the flight controller 12 can receive instructions sent by control units such as a remote controller, voice, or gestures, and adjust the flight attitude of the UAV 10 accordingly. The flight attitude of the UAV 10 includes at least one of takeoff, hover, pitch, roll, yaw, and landing.
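A minimal sketch of how the processor might derive the real-time distance from the depth map and how the flight controller might react. The function names, the desired stand-off distance, and the dead-band are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: estimate the target's distance as the median depth over its
# pixels (robust to depth noise), then pick a coarse attitude adjustment.
from statistics import median

def target_distance(depth_map, target_pixels):
    """Median depth (mm) over the target's pixels."""
    return median(depth_map[r][c] for r, c in target_pixels)

def attitude_command(distance_mm, desired_mm=5000, dead_band_mm=300):
    """Decide whether to close in, back off, or hold position (assumed logic)."""
    if distance_mm > desired_mm + dead_band_mm:
        return "forward"   # target too far: pitch forward
    if distance_mm < desired_mm - dead_band_mm:
        return "backward"  # target too close: pitch back
    return "hover"

depth_map = [[6000, 6100], [5900, 6050]]
pixels = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = target_distance(depth_map, pixels)
print(d, attitude_command(d))  # -> 6025.0 forward
```

A real controller would feed the distance error into a PID loop rather than a three-way decision; the dead-band merely illustrates why some hysteresis is needed to avoid oscillating around the set point.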
As shown in Fig. 2a, the UAV 20 may include at least two RGBD cameras 210 and 211, and also includes a flight assembly 24 and a gimbal 25 (not shown). The RGBD cameras 210, 211 are mounted on the gimbal 25, which measures the attitude changes of the carrier and compensates for them to stabilize the RGBD cameras 210, 211 on the gimbal, making it easier for them to track and photograph the target. A rotating rod 26 is provided on the gimbal 25, and the RGBD cameras 210, 211 are arranged along its vertical direction. The profile of the UAV 20 is shown in Fig. 2b; a circuit board is arranged inside the UAV 20, and the processor 23 is mounted on it. The flight assembly 24 may include rotors or fixed wings, used to keep the flight attitude stable during normal flight. Preferably, taking a quadrotor as an example, the four propellers are arranged in a cross configuration and divided into two pairs: the two rotors within each diagonal pair have the same direction of rotation, while the two pairs rotate in opposite directions. Unlike a traditional helicopter, a quadrotor can only realize its various maneuvers by changing propeller speed. In the UAV 20, the RGBD cameras 210, 211 are set up independently, i.e., they shoot independently without affecting each other. Fig. 2c is a schematic diagram of the RGBD camera 211 of the UAV 20 rotated by 60 degrees. In this embodiment, the number of RGBD cameras on the UAV 20 is not limited to two; specifically, the rotating rod 26 can be extended and more RGBD cameras added along its length. Of course, in other embodiments of this utility model, at least two RGBD cameras may also be arranged horizontally and independently on the gimbal 25, for example by providing multiple rotating rods on the gimbal 25, each carrying an RGBD camera.
In this embodiment, the processor 13 can identify the target's contour from the depth information of each pixel, and hence recognize target features. In an RGBD image the depth information corresponds one-to-one with the RGB pixel information, so the processor 13 can also use the RGB pixel information to perform feature recognition on the target, identifying the object's contour and color information, extracting more target features, and improving recognition accuracy. The recognition method is not limited to commonly used training approaches such as machine learning and deep learning algorithms. For example, the RGB information can be used for skin-color detection on a dynamic biological target: if it matches human skin-color features, the target is identified as human, otherwise as non-human. The processor 13 can also process information from other sensors, such as sound and infrared sensors, to recognize and detect targets and their features and improve accuracy. Specifically, the processor 13 may apply a color image segmentation method: use the background texture to segment out a background image, then subtract the background image from the original image to obtain the target image. Of course, in other embodiments of this utility model, other target-recognition methods may also be applied. The target is, for example, a specific human body.
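The background-subtraction segmentation mentioned above can be sketched in a few lines. This is a simplified grayscale version under stated assumptions (a static background image and a fixed difference threshold); the patent does not specify a threshold value.

```python
# Sketch of the segmentation idea above: subtract a known background image
# from the current image and keep pixels that differ by more than a
# threshold. Grayscale intensities and the threshold of 30 are assumptions.

def foreground_mask(image, background, threshold=30):
    """Per-pixel mask: True where the image differs enough from background.
    image/background are H x W lists of grayscale intensities (0-255)."""
    return [
        [abs(p - b) > threshold for p, b in zip(img_row, bg_row)]
        for img_row, bg_row in zip(image, background)
    ]

background = [[10, 10, 10], [10, 10, 10]]
image      = [[10, 90, 12], [11, 95, 10]]  # bright target in middle column
mask = foreground_mask(image, background)
print(mask)  # -> [[False, True, False], [False, True, False]]
```

In practice a library background subtractor (e.g., a mixture-of-Gaussians model) would adapt to lighting changes, but the thresholded difference captures the "original image minus background image" step the text describes.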
In this embodiment, the processor 13 uses the R, G, B pixel information and the corresponding depth information to identify the target as rigid or non-rigid. Specifically, the depth information can be used to identify the target's contour and determine whether the contour is rigid or non-rigid, distinguishing whether the target is a dynamic living being (such as a human body) or a non-living object. If the target is rigid, it is identified as an object, and it is determined whether it moves actively. Here a rigid body is an object whose three-dimensional structure does not change with motion; a non-rigid body is the opposite, its three-dimensional structure changing as it moves.
If the target is recognized as a human body, the processor 13 identifies body parts such as the torso, limbs, hands, and face, and extracts information such as height, arm length, shoulder width, hand size, face size, and facial-expression features. Because the human body is non-rigid, it cannot hold the same posture during a long tracking shoot and is prone to non-rigid deformation, so model reconstruction is needed to avoid non-rigid changes in the data. The processor 13 first removes the background from the depth image captured by the RGBD camera 11: since the depth values of background pixels are larger than those of the human body, the processor 13 can choose a suitable threshold and, whenever a pixel's depth value exceeds it, label the pixel as a background point and remove it from the depth image, obtaining a point cloud of the human body. The processor 13 then converts the point-cloud data into triangle-mesh data, specifically by using the four-neighborhood on the depth image as the connection topology and generating the triangle mesh from the point cloud accordingly. The processor 13 further denoises the point-cloud data: large noise can be removed by averaging the multi-frame point clouds from each viewing angle, and small noise by bilateral filtering. Finally, the processor 13 stitches the triangle meshes from multiple viewing angles into one whole for model reconstruction. The processor 13 may use an iterative algorithm to reconstruct the three-dimensional human-body model. In the iterative algorithm, the correspondences between the standard model and the collected data are found first, to serve as constraint points for the subsequent deformation. Then, with the constraint points as an energy term, the objective function is minimized so that the standard model deforms toward the scan data; finally the deformed standard model's parameters in human-body space are obtained and fed into the next iteration. After several such iterations the reconstruction of the 3D human-body model is complete. Body parts such as the torso, limbs, hands, and face can then be identified, and information such as height, arm length, shoulder width, hand size, face size, and facial-expression features extracted. Individual features within a target group can further be distinguished and labeled for identity authentication, for example distinguishing whether a target is an elderly person, a child, or an adolescent.
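The depth-threshold background removal at the start of that pipeline can be sketched directly. This is a minimal illustration under stated assumptions (a single fixed threshold, depth in millimetres); mesh generation and bilateral filtering are omitted.

```python
# Minimal sketch of the background removal described above: pixels whose
# depth exceeds a chosen threshold are labeled background and dropped,
# leaving a point cloud of (row, col, depth) body points. The threshold
# value is an illustrative assumption.

def body_point_cloud(depth_map, threshold):
    cloud = []
    for r, row in enumerate(depth_map):
        for c, d in enumerate(row):
            if d <= threshold:          # body is closer than the background
                cloud.append((r, c, d))
    return cloud

depth_map = [
    [1800, 1820, 8000],   # 8000 mm pixels are the distant background
    [1790, 1810, 8000],
]
cloud = body_point_cloud(depth_map, threshold=3000)
print(len(cloud))  # -> 4
```

The four-neighborhood mesh step the text mentions would then connect each retained pixel to its retained left/right/up/down neighbors, turning this image-ordered point cloud into triangles.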
The RGBD camera 11 tracks the human target, and the processor 13 tracks the motion trajectories of the body parts according to the reconstructed human model. The processor 13 then analyzes the target's posture and actions, and extracts identity information from the target's posture, actions, behavior patterns, and so on. Specifically, the UAV also includes a speech sensor for capturing the target's voice information. The processor 13 performs identity recognition from multiple RGBD frames together with the voice information, and analyzes the target's dynamic behavior. The processor 13 can further estimate the speed of the target's actions, judge whether the acceleration of the target's motion exceeds a certain threshold, and raise an early warning when it does. For example, with the UAV applied in a security system, when the processor 13 determines from the RGBD images acquired by the RGBD camera that a suspected terrorist's actions suddenly accelerate, it raises a warning to the system. Likewise, if the processor 13 determines from the RGBD images that an elderly person or a child has fallen, it can judge the action and feed the result back to the system.
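The acceleration-threshold early warning described above can be sketched as follows. The units, time step, and threshold are illustrative assumptions; the patent only states that a warning is raised when the motion acceleration exceeds some threshold.

```python
# Sketch of the early-warning rule above: estimate per-frame speed from
# tracked positions, then flag the target when the change in speed between
# consecutive frames exceeds a threshold.

def accelerations(positions, dt=1.0):
    """positions: per-frame (x, y) of the target; returns |dv| per step."""
    speeds = [
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / dt
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]
    return [abs(v2 - v1) / dt for v1, v2 in zip(speeds, speeds[1:])]

def should_warn(positions, threshold=5.0, dt=1.0):
    return any(a > threshold for a in accelerations(positions, dt))

walking   = [(0, 0), (1, 0), (2, 0), (3, 0)]   # constant speed -> no warning
sprinting = [(0, 0), (1, 0), (2, 0), (12, 0)]  # sudden burst   -> warning
print(should_warn(walking), should_warn(sprinting))  # -> False True
```

With depth information, the 2D pixel positions used here could be replaced by metric 3D positions, making the speed and acceleration estimates independent of how far the target is from the camera.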
If the target is recognized as an animal, the processor 13 can apply RGBD recognition and RGBD image-sequence tracking methods similar to those used for human targets to identify the target and extract its features, which is not repeated here.
If the target is recognized as inanimate, the processor 13 uses the depth information D to determine the target's overall dimensions. Specifically, the processor 13 can segment the depth map to find the target's contour. The processor 13 then uses the target's RGB information for object detection, identifying information such as its color or a QR code. The processor 13 further analyzes the target's dynamic behavior from consecutive RGBD frames. Taking a car as an example, the processor can analyze from consecutive RGBD frames whether the car has deviated from its original lane or whether its speed is too high, and issue an alarm when it deviates from its lane or its speed is too high.
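The vehicle-behavior check in that paragraph can be sketched as two threshold tests over per-frame estimates. The lane half-width, speed limit, and frame interval are illustrative assumptions.

```python
# Sketch of the vehicle analysis above: from per-frame estimates of a car's
# lateral lane offset and along-road position, flag lane departure or
# excessive speed. All thresholds are assumptions for illustration.

def analyze_vehicle(offsets_m, positions_m, dt=0.5,
                    max_offset_m=1.75, max_speed_mps=33.0):
    alerts = []
    if any(abs(o) > max_offset_m for o in offsets_m):
        alerts.append("lane_departure")
    speeds = [abs(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]
    if any(v > max_speed_mps for v in speeds):
        alerts.append("overspeed")
    return alerts

# Car drifting out of lane while travelling at ~20 m/s.
print(analyze_vehicle(offsets_m=[0.2, 0.9, 2.1],
                      positions_m=[0.0, 10.0, 20.0]))  # -> ['lane_departure']
```

The depth channel is what makes the metric positions plausible here: a 2D camera alone could not recover the car's distance, so speed estimates would be scale-ambiguous.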
In this embodiment there may be multiple targets, i.e., the UAV 10 can recognize several targets at once. If, during the flight of the UAV 10, the targets to be recognized are not far apart, the RGBD camera can include all of them in a single RGBD image. If the targets are far apart and the RGBD camera cannot guarantee that one RGBD image covers them all, the camera shifts or rotates to photograph them in turn. The UAV 10 also includes a storage unit for the RGBD images and 2D video shot by the RGBD camera 11 and for the target 3D models, 3D videos, etc. pre-processed by the processor 13. Here the 2D video is composed of the RGBD image sequence formed by the RGBD camera 11 continuously shooting a target at some location. Different targets can of course also be shot separately by multiple RGBD cameras 11. For a single RGBD camera 11, the movement of the camera during shooting can be regarded as movement of the viewpoint: if the RGBD camera 11 moves horizontally while shooting, a larger scene can be captured; the RGBD camera 11 can also rotate around the target to capture RGBD images of the same target from different viewing angles.
In this embodiment, the storage capacity of the storage unit inside the UAV 10 is limited and cannot hold large volumes of data. Therefore, referring to Fig. 3, the UAV 10 also includes a wireless communication unit 14. The wireless communication unit 14 is connected to the processor 13 and communicates with a remote server, where the remote server includes a cloud server and/or a ground terminal server. The remote server processes the RGBD image sequences transmitted by the wireless communication unit 14, processes high-definition RGBD data, and generates high-definition, high-resolution target 3D models, 3D videos, 3D animations, and so on. The video obtained from the RGBD camera includes 2D video and the RGBD image sequence; if their data volume is too large, the wireless communication unit 14 can send the 2D video and RGBD image sequence to the remote server so that the server generates a 3D video from them. Large RGBD image sequences can thus be processed remotely, making it easier for the flight controller 12 to keep shooting the target. The wireless communication unit 14 also transmits the target 3D models, 3D videos, etc. pre-processed by the processor 13 to the remote server in real time.
In this embodiment, the RGBD camera 11 also captures different gestures input by the user; the processor 13 produces a corresponding control instruction for each gesture, and the flight controller 12 selects a shooting mode according to the control instruction to recognize the target. The shooting modes include UAV 10 start/stop, target-type selection, and tracking-shooting-mode selection, where the target types include human bodies. The gestures include five-finger open/close gestures, i.e., a five-finger open gesture and a five-finger closed gesture. The user's gestures may also include, but are not limited to, grasping, naturally raising a hand, pushing forward, and waving up, down, left, or right. Different gestures correspond to different control instructions: for example, naturally raising a hand starts the UAV 10, and waving up, down, left, or right adjusts the flight direction of the UAV 10. These are not detailed one by one here.
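The gesture-to-instruction mapping above can be sketched as a simple lookup table. The specific gesture names and command strings are illustrative assumptions; the patent only gives examples such as raising a hand to start the UAV.

```python
# Sketch of the gesture-to-control-instruction mapping described above.
# Both the gesture labels and the command strings are hypothetical.

GESTURE_COMMANDS = {
    "raise_hand":          "start",           # naturally raising a hand starts the UAV
    "five_fingers_open":   "begin_shooting",
    "five_fingers_closed": "stop_shooting",
    "wave_up":             "climb",
    "wave_down":           "descend",
    "wave_left":           "turn_left",
    "wave_right":          "turn_right",
    "push_forward":        "move_forward",
}

def control_instruction(gesture):
    """Map a recognized gesture to a flight-controller instruction."""
    return GESTURE_COMMANDS.get(gesture, "hover")  # unknown gesture: hold position

print(control_instruction("raise_hand"))  # -> start
print(control_instruction("somersault"))  # -> hover
```

Defaulting unrecognized gestures to "hover" is a deliberate safe choice: a misclassified gesture should never translate into an aggressive maneuver.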
In this embodiment, the UAV 10 also includes a voice acquisition module for capturing the user's voice input; the processor 13 also produces control instructions from the user's voice, and the flight controller 12 selects a shooting mode according to the control instruction to recognize the target. Specifically, a remote control unit performs face recognition and voiceprint recognition. For face recognition, face information is pre-saved in a face database (for example, a face image detected by infrared, retaining physiological features such as inter-pupil distance and eye length); during acquisition, face data collected by infrared is compared with the data in the face database. If face recognition succeeds, it is further determined whether the received voice has voice-control authority, the authority corresponding to that voice is determined, and speech recognition is performed. The remote control unit further judges, from the face-recognition result, whether to accept the voice. Everyone authorized to issue voice commands uploads a section of training speech, from which a voiceprint library is obtained. For voiceprint comparison, the person issuing the voice instruction speaks it, and the instruction is compared against the voiceprint library. The corresponding identity information is looked up in the voiceprint and face databases via the voiceprint and face information, thereby confirming the person's authority. The remote control unit then sends the voice instruction to the UAV's voice acquisition module, which verifies the instruction's security; once it passes verification, the processor 13 produces a control instruction from the voice instruction and sends it to the flight controller 12. The flight controller 12 looks up the operation time required by the received instruction's code and appends that operation time after the voice instruction (which is in effect a code). According to the control instruction, the flight controller 12 selects a shooting mode and controls the flight attitude of the UAV 10, such as flight speed, flight altitude, flight path, and distance to surrounding obstacles.
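The two-stage authorization flow described above (face recognition, then voiceprint match, then command dispatch) can be sketched as follows. All names, the signature strings, and the toy databases are illustrative assumptions; real face and voiceprint matching is far more involved than equality checks.

```python
# Hedged sketch of the authorization pipeline above: a voice command is only
# forwarded to the flight controller if the speaker passes face recognition
# AND the voiceprint matches an authorized entry. Databases are hypothetical.

FACE_DB = {"alice": "face_sig_a"}          # pre-saved facial features
VOICEPRINT_DB = {"alice": "voice_sig_a"}   # voiceprints from training speech

def authorize(face_sig, voice_sig):
    """Return the identity if both face and voiceprint match, else None."""
    for identity, ref_face in FACE_DB.items():
        if ref_face == face_sig and VOICEPRINT_DB.get(identity) == voice_sig:
            return identity
    return None

def handle_voice_command(face_sig, voice_sig, command):
    identity = authorize(face_sig, voice_sig)
    if identity is None:
        return "rejected"
    return f"execute:{command}"  # would be forwarded to the flight controller

print(handle_voice_command("face_sig_a", "voice_sig_a", "take_off"))
# -> execute:take_off
print(handle_voice_command("face_sig_x", "voice_sig_a", "take_off"))
# -> rejected
```

Requiring both modalities to agree is what gives the scheme its security value: a recorded voice alone, or a photograph alone, should not be enough to command the UAV.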
In this embodiment, the processor 13, the wireless communication unit 14, and the storage unit are all integrated in a 3D sensing chip. Referring to Fig. 4, the 3D sensing chip includes a DEPTH ENGINE module, a REGISTER PROCESSOR module, a controller module, a register module, an RGB CMOS driver module, an IR CMOS driver module, an AXI bus interface module, an APB bus interface module, an AXI/APB bridge module, an external storage driver module, a switch module, an I2S interface module, a USB interface module, and a power management module.
The signal input of the DEPTH ENGINE module is connected to the IR CMOS driver module, its control signal end to the controller module, and its data end to the AXI bus interface module. The signal input of the REGISTER PROCESSOR module is connected to the RGB CMOS driver module, and its control signal end to the controller module. The controller module is connected to the register module and the AXI bus interface module respectively, and the register module is also connected to the AXI bus interface module. The AXI bus interface module is connected to the APB bus interface module through the AXI/APB bridge module. The RGB CMOS driver module is also connected to the AXI bus interface module, and the external storage driver module is connected to the AXI bus interface module and the APB bus interface module respectively.
The external storage driver module includes a Flash storage driver module connected to an external flash memory and a DDR3 storage driver module connected to an external DDR3 memory. When processing optical 3D data, the controller module issues a first instruction to simultaneously enable the connections to the external flash memory and the external DDR3 memory; to process non-optical 3D data, it issues a second instruction to enable only the connection between the external flash memory and the Flash storage driver module, while disconnecting the external DDR3 memory from the DDR3 storage driver module.
The DEPTH ENGINE module is a depth-engine circuit; the REGISTER PROCESSOR module is a processing buffer circuit; the RGB CMOS driver module is an RGB photosensitive-sensor driver circuit; the IR CMOS driver module is an infrared photosensitive-sensor driver circuit; the AXI bus interface module is an AXI interface circuit complying with the AXI bus protocol; the APB bus interface module is an APB interface circuit complying with the APB bus protocol; and the AXI/APB bridge module converts between the AXI and APB bus protocols. For these circuits, those skilled in the art can, based on common knowledge and against the technical background of this solution, choose different circuit connections and components with different parameters to realize each circuit's function; examples are not enumerated here.
The signal input of the RGB CMOS driver module is connected to an external color camera, and the signal input of the IR CMOS driver module to an external infrared camera. When processing optical 3D data, the chip connects to the external flash memory and DDR3 memory simultaneously so as to quickly process high-precision optical 3D data; the resulting optical 3D depth images have high resolution and low latency.
The switch module is connected to the controller module. When the switch module is closed by a switching device or by a third instruction issued by the controller module, the external DDR3 memory is connected to the DDR3 storage driver module; when it is opened by the switching device or by a fourth instruction issued by the controller module, that connection is broken. The switch module can be realized as a soft switch (e.g., by program instructions) or as a hardware switch used like a single-pole double-throw device, to achieve the closing or opening effect; its concrete form depends on the actual application.
The signal input of the I2S interface module is connected to an external audio sensor, and its signal outputs are connected to the AXI bus interface module and the APB bus interface module respectively. The I2S interface module is an inter-IC sound bus circuit, a bus standard formulated for audio-data transmission between digital audio devices. The bus transmits the clock and data signals on separate wires; by separating data from the clock signal it avoids distortion induced by timing differences, and it is dedicated to data transmission between audio devices.
The data input of the USB interface module is connected to the AXI bus interface module, and its data output to an external image processor. The USB interface module includes a USB 3.0 controller module and a USB interface, connected to each other. The USB interface module is a universal-serial-bus circuit: a fast, bidirectional, synchronizable, inexpensive, hot-swappable serial interface that is easy to use and can connect many different devices. The USB 3.0 controller module requires a new dual-channel physical layer for its data streams to reach the intended high data rates; the packet-routing technique it uses transmits data only when a terminal device requests a transfer. The specification supports multiple data streams per device and can reserve a separate priority for each stream.
The power management module is connected to the APB bus interface module. The power management module is mainly responsible for identifying the supply amplitude required by the circuit to be powered, so as to generate the corresponding short rectangular wave that drives the following stage to deliver power. Common power management chips include the HIP6301, IS6537, RT9237, ADP3168, KA7500 and TL494.
The RGB CMOS driver module includes an RGB CMOS interface, the IR CMOS driver module includes an IR CMOS interface, the Flash storage driver module includes a Flash interface, and the DDR3 storage driver module includes a DDR3 interface. The RGB CMOS interface, IR CMOS interface, Flash interface and DDR3 interface are all integrated into the hardware of the 3D sensing chip, giving a compact structure.
Referring to Fig. 5, the utility model further provides a UAS, which includes the aforementioned unmanned aerial vehicle 10 and a remote server 20. The remote server 20 receives the RGBD images sent by the UAV 10 for RGBD processing; the remote server 20 includes a cloud server 21 and/or a ground terminal server 22, the ground terminal server 22 being a host computer. Specifically, an interface such as USB can be provided on the UAV 10 for communication with the ground terminal server 22, and a wireless communication unit can be provided on the UAV 10 for communication with the cloud server 21. When the RGBD image sequence and/or 2D/3D video data volume is huge, the cloud server 21 and/or the ground terminal server 22 receives the RGBD images and/or 2D/3D video sent by the UAV 10 for further processing. When the UAV 10 includes multiple RGBD cameras, the RGBD images captured by each camera can be transmitted to the remote server 20, and the remote server 20 can output 3D video in real time from the RGBD images captured by the multiple cameras.
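The description above states only that each RGBD pixel carries R, G, B pixel information plus depth; the concrete wire format is not specified. As a minimal sketch, assuming three 8-bit color channels and a 16-bit depth value in millimetres, a frame could be serialized for the wireless link and recovered on the remote server like this (the layout and function names are illustrative, not from the patent):

```python
import struct

# Assumed pixel layout: R, G, B as unsigned bytes plus a 16-bit depth
# in millimetres. The patent only says each pixel has R, G, B and depth.
PIXEL_FMT = "<BBBH"  # little-endian, no padding
PIXEL_SIZE = struct.calcsize(PIXEL_FMT)

def pack_rgbd_frame(pixels):
    """Serialize a list of (r, g, b, depth_mm) tuples for transmission."""
    return b"".join(struct.pack(PIXEL_FMT, r, g, b, d) for r, g, b, d in pixels)

def unpack_rgbd_frame(payload):
    """Recover the (r, g, b, depth_mm) tuples on the server side."""
    return [struct.unpack_from(PIXEL_FMT, payload, i)
            for i in range(0, len(payload), PIXEL_SIZE)]

frame = [(255, 0, 0, 1500), (0, 255, 0, 2310)]
payload = pack_rgbd_frame(frame)
assert unpack_rgbd_frame(payload) == frame
```

A real link would add framing, compression and error handling; the point is simply that color and depth travel together per pixel, which is what lets the server reconstruct 3D video from multiple camera streams.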
Fig. 6 is a structural diagram of the UAS of the second embodiment of the utility model. As shown in Fig. 6, the UAS includes at least one UAV 10, a remote server 20 and a gesture output end 30. The remote server 20 receives the RGBD images sent by the UAV 10 for RGBD processing; the remote server 20 includes a cloud server 21 and/or a ground terminal server 22. The UAV 10 acquires the gesture output by the gesture output end 30 through its RGBD camera, generates control instructions from the acquired gesture, and controls the flight attitude and/or shooting mode of the UAV 10 to track and shoot the target 40. The gesture output end 30 is a human body; when the target 40 is also a human body, the two may be the same person, in which case the UAV includes at least two RGBD cameras, one for acquiring the gesture and one for shooting the target. The gesture output end 30 and the target 40 may also differ, in which case a single RGBD camera can shoot both, with the gesture output end 30 and the target 40 in the same field of view. When the UAS includes multiple UAVs, the gesture output end 30 can control several UAVs at once: a gesture may first activate one or more UAVs, after which the activated UAVs are controlled by gestures. Of course, a single gesture may also activate all UAVs, and one gesture output by the gesture output end then controls all activated UAVs synchronously.
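The activation-then-control flow described above can be sketched as a simple dispatcher: recognized gestures map to control instructions, and an instruction is delivered synchronously to every activated UAV. The gesture names and instruction strings below are hypothetical, since the patent does not enumerate concrete gestures:

```python
# Hypothetical gesture-to-instruction table; the patent only says gestures
# generate control instructions for flight attitude and/or shooting mode.
GESTURE_TO_INSTRUCTION = {
    "palm_up": "ascend",
    "palm_down": "descend",
    "wave": "start_tracking_shot",
    "fist": "hover",
}

class Drone:
    def __init__(self, name):
        self.name = name
        self.active = False           # set by an activation gesture
        self.last_instruction = None

    def execute(self, instruction):
        self.last_instruction = instruction

def dispatch_gesture(gesture, drones):
    """Send the instruction for `gesture` to every activated drone.

    Models only the synchronized control of already-activated drones;
    an activation gesture would set drone.active beforehand."""
    instruction = GESTURE_TO_INSTRUCTION.get(gesture)
    if instruction is None:
        return None  # unrecognized gesture: no drone is commanded
    for drone in drones:
        if drone.active:
            drone.execute(instruction)
    return instruction
```

With two drones of which only one is activated, `dispatch_gesture("palm_up", drones)` commands only the activated drone, matching the selective-activation behavior described in the embodiment.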
In summary, the utility model acquires RGBD images of the target in real time through the RGBD camera during the flight of the UAV, where each pixel in the RGBD image includes R, G, B pixel information and depth information; the remote server establishes a data connection with the wireless communication unit and receives the RGBD images transmitted by the wireless communication unit for RGBD image processing. In this way, efficient data transmission is achieved.
The foregoing is only an embodiment of the utility model and does not thereby limit its scope of claims. Any equivalent structure or equivalent process transformation made using the contents of this description and the accompanying drawings, whether used directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the utility model.

Claims (6)

1. A UAS, characterized in that the UAS comprises an unmanned aerial vehicle and a remote server; the UAV comprises an RGBD camera, a flight controller, a wireless communication unit, a gimbal and a processor; the processor is connected to the RGBD camera, the wireless communication unit and the flight controller respectively; the RGBD camera acquires RGBD images of a target in real time during the flight of the UAV, each pixel in the RGBD images including R, G, B pixel information and depth information; the remote server establishes a data connection with the wireless communication unit and receives the RGBD images sent by the wireless communication unit for RGBD image processing; the gimbal is provided with a rotating rod, and the RGBD camera is mounted on the rotating rod.
UAS the most according to claim 1, it is characterised in that described UAS also includes that gesture exports End, described RGBD camera obtains the gesture of described gesture outfan output, and described processor generates according to the described gesture obtained Control instruction, flight controller controls flight attitude and/or the screening-mode of described unmanned plane according to described control instruction.
3. The UAS according to claim 1, characterized in that the processor obtains the real-time distance from the target to the RGBD camera from the depth information of the pixels in the RGBD image, and the flight controller adjusts the flight attitude of the UAV according to the real-time distance.
4. The UAS according to claim 1, characterized in that the RGBD camera is further used to capture different gestures input by the user, the processor produces corresponding control instructions from the different gestures, and the flight controller selects the shooting mode according to the control instructions.
5. The UAS according to claim 1, characterized in that the UAV further comprises a voice acquisition module connected to the processor; the voice acquisition module acquires the voice input by the user, the processor also produces control instructions from the voice input by the user, and the flight controller selects the shooting mode according to the control instructions.
6. The UAS according to claim 1, characterized in that the UAV further comprises a voice sensor connected to the processor for acquiring voice information of the target; the processor further performs identity recognition from multiple frames of the RGBD images together with the voice information, and carries out dynamic behavior analysis of the target.
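Claim 3 above derives the target's real-time distance from per-pixel depth and adjusts the flight attitude accordingly. A minimal sketch of that idea, under illustrative assumptions (median depth over a region of interest, a fixed hold distance with a dead band; none of these specifics are in the patent):

```python
# Sketch of claim 3: distance from the depth channel, then a crude
# attitude decision. ROI, hold distance and band are assumed values.
def target_distance(depth_map, roi):
    """Median depth (mm) of the pixels inside roi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    values = sorted(depth_map[y][x]
                    for y in range(y0, y1)
                    for x in range(x0, x1)
                    if depth_map[y][x] > 0)  # skip invalid (zero) depth
    if not values:
        return None
    return values[len(values) // 2]

def attitude_command(distance_mm, hold_mm=2000, band_mm=200):
    """Approach, retreat or hold so the target stays near hold_mm."""
    if distance_mm is None:
        return "hold"  # no valid depth: don't move
    if distance_mm > hold_mm + band_mm:
        return "approach"
    if distance_mm < hold_mm - band_mm:
        return "retreat"
    return "hold"
```

The median makes the estimate robust to a few invalid or outlier depth pixels, and the dead band keeps the flight controller from oscillating around the hold distance; a real controller would feed the distance error into a proper control loop instead.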
CN201620266759.9U 2016-03-31 2016-03-31 Uas Active CN205693767U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201620266759.9U CN205693767U (en) 2016-03-31 2016-03-31 Uas

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201620266759.9U CN205693767U (en) 2016-03-31 2016-03-31 Uas

Publications (1)

Publication Number Publication Date
CN205693767U true CN205693767U (en) 2016-11-16

Family

ID=57264698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201620266759.9U Active CN205693767U (en) 2016-03-31 2016-03-31 Uas

Country Status (1)

Country Link
CN (1) CN205693767U (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106275410A (en) * 2016-11-17 2017-01-04 湖南科瑞特科技股份有限公司 A kind of wind disturbance resistant unmanned plane
CN106275410B (en) * 2016-11-17 2018-11-23 湖南科瑞特科技有限公司 A kind of wind disturbance resistant unmanned plane
CN106483978A (en) * 2016-12-09 2017-03-08 佛山科学技术学院 A kind of unmanned machine operation voice guide devices and methods therefor
CN108572659A (en) * 2017-03-10 2018-09-25 三星电子株式会社 It controls the method for unmanned plane and supports the unmanned plane of this method
CN108572659B (en) * 2017-03-10 2023-08-11 三星电子株式会社 Method for controlling unmanned aerial vehicle and unmanned aerial vehicle supporting same
CN108629261A (en) * 2017-03-24 2018-10-09 纬创资通股份有限公司 Remote identity recognition method and system and computer readable recording medium
CN108153325A (en) * 2017-11-13 2018-06-12 上海顺砾智能科技有限公司 The control method and device of Intelligent unattended machine
CN108182416A (en) * 2017-12-30 2018-06-19 广州海昇计算机科技有限公司 A kind of Human bodys' response method, system and device under monitoring unmanned scene
CN108401141A (en) * 2018-04-25 2018-08-14 北京市电话工程有限公司 A kind of cell perimeter crime prevention system

Similar Documents

Publication Publication Date Title
CN105912980B (en) Unmanned plane and UAV system
CN205693767U (en) Uas
CN105786016B (en) The processing method of unmanned plane and RGBD image
CN105847684A (en) Unmanned aerial vehicle
CN205453893U (en) Unmanned aerial vehicle
CN105892474A (en) Unmanned plane and control method of unmanned plane
US11861892B2 (en) Object tracking by an unmanned aerial vehicle using visual sensors
US11749124B2 (en) User interaction with an autonomous unmanned aerial vehicle
US11726498B2 (en) Aerial vehicle touchdown detection
US11755041B2 (en) Objective-based control of an autonomous unmanned aerial vehicle
Hu et al. Bio-inspired embedded vision system for autonomous micro-robots: The LGMD case
CN107139179A (en) A kind of intellect service robot and method of work
KR20190041504A (en) Augmented reality display device with deep learning sensors
CN104802962A (en) Water rescue system and method
CN105159452B (en) A kind of control method and system based on human face modeling
CN105717933A (en) Unmanned aerial vehicle and unmanned aerial vehicle anti-collision method
CN105825268A (en) Method and system for data processing for robot action expression learning
CN113228103A (en) Target tracking method, device, unmanned aerial vehicle, system and readable storage medium
Nahar et al. Autonomous UAV forced graffiti detection and removal system based on machine learning
CN105930766A (en) Unmanned plane
CN109709975A (en) A kind of quadrotor indoor security system and method for view-based access control model SLAM
Piponidis et al. Towards a Fully Autonomous UAV Controller for Moving Platform Detection and Landing
CN105912989B (en) Flight instruction generation system and method based on image recognition
CN108319287A (en) A kind of UAV Intelligent hides the system and method for flying object
Zhang et al. Unmanned aerial vehicle perception system following visual cognition invariance mechanism

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518057 a808, Zhongdi building, industry university research base, China University of Geosciences, 8 Yuexing Third Road, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Obi Zhongguang Technology Group Co., Ltd

Address before: 518057 a808, Zhongdi building, industry university research base, China University of Geosciences, 8 Yuexing Third Road, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN ORBBEC Co.,Ltd.