CN107071389A - Aerial photography method, device, and unmanned aerial vehicle - Google Patents
- Publication number
- CN107071389A CN107071389A CN201710031741.XA CN201710031741A CN107071389A CN 107071389 A CN107071389 A CN 107071389A CN 201710031741 A CN201710031741 A CN 201710031741A CN 107071389 A CN107071389 A CN 107071389A
- Authority
- CN
- China
- Prior art keywords
- 3D image
- image
- camera
- aerial
- captured
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
Abstract
The invention discloses an aerial photography method, an aerial photography device, and an unmanned aerial vehicle (UAV). The method comprises the following steps: capturing images with two cameras; stitching the two images captured by the two cameras into one 3D image; and sending out the 3D image. The UAV can thus supply a 3D image while filming aerially, letting the user view a lifelike 3D picture in real time and giving the user an immersive, on-the-spot experience, which greatly improves the user's aerial photography experience. Further, while the 3D image is transmitted to the user in real time for viewing, depth detection is also performed using the 3D image, so that with a single binocular camera pair (i.e., two cameras) the UAV can simultaneously provide 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions, without needing two separate binocular pairs (i.e., four cameras) for 3D shooting and depth detection respectively; multiple functions are thus realized at relatively low cost.
Description
Technical field
The present invention relates to the technical field of unmanned aerial vehicles, and more particularly to an aerial photography method, an aerial photography device, and an unmanned aerial vehicle.
Background art
With the rapid development of UAV technology, UAVs are being applied ever more widely, and aerial photography is an important application field. A current aerial photography system consists mainly of a UAV and a remote control device: the user controls the UAV through the remote control device, and the UAV transmits the images captured by an aerial camera to the remote control device in real time for the user to watch.
To give the user a more realistic experience, head-mounted virtual reality (VR) devices such as VR glasses have begun to be used as the remote control device. The UAV transmits the captured images to the VR glasses in real time; the user watches the captured images in real time through the VR glasses, and can also control the attitude and shooting angle of the UAV through them.
Because VR glasses isolate the human eye completely from the outside world, they provide an immersive experience and improve the user experience to a certain degree. However, because the images the UAV sends to the VR glasses are 2D images, the advantage of the VR glasses cannot be fully exploited, and there is no way to give the user a truly on-the-spot experience.
Summary of the invention
The main purpose of the embodiments of the present invention is to provide an aerial photography method, an aerial photography device, and a UAV, aiming to give the user an immersive, on-the-spot experience during UAV aerial photography.
To achieve these objectives, in one aspect an aerial photography method is proposed, the method comprising the following steps:
capturing images with two cameras;
stitching the two images captured by the two cameras into one 3D image;
sending out the 3D image.
Optionally, the images are photos or video streams.
Optionally, when the images are video streams, stitching the two images captured by the two cameras into one 3D image comprises:
sampling the two video streams captured by the two cameras into two video streams of a preset resolution, the preset resolution being lower than the original resolution;
stitching the two preset-resolution video streams into one 3D video stream.
Optionally, the two cameras are arranged side by side left and right, and stitching the two images captured by the two cameras into one 3D image comprises:
stitching the two images captured by the two cameras together side by side left and right to obtain a 3D image in left-right format.
Optionally, stitching the two images captured by the two cameras together side by side left and right comprises: stitching the image captured by the left camera on the left and the image captured by the right camera on the right.
Optionally, after the step of stitching the two images captured by the two cameras into one 3D image, the method further comprises: performing depth detection of the photographed scene using the 3D image to obtain depth information.
Optionally, sending out the 3D image comprises: sending the 3D image to a head-mounted virtual reality device.
In another aspect, an aerial photography device is proposed, the device comprising:
an image capture module for capturing images with two cameras;
an image processing module for stitching the two images captured by the two cameras into one 3D image;
an image sending module for sending out the 3D image.
Optionally, the images are photos or video streams.
Optionally, when the images are video streams, the image processing module is configured to: sample the two video streams captured by the two cameras into two video streams of a preset resolution, and stitch the two preset-resolution video streams into one 3D video stream, wherein the preset resolution is lower than the original resolution.
Optionally, the two cameras are arranged side by side left and right, and the image processing module is configured to: stitch the two images captured by the two cameras together side by side left and right to obtain a 3D image in left-right format.
Optionally, the image processing module is configured to: stitch the image captured by the left camera on the left and the image captured by the right camera on the right.
Optionally, the device further comprises a depth detection module configured to: perform depth detection of the photographed scene using the 3D image to obtain depth information.
Optionally, the image sending module is configured to: send the 3D image to a head-mounted virtual reality device.
The present invention also proposes an unmanned aerial vehicle, comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the aforementioned aerial photography method.
In the aerial photography method provided by the embodiments of the present invention, two images are captured by two cameras and stitched into a 3D image before being sent, so that the UAV can supply a 3D image while filming aerially; the user can view a lifelike 3D picture in real time and gain an on-the-spot sensation, achieving a truly immersive experience and greatly improving the user's aerial photography experience.
Further, while the 3D image is transmitted in real time for the user to watch, depth detection is also performed using the 3D image, so that with one binocular camera pair (i.e., two cameras) the UAV can simultaneously provide 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions, without needing two binocular pairs (i.e., four cameras) for 3D shooting and depth detection respectively, thereby realizing multiple functions at relatively low cost.
Brief description of the drawings
Fig. 1 is a flow chart of the aerial photography method of the first embodiment of the present invention;
Fig. 2 is a flow chart of the aerial photography method of the second embodiment of the present invention;
Fig. 3 is a module diagram of the aerial photography device of the third embodiment of the present invention;
Fig. 4 is a module diagram of the aerial photography device of the fourth embodiment of the present invention.
The realization, functional characteristics, and advantages of the object of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
It should be appreciated that directional indications in the embodiments of the present invention (such as up, down, left, right, front, back, ...) are only used to explain the relative positional relationships, motion states, and so on between the components under a particular attitude (as shown in the drawings); if that particular attitude changes, the directional indications change correspondingly.
In addition, descriptions involving "first", "second", etc. in the present invention are for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; thus a feature defined as "first" or "second" may expressly or implicitly include at least one such feature. Moreover, the technical solutions of the various embodiments can be combined with one another, but only on the basis that they can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, it shall be understood that such a combination does not exist and is not within the scope of protection claimed by the present application.
The aerial photography method and aerial photography device of the embodiments of the present invention are mainly applied to UAVs, though they can of course also be applied to other aircraft; the present invention does not limit this. The embodiments are described in detail below taking application to a UAV as an example.
Embodiment one
Referring to Fig. 1, the aerial photography method of the first embodiment of the present invention is proposed, the method comprising the following steps:
S11: capture images with two cameras.
In the embodiments of the present invention, the UAV is provided with two cameras that form one binocular camera pair. The two cameras are preferably arranged side by side left and right, though they can also be staggered, i.e. not on the same horizontal line. The two cameras are spaced a certain distance apart; in theory, the larger the baseline distance, the better.
In step S11, the UAV captures images through the two cameras simultaneously (in synchronization); the captured images can be photos or video streams.
S12: stitch the two images captured by the two cameras into one 3D image.
Specifically, the two images captured by the two cameras are stitched together side by side left and right; preferably, the image captured by the left camera is stitched on the left and the image captured by the right camera on the right, finally yielding a 3D (stereoscopic) image in left-right format. Alternatively, the two images captured by the two cameras can be stitched together one above the other, yielding a 3D image in top-bottom format. The 3D image is a 3D photo or a 3D video stream.
Further, before stitching, the UAV first reduces the resolution of the original images and then stitches the reduced-resolution images, so as to reduce the size of the final 3D image and avoid consuming too much bandwidth during subsequent transmission, thereby increasing transmission speed and improving the real-time performance of image transmission.
Taking captured video streams as an example, the UAV first samples the two video streams captured by the two cameras into two video streams of a preset resolution, then stitches the two preset-resolution video streams into one 3D video stream, where the preset resolution is lower than the original resolution.
For example, each of the UAV's two cameras shoots a 4K video stream; the UAV samples the two 4K video streams down to two 720P streams. The sampling can be performed with a general downsampling algorithm, for example merging 4 pixels into one. With the two sampled 720P streams kept frame-synchronized, the picture shot by the left camera is placed on the left and the picture shot by the right camera on the right, and the streams are stitched into a left-right-format 3D video stream with a resolution of 2560x720.
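The 4K-to-720P example above can be sketched as block-mean downsampling followed by the same left-right stitch. This is an illustrative sketch only: the patent names no specific downsampling algorithm beyond "merging 4 pixels into one" (factor 2), and the factor-3 block mean below is one simple choice that maps 3840x2160 onto 1280x720.

```python
import numpy as np

def downsample_block_mean(img: np.ndarray, factor: int) -> np.ndarray:
    """Reduce resolution by averaging factor x factor pixel blocks.

    Merging 4 pixels into one corresponds to factor=2; going from
    4K (3840x2160) to 720P (1280x720) corresponds to factor=3.
    """
    h, w, c = img.shape
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]
    return img.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3)).astype(img.dtype)

# Simulated 4K frames from the two cameras
left_4k = np.full((2160, 3840, 3), 100, dtype=np.uint8)
right_4k = np.full((2160, 3840, 3), 200, dtype=np.uint8)

left_720p = downsample_block_mean(left_4k, 3)    # -> 720x1280
right_720p = downsample_block_mean(right_4k, 3)  # -> 720x1280
sbs = np.hstack((left_720p, right_720p))         # -> 2560x720 left-right 3D frame
print(sbs.shape)  # (720, 2560, 3)
```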
In addition, the UAV can also store the original images shot by the two cameras in local storage. Further, before saving, the original images are compressed to save storage space, for example compressing the video streams into H.265-format video files.
S13: send out the 3D image.
In step S13, the UAV sends out the obtained 3D image, for example to a remote control device or terminal device that has established a wireless communication connection with the UAV, such as a mobile phone, a tablet computer, or a head-mounted virtual reality (VR) device (e.g. VR glasses or a VR helmet), or uploads it to a server over a wireless communication network.
Further, before sending the 3D image, the UAV also compresses it to reduce its size, improve transmission efficiency, and realize real-time transmission; for example, the 3D video stream is compressed into an H.264-format video stream and then sent out.
Further, the UAV also performs depth detection of the photographed scene using the 3D image to obtain depth information, and uses the depth information to realize functions such as target ranging, face recognition, gesture recognition, and target tracking; it can also combine the UAV's attitude information with the depth information to realize obstacle avoidance (such as forward obstacle avoidance). One binocular camera pair can thus simultaneously provide aerial photography, obstacle avoidance, tracking, ranging, and other functions.
Performing depth detection with the 3D image means using the difference (parallax) between the left-right or top-bottom images in the 3D image (e.g. a 3D video stream) to realize depth detection. Parallax is the difference in the apparent direction of the same target observed from two points a certain distance apart; the images of the same target obtained by the binocular cameras at different positions (e.g. the left camera and the right camera) therefore exhibit parallax. The closer a target is to the cameras, the larger its parallax in the binocular images, so the distance from the target to the cameras, i.e. the target's depth, can be calculated from the magnitude of the target's parallax in the two images obtained by the binocular cameras, realizing depth detection. The image is divided into several effective regions, the target distance within each region is calculated in turn, and the distances and region azimuths are fed back to the flight controller, which realizes obstacle avoidance according to the range and azimuth of the targets ahead.
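The parallax-to-depth relation described above is the standard pinhole-stereo formula: depth = focal length x baseline / disparity, so larger disparity means a nearer target. The sketch below applies it to a disparity map and reports the nearest distance per image region, as would be fed back to the flight controller; the focal length and baseline values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Illustrative stereo parameters (assumptions, not from the patent):
FOCAL_PX = 1000.0   # focal length expressed in pixels
BASELINE_M = 0.10   # spacing between the two cameras, in metres

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Pinhole-stereo relation: depth = focal * baseline / disparity.

    Larger disparity -> nearer target; zero disparity is left undefined (NaN).
    """
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return FOCAL_PX * BASELINE_M / d

def nearest_per_region(depth_m: np.ndarray, n_cols: int = 3):
    """Split the depth map into vertical regions and report the nearest
    obstacle distance in each, as fed back to the flight controller."""
    cols = np.array_split(depth_m, n_cols, axis=1)
    return [float(np.nanmin(c)) for c in cols]

disparity = np.full((90, 120), 20.0)  # uniform 20 px disparity
disparity[:, :40] = 50.0              # a nearer object on the left third
depth = depth_from_disparity(disparity)
print(nearest_per_region(depth))      # [2.0, 5.0, 5.0]
```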
Optionally, when the user controls the camera pitch through the remote control device (e.g. VR glasses) (camera pitch is generally realized by pitching the gimbal on which the cameras are mounted), and the pitch angle exceeds a preset pitch angle (which can be set according to actual needs), the UAV prompts the user that the obstacle-avoidance function has failed and/or maintains a hovering state.
In the embodiments of the present invention, after the user selects a target to be tracked and filmed, the UAV adjusts its own attitude and that of the gimbal to stay aimed at the selected target. Because target tracking based on depth information is more accurate than conventional planar-vision methods, a tracking-filming function of real practical value is realized for the UAV.
In the embodiments of the present invention, when the user triggers a photo instruction, for example by waving a hand or framing with both hands in front of the remote control device (e.g. VR glasses), the UAV shoots two full-resolution photos with the two cameras, stores them locally, and stitches the two photos into one 3D photo.
Further, the UAV can also use the two cameras' respective pictures to improve picture quality, for example performing denoising or background blurring. Specifically, after feature-point matching, a fully overlapping region can be found in the two photos shot by the binocular cameras; the picture in that region is equivalent to having been shot twice. Superimposing the repeated shots of the same picture (e.g. with the simplest weighted average) effectively reduces noise. Meanwhile, the foreground and background of the picture can be distinguished according to the binocular depth information, and the background can be blurred with a blurring algorithm (e.g. the simplest Gaussian blur filter) to form a background-bokeh effect.
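The denoising and background-blur steps can be sketched as follows, assuming the overlapping regions have already been aligned by feature-point matching. A box blur stands in for the Gaussian blur mentioned in the description, and all sizes and thresholds are illustrative, not from the patent.

```python
import numpy as np

def denoise_by_average(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of two aligned shots of the same scene,
    the 'simplest weighted average' denoising mentioned above."""
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

def blur_background(img: np.ndarray, depth: np.ndarray, fg_max_depth: float, k: int = 5) -> np.ndarray:
    """Blur pixels whose depth exceeds fg_max_depth; keep foreground sharp.
    (A k x k box blur stands in for the Gaussian blur of the description.)"""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w_ = img.shape
    blurred = sum(padded[dy:dy + h, dx:dx + w_]
                  for dy in range(k) for dx in range(k)) / (k * k)
    return np.where(depth > fg_max_depth, blurred, img)

# Two noisy shots of the same flat grey scene (grayscale for simplicity)
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
shot_a = clean + rng.normal(0, 10, clean.shape)
shot_b = clean + rng.normal(0, 10, clean.shape)
denoised = denoise_by_average(shot_a, shot_b)
# Averaging two independent shots reduces the noise std by about sqrt(2)
print(shot_a.std() > denoised.std())  # True
```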
Relative to the prior art, the aerial photography method of the embodiments of the present invention has the following technical effects:
1) A 3D image is produced by image stitching, so the UAV can shoot a 3D picture with the binocular cameras and send it in real time to terminal devices such as VR glasses; the user can watch 3D video in real time through the VR glasses, which provides a real-time aerially-filmed 3D video source and greatly improves the user experience.
2) Depth detection is performed using the 3D image, so the UAV can also use the same binocular camera pair to perform depth detection of the photographed scene and obtain depth information for functions such as forward obstacle avoidance and target tracking, without needing two binocular pairs (i.e., four cameras) for 3D shooting and depth detection respectively, thereby realizing multiple functions at relatively low cost.
3) Applications based on dual-camera shooting that are distinctive to terminal devices such as mobile phones can also be applied to the UAV, further expanding its functions.
4) Target tracking is combined with depth information, improving the tracking accuracy for photographed objects (such as people and cars), so the UAV's target tracking is more reliable and effective.
5) Target ranging using depth information can measure the distance of a target accurately, which helps with safe close-range shooting.
Embodiment two
Referring to Fig. 2, the aerial photography method of the second embodiment of the present invention is proposed, the method comprising the following steps:
S21: capture video streams with two cameras arranged side by side left and right.
In this embodiment, the UAV's binocular cameras are arranged side by side left and right, and the UAV captures video streams through the binocular cameras simultaneously.
S22: sample the two video streams captured by the two cameras into two video streams of a preset resolution.
S23: stitch the two preset-resolution video streams together side by side left and right to obtain a 3D video stream in left-right format.
Specifically, the UAV first samples the two video streams captured by the binocular cameras into two preset-resolution video streams, then stitches the two preset-resolution video streams together side by side left and right; preferably, the video stream captured by the left camera is stitched on the left and the video stream captured by the right camera on the right, finally yielding a 3D video stream in left-right format, where the preset resolution is lower than the original resolution. For example, each of the UAV's two cameras shoots a 4K video stream; the UAV first samples the two 4K streams down to two 720P streams, then stitches the two 720P streams left-right (or top-bottom) into a left-right-format (or top-bottom-format) 3D video stream with a resolution of 2560x720.
Meanwhile, the UAV also stores the 4K video streams shot by the two cameras in local storage. Further, before saving, the original video streams are compressed to save storage space, for example compressing the 4K video streams into H.265-format video files.
S24: compress the 3D video stream and send it to the VR glasses.
In this embodiment, the UAV compresses the 3D video stream into an H.264-format video stream and then sends it to the VR glasses, realizing real-time transmission.
The VR glasses play the 3D video stream immediately upon receiving it, so the user can watch the 3D video shot by the UAV in real time; the picture is more lifelike and gives the user an on-the-spot sensation, greatly improving the user experience.
The user can control the flight attitude and shooting angle of the UAV through the VR glasses. When the user triggers a photo instruction, for example by waving a hand or framing with both hands in front of the VR glasses, the UAV shoots two full-resolution photos with the two cameras, stores them locally, stitches the two photos into one 3D photo, and sends it back to the VR glasses, so the user can view the captured 3D photo in real time. Further, the UAV can also use the two cameras' respective pictures to improve picture quality, for example performing denoising or background blurring.
S25: perform depth detection of the photographed scene using the 3D video stream to obtain depth information.
In this embodiment, the UAV also simultaneously performs depth detection of the photographed scene using the 3D video stream to obtain depth information. Performing depth detection with the 3D video stream means using the difference (parallax) between the left and right video streams within it to realize depth detection.
S26: realize functions such as forward obstacle avoidance, target tracking, and target ranging according to the depth information.
In this embodiment, the depth information is used to realize functions such as obstacle avoidance (e.g. forward obstacle avoidance), face recognition, gesture recognition, and target tracking; target ranging can also be realized by combining the UAV's attitude information with the depth information. The specific implementation process is the same as in the prior art and is not described again here.
In this embodiment, the two video streams shot by the binocular cameras are stitched into one 3D video stream, which is transmitted in real time to the VR glasses for the user to watch while depth detection is simultaneously performed on it, so that with one binocular camera pair the UAV can simultaneously provide 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions, without needing two binocular pairs (i.e., four cameras) for 3D shooting and depth detection respectively, thereby realizing multiple functions at relatively low cost.
Embodiment three
Referring to Fig. 3, the aerial photography device of the third embodiment of the present invention is proposed, the device comprising an image capture module, an image processing module, and an image sending module, wherein:
Image capture module: for capturing images with two cameras.
In the embodiments of the present invention, the UAV is provided with two cameras that form one binocular camera pair. The two cameras are preferably arranged side by side left and right, though they can also be staggered, i.e. not on the same horizontal line. The two cameras are spaced a certain distance apart; in theory, the larger the baseline distance, the better.
The image capture module captures images through the two cameras simultaneously (in synchronization); the captured images can be photos or video streams.
Image processing module: for stitching the two images captured by the two cameras into one 3D image.
Specifically, the image processing module stitches the two images captured by the two cameras together side by side left and right; preferably, the image captured by the left camera is stitched on the left and the image captured by the right camera on the right, finally yielding a 3D (stereoscopic) image in left-right format. Alternatively, the image processing module can stitch the two images captured by the two cameras together one above the other, finally yielding a 3D image in top-bottom format. The 3D image is a 3D photo or a 3D video stream.
Further, before stitching, the image processing module first reduces the resolution of the original images and then stitches the reduced-resolution images, so as to reduce the size of the final 3D image and avoid consuming too much bandwidth during subsequent transmission, thereby increasing transmission speed and improving the real-time performance of image transmission.
Taking captured video streams as an example, the image processing module first samples the two video streams captured by the two cameras into two video streams of a preset resolution, then stitches the two preset-resolution video streams into one 3D video stream, where the preset resolution is lower than the original resolution.
For example, each of the UAV's two cameras shoots a 4K video stream; the image processing module samples the two 4K streams down to two 720P streams. The sampling can be performed with a general downsampling algorithm, for example merging 4 pixels into one. With the two 720P streams kept frame-synchronized, the picture shot by the left camera is placed on the left and the picture shot by the right camera on the right, and the two 720P streams are stitched left-right into a left-right-format 3D video stream with a resolution of 2560x720.
In addition, the image processing module can also store the original images shot by the two cameras in local storage. Further, before saving, the image processing module compresses the original images to save storage space, for example compressing the video streams into H.265-format video files.
Image sending module: for sending out the 3D image.
Specifically, the image sending module sends out the obtained 3D image in real time (or at intervals), for example to a remote control device or terminal device that has established a wireless communication connection with the UAV, such as a mobile phone, a tablet computer, or a head-mounted virtual reality device (e.g. VR glasses or a VR helmet), or uploads it to a server over a wireless communication network.
Further, before the 3D image is sent, the image processing module also compresses it to reduce its size, improve transmission efficiency, and realize real-time transmission; for example, the 3D video stream is compressed into an H.264-format video stream and then sent out.
In the embodiments of the present invention, when the user triggers a photo instruction, for example by waving a hand or framing with both hands in front of the remote control device (e.g. VR glasses), the image capture module shoots two full-resolution photos with the two cameras and stores them locally, and the image processing module then stitches the two photos into one 3D photo.
Further, the image processing module can also use the two cameras' respective pictures to improve picture quality, for example performing denoising or background blurring. Specifically, after feature-point matching, a fully overlapping region can be found in the two photos shot by the binocular cameras; the picture in that region is equivalent to having been shot twice, and by superimposing the repeated shots of the same picture (e.g. with the simplest weighted average), the image processing module can effectively reduce noise. Meanwhile, the foreground and background of the picture can be distinguished according to the binocular depth information, and the image processing module can blur the background with a blurring algorithm (e.g. the simplest Gaussian blur filter) to form a background-bokeh effect.
The aerial photography device of this embodiment of the invention captures two images with two cameras, stitches the two captured images into a 3D image, and sends it out, so that the UAV can provide 3D images while filming aerially and the user can view lifelike 3D images in real time, giving the user an immersive experience and greatly improving the user's aerial photography experience.
Embodiment 4
Referring to FIG. 4, an aerial photography device according to a fourth embodiment of the invention is proposed. On the basis of the third embodiment, this embodiment adds a depth detection module, which is used to: perform depth detection of the photographed scene using the 3D image to obtain depth information. Functions such as object ranging, face recognition, gesture recognition, and target tracking can then be realized using the depth information, and obstacle avoidance (e.g. forward obstacle avoidance) can be realized by combining the UAV's attitude information with the depth information, so that one pair of binocular cameras suffices to simultaneously realize aerial photography, obstacle avoidance, tracking, ranging, and other functions.
Here, performing depth detection using the 3D image means realizing depth detection from the difference (parallax) between the left-right or top-bottom pair of images in the 3D image (e.g. a 3D video stream). Parallax is the difference in apparent direction of the same target observed from two points separated by a certain distance; therefore the images of the same target obtained by binocular cameras at different positions (e.g. a left camera and a right camera) exhibit disparity. The closer a target is to the cameras, the larger its disparity in the binocular images, so the depth detection module can calculate the distance from the target to the cameras, i.e. the target depth, from the magnitude of the target's disparity in the two images obtained by the binocular cameras, thereby realizing depth detection.
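The disparity-to-depth reasoning above follows the standard rectified-stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity. A minimal sketch with assumed focal length and baseline (the patent specifies neither):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance from the cameras to the target, in the baseline's units.
    Larger disparity means a nearer target, matching the text above."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

f = 700.0   # focal length in pixels (assumed)
B = 0.12    # 12 cm baseline between the two cameras (assumed)

near = depth_from_disparity(f, B, 42.0)  # large disparity -> near target
far = depth_from_disparity(f, B, 7.0)    # small disparity -> far target
print(near, far)  # 2.0 12.0 (meters)
```

In practice the per-pixel disparity d would come from a stereo matcher run on the rectified left and right halves of the 3D image.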
Optionally, when the user controls camera pitch through the remote control device (e.g. VR glasses) (camera pitch is generally realized by pitching the gimbal on which the cameras are mounted), and the pitch angle exceeds a preset pitch angle, the depth detection module prompts the user that the obstacle avoidance function has failed and/or keeps the UAV hovering.
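This optional pitch guard can be sketched as a simple threshold check on the commanded gimbal pitch; the 30-degree threshold and the callback names below are hypothetical illustrations, not values from the disclosure.

```python
# Hypothetical control-side sketch: when the commanded camera pitch exceeds
# a preset angle, warn that obstacle avoidance is unavailable and hold a hover.

MAX_AVOIDANCE_PITCH_DEG = 30.0  # assumed preset pitch angle

def on_pitch_command(pitch_deg, warn, hover):
    """Check a commanded camera pitch against the preset limit."""
    if abs(pitch_deg) > MAX_AVOIDANCE_PITCH_DEG:
        warn("obstacle avoidance disabled at this camera pitch")
        hover()          # keep the UAV hovering while avoidance is blind
        return False     # forward avoidance no longer trusted
    return True

events = []
on_pitch_command(45.0, events.append, lambda: events.append("hover"))
print(events)
```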
In this embodiment of the invention, after the user selects a target to be tracked and filmed, the UAV adjusts the attitude of itself and its gimbal to aim at the selected target. Because target tracking based on depth information is more accurate than conventional planar-vision methods, a tracking and filming function of practical value can be realized for the UAV.
In this embodiment, after the two images captured by the binocular cameras are stitched into a 3D image, the 3D image is transmitted in real time for the user to view while depth detection is performed on it, so that the UAV can simultaneously realize 3D aerial photography, obstacle avoidance, tracking, ranging, and other functions with one pair of binocular cameras (i.e. two cameras), without needing two separate pairs of binocular cameras (i.e. four cameras) to realize 3D shooting and depth detection respectively, thereby realizing multiple functions at relatively low cost.
The present invention further proposes a UAV, the UAV including: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform an aerial photography method. The aerial photography method includes the following steps: capturing images with two cameras; stitching the two images captured by the two cameras into one 3D image; and sending out the 3D image. The aerial photography method of this embodiment is the aerial photography method involved in the above embodiments of the invention and is not repeated here.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the invention.
The above illustrates preferred embodiments of the invention without thereby limiting the scope of its rights. Those skilled in the art can implement the invention through various alternative solutions without departing from its scope and essence; for example, a feature of one embodiment can be used in another embodiment to obtain yet another embodiment. Any modifications, equivalent substitutions, and improvements made within the technical concept of the invention shall fall within the scope of the invention's rights.
Claims (15)
- 1. An aerial photography method, characterized by comprising the following steps: capturing images with two cameras; stitching the two images captured by the two cameras into one 3D image; and sending out the 3D image.
- 2. The aerial photography method according to claim 1, characterized in that the images are photos or video streams.
- 3. The aerial photography method according to claim 2, characterized in that, when the images are video streams, stitching the two images captured by the two cameras into one 3D image comprises: downsampling the two video streams captured by the two cameras into two video streams of a preset resolution, the preset resolution being lower than the original resolution; and stitching the two video streams of the preset resolution into one 3D video stream.
- 4. The aerial photography method according to claim 1, characterized in that the two cameras are arranged side by side left and right, and stitching the two images captured by the two cameras into one 3D image comprises: stitching the two images captured by the two cameras together side by side left and right to obtain one 3D image in left-right format.
- 5. The aerial photography method according to claim 4, characterized in that stitching the two images captured by the two cameras together side by side left and right comprises: stitching the image captured by the left camera on the left and the image captured by the right camera on the right.
- 6. The aerial photography method according to any one of claims 1-5, characterized by further comprising, after the step of stitching the two images captured by the two cameras into one 3D image: performing depth detection of the photographed scene using the 3D image to obtain depth information.
- 7. The aerial photography method according to any one of claims 1-5, characterized in that sending out the 3D image comprises: sending the 3D image to a head-mounted virtual reality device.
- 8. An aerial photography device, characterized by comprising: an image capture module for capturing images with two cameras; an image processing module for stitching the two images captured by the two cameras into one 3D image; and an image sending module for sending out the 3D image.
- 9. The aerial photography device according to claim 8, characterized in that the images are photos or video streams.
- 10. The aerial photography device according to claim 9, characterized in that, when the images are video streams, the image processing module is used to: downsample the two video streams captured by the two cameras into two video streams of a preset resolution, and stitch the two video streams of the preset resolution into one 3D video stream, wherein the preset resolution is lower than the original resolution.
- 11. The aerial photography device according to claim 8, characterized in that the two cameras are arranged side by side left and right, and the image processing module is used to: stitch the two images captured by the two cameras together side by side left and right to obtain one 3D image in left-right format.
- 12. The aerial photography device according to claim 11, characterized in that the image processing module is used to: stitch the image captured by the left camera on the left and the image captured by the right camera on the right.
- 13. The aerial photography device according to any one of claims 8-12, characterized in that the device further comprises a depth detection module, the depth detection module being used to: perform depth detection of the photographed scene using the 3D image to obtain depth information.
- 14. The aerial photography device according to any one of claims 8-12, characterized in that the image sending module is used to: send the 3D image to a head-mounted virtual reality device.
- 15. A UAV, comprising: one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710031741.XA CN107071389A (en) | 2017-01-17 | 2017-01-17 | Take photo by plane method, device and unmanned plane |
PCT/CN2017/115877 WO2018133589A1 (en) | 2017-01-17 | 2017-12-13 | Aerial photography method, device, and unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710031741.XA CN107071389A (en) | 2017-01-17 | 2017-01-17 | Take photo by plane method, device and unmanned plane |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107071389A true CN107071389A (en) | 2017-08-18 |
Family
ID=59597930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710031741.XA Pending CN107071389A (en) | 2017-01-17 | 2017-01-17 | Take photo by plane method, device and unmanned plane |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107071389A (en) |
WO (1) | WO2018133589A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107741781A (en) * | 2017-09-01 | 2018-02-27 | 中国科学院深圳先进技术研究院 | Flight control method, device, unmanned plane and the storage medium of unmanned plane |
CN108304000A (en) * | 2017-10-25 | 2018-07-20 | 河北工业大学 | The real-time VR systems of holder |
WO2018133589A1 (en) * | 2017-01-17 | 2018-07-26 | 亿航智能设备(广州)有限公司 | Aerial photography method, device, and unmanned aerial vehicle |
CN108521558A (en) * | 2018-04-10 | 2018-09-11 | 深圳慧源创新科技有限公司 | Unmanned plane figure transmission method, system, unmanned plane and unmanned plane client |
CN108616679A (en) * | 2018-04-09 | 2018-10-02 | 沈阳上博智像科技有限公司 | The method of binocular camera and control binocular camera |
CN108985193A (en) * | 2018-06-28 | 2018-12-11 | 电子科技大学 | A kind of unmanned plane portrait alignment methods based on image detection |
CN110460677A (en) * | 2019-08-23 | 2019-11-15 | 临工集团济南重机有限公司 | A kind of excavator and excavator tele-control system |
EP3664442A4 (en) * | 2017-09-12 | 2020-06-24 | SZ DJI Technology Co., Ltd. | Method and device for image transmission, movable platform, monitoring device, and system |
CN111780682A (en) * | 2019-12-12 | 2020-10-16 | 天目爱视(北京)科技有限公司 | 3D image acquisition control method based on servo system |
CN112714281A (en) * | 2020-12-19 | 2021-04-27 | 西南交通大学 | Unmanned aerial vehicle carries VR video acquisition transmission device based on 5G network |
CN113784051A (en) * | 2021-09-23 | 2021-12-10 | 深圳市道通智能航空技术股份有限公司 | Method, device, equipment and medium for controlling shooting of aircraft based on portrait mode |
WO2022088072A1 (en) * | 2020-10-30 | 2022-05-05 | 深圳市大疆创新科技有限公司 | Visual tracking method and apparatus, movable platform, and computer-readable storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109729337A (en) * | 2018-11-15 | 2019-05-07 | 华南师范大学 | A kind of vision synthesizer and its control method applied to dual camera |
CN112506228B (en) * | 2020-12-28 | 2023-11-07 | 广东电网有限责任公司中山供电局 | Optimal emergency risk avoiding path selection method for unmanned aerial vehicle of transformer substation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016086379A1 (en) * | 2014-12-04 | 2016-06-09 | SZ DJI Technology Co., Ltd. | Imaging system and method |
CN105791810A (en) * | 2016-04-27 | 2016-07-20 | 深圳市高巨创新科技开发有限公司 | Virtual stereo display method and device |
CN105828062A (en) * | 2016-03-23 | 2016-08-03 | 常州视线电子科技有限公司 | Unmanned aerial vehicle 3D virtual reality shooting system |
CN105872523A (en) * | 2015-10-30 | 2016-08-17 | 乐视体育文化产业发展(北京)有限公司 | Three-dimensional video data obtaining method, device and system |
CN205847443U (en) * | 2016-07-13 | 2016-12-28 | 杭州翼飞电子科技有限公司 | The more than enough people of a kind of energy shares the device that unmanned plane 3D schemes to pass in real time |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8860780B1 (en) * | 2004-09-27 | 2014-10-14 | Grandeye, Ltd. | Automatic pivoting in a wide-angle video camera |
CN106965945B (en) * | 2016-07-14 | 2019-09-03 | 科盾科技股份有限公司 | A kind of method and aircraft for avoiding collision obstacle synchronous based on data |
CN106162145B (en) * | 2016-07-26 | 2018-06-08 | 北京奇虎科技有限公司 | Stereoscopic image generation method, device based on unmanned plane |
CN107071389A (en) * | 2017-01-17 | 2017-08-18 | 亿航智能设备(广州)有限公司 | Take photo by plane method, device and unmanned plane |
2017
- 2017-01-17: CN application CN201710031741.XA → publication CN107071389A (status: Pending)
- 2017-12-13: WO application PCT/CN2017/115877 → publication WO2018133589A1 (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
WO2018133589A1 (en) | 2018-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107071389A (en) | Take photo by plane method, device and unmanned plane | |
US10628675B2 (en) | Skeleton detection and tracking via client-server communication | |
CN108139799B (en) | System and method for processing image data based on a region of interest (ROI) of a user | |
CN109076249B (en) | System and method for video processing and display | |
US11711504B2 (en) | Enabling motion parallax with multilayer 360-degree video | |
US20180227482A1 (en) | Scene-aware selection of filters and effects for visual digital media content | |
CN103108126B (en) | A kind of video interactive system, method, interaction glasses and terminal | |
CN105187723B (en) | A kind of image pickup processing method of unmanned vehicle | |
US11398008B2 (en) | Systems and methods for modifying image distortion (curvature) for viewing distance in post capture | |
US20190132569A1 (en) | Image processing for 360-degree camera | |
CN106385576A (en) | Three-dimensional virtual reality live method and device, and electronic device | |
WO2013069049A1 (en) | Image generation device, and image generation method | |
US10863210B2 (en) | Client-server communication for live filtering in a camera view | |
CN107343177A (en) | A kind of filming control method of unmanned plane panoramic video | |
US11769231B2 (en) | Methods and apparatus for applying motion blur to overcaptured content | |
WO2017166714A1 (en) | Method, device, and system for capturing panoramic image | |
CN109479090A (en) | Information processing method, unmanned plane, remote control equipment and non-volatile memory medium | |
WO2018214077A1 (en) | Photographing method and apparatus, and image processing method and apparatus | |
CN106060523B (en) | Panoramic stereo image acquisition, display methods and corresponding device | |
CN103109538A (en) | Image processing device, image capture device, image processing method, and program | |
CN108496201A (en) | Image processing method and equipment | |
US20190208124A1 (en) | Methods and apparatus for overcapture storytelling | |
CN106327583A (en) | Virtual reality equipment for realizing panoramic image photographing and realization method thereof | |
CN108616733B (en) | Panoramic video image splicing method and panoramic camera | |
CN108564654B (en) | Picture entering mode of three-dimensional large scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170818 |