WO2017113183A1 - Drone flight experience method, device and system, and drone - Google Patents

Drone flight experience method, device and system, and drone

Info

Publication number
WO2017113183A1
WO2017113183A1 (application PCT/CN2015/099852; CN2015099852W)
Authority
WO
WIPO (PCT)
Prior art keywords
video file
view stereo
stereo video
flight experience
view
Application number
PCT/CN2015/099852
Other languages
English (en)
French (fr)
Inventor
赵丛
武燕楠
杨康
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201580065834.3A (published as CN107005687B)
Priority to PCT/CN2015/099852
Publication of WO2017113183A1

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 7/00 Television systems
                    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
                • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
                    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
                        • H04N 13/106 Processing image signals
                        • H04N 13/194 Transmission of image signals
                    • H04N 13/20 Image signal generators
                        • H04N 13/204 Image signal generators using stereoscopic image cameras
                • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N 19/70 Methods or arrangements characterised by syntax aspects related to video coding, e.g. related to compression standards
                    • H04N 19/85 Methods or arrangements using pre-processing or post-processing specially adapted for video compression
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/50 Constructional details
                        • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
                    • H04N 23/60 Control of cameras or camera modules
                        • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
                    • H04N 23/80 Camera processing pipelines; Components thereof
    • B PERFORMING OPERATIONS; TRANSPORTING
        • B64 AIRCRAFT; AVIATION; COSMONAUTICS
            • B64D EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
                • B64D 47/00 Equipment not otherwise provided for
                    • B64D 47/08 Arrangements of cameras
    • G PHYSICS
        • G02 OPTICS
            • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
                • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
                    • G02B 27/01 Head-up displays
                        • G02B 27/017 Head mounted
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
                        • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
                            • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
                                • G06F 3/0346 Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
                • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
                    • G06F 2203/01 Indexing scheme relating to G06F 3/01
                        • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the invention relates to the field of drones, and in particular to a drone flight experience method, device, and system, and to a drone.
  • the first-person view (FPV) flight mode is one of the most active directions in the aerial photography field: it gives the user the sensation of flying. It has a wide range of applications, from games that mix the virtual and the real to helping people with limited mobility experience the outdoors. However, related products currently on the market do not provide a good user experience. For example, existing binocular stereo cameras can capture binocular stereo video and store it on the device, but they do not deliver a good real-time flight experience.
  • a drone flight experience method includes the following steps:
  • the decoded multi-view stereo video file is displayed.
  • the method further includes the step of performing video smoothing processing on the multi-view stereo video file.
  • step of performing a video smoothing process on the multi-view stereo video file includes:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the method further includes the step of: calculating a distance between the photographing device and the obstacle based on the multi-view stereo video file to obtain visual depth information.
  • the UAV flight experience method further includes: displaying the visual depth information.
  • the multi-view stereo video file is transmitted by using a high-definition transmission technology.
  • the multi-view stereo video file is compression-encoded and decoded using a multi-view video coding standard.
  • the photographing device includes a pan/tilt (gimbal) and an image acquisition device, and the image acquisition device is mounted on the drone through the pan/tilt; the drone flight experience method displays the decoded multi-view stereo video file through a wearable display device; the drone flight experience method further includes:
  • a drone flight experience system includes a drone and a drone flight experience device disposed at the receiving end, the drone including:
  • a first image processor coupled to the photographing device, configured to acquire the multi-view stereo video file captured by the photographing device, compression-encode the multi-view stereo video file, and generate a continuous video stream; and
  • a first image transmission device coupled to the first image processor, for transmitting the encoded multi-view stereo video file to a receiving end;
  • the drone flight experience device includes:
  • a second image transmission device configured to receive the compression-encoded multi-view stereo video file transmitted by the first image transmission device
  • a second image processor coupled to the second image transmission device, configured to decode the received multi-view stereo video file to obtain a decoded multi-view stereo video file
  • a display device configured to display the decoded multi-view stereo video file.
  • one of the first image processor and the second image processor is further configured to perform video smoothing processing on the multi-view stereo video file.
  • one of the first image processor and the second image processor performs video smoothing processing on the multi-view stereo video file, specifically:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the first image processor is further configured to perform video smoothing processing on the multi-view stereo video file before compressing and encoding the multi-view stereo video file;
  • the second image processor is further configured to perform video smoothing processing on the multi-view stereo video file.
  • one of the first image processor and the second image processor is further configured to calculate a distance between the photographing device and an obstacle based on the multi-view stereo video file to obtain a visual depth information.
  • the first image processor is further configured to calculate the visual depth information based on the captured multi-view stereo video file before compression-encoding the multi-view stereo video file, and to load the visual depth information into the multi-view stereo video file for compression encoding; or
  • after the second image processor decodes the received multi-view stereo video file, the second image processor is further configured to calculate the visual depth information based on the decoded multi-view stereo video file and to load the visual depth information into the decoded multi-view stereo video file.
  • the display device is further configured to display the visual depth information.
  • the first image transmission device and the second image transmission device both transmit the multi-view stereo video file by using a high-definition transmission technology.
  • the first image transmission device and the second image transmission device perform data transmission through a wireless network;
  • the wireless network includes at least one of the following: high-definition image transmission, Bluetooth, WiFi, 2G network, 3G network, 4G network, 5G network.
  • the display device is connected to the second image processor, and the second image processor is further configured to transmit the decoded multi-view stereo video file to the display device for display;
  • the second image transmission device communicates with the display device through a wireless network, and the second image transmission device is further configured to transmit the decoded multi-view stereo video file to the display device for display by using a wireless network.
  • the wireless network includes at least one of the following: Bluetooth, infrared, WiFi, Z-Wave, ZigBee.
  • first image processor and the second image processor both compress encode or decode the video file by using a multi-view video coding standard.
  • the photographing device is a multi-view stereoscopic camera or a camera.
  • the photographing device includes a pan/tilt head and an image acquiring device, and the image capturing device is mounted on the drone through the pan/tilt.
  • the display device is a wearable display device.
  • the display device is a pair of immersive glasses.
  • the UAV flight experience device further includes:
  • a first posture acquiring unit disposed on the wearable display device, configured to detect posture information of the wearable display device
  • a wireless transmission device configured to send posture information of the wearable display device to the drone
  • the photographing device includes a pan/tilt (gimbal) and an image acquisition device, and the image acquisition device is mounted on the drone through the pan/tilt;
  • the drone further includes:
  • a second posture acquiring unit configured to detect posture information of the photographing device
  • a controller configured to receive the posture information of the wearable display device, and to control the pan/tilt rotation according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  • a drone flight experience method includes the following steps:
  • the encoded multi-view stereo video file is transmitted to the receiving end.
  • the method further includes: performing video smoothing processing on the multi-view stereo video file.
  • step of performing a video smoothing process on the multi-view stereo video file includes:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • before the compression encoding step is performed on the multi-view stereo video file, the method further includes: calculating a distance between the photographing device and the obstacle based on the captured multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the multi-view stereo video file for compression encoding.
  • the multi-view stereo video file is transmitted by using a high-definition transmission technology.
  • the multi-view stereo video file is compression-encoded using a multi-view video coding standard.
  • the photographing device includes a pan/tilt and an image acquisition device, and the image acquisition device is mounted on the drone through the pan/tilt; the drone flight experience method further includes:
  • a drone that includes:
  • An image processor coupled to the photographing device, configured to acquire the multi-view stereo video file captured by the photographing device, compress-encode the multi-view stereo video file, and generate a continuous video stream;
  • the image transmission device is connected to the image processor and configured to transmit the encoded multi-view stereo video file to the receiving end.
  • the image processor is further configured to perform video smoothing processing on the multi-view stereo video file.
  • when the image processor performs video smoothing processing on the multi-view stereo video file, the image processor is specifically configured to:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the image processor is further configured to calculate a distance between the photographing device and an obstacle based on the captured multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the multi-view stereo video file for compression encoding together.
  • the image transmission device transmits the multi-view stereo video file by using a high-definition transmission technology.
  • the image transmission device and another image transmission device on the receiving end perform data transmission through a wireless network;
  • the wireless network includes at least one of the following: high-definition image transmission, Bluetooth, WiFi, 2G network, 3G network, 4G network, 5G network.
  • the image processor compresses and encodes the multi-view stereo video file by using a multi-view video coding standard.
  • the photographing device is a multi-view stereoscopic camera or a camera.
  • the photographing device includes a pan/tilt (gimbal) and an image acquisition device, and the image acquisition device is mounted on the drone through the pan/tilt.
  • the drone further includes:
  • a posture acquiring unit configured to detect posture information of the photographing device
  • a controller configured to receive the posture information of the wearable display device from the receiving end, and to control the pan/tilt rotation according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  • a drone flight experience method includes the following steps:
  • the decoded multi-view stereo video file is displayed.
  • the method further includes: performing video smoothing processing on the decoded multi-view stereo video file.
  • the multi-view stereo video file is captured by a photographing device disposed on the drone;
  • the step of performing a video smoothing process on the decoded multi-view stereo video file includes:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the multi-view stereo video file is captured by a photographing device disposed on the drone;
  • the method further includes: calculating a distance between the photographing device and the obstacle based on the decoded multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the decoded multi-view stereo video file.
  • the UAV flight experience method further includes the step of displaying the visual depth information.
  • the multi-view stereo video file is transmitted by using a high-definition transmission technology.
  • the multi-view stereo video file is decoded using a multi-view video coding standard.
  • the UAV flight experience method displays the decoded multi-view stereo video file through a wearable display device; the UAV flight experience method further includes:
  • a drone flight experience device includes:
  • An image transmission device configured to receive a compression-encoded multi-view stereo video file transmitted by the drone;
  • An image processor coupled to the image transmission device, for decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file;
  • a display device configured to display the decoded multi-view stereo video file.
  • the drone flight experience device is a pair of wearable glasses or a remote controller.
  • the image processor is further configured to perform video smoothing processing on the decoded multi-view stereo video file.
  • the multi-view stereo video file is photographed by a photographing device disposed on the drone;
  • when performing video smoothing processing on the decoded multi-view stereo video file, the image processor is specifically configured to:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the multi-view stereo video file is photographed by a photographing device disposed on the drone;
  • the image processor is further configured to calculate a distance between the photographing device and an obstacle based on the decoded multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the decoded multi-view stereo video file.
  • the display device is further configured to display the visual depth information.
  • the image transmission device transmits the multi-view stereo video file by using a high-definition transmission technology.
  • the image transmission device and another image transmission device on the drone perform data transmission through a wireless network;
  • the wireless network includes at least one of the following: high-definition image transmission, Bluetooth, WiFi, 2G network, 3G network, 4G network, 5G network.
  • the display device is connected to the image processor, and the image processor is further configured to transmit the decoded multi-view stereo video file to the display device for display;
  • the image transmission device communicates with the display device through a wireless network, and the image transmission device is further configured to transmit the decoded multi-view stereo video file to the display device for display over the wireless network, where the wireless network includes at least one of the following: Bluetooth, infrared, WiFi, Z-Wave, ZigBee.
  • the image processor decodes the multi-view stereo video file by using a multi-view video coding standard.
  • the display device is a wearable display device.
  • the UAV flight experience device further includes:
  • An attitude acquiring unit disposed on the wearable display device, configured to detect posture information of the wearable display device
  • a wireless transmission device configured to send the posture information of the wearable display device to the drone to adjust a shooting angle of the photographing device on the drone according to the posture information.
  • the UAV flight experience method of the embodiments of the present invention compression-encodes the multi-view stereo video file captured in real time and then transmits it back to the receiving end, which greatly reduces the transmission bit rate; the video file is also smoothed, so that the change in viewing angle the user perceives in real time is relatively stable and a good FPV flight experience is obtained.
  • FIG. 1 is a schematic flow chart of a drone flight experience method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an indication line of a motion trajectory according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a video display interface according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of another UAV flight experience method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flow chart of still another drone flight experience method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a drone flight experience system according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural view of a drone according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a drone flight experience device according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a drone flight experience method 100 according to an embodiment of the present invention.
  • the method 100 is applicable to a drone and a drone flight experience device provided at the receiving end, wherein the drone is provided with a photographing device, and the photographing device is used for photographing a multi-view stereo video file.
  • the method 100 of the embodiment of the present invention is not limited to the steps and the sequence in the flowchart shown in FIG. 1. According to various embodiments, the steps in the flowchart shown in FIG. 1 may be added, removed, or changed in order. In the present embodiment, the method 100 can begin at step 101.
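  • Before walking through the steps, the overall flow can be summarized in a minimal sketch; the Python stub functions below are hypothetical stand-ins for steps 101 to 107 and are not part of the patent.

```python
# Hypothetical end-to-end sketch of method 100; each stub stands in for one step.

def capture_stereo():            # step 101: acquire the multi-view stereo video
    return []                    # placeholder: a list of (left, right) frame pairs

def smooth(frames):              # step 102: video smoothing processing
    return frames

def add_depth(frames):           # step 103: visual depth information calculation
    return frames

def encode(frames):              # step 104: multi-view compression encoding
    return b""                   # placeholder: a continuous video stream

def transmit(bitstream):         # step 105: transmission to the receiving end
    return bitstream

def decode(bitstream):           # step 106: decoding at the receiving end
    return []

def display(frames):             # step 107: display the decoded video and depth
    pass

display(decode(transmit(encode(add_depth(smooth(capture_stereo()))))))
```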
  • Step 101 Acquire a multi-view stereo video file captured by a camera set on the drone.
  • Step 102 Perform video smoothing processing on the multi-view stereo video file.
  • the step 102 may specifically include:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the posture information of the imaging device associated with the multi-view stereo video file means the posture information detected synchronously while the imaging device is shooting.
  • the posture information includes at least smooth posture information, which indicates that the photographing device, or the moving object on which it is mounted, moves at a constant speed or stays stationary during shooting, and uneven posture information, which is produced when the photographing device or the moving object generates an angular velocity or an acceleration in some direction during shooting.
  • the posture information may be represented as an indication line 201 describing the motion trajectory: smooth posture information appears as straight line segments, and uneven posture information appears as curved segments.
  • the step of filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory may include:
  • editing the portions of the camera's motion trajectory with high-frequency jitter (that is, the densely curved parts of the curved segments), for example by taking intermediate points or deleting some curved segments, and then combining the remaining points or line segments to obtain an indication line 202 of a smoothly changing virtual trajectory.
  • mapping the video frames of the multi-view stereo video file may include clipping the multi-view stereo video file, specifically: keeping the video clips of better quality, deleting the clips with poor image quality, and then synthesizing a new video file.
  • mapping the video frames of the multi-view stereo video file may instead copy the video frames of the relevant time periods and combine the copied frames into a new video file, so that the original video file is preserved; a sketch of the smoothing and mapping steps follows.
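  • As a concrete illustration of the smoothing and mapping described above, the following sketch low-pass filters a recorded yaw trajectory into a virtual trajectory and keeps, for each virtual sample, the frame whose recorded yaw lies closest to it. The filter window, and the use of the yaw axis alone, are simplifying assumptions, not details from the patent.

```python
import numpy as np

def fit_virtual_trajectory(yaw_deg, window=15):
    """Moving-average low-pass filter: averages out the high-frequency jitter
    (the densely curved segments) and returns a smoothly changing trajectory."""
    kernel = np.ones(window) / window
    return np.convolve(yaw_deg, kernel, mode="same")

def map_frames(frames, yaw_deg, virtual_yaw_deg):
    """For each sample of the virtual trajectory, keep the frame whose recorded
    yaw is closest, so the retained frames follow the smoothed viewpoint."""
    yaw = np.asarray(yaw_deg)
    return [frames[int(np.argmin(np.abs(yaw - t)))] for t in virtual_yaw_deg]

# usage sketch with synthetic data:
yaw = np.cumsum(np.random.randn(300))     # jittery recorded trajectory
frames = list(range(300))                 # frame indices stand in for frames
smoothed_video = map_frames(frames, yaw, fit_virtual_trajectory(yaw))
```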
  • the method 100 of this embodiment adopts a video smoothing technique: by analyzing the posture data of the photographing device, it fits a smoothly changing virtual camera perspective, so that the change in viewing angle the user perceives is relatively stable. This reduces the viewing discomfort caused when the user varies the speed of the gimbal, or when instability of the drone or pan/tilt itself makes the viewing angle change too fast or blurs the image.
  • Step 103 Calculate a distance between the photographing device and an obstacle based on the captured multi-view stereoscopic video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file.
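  • A minimal sketch of this depth step, assuming OpenCV's semi-global block matcher and hypothetical calibration values (focal length in pixels and stereo baseline in metres); the patent does not prescribe a particular matcher.

```python
import cv2
import numpy as np

def min_obstacle_distance(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Disparity from the binocular pair, then depth via Z = f * B / d;
    returns the distance of the nearest valid point as the obstacle distance."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 1.0                    # discard invalid / far pixels
    if not np.any(valid):
        return float("inf")
    return float((focal_px * baseline_m / disparity[valid]).min())
```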
  • the order of execution of step 102 and step 103 can be interchanged.
  • Step 104 Perform compression coding on the multi-view stereo video file, and generate a continuous video stream.
  • step 104 compression-encodes the multi-view stereo video file using the Multi-view Video Coding (MVC) standard, which takes the correlation between the multiple views into account.
  • compression-encoding the multi-view stereo video file in this way, that is, multi-view joint coding, effectively reduces the bit rate: the multi-view video costs only slightly more bits than monocular video, so information redundancy is reduced.
  • step 104 can also compression-encode the multi-view stereo video file using other existing techniques to reduce the bit rate.
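  • The following toy sketch is not the MVC standard itself; it only illustrates why joint coding pays off: when the views are highly correlated, the right view can be predicted from the left, so only one reference view plus a low-entropy residual needs to be coded.

```python
import numpy as np

def interview_residual(left, right):
    """Predict the right view from the left; the residual is what remains to code."""
    return right.astype(np.int16) - left.astype(np.int16)

def reconstruct_right(left, residual):
    """Decoder side: add the residual back onto the reference view."""
    return np.clip(left.astype(np.int16) + residual, 0, 255).astype(np.uint8)
```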
  • Step 105 The encoded multi-view stereo video file is transmitted to the receiving end.
  • the method 100 transmits the multi-view stereo video file using high-definition transmission technology, so that high-definition stereo video can be generated and transmitted back to the receiving end over the high-definition image link.
  • Step 106 Receive the encoded multi-view stereo video file at the receiving end, and decode the received multi-view stereo video file to obtain the decoded multi-view stereo video file.
  • the step 106 decodes the multi-view stereo video file by using a multi-view video coding standard.
  • in this embodiment, the video smoothing process and the visual depth information calculation are performed on the drone and completed before the multi-view stereo video file is compression-encoded, with the visual depth information loaded into the multi-view stereo video file before encoding.
  • alternatively, one or both of the video smoothing process and the visual depth information calculation may be completed at the receiving end, after the receiving end decodes the multi-view stereo video file.
  • in that case, step 102 is performed after step 106; that is, after step 106 the method further includes: performing video smoothing processing on the multi-view stereo video file.
  • similarly, step 103 may be performed after step 106; that is, after step 106 the method further includes: calculating the distance between the photographing device and the obstacle based on the decoded multi-view stereo video file to obtain the visual depth information, and loading the visual depth information into the decoded multi-view stereo video file.
  • Step 107 Display the decoded multi-view stereo video file and the visual depth information.
  • the method 100 may display the decoded multi-view stereo video file and the visual depth information through a wearable display device, such as immersive glasses.
  • the photographing device includes a pan/tilt and an image acquiring device, and the image capturing device is mounted on the drone through the pan/tilt.
  • the image acquisition device is a binocular stereo camera, whose output can be used as the input of the visual depth calculation; the method 100 can calculate the depth information so that the distance between the drone and the obstacle ahead is fed back to the wearable display device, such as immersive glasses; the image seen by the user is shown in FIG. 3.
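  • A small sketch of how the fed-back distance could be burned into the displayed frame, in the spirit of the interface of FIG. 3; the layout, font, and colour are assumptions.

```python
import cv2

def overlay_distance(frame, distance_m):
    """Draw the obstacle distance onto the frame shown in the immersive glasses."""
    cv2.putText(frame, "obstacle: %.1f m" % distance_m, (20, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2, cv2.LINE_AA)
    return frame
```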
  • the method 100 further includes:
  • the user can also control the shooting angle of the photographing device through body movement, such as head movement.
  • the wearable display device internally integrates an IMU (Inertial Measurement Unit), a GPS, and a compass, wherein the IMU internally includes a three-axis gyroscope and a three-axis accelerometer.
  • the three-axis gyroscope obtains attitude information by integration, the three-axis accelerometer corrects the drift of the integrated gyroscope attitude, and the compass and GPS information are fused in as well, finally yielding accurate posture information.
  • the wearable display device can also obtain the posture information of the wearable display device only through the IMU, thereby eliminating the GPS and the compass.
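  • A one-axis complementary filter is a common minimal realization of this gyroscope-plus-accelerometer correction; the gain and the single pitch axis are illustrative assumptions, and the compass/GPS fusion is omitted.

```python
import math

def update_pitch(pitch_deg, gyro_rate_dps, accel_x, accel_z, dt, alpha=0.98):
    """Trust the gyroscope integral at high frequency; let the accelerometer's
    gravity-based tilt estimate pull the drift back at low frequency."""
    gyro_pitch = pitch_deg + gyro_rate_dps * dt               # integrate angular rate
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))  # gravity reference
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```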
  • the wearable display device also has a wireless transmission module for transmitting its own posture information to the pan/tilt on the drone.
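  • The patent does not specify a wire format for this posture report; a hypothetical minimal one is three little-endian float32 Euler angles sent over UDP, as sketched below (the address and port are placeholders).

```python
import socket
import struct

def send_posture(sock, drone_addr, yaw, pitch, roll):
    """Pack (yaw, pitch, roll) in degrees as little-endian float32 and send."""
    sock.sendto(struct.pack("<3f", yaw, pitch, roll), drone_addr)

# usage sketch:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_posture(sock, ("192.168.1.10", 9000), 15.0, -5.0, 0.0)
```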
  • the pan/tilt can likewise integrate an IMU, a GPS, and a compass internally to obtain its own posture.
  • after the wearable display device sends its posture information to the pan/tilt, the gimbal takes the posture of the wearable display device as its own target posture and then smoothly moves to that target using its own control algorithm, thereby realizing somatosensory control of the pan/tilt. It can be understood that the pan/tilt can also obtain its attitude information through the IMU alone, thereby eliminating the GPS and the compass.
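  • One simple way to realize "smoothly moves to the target posture" is a proportional step that closes a fraction of the remaining angular error each control cycle; the gain is an assumption, not the gimbal's actual control algorithm.

```python
def gimbal_step(current_yaw, target_yaw, gain=0.15):
    """Move a fraction of the way toward the target along the shortest arc,
    so the gimbal glides to the headset's posture instead of jumping."""
    error = (target_yaw - current_yaw + 180.0) % 360.0 - 180.0
    return current_yaw + gain * error

# usage sketch: repeated calls converge smoothly on the target
yaw = 0.0
for _ in range(30):
    yaw = gimbal_step(yaw, 90.0)
```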
  • the UAV flight experience method 100 of the embodiment of the present invention compression-encodes the multi-view stereo video file captured in real time and then transmits it back to the receiving end, which greatly reduces the transmission bit rate; the video file is also smoothed, so that the change in viewing angle the user perceives in real time is relatively stable and a good FPV flight experience is obtained.
  • FIG. 4 is a schematic flowchart diagram of another UAV flight experience method 400 according to an embodiment of the present invention.
  • the method 400 can be applied to a drone, and the drone is provided with a photographing device for photographing a multi-view stereoscopic video file.
  • the method of the embodiment of the present invention is not limited to the steps and the sequence in the flowchart shown in FIG. 4. According to various embodiments, the steps in the flowchart shown in FIG. 4 may be added, removed, or reordered.
  • the method 400 can begin at step 401.
  • Step 401 Acquire a multi-view stereo video file captured by a photographing device disposed on the drone.
  • Step 402 Perform video smoothing processing on the multi-view stereo video file.
  • the step 402 may specifically include:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the posture information of the imaging device associated with the multi-view stereo video file means the posture information detected synchronously while the imaging device is shooting.
  • the posture information includes at least smooth posture information, which indicates that the photographing device, or the moving object on which it is mounted, moves at a constant speed or stays stationary during shooting, and uneven posture information, which is produced when the photographing device or the moving object generates an angular velocity or an acceleration in some direction during shooting.
  • the posture information may be represented as an indication line 201 describing the motion trajectory: smooth posture information appears as straight line segments, and uneven posture information appears as curved segments.
  • the step of filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory may include:
  • editing the portions of the camera's motion trajectory with high-frequency jitter (that is, the densely curved parts of the curved segments), for example by taking intermediate points or deleting some curved segments, and then combining the remaining points or line segments to obtain an indication line 202 of a smoothly changing virtual trajectory.
  • mapping the video frames of the multi-view stereo video file may include clipping the multi-view stereo video file, specifically: keeping the video clips of better quality, deleting the clips with poor image quality, and then synthesizing a new video file.
  • mapping the video frames of the multi-view stereo video file may instead copy the video frames of the relevant time periods and combine the copied frames into a new video file, so that the original video file is preserved.
  • the method 400 of this embodiment adopts a video smoothing technique: by analyzing the posture data of the photographing device, it fits a smoothly changing virtual camera perspective, so that the change in viewing angle the user perceives is relatively stable. This reduces the viewing discomfort caused when the user varies the speed of the gimbal, or when instability of the drone or pan/tilt itself makes the viewing angle change too fast or blurs the image.
  • Step 403 Calculate a distance between the photographing device and the obstacle based on the captured multi-view stereo video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file.
  • the order of execution of step 402 and step 403 can be interchanged.
  • Step 404 Perform compression coding on the multi-view stereo video file, and generate a continuous video stream.
  • step 404 compression-encodes the multi-view stereo video file using the multi-view video coding standard, taking the correlation between the multiple views into account; that is, multi-view joint coding is performed, which effectively reduces the bit rate, so that the multi-view video costs only slightly more bits than monocular video, thereby reducing information redundancy.
  • step 404 can also perform compression coding on the multi-view stereo video file by using other prior art techniques to reduce the code rate.
  • Step 405 The encoded multi-view stereo video file is transmitted to the receiving end.
  • the method 400 transmits the multi-view stereo video file using high-definition transmission technology, so that high-definition stereo video can be generated and transmitted back to the receiving end over the high-definition image link.
  • in this embodiment, the video smoothing process and the visual depth information calculation are performed on the drone and completed before the multi-view stereo video file is compression-encoded, with the visual depth information loaded into the multi-view stereo video file before encoding, so that the receiving end displays the visual depth information while displaying the multi-view stereo video file.
  • alternatively, step 402 and/or step 403 may be omitted and performed at the receiving end instead; that is, one or both of the video smoothing process and the visual depth information calculation may be completed by the receiving end after it decodes the multi-view stereo video file.
  • the photographing device includes a pan/tilt and an image acquiring device, and the image capturing device is mounted on the drone through the pan/tilt.
  • the image acquisition device is a binocular stereo camera, whose output can be used as the input of the visual depth calculation; the method 400 can calculate the depth information so that the distance between the drone and the obstacle ahead is fed back to the display device on the receiving end, such as immersive glasses.
  • method 400 further includes:
  • the wearable display device internally integrates an IMU (Inertial Measurement Unit), a GPS, and a compass, wherein the IMU internally includes a three-axis gyroscope and a three-axis accelerometer.
  • the three-axis gyroscope obtains attitude information by integration, the three-axis accelerometer corrects the drift of the integrated gyroscope attitude, and the compass and GPS information are fused in as well, finally yielding accurate posture information.
  • the wearable display device can also obtain the posture information of the wearable display device only through the IMU, thereby eliminating the GPS and the compass.
  • the wearable display device also has a wireless transmission module for transmitting its own posture information to the pan/tilt on the drone.
  • the pan/tilt can likewise integrate an IMU, a GPS, and a compass internally to obtain its own posture.
  • after the wearable display device sends its posture information to the pan/tilt, the gimbal takes the posture of the wearable display device as its own target posture and then smoothly moves to that target using its own control algorithm, thereby realizing somatosensory control of the pan/tilt. It can be understood that the pan/tilt can also obtain its attitude information through the IMU alone, thereby eliminating the GPS and the compass.
  • the UAV flight experience method 400 of the embodiment of the present invention compression-encodes the multi-view stereo video file captured in real time and then transmits it back to the receiving end, which greatly reduces the transmission bit rate; the video file is also smoothed, so that the change in viewing angle the user perceives in real time is relatively stable and a good FPV flight experience is obtained.
  • FIG. 5 is a schematic flowchart of still another UAV flight experience method 500 according to an embodiment of the present invention.
  • the method 500 is applicable to a drone flight experience device that can communicate with a drone.
  • the UAV flight experience device can be used to display a multi-view stereo video file.
  • the method 500 of the embodiment of the present invention is not limited to the steps and the sequence in the flowchart shown in FIG. 5. According to various embodiments, the steps in the flowchart shown in FIG. 5 may be added, removed, or reordered.
  • the method 500 can begin at step 501.
  • Step 501 Receive a compression-encoded multi-view stereo video file transmitted by the drone.
  • the method 500 receives the multi-view stereo video file transmitted using high-definition transmission technology, so that high-definition stereo video can be presented at the receiving end.
  • Step 502 Decode the received multi-view stereo video file to obtain a decoded multi-view stereo video file.
  • the method 500 decodes the multi-view stereo video file using the multi-view video coding standard; the file was compression-encoded by taking the correlation between the multiple views into account, that is, by multi-view joint coding, which effectively reduces the bit rate so that the multi-view video costs only slightly more bits than monocular video, thereby reducing information redundancy.
  • step 502 can also decode the multi-view stereo video file using other existing techniques; a receiver-side sketch follows.
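  • An illustrative receiver loop, assuming OpenCV's FFmpeg backend and a standard codec in place of the patent's multi-view bitstream (which would need an MVC-capable decoder); the stream URL is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("udp://0.0.0.0:5000")   # placeholder stream endpoint
while cap.isOpened():
    ok, frame = cap.read()                     # step 502: decode one frame
    if not ok:
        break
    cv2.imshow("decoded view", frame)          # hand off toward display (step 505)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```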
  • Step 503 Perform video smoothing processing on the decoded multi-view stereo video file.
  • the multi-view stereoscopic video file is captured by an imaging device provided on the drone.
  • the step 503 specifically includes:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • the posture information of the imaging device associated with the multi-view stereo video file means the posture information detected synchronously while the imaging device is shooting.
  • the posture information includes at least smooth posture information, which indicates that the photographing device, or the moving object on which it is mounted, moves at a constant speed or stays stationary during shooting, and uneven posture information, which is produced when the photographing device or the moving object generates an angular velocity or an acceleration in some direction during shooting.
  • the posture information may be represented as an indication line 201 describing the motion trajectory: smooth posture information appears as straight line segments, and uneven posture information appears as curved segments.
  • the step of filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory may include:
  • editing the portions of the camera's motion trajectory with high-frequency jitter (that is, the densely curved parts of the curved segments), for example by taking intermediate points or deleting some curved segments, and then combining the remaining points or line segments to obtain an indication line 202 of a smoothly changing virtual trajectory.
  • mapping the video frames of the multi-view stereo video file may include clipping the multi-view stereo video file, specifically: keeping the video clips of better quality, deleting the clips with poor image quality, and then synthesizing a new video file.
  • mapping the video frames of the multi-view stereo video file may instead copy the video frames of the relevant time periods and combine the copied frames into a new video file, so that the original video file is preserved.
  • the method 500 of this embodiment adopts a video smoothing technique: by analyzing the posture data of the photographing device, it fits a smoothly changing virtual camera perspective, so that the change in viewing angle the user perceives is relatively stable. This reduces the viewing discomfort caused when the user varies the speed of the gimbal, or when instability of the drone or pan/tilt itself makes the viewing angle change too fast or blurs the image.
  • Step 504 Calculate a distance between the photographing device and the obstacle based on the decoded multi-view stereo video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file.
  • in this embodiment, the video smoothing process and the visual depth information calculation are performed at the receiving end and completed after the received multi-view stereo video file is decoded, with the visual depth information loaded into the decoded multi-view stereo video file.
  • alternatively, step 503 and/or step 504 may be omitted and performed on the drone instead; that is, one or both of the video smoothing process and the visual depth information calculation may be completed by the drone before it compression-encodes the multi-view stereo video file.
  • Step 505 Display the decoded multi-view stereo video file and the visual depth information.
  • the method 500 may display the decoded multi-view stereo video file and the visual depth information through a wearable display device, such as immersive glasses.
  • the photographing device includes a pan/tilt and an image acquiring device, and the image capturing device is mounted on the drone through the pan/tilt.
  • the image acquisition device is a binocular stereo camera, whose output can be used as the input of the visual depth calculation; the method 500 can calculate the depth information so that the distance between the drone and the obstacle ahead is fed back to the wearable display device, such as immersive glasses.
  • method 500 further includes:
  • the user can also control the shooting angle of the photographing device through body movement, such as head movement.
  • the wearable display device internally integrates an IMU (Inertial Measurement Unit), a GPS, and a compass, wherein the IMU internally includes a three-axis gyroscope and a three-axis accelerometer.
  • the three-axis gyroscope obtains attitude information by integration, the three-axis accelerometer corrects the drift of the integrated gyroscope attitude, and the compass and GPS information are fused in as well, finally yielding accurate posture information.
  • the wearable display device can also obtain the posture information of the wearable display device only through the IMU, thereby eliminating the GPS and the compass.
  • the wearable display device also has a wireless transmission module for transmitting its own posture information to the pan/tilt on the drone.
  • the pan/tilt can likewise integrate an IMU, a GPS, and a compass internally to obtain its own posture.
  • after the wearable display device sends its posture information to the pan/tilt, the gimbal takes the posture of the wearable display device as its own target posture and then smoothly moves to that target using its own control algorithm, thereby realizing somatosensory control of the pan/tilt. It can be understood that the pan/tilt can also obtain its attitude information through the IMU alone, thereby eliminating the GPS and the compass.
  • the UAV flight experience method 500 of the embodiment of the present invention compression-encodes the multi-view stereo video file captured in real time and then transmits it back to the receiving end, which greatly reduces the transmission bit rate; the video file is also smoothed, so that the change in viewing angle the user perceives in real time is relatively stable and a good FPV flight experience is obtained.
  • FIG. 6 is a schematic structural diagram of a drone flight experience system 50 according to an embodiment of the present invention.
  • the UAV flight experience system 50 includes a drone 51 and a drone flight experience device 52 provided at the receiving end.
  • in this embodiment, the drone flight experience device 52 is a pair of wearable glasses or a remote controller.
  • the drone 51 includes, but is not limited to, a photographing device 511, a first image processor 512, and a first image transmitting device 513.
  • the photographing device 511 is configured to photograph a multi-view stereoscopic video file.
  • the camera 511 can be a multi-view stereo camera or a camera.
  • the imaging device 511 is mounted at the front of the drone 51 facing forward; it may be mounted directly on the drone 51 or mounted on the drone 51 through a pan/tilt, so that the imaging device 511 can capture a relatively stable multi-view video file.
  • the imaging device 511 includes a pan/tilt (not shown) and an image acquisition device (not shown), and the image acquisition device is mounted on the drone 51 via the pan/tilt.
  • the image acquisition device is a binocular stereo vision camera.
  • the first image processor 512 is connected to the photographing device 511 and is configured to acquire the multi-view stereo video file captured by the photographing device 511, compression-encode the multi-view stereo video file, and generate a continuous video stream.
  • the first image transmission device 513 is connected to the first image processor 512 for transmitting the encoded multi-view stereo video file to the receiving end.
  • the UAV flight experience device 52 includes, but is not limited to, a second image transmission device 521, a second image processor 522, and a display device 523.
  • the second image transmission device 521 is connected to the second image processor 522 and is configured to receive the compression-encoded multi-view stereo video file transmitted by the first image transmission device 513 and transfer the received video file to the second image processor 522.
  • the first image transmission device 513 and the second image transmission device 521 both transmit the multi-view stereo video file using high-definition transmission technology, so that high-definition stereo video can be generated on the drone 51 and transmitted back to the receiving end over the high-definition image link.
  • the first image transmission device 513 and the second image transmission device 521 perform data transmission through a wireless network, including, but not limited to, high-definition image transmission, Bluetooth, WiFi, 2G network, 3G network, 4G network, and 5G network.
  • the second image processor 522 is configured to decode the received multi-view stereo video file to obtain a decoded multi-view stereo video file.
  • the first image processor 512 and the second image processor 522 are both video codec processors that compression-encode or decode the video files using a multi-view video coding standard; the correlation between the multiple views is exploited to compression-encode the multi-view stereo video file, that is, multi-view joint encoding, which effectively reduces the bit rate so that the multi-view video costs only slightly more bits than monocular video, thereby reducing information redundancy.
  • first image processor 512 and the second image processor 522 can also perform compression encoding or decoding on the multi-view stereo video file by using other prior art techniques to reduce the code rate.
  • one of the first image processor 512 and the second image processor 522 is further configured to perform video smoothing processing on the multi-view stereo video file.
  • the drone 51 further includes a first posture acquiring unit 514 for detecting the posture information of the photographing device 511. The video smoothing processing is specifically:
  • mapping the video frames of the multi-view stereo video file according to the virtual trajectory to implement smoothing processing of the video.
  • here, the posture information of the photographing device associated with the multi-view stereo video file means the posture information detected by the first posture acquiring unit 514 while the photographing device 511 is shooting.
  • the posture information includes at least smooth posture information, which indicates that the photographing device, or the moving object on which it is mounted, moves at a constant speed or stays stationary during shooting, and uneven posture information, which is produced when the photographing device or the moving object generates an angular velocity or an acceleration in some direction during shooting.
  • the posture information may be represented as an indication line 201 describing the motion trajectory: smooth posture information appears as straight line segments, and uneven posture information appears as curved segments.
  • the step of filtering the motion trajectory of the photographing device 511 and fitting a smoothly changing virtual trajectory may include: editing the portions of the motion trajectory of the photographing device 511 with high-frequency jitter (that is, the densely curved parts of the curved segments), for example by taking intermediate points or deleting some curved segments, and then combining the remaining points or line segments to obtain an indication line 202 of a smoothly changing virtual trajectory.
  • mapping the video frames of the multi-view stereo video file may include clipping the multi-view stereo video file, specifically: keeping the video clips of better quality, deleting the clips with poor image quality, and then synthesizing a new video file.
  • mapping the video frames of the multi-view stereo video file may instead copy the video frames of the relevant time periods and combine the copied frames into a new video file, so that the original video file is preserved; a sketch of the quality-based clipping variant follows.
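  • A sketch of the quality-based clipping variant, using the variance of the Laplacian as a standard sharpness proxy; the threshold is an assumption.

```python
import cv2

def keep_sharp_frames(frames, threshold=100.0):
    """Score each frame by Laplacian variance and drop blurred ones, so only
    the better-quality clips are synthesized into the new video file."""
    kept = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold:
            kept.append(frame)
    return kept
```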
  • the first image processor 512 or the second image processor 522 of this embodiment adopts a video smoothing technique: by analyzing the posture data of the photographing device 511, a smoothly changing virtual camera perspective is fitted, so that the change in viewing angle the user perceives is relatively stable, reducing the viewing discomfort caused when the user varies the speed of the gimbal or when instability of the drone or pan/tilt itself makes the viewing angle change too fast or blurs the image.
  • the first image processor 512 is further configured to perform video smoothing processing on the multi-view stereo video file before performing compression encoding on the multi-view stereo video file. That is, the video smoothing processing is performed on the drone 51 and is performed before compression encoding the multi-view stereo video file.
  • alternatively, the second image processor 522 is further configured to perform video smoothing processing on the multi-view stereo video file; that is, the video smoothing process is performed at the receiving end, after the multi-view stereo video file is decoded.
  • one of the first image processor 512 and the second image processor 522 is further configured to calculate a distance between the camera and the obstacle based on the multi-view stereo video file. To get visual depth information.
  • the first image processor 512 is specifically configured to calculate the visual depth information based on the captured multi-view stereo video file, and load the visual depth information into the multi-dimensional stereo
  • the video file is compressed and encoded together. That is, the visual depth information calculation is performed on the drone 51 and is performed before compression encoding the multi-view stereo video file.
  • the second image processor 522 is specifically configured to calculate the visual depth information based on the decoded multi-view stereo video file, and load the visual depth information into The decoded multi-view stereo video file. That is, the visual depth information calculation is performed on the receiving end and is performed after decoding the multi-view stereo video file.
  • the display device 523 is configured to display the decoded multi-view stereo video file and the visual depth information.
  • the second image transmission device 521 and the second image processor 522 may be disposed on and connected to the display device 523, in which case the second image processor 522 is further configured to transmit the decoded multi-view stereo video file to the display device 523 for display.
  • alternatively, the second image transmission device 521 and the second image processor 522 may be separate from the display device 523, with the second image transmission device 521 and the display device 523 communicating over a wireless network; the second image transmission device 521 is then further configured to transmit the decoded multi-view stereo video file to the display device 523 for display over a wireless network, including but not limited to Bluetooth, infrared, WIFI, Zwave, and ZigBee.
  • the display device 523 is a wearable display device, such as immersive glasses.
  • the photographing device 511 includes a gimbal and an image acquisition device, and the image acquisition device is mounted on the drone through the gimbal.
  • the image acquisition device is a binocular stereo camera, which serves as the input of the visual depth calculation; by computing depth information, the UAV flight experience device 52 can feed the distance from the drone 51 to the obstacle ahead back to the wearable display device, such as the immersive glasses.
  • the UAV flight experience device 52 further includes a second posture acquiring unit 524 disposed on the wearable display device 523 and configured to detect the posture information of the wearable display device 523.
  • the UAV flight experience device 52 further includes a wireless transmission device 525 for sending the posture information of the wearable display device to the drone 51.
  • the drone 51 further includes a controller 515 configured to receive the posture information of the wearable display device and to control the rotation of the gimbal according to the posture information of the photographing device 511 and of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  • the user can thus also control the shooting angle of the photographing device 511 with the body, for example by head movement.
  • the wearable display device internally integrates an IMU (Inertial Measurement Unit), a GPS, and a compass, the IMU containing a three-axis gyroscope and a three-axis accelerometer.
  • the three-axis gyroscope obtains its own attitude by integration, the three-axis accelerometer corrects the attitude integrated from the gyroscope, and the information of the compass and the GPS is fused in as well, finally yielding accurate posture information.
  • of course, the wearable display device may also obtain its posture information through the IMU alone, dispensing with the GPS and the compass.
  • the wearable display device also contains a wireless transmission module for sending its own posture information to the gimbal on the drone.
  • the gimbal may likewise integrate an IMU, a GPS, and a compass internally, and can also obtain its own posture.
  • after the wearable display device sends its posture information to the gimbal, the gimbal takes the posture of the wearable display device as its own target posture and then moves smoothly to that target posture using its own control algorithm, thereby realizing somatosensory control of the gimbal. It can be understood that the gimbal may also obtain its posture information through the IMU alone, dispensing with the GPS and the compass.
  • the UAV flight experience system 50 of this embodiment compression-encodes the multi-view stereo video file captured in real time before transmitting it back to the receiving end, which greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.
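The sketch promised above: the patent describes the smoothing pipeline only at the level of the list (filter the recorded camera trajectory, fit a smooth virtual trajectory, keep the frames where the two trajectories overlap). As a minimal illustration of that idea, assuming a one-dimensional yaw-only trajectory and a moving-average filter; the names `smooth_trajectory`, `select_frames`, and the tolerance `tol_deg` are ours, not the patent's:

```python
import numpy as np

def smooth_trajectory(yaw_deg, window=15):
    # Low-pass the recorded camera yaw track (indication line 201) with a
    # moving average to fit a smoothly changing virtual track (line 202).
    kernel = np.ones(window) / window
    return np.convolve(yaw_deg, kernel, mode="same")

def select_frames(yaw_deg, virtual_deg, tol_deg=1.5):
    # Keep only the frames whose real camera angle stays close to the
    # virtual track (the "overlapping or intersecting" periods); the rest
    # are dropped when the new, smoother video file is synthesized.
    return np.flatnonzero(np.abs(yaw_deg - virtual_deg) <= tol_deg)

# Toy usage: a steady pan corrupted by high-frequency jitter.
t = np.linspace(0.0, 10.0, 300)                        # one sample per frame
yaw = 3.0 * t + np.random.normal(scale=2.0, size=t.size)
virtual = smooth_trajectory(yaw)
kept = select_frames(yaw, virtual)
print(f"kept {kept.size} of {yaw.size} frames")
```

The copy-instead-of-delete variant in the list corresponds to gathering `kept` into a new file rather than dropping the complement in place.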

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

A UAV flight experience method, comprising: acquiring a multi-view stereo video file shot by a photographing device mounted on a UAV (101); compression-encoding the multi-view stereo video file and generating a continuous video stream (104); transmitting the encoded multi-view stereo video file to a receiving end (105); decoding the received multi-view stereo video file at the receiving end to obtain a decoded multi-view stereo video file (106); and displaying the decoded multi-view stereo video file (107). The invention also relates to a UAV flight experience apparatus and system, and to a UAV.

Description

UAV flight experience method, apparatus, and system, and UAV

Technical Field

The present invention relates to the field of unmanned aerial vehicles (UAVs), and in particular to a UAV flight experience method, apparatus, and system, and to a UAV.

Background

The first-person-view (FPV) flight mode is one of the most active directions in aerial photography, as it can give the user the experience of flying. Its applications are wide, for example games that combine the virtual and the real, or helping disabled people fulfil the wish to go out into the world. The related products currently on the market cannot provide a good user experience; for example, existing binocular stereo cameras can shoot binocular stereo video and store it on the device, but cannot deliver a good real-time flight experience.

Summary of the Invention

In view of this, it is necessary to provide a UAV flight experience method, apparatus, and system, and a UAV, to solve the above problems.

A UAV flight experience method includes the following steps:

acquiring a multi-view stereo video file shot by a photographing device mounted on a UAV;

compression-encoding the multi-view stereo video file and generating a continuous video stream;

transmitting the encoded multi-view stereo video file to a receiving end;

receiving the encoded multi-view stereo video file at the receiving end, and decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and

displaying the decoded multi-view stereo video file.

Further, before the multi-view stereo video file is displayed, the method includes the step of performing video smoothing processing on the multi-view stereo video file.

Further, the step of performing video smoothing processing on the multi-view stereo video file specifically includes:

acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from that posture information;

filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory; and

mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, the video smoothing processing is performed before the step of compression-encoding the multi-view stereo video file; or the video smoothing processing is performed after the step of decoding the received multi-view stereo video file.

Further, before the multi-view stereo video file is displayed, the method includes the step of calculating the distance between the photographing device and an obstacle based on the multi-view stereo video file, to obtain visual depth information.

Further, before the compression-encoding step, the visual depth information is calculated from the captured multi-view stereo video file and loaded into the multi-view stereo video file before encoding; or, after the decoding step, the visual depth information is calculated from the decoded multi-view stereo video file and loaded into the decoded multi-view stereo video file.

Further, the UAV flight experience method also includes: displaying the visual depth information.

Further, the multi-view stereo video file is transmitted using high-definition transmission technology.

Further, the multi-view stereo video file is compression-encoded and decoded using the multi-view video coding standard.

Further, the photographing device includes a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; the UAV flight experience method displays the decoded multi-view stereo video file through a wearable display device; and the method further includes:

acquiring the posture information of the wearable display device and sending it to the UAV; and

acquiring the posture information of the photographing device, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.

A UAV flight experience system includes a UAV and a UAV flight experience apparatus provided at a receiving end. The UAV includes:

a photographing device for shooting a multi-view stereo video file;

a first image processor, connected to the photographing device, for acquiring the multi-view stereo video file shot by the photographing device, compression-encoding the multi-view stereo video file, and generating a continuous video stream; and

a first image transmission device, connected to the first image processor, for transmitting the encoded multi-view stereo video file to the receiving end.

The UAV flight experience apparatus includes:

a second image transmission device for receiving the compression-encoded multi-view stereo video file transmitted by the first image transmission device;

a second image processor, connected to the second image transmission device, for decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and

a display device for displaying the decoded multi-view stereo video file.

Further, one of the first image processor and the second image processor is also used to perform video smoothing processing on the multi-view stereo video file.

Further, when performing video smoothing processing on the multi-view stereo video file, that image processor is specifically configured to:

acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from that posture information;

filter the motion trajectory of the photographing device and fit a smoothly changing virtual trajectory; and

map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, the first image processor is also used to perform the video smoothing processing before compression-encoding the multi-view stereo video file; or the second image processor is also used to perform the video smoothing processing after decoding the received multi-view stereo video file.

Further, one of the first image processor and the second image processor is also used to calculate the distance between the photographing device and an obstacle based on the multi-view stereo video file, to obtain visual depth information.

Further, before compression-encoding the multi-view stereo video file, the first image processor is also used to calculate the visual depth information from the captured multi-view stereo video file and to load the visual depth information into the multi-view stereo video file so that both are compression-encoded together; or, after decoding the received multi-view stereo video file, the second image processor is also used to calculate the visual depth information from the decoded multi-view stereo video file and to load it into the decoded multi-view stereo video file.

Further, the display device is also used to display the visual depth information.

Further, both the first image transmission device and the second image transmission device transmit the multi-view stereo video file using high-definition transmission technology.

Further, the first image transmission device and the second image transmission device exchange data over a wireless network, the wireless network including at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.

Further, the display device is connected to the second image processor, and the second image processor is also used to transmit the decoded multi-view stereo video file to the display device for display; or the second image transmission device communicates with the display device over a wireless network and is also used to transmit the decoded multi-view stereo video file to the display device for display over that network, the wireless network including at least one of: Bluetooth, infrared, WIFI, Zwave, and ZigBee.

Further, both the first image processor and the second image processor compression-encode or decode video files using the multi-view video coding standard.

Further, the photographing device is a multi-view stereo vision camera or camera module.

Further, the photographing device includes a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal.

Further, the display device is a wearable display device.

Further, the display device is immersive glasses.

Further, the UAV flight experience apparatus also includes:

a first posture acquiring unit, disposed on the wearable display device, for detecting the posture information of the wearable display device; and

a wireless transmission device for sending the posture information of the wearable display device to the UAV;

the photographing device includes a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; and

the UAV also includes:

a second posture acquiring unit for detecting the posture information of the photographing device; and

a controller for receiving the posture information of the wearable display device, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
A UAV flight experience method includes the following steps:

acquiring a multi-view stereo video file shot by a photographing device mounted on a UAV;

compression-encoding the multi-view stereo video file and generating a continuous video stream; and

transmitting the encoded multi-view stereo video file to a receiving end.

Further, before the compression-encoding step, the method also includes: performing video smoothing processing on the multi-view stereo video file.

Further, the step of performing video smoothing processing on the multi-view stereo video file specifically includes: acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from that posture information; filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory; and mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, before the compression-encoding step, the method also includes: calculating the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the multi-view stereo video file so that both are compression-encoded together.

Further, the multi-view stereo video file is transmitted using high-definition transmission technology.

Further, the multi-view stereo video file is compression-encoded using the multi-view video coding standard.

Further, the photographing device includes a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; and the method also includes:

acquiring the posture information of the photographing device; and

receiving the posture information of a wearable display device from the receiving end, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.

A UAV includes:

a photographing device for shooting a multi-view stereo video file;

an image processor, connected to the photographing device, for acquiring the multi-view stereo video file shot by the photographing device, compression-encoding it, and generating a continuous video stream; and

an image transmission device, connected to the image processor, for transmitting the encoded multi-view stereo video file to a receiving end.

Further, the image processor is also used to perform video smoothing processing on the multi-view stereo video file.

Further, when performing video smoothing processing on the multi-view stereo video file, the image processor is specifically configured to: acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from that posture information; filter the motion trajectory of the photographing device and fit a smoothly changing virtual trajectory; and map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, the image processor is also used to calculate the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the multi-view stereo video file so that both are compression-encoded together.

Further, the image transmission device transmits the multi-view stereo video file using high-definition transmission technology.

Further, the image transmission device and another image transmission device at the receiving end exchange data over a wireless network, the wireless network including at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.

Further, the image processor compression-encodes the multi-view stereo video file using the multi-view video coding standard.

Further, the photographing device is a multi-view stereo vision camera or camera module.

Further, the photographing device includes a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal.

Further, the UAV also includes:

a posture acquiring unit for detecting the posture information of the photographing device; and

a controller for receiving the posture information of a wearable display device from the receiving end, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.

A UAV flight experience method includes the following steps:

receiving a compression-encoded multi-view stereo video file transmitted by a UAV;

decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and

displaying the decoded multi-view stereo video file.

Further, before the step of displaying the decoded multi-view stereo video file, the method also includes: performing video smoothing processing on the decoded multi-view stereo video file.

Further, the multi-view stereo video file is shot by a photographing device mounted on the UAV; the step of performing video smoothing processing on the decoded multi-view stereo video file specifically includes: acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from that posture information; filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory; and mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, the multi-view stereo video file is shot by a photographing device mounted on the UAV; before the step of displaying the decoded multi-view stereo video file, the method also includes: calculating the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the decoded multi-view stereo video file.

Further, the UAV flight experience method also includes the step of: displaying the visual depth information.

Further, the multi-view stereo video file is transmitted using high-definition transmission technology.

Further, the multi-view stereo video file is decoded using the multi-view video coding standard.

Further, the UAV flight experience method displays the decoded multi-view stereo video file through a wearable display device; and the method also includes: acquiring the posture information of the wearable display device and sending it to the UAV, so that the shooting angle of the photographing device on the UAV is adjusted according to that posture information.

A UAV flight experience apparatus includes:

an image transmission device for receiving a compression-encoded multi-view stereo video file transmitted by a UAV;

an image processor, connected to the image transmission device, for decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and

a display device for displaying the decoded multi-view stereo video file.

Further, the UAV flight experience apparatus is wearable glasses or a remote controller.

Further, the image processor is also used to perform video smoothing processing on the decoded multi-view stereo video file.

Further, the multi-view stereo video file is shot by a photographing device mounted on the UAV; when performing video smoothing processing on the decoded multi-view stereo video file, the image processor is specifically configured to: acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from that posture information; filter the motion trajectory of the photographing device and fit a smoothly changing virtual trajectory; and map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

Further, the multi-view stereo video file is shot by a photographing device mounted on the UAV; the image processor is also used to calculate the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the decoded multi-view stereo video file.

Further, the display device is also used to display the visual depth information.

Further, the image transmission device transmits the multi-view stereo video file using high-definition transmission technology.

Further, the image transmission device and another image transmission device on the UAV exchange data over a wireless network, the wireless network including at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.

Further, the display device is connected to the image processor, and the image processor is also used to transmit the decoded multi-view stereo video file to the display device for display; or the image transmission device communicates with the display device over a wireless network and is also used to transmit the decoded multi-view stereo video file to the display device for display over that network, the wireless network including at least one of: Bluetooth, infrared, WIFI, Zwave, and ZigBee.

Further, the image processor decodes the multi-view stereo video file using the multi-view video coding standard.

Further, the display device is a wearable display device.

Further, the UAV flight experience apparatus also includes:

a posture acquiring unit, disposed on the wearable display device, for detecting the posture information of the wearable display device; and

a wireless transmission device for sending the posture information of the wearable display device to the UAV, so that the shooting angle of the photographing device on the UAV is adjusted according to that posture information.
By compression-encoding the multi-view stereo video file captured in real time before transmitting it back to the receiving end, the UAV flight experience method of the embodiments of the present invention greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.

Brief Description of the Drawings

Fig. 1 is a flowchart of a UAV flight experience method according to an embodiment of the present invention.

Fig. 2 is a schematic diagram of an indication line of a motion trajectory according to an embodiment of the present invention.

Fig. 3 is a schematic diagram of a video display interface according to an embodiment of the present invention.

Fig. 4 is a flowchart of another UAV flight experience method according to an embodiment of the present invention.

Fig. 5 is a flowchart of yet another UAV flight experience method according to an embodiment of the present invention.

Fig. 6 is a schematic structural diagram of a UAV flight experience system according to an embodiment of the present invention.

Fig. 7 is a schematic structural diagram of a UAV according to an embodiment of the present invention.

Fig. 8 is a schematic structural diagram of a UAV flight experience apparatus according to an embodiment of the present invention.

Description of the main reference numerals:

indication lines 201, 202
UAV flight experience system 50
UAV 51
photographing device 511
first image processor 512
first image transmission device 513
first posture acquiring unit 514
controller 515
UAV flight experience apparatus 52
second image transmission device 521
second image processor 522
display device 523
second posture acquiring unit 524
wireless transmission device 525
methods 100, 400, 500
steps 101-107, 401-405, 501-505

The following specific embodiments further describe the present invention with reference to the above drawings.

Detailed Description

The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, a flowchart of a UAV flight experience method 100 according to an embodiment of the present invention: in this embodiment, the method 100 can be applied to a UAV and to a UAV flight experience apparatus at a receiving end, the UAV carrying a photographing device for shooting multi-view stereo video files. It should be noted that the method 100 is not limited to the steps and order of the flowchart of Fig. 1; depending on the embodiment, steps may be added, removed, or reordered. In this embodiment, the method 100 may start from step 101.

Step 101: acquire the multi-view stereo video file shot by the photographing device mounted on the UAV.

Step 102: perform video smoothing processing on the multi-view stereo video file.

In this embodiment, step 102 may specifically include:

acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from that posture information;

filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory; and

mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.

In this embodiment, the posture information of the photographing device associated with the multi-view stereo video file means posture information detected synchronously while the photographing device is shooting.

The posture information includes at least smooth posture information, indicating that the photographing device, or the moving object carrying it, moves at a constant speed or stays still during shooting, and unsmooth posture information, indicating that the photographing device, or the moving object carrying it, has undergone an angular velocity, or an acceleration in some direction, during shooting.

As shown in Fig. 2, in one representation the posture information may be an indication line 201 describing the motion trajectory: the smooth posture information appears as straight segments, and the unsmooth posture information as curved segments.

In one embodiment, the step of filtering the motion trajectory of the photographing device and fitting a smoothly changing virtual trajectory may specifically include: editing the high-frequency-jitter portion of the motion trajectory, that is, the part of the curve segment where the curve is densest, for example by taking intermediate points or deleting some curved segments, and then combining the remaining points or segments into an indication line 202 of a smoothly changing virtual trajectory.

In one embodiment, mapping the video frames of the multi-view stereo video file may include clipping the multi-view stereo video file, specifically: determining the periods in which the motion trajectory of the photographing device overlaps or intersects the virtual trajectory, keeping the video frames of those periods in the multi-view stereo video file and deleting the other frames, that is, keeping the clips with better image quality and deleting those with poor quality, and then synthesizing a new video file.

Of course, in another embodiment, mapping the video frames may instead mean copying the video frames of those periods out of the multi-view stereo video file and combining the copies into a new video file, so that the original video file is preserved.

The method 100 of this embodiment adopts a video smoothing technique: by analyzing the posture data of the photographing device, it fits a smoothly changing virtual camera view, so that the change of viewing angle the user perceives is relatively steady, which reduces the viewing discomfort caused by the view changing too fast or the picture blurring when the user varies the gimbal speed or the drone/gimbal itself is unstable.

Step 103: calculate the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file.

It can be understood that the order of step 102 and step 103 is interchangeable.
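The patent leaves the depth computation itself unspecified. A minimal sketch of one standard way to do it, block-matching stereo with OpenCV; the calibration constants `FOCAL_PX` and `BASELINE_M` and the helper `obstacle_distance` are illustrative placeholders of ours, not values or names from the patent:

```python
import cv2
import numpy as np

# Hypothetical calibration values for the binocular camera; real values
# come from stereo calibration of the actual image acquisition device.
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # distance between the two lenses in meters

def depth_map(left_bgr, right_bgr):
    # Block-matching disparity -> per-pixel depth in meters (Z = f * B / d).
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # no match / invalid pixel
    return FOCAL_PX * BASELINE_M / disparity

def obstacle_distance(depth):
    # Closest valid depth in the central region of the view, standing in
    # for the distance-to-obstacle readout later overlaid on the glasses.
    h, w = depth.shape
    center = depth[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
    return float(np.nanmin(center))
```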
Step 104: compression-encode the multi-view stereo video file and generate a continuous video stream.

In this embodiment, step 104 compression-encodes the multi-view stereo video file using the Multi-view Video Coding standard (MVC): the correlation between the multiple image channels is taken into account so that the views are encoded jointly, which effectively lowers the bit rate; the bit rate of multi-view video thus increases only modestly over single-view video, reducing information redundancy.

It can be understood that step 104 may also use other existing techniques to compression-encode the multi-view stereo video file so as to lower the bit rate.
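The bit-rate claim rests on the strong correlation between the left and right views. The toy measurement below is ours, not the patent's, and is not an MVC implementation; it only makes the redundancy concrete: after a crude disparity compensation, the inter-view residual of a synthetic stereo pair needs far fewer bits per pixel than the raw second view, which is exactly what joint multi-view coding exploits:

```python
import numpy as np

def entropy_bits(img):
    # Shannon entropy of an 8-bit image: a rough lower bound on the
    # bits/pixel an ideal entropy coder would spend on it.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
# Fake right view: the left view shifted 4 px (disparity) plus sensor
# noise, standing in for the inter-view correlation of a real stereo pair.
noise = rng.integers(-2, 3, size=left.shape)
right = np.clip(np.roll(left, 4, axis=1).astype(np.int16) + noise, 0, 255).astype(np.uint8)

compensated = np.roll(right, -4, axis=1)  # crude disparity compensation
residual = ((compensated.astype(np.int16) - left.astype(np.int16)) % 256).astype(np.uint8)
print(f"raw right view: {entropy_bits(right):.2f} bits/pixel")
print(f"residual      : {entropy_bits(residual):.2f} bits/pixel")
```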
Step 105: transmit the encoded multi-view stereo video file to the receiving end.

In this embodiment, the method 100 transmits the multi-view stereo video file using high-definition transmission technology, so that high-definition stereo video can be produced and sent back to the receiving end over an HD image transmission link.

Step 106: at the receiving end, receive the encoded multi-view stereo video file and decode it to obtain a decoded multi-view stereo video file.

In this embodiment, step 106 decodes the multi-view stereo video file using the multi-view video coding standard.

In this embodiment, both the video smoothing processing and the visual depth calculation are carried out on the UAV and completed before the multi-view stereo video file is compression-encoded, with the visual depth information loaded into the multi-view stereo video file before encoding.

Optionally, in other embodiments, one or both of the video smoothing processing and the visual depth calculation may instead be completed by the receiving end after it decodes the multi-view stereo video file.

For example, step 102 may optionally be executed after step 106; that is, after step 106 the method further includes: performing video smoothing processing on the multi-view stereo video file.

Optionally, step 103 may be executed after step 106; that is, after step 106 the method further includes: calculating the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the decoded multi-view stereo video file.

Step 107: display the decoded multi-view stereo video file and the visual depth information.

In this embodiment, the method 100 may display the decoded multi-view stereo video file and the visual depth information through a wearable display device, for example immersive glasses.

In this embodiment, the photographing device includes a gimbal and an image acquisition device mounted on the UAV through the gimbal. The image acquisition device is a binocular stereo camera, which serves as the input of the visual depth calculation; by computing depth information, the method 100 can feed the distance between the UAV and the obstacle ahead back to the wearable display device, for example the immersive glasses, so that the user sees an image like that of Fig. 3.

Further, the method 100 also includes:

acquiring the posture information of the wearable display device and sending it to the UAV; and

acquiring the posture information of the photographing device, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.

In this way, while watching the video file through the wearable display device, the user can also control the shooting angle of the photographing device with the body, for example by head movement.

Specifically, the wearable display device internally integrates an IMU (Inertial Measurement Unit), a GPS, and a compass, the IMU containing a three-axis gyroscope and a three-axis accelerometer. The three-axis gyroscope obtains its own attitude by integration, the three-axis accelerometer corrects the attitude integrated from the gyroscope, and the information of the compass and the GPS is fused in as well, finally yielding accurate posture information. Of course, the wearable display device may also obtain its posture information through the IMU alone, dispensing with the GPS and the compass. The wearable display device also contains a wireless transmission module for sending its own posture information to the gimbal on the UAV.
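The gyro-integration-plus-accelerometer-correction scheme described above is, in its simplest form, a complementary filter. A minimal pitch-only sketch under our own simplifying assumptions (a single axis, a fixed blend factor `alpha`, and no compass/GPS fusion; the patent does not specify the fusion algorithm):

```python
import math

def complementary_filter(pitch_deg, gyro_dps, accel_xyz, dt, alpha=0.98):
    # Integrate the gyro rate, then pull the estimate toward the
    # accelerometer's gravity-derived pitch to cancel gyro drift.
    ax, ay, az = accel_xyz
    accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return alpha * (pitch_deg + gyro_dps * dt) + (1.0 - alpha) * accel_pitch

# Toy usage: device held level for 10 s while the gyro reads a +0.5 deg/s bias.
pitch = 0.0
for _ in range(1000):                       # 100 Hz updates
    pitch = complementary_filter(pitch, 0.5, (0.0, 0.0, 9.81), 0.01)
print(f"estimated pitch: {pitch:.2f} deg")  # ~0.25 deg, not the 5 deg of raw drift
```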
The gimbal may likewise integrate an IMU, a GPS, and a compass internally and can also obtain its own posture. After the wearable display device sends its posture information to the gimbal, the gimbal takes the posture of the wearable display device as its own target posture and then moves smoothly to that target posture using its own control algorithm, thereby realizing control of the gimbal by the somatosensory controller. It can be understood that the gimbal may also obtain its posture information through the IMU alone, dispensing with the GPS and the compass.
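The patent does not disclose the gimbal's control algorithm. As a minimal sketch of "moves smoothly to the target posture", a yaw-only rate-limited slew; the function name and the 60 deg/s limit are illustrative assumptions of ours:

```python
def slew_toward(current_deg, target_deg, max_rate_dps, dt):
    # Move the gimbal angle toward the target posture reported by the
    # wearable display, limited to max_rate_dps so the motion stays smooth.
    error = (target_deg - current_deg + 180.0) % 360.0 - 180.0  # shortest way round
    step = max(-max_rate_dps * dt, min(max_rate_dps * dt, error))
    return (current_deg + step) % 360.0

# Toy usage: the head turns 90 deg right; the gimbal follows at <= 60 deg/s.
yaw, target = 0.0, 90.0
for _ in range(200):            # 2 s of updates at 100 Hz
    yaw = slew_toward(yaw, target, max_rate_dps=60.0, dt=0.01)
print(f"gimbal yaw after 2 s: {yaw:.1f} deg")   # reaches 90.0
```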
By compression-encoding the multi-view stereo video file captured in real time before transmitting it back to the receiving end, the UAV flight experience method 100 of the embodiment of the present invention greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.

Referring to Fig. 4, a flowchart of another UAV flight experience method 400 according to an embodiment of the present invention: in this embodiment, the method 400 can be applied to a UAV carrying a photographing device for shooting multi-view stereo video files. The method 400 is not limited to the steps and order of the flowchart of Fig. 4; depending on the embodiment, steps may be added, removed, or reordered. The method 400 may start from step 401.

Step 401: acquire the multi-view stereo video file shot by the photographing device mounted on the UAV.

Step 402: perform video smoothing processing on the multi-view stereo video file. Step 402 may specifically include the same three sub-steps as step 102 of the method 100: acquiring the posture information of the photographing device associated with the multi-view stereo video file and solving for the motion trajectory of the photographing device from it; filtering that trajectory and fitting a smoothly changing virtual trajectory; and mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video. The meaning of the synchronously detected posture information, its smooth and unsmooth components, the indication lines 201 and 202 of Fig. 2, and the two ways of mapping the video frames (clipping to keep the better-quality periods, or copying the frames of those periods so the original file is preserved) are all as described above for the method 100, and step 402 likewise fits a smoothly changing virtual camera view from the posture data so that the perceived change of viewing angle is relatively steady.

Step 403: calculate the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file. As before, the order of steps 402 and 403 is interchangeable.

Step 404: compression-encode the multi-view stereo video file and generate a continuous video stream. As in step 104, the multi-view video coding standard is used so that the multiple image channels are encoded jointly, which effectively lowers the bit rate and reduces information redundancy; other existing techniques may be used instead.

Step 405: transmit the encoded multi-view stereo video file to the receiving end. The method 400 transmits using high-definition transmission technology, so that high-definition stereo video is produced and sent back over an HD image transmission link.

In this embodiment, both the video smoothing processing and the visual depth calculation are carried out on the UAV before the compression encoding, with the visual depth information loaded into the multi-view stereo video file before encoding, so that the receiving end displays the visual depth information together with the multi-view stereo video file.

Optionally, in other embodiments, step 402 and/or step 403 may be omitted and executed at the receiving end instead; that is, one or both of the video smoothing processing and the visual depth calculation may be completed by the receiving end after it decodes the multi-view stereo video file.

In this embodiment, the photographing device includes a gimbal and an image acquisition device mounted on the UAV through the gimbal; the image acquisition device is a binocular stereo camera serving as the input of the visual depth calculation, and by computing depth information the method 400 can feed the distance between the UAV and the obstacle ahead back to the display device at the receiving end, for example immersive glasses.

Further, the method 400 also includes:

acquiring the posture information of the photographing device; and

receiving the posture information of the wearable display device from the receiving end, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.

The internal composition of the wearable display device (an IMU with three-axis gyroscope and three-axis accelerometer, optionally a GPS and a compass, and a wireless transmission module) and the way the gimbal takes the posture of the wearable display device as its target posture and moves smoothly to it are the same as described above for the method 100.

By compression-encoding the multi-view stereo video file captured in real time before transmitting it back to the receiving end, the UAV flight experience method 400 of the embodiment of the present invention greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.
Referring to Fig. 5, a flowchart of yet another UAV flight experience method 500 according to an embodiment of the present invention: in this embodiment, the method 500 can be applied to a UAV flight experience apparatus that communicates with a UAV and displays multi-view stereo video files. The method 500 is not limited to the steps and order of the flowchart of Fig. 5; depending on the embodiment, steps may be added, removed, or reordered. The method 500 may start from step 501.

Step 501: receive the compression-encoded multi-view stereo video file transmitted by the UAV. In this embodiment, the multi-view stereo video file is transmitted using high-definition transmission technology, so that high-definition stereo video can be produced.

Step 502: decode the received multi-view stereo video file to obtain a decoded multi-view stereo video file. In this embodiment, the method 500 decodes using the multi-view video coding standard, under which the multiple image channels were encoded jointly by exploiting their correlation, so that the bit rate of multi-view video increases only modestly over single-view video and information redundancy is reduced; other existing techniques may be used for decoding instead.

Step 503: perform video smoothing processing on the decoded multi-view stereo video file.

In this embodiment, the multi-view stereo video file is shot by a photographing device mounted on the UAV, and step 503 may specifically include the same three sub-steps as step 102 of the method 100: acquiring the synchronously detected posture information of the photographing device associated with the multi-view stereo video file and solving for the motion trajectory of the photographing device from it; filtering that trajectory and fitting a smoothly changing virtual trajectory; and mapping the video frames accordingly, either by clipping to keep the frames of the periods in which the motion trajectory overlaps or intersects the virtual trajectory, or by copying the frames of those periods so that the original file is preserved. The representation of the posture information as the indication lines 201 and 202 of Fig. 2 is likewise as described above.

Like the methods 100 and 400, the method 500 fits a smoothly changing virtual camera view from the posture data of the photographing device, so that the change of viewing angle the user perceives is relatively steady and the viewing discomfort caused by overly fast view changes or blurred pictures is reduced.

Step 504: calculate the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and load the visual depth information into the multi-view stereo video file. The order of steps 503 and 504 is interchangeable.

In this embodiment, both the video smoothing processing and the visual depth calculation are carried out at the receiving end after the received multi-view stereo video file is decoded, with the visual depth information loaded into the decoded multi-view stereo video file.

Optionally, in other embodiments, step 503 and/or step 504 may be omitted and executed on the UAV instead; that is, one or both of the video smoothing processing and the visual depth calculation may be completed by the UAV before it compression-encodes the multi-view stereo video file.

Step 505: display the decoded multi-view stereo video file and the visual depth information.

In this embodiment, the method 500 may display them through a wearable display device, for example immersive glasses. The photographing device includes a gimbal and an image acquisition device mounted on the UAV through the gimbal; the image acquisition device is a binocular stereo camera serving as the input of the visual depth calculation, and by computing depth information the method 500 can feed the distance between the UAV and the obstacle ahead back to the wearable display device.

Further, the method 500 also includes: acquiring the posture information of the wearable display device and sending it to the UAV, so that the shooting angle of the photographing device on the UAV is adjusted according to that posture information. In this way, while watching the video file through the wearable display device, the user can also control the shooting angle of the photographing device with the body, for example by head movement.

The internal composition of the wearable display device (an IMU with three-axis gyroscope and three-axis accelerometer, optionally a GPS and a compass, and a wireless transmission module) and the way the gimbal takes the posture of the wearable display device as its target posture and moves smoothly to it are the same as described above for the method 100.

By compression-encoding the multi-view stereo video file captured in real time before transmitting it back to the receiving end, the UAV flight experience method 500 of the embodiment of the present invention greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.
Referring to Fig. 6, a schematic structural diagram of a UAV flight experience system 50 according to an embodiment of the present invention: the UAV flight experience system 50 includes a UAV 51 and a UAV flight experience apparatus 52 at a receiving end, the UAV flight experience apparatus 52 being wearable glasses or a remote controller.

Referring also to Fig. 7, the UAV 51 includes, without limitation, a photographing device 511, a first image processor 512, and a first image transmission device 513. The photographing device 511 is used to shoot multi-view stereo video files.

The photographing device 511 may be a multi-view stereo vision camera or camera module. It is mounted in the forward-looking direction of the UAV 51, either directly on the UAV 51 or through a gimbal, so that it can shoot relatively stable multi-view video files. In this embodiment, the photographing device 511 includes a gimbal (not shown) and an image acquisition device (not shown) mounted on the UAV 51 through the gimbal; the image acquisition device is a binocular stereo vision camera.

The first image processor 512 is connected to the photographing device 511 and is used to acquire the multi-view stereo video file shot by the photographing device 511, compression-encode it, and generate a continuous video stream.

The first image transmission device 513 is connected to the first image processor 512 and is used to transmit the encoded multi-view stereo video file to the receiving end.

Referring also to Fig. 8, the UAV flight experience apparatus 52 includes, without limitation, a second image transmission device 521, a second image processor 522, and a display device 523. The second image transmission device 521 is connected to the second image processor 522 and is used to receive the compression-encoded multi-view stereo video file transmitted by the first image transmission device 513 and to pass the received video file to the second image processor 522.

In this embodiment, both the first image transmission device 513 and the second image transmission device 521 transmit the multi-view stereo video file using high-definition transmission technology, so that high-definition stereo video can be produced on the UAV 51 and sent back to the receiving end over an HD image transmission link. The two devices exchange data over a wireless network, including but not limited to HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.

In this embodiment, the second image processor 522 is used to decode the received multi-view stereo video file to obtain a decoded multi-view stereo video file.

In this embodiment, the first image processor 512 and the second image processor 522 are both video codec processors and respectively compression-encode or decode video files using the multi-view video coding standard, exploiting the correlation between the multiple image channels for joint multi-view coding, which effectively lowers the bit rate, so that the bit rate of multi-view video increases only modestly over single-view video and information redundancy is reduced. It can be understood that the first image processor 512 and the second image processor 522 may also use other existing techniques to compression-encode or decode the multi-view stereo video file so as to lower the bit rate.

In this embodiment, one of the first image processor 512 and the second image processor 522 is also used to perform video smoothing processing on the multi-view stereo video file. The UAV 51 further includes a first posture acquiring unit 514 for detecting the posture information of the photographing device 511; the smoothing uses the posture information detected synchronously by the first posture acquiring unit 514 while the photographing device 511 is shooting, and proceeds as described above for step 102 of the method 100: the motion trajectory of the photographing device 511 is solved from the posture information and filtered, a smoothly changing virtual trajectory (the indication line 202 of Fig. 2) is fitted by editing the high-frequency-jitter portion of the trajectory, and the video frames are mapped to it, either by clipping to keep the frames of the periods in which the motion trajectory overlaps or intersects the virtual trajectory, or by copying the frames of those periods so that the original file is preserved. In one embodiment the first image processor 512 performs this smoothing on the UAV 51 before the compression encoding; alternatively, the second image processor 522 performs it at the receiving end after the decoding. In either case, a smoothly changing virtual camera view is fitted from the posture data of the photographing device 511, so that the change of viewing angle the user perceives is relatively steady and the viewing discomfort caused by overly fast view changes or blurred pictures is reduced.

Likewise, one of the first image processor 512 and the second image processor 522 is also used to calculate the distance between the photographing device and an obstacle from the multi-view stereo video file to obtain visual depth information: either the first image processor 512 calculates it on the UAV 51 from the captured file and loads it into the multi-view stereo video file so that both are compression-encoded together, or the second image processor 522 calculates it at the receiving end from the decoded file and loads it into the decoded multi-view stereo video file. The display device 523 is used to display the decoded multi-view stereo video file together with the visual depth information.

In one embodiment, the second image transmission device 521 and the second image processor 522 may be disposed on and connected to the display device 523, in which case the second image processor 522 also transmits the decoded multi-view stereo video file to the display device 523 for display. Alternatively, they may be separate from the display device 523, with the second image transmission device 521 communicating with the display device 523 over a wireless network (including but not limited to Bluetooth, infrared, WIFI, Zwave, and ZigBee) and transmitting the decoded multi-view stereo video file to it for display over that network.

In this embodiment, the display device 523 is a wearable display device, for example immersive glasses. The image acquisition device is a binocular stereo camera serving as the input of the visual depth calculation, so that the UAV flight experience apparatus 52, by computing depth information, can feed the distance from the UAV 51 to the obstacle ahead back to the wearable display device, for example the immersive glasses.

In this embodiment, the UAV flight experience apparatus 52 further includes a second posture acquiring unit 524 disposed on the wearable display device 523 for detecting the posture information of the wearable display device 523, and a wireless transmission device 525 for sending that posture information to the UAV 51. The UAV 51 further includes a controller 515 for receiving the posture information of the wearable display device and controlling the rotation of the gimbal according to the posture information of the photographing device 511 and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device. In this way, while watching the video file through the wearable display device, the user can also control the shooting angle of the photographing device 511 with the body, for example by head movement. The internal composition of the wearable display device and of the gimbal (an IMU with three-axis gyroscope and three-axis accelerometer, optionally a GPS and a compass, and a wireless transmission module) and the way the gimbal takes the posture of the wearable display device as its target posture and moves smoothly to it are the same as described above for the method 100.

By compression-encoding the multi-view stereo video file captured in real time before transmitting it back to the receiving end, the UAV flight experience system 50 of the embodiment of the present invention greatly reduces the transmission bit rate; it also applies video smoothing to the video file, so that the change of viewing angle the user perceives in real time is relatively steady and a good FPV flight experience is obtained.

Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, a person of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently substituted without departing from the spirit and scope of those solutions.

Claims (63)

  1. A UAV flight experience method, characterized by comprising the following steps:
    acquiring a multi-view stereo video file shot by a photographing device mounted on a UAV;
    compression-encoding the multi-view stereo video file and generating a continuous video stream;
    transmitting the encoded multi-view stereo video file to a receiving end;
    receiving the encoded multi-view stereo video file at the receiving end, and decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and
    displaying the decoded multi-view stereo video file.
  2. The UAV flight experience method of claim 1, characterized in that, before the multi-view stereo video file is displayed, the method further comprises the step of: performing video smoothing processing on the multi-view stereo video file.
  3. The UAV flight experience method of claim 2, characterized in that the step of performing video smoothing processing on the multi-view stereo video file specifically comprises:
    acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from the posture information of the photographing device;
    filtering the motion trajectory of the photographing device, and fitting a smoothly changing virtual trajectory; and
    mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  4. The UAV flight experience method of claim 2 or 3, characterized in that the video smoothing processing is performed on the multi-view stereo video file before the step of compression-encoding the multi-view stereo video file; or
    the video smoothing processing is performed on the multi-view stereo video file after the step of decoding the received multi-view stereo video file.
  5. The UAV flight experience method of claim 1, characterized in that, before the multi-view stereo video file is displayed, the method further comprises the step of: calculating the distance between the photographing device and an obstacle based on the multi-view stereo video file, to obtain visual depth information.
  6. The UAV flight experience method of claim 5, characterized in that, before the step of compression-encoding the multi-view stereo video file, the visual depth information is calculated from the captured multi-view stereo video file and loaded into the multi-view stereo video file before encoding; or
    after the step of decoding the received multi-view stereo video file, the visual depth information is calculated from the decoded multi-view stereo video file and loaded into the decoded multi-view stereo video file.
  7. The UAV flight experience method of claim 5, characterized in that the UAV flight experience method further comprises: displaying the visual depth information.
  8. The UAV flight experience method of claim 1, characterized in that the multi-view stereo video file is transmitted using high-definition transmission technology.
  9. The UAV flight experience method of claim 1, characterized in that the multi-view stereo video file is compression-encoded and decoded using the multi-view video coding standard.
  10. The UAV flight experience method of claim 1, characterized in that the photographing device comprises a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; the UAV flight experience method displays the decoded multi-view stereo video file through a wearable display device; and the UAV flight experience method further comprises:
    acquiring the posture information of the wearable display device, and sending the posture information of the wearable display device to the UAV; and
    acquiring the posture information of the photographing device, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  11. A UAV flight experience system, comprising a UAV and a UAV flight experience apparatus provided at a receiving end, characterized in that the UAV comprises:
    a photographing device for shooting a multi-view stereo video file;
    a first image processor, connected to the photographing device, for acquiring the multi-view stereo video file shot by the photographing device, compression-encoding the multi-view stereo video file, and generating a continuous video stream; and
    a first image transmission device, connected to the first image processor, for transmitting the encoded multi-view stereo video file to the receiving end; and
    the UAV flight experience apparatus comprises:
    a second image transmission device for receiving the compression-encoded multi-view stereo video file transmitted by the first image transmission device;
    a second image processor, connected to the second image transmission device, for decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and
    a display device for displaying the decoded multi-view stereo video file.
  12. The UAV flight experience system of claim 11, characterized in that one of the first image processor and the second image processor is further used to perform video smoothing processing on the multi-view stereo video file.
  13. The UAV flight experience system of claim 12, characterized in that, when performing video smoothing processing on the multi-view stereo video file, the one of the first image processor and the second image processor is specifically used to:
    acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from the posture information of the photographing device;
    filter the motion trajectory of the photographing device, and fit a smoothly changing virtual trajectory; and
    map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  14. The UAV flight experience system of claim 12 or 13, characterized in that the first image processor is further used to perform video smoothing processing on the multi-view stereo video file before compression-encoding the multi-view stereo video file; or
    the second image processor is further used to perform video smoothing processing on the multi-view stereo video file after decoding the received multi-view stereo video file.
  15. The UAV flight experience system of claim 11, characterized in that one of the first image processor and the second image processor is further used to calculate the distance between the photographing device and an obstacle based on the multi-view stereo video file, to obtain visual depth information.
  16. The UAV flight experience system of claim 15, characterized in that the first image processor, before compression-encoding the multi-view stereo video file, is further used to calculate the visual depth information from the captured multi-view stereo video file, and to load the visual depth information into the multi-view stereo video file so that both are compression-encoded together; or
    the second image processor, after decoding the received multi-view stereo video file, is further used to calculate the visual depth information from the decoded multi-view stereo video file, and to load the visual depth information into the decoded multi-view stereo video file.
  17. The UAV flight experience system of claim 15, characterized in that the display device is further used to display the visual depth information.
  18. The UAV flight experience system of claim 11, characterized in that both the first image transmission device and the second image transmission device transmit the multi-view stereo video file using high-definition transmission technology.
  19. The UAV flight experience system of claim 11, characterized in that the first image transmission device and the second image transmission device exchange data over a wireless network, the wireless network comprising at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.
  20. The UAV flight experience system of claim 11, characterized in that the display device is connected to the second image processor, and the second image processor is further used to transmit the decoded multi-view stereo video file to the display device for display; or
    the second image transmission device communicates with the display device over a wireless network, and the second image transmission device is further used to transmit the decoded multi-view stereo video file to the display device for display over the wireless network, the wireless network comprising at least one of: Bluetooth, infrared, WIFI, Zwave, and ZigBee.
  21. The UAV flight experience system of claim 11, characterized in that both the first image processor and the second image processor compression-encode or decode video files using the multi-view video coding standard.
  22. The UAV flight experience system of claim 11, characterized in that the photographing device is a multi-view stereo vision camera or camera module.
  23. The UAV flight experience system of claim 11, characterized in that the photographing device comprises a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal.
  24. The UAV flight experience system of claim 11, characterized in that the display device is a wearable display device.
  25. The UAV flight experience system of claim 24, characterized in that the display device is immersive glasses.
  26. The UAV flight experience system of claim 24 or 25, characterized in that the UAV flight experience apparatus further comprises:
    a first posture acquiring unit, disposed on the wearable display device, for detecting the posture information of the wearable display device; and
    a wireless transmission device for sending the posture information of the wearable display device to the UAV;
    the photographing device comprises a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; and
    the UAV further comprises:
    a second posture acquiring unit for detecting the posture information of the photographing device; and
    a controller for receiving the posture information of the wearable display device, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  27. A UAV flight experience method, characterized by comprising the following steps:
    acquiring a multi-view stereo video file shot by a photographing device mounted on a UAV;
    compression-encoding the multi-view stereo video file and generating a continuous video stream; and
    transmitting the encoded multi-view stereo video file to a receiving end.
  28. The UAV flight experience method of claim 27, characterized in that, before the step of compression-encoding the multi-view stereo video file, the method further comprises: performing video smoothing processing on the multi-view stereo video file.
  29. The UAV flight experience method of claim 28, characterized in that the step of performing video smoothing processing on the multi-view stereo video file specifically comprises:
    acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from the posture information of the photographing device;
    filtering the motion trajectory of the photographing device, and fitting a smoothly changing virtual trajectory; and
    mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  30. The UAV flight experience method of claim 27, characterized in that, before the step of compression-encoding the multi-view stereo video file, the method further comprises: calculating the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the multi-view stereo video file so that both are compression-encoded together.
  31. The UAV flight experience method of claim 27, characterized in that the multi-view stereo video file is transmitted using high-definition transmission technology.
  32. The UAV flight experience method of claim 27, characterized in that the multi-view stereo video file is compression-encoded using the multi-view video coding standard.
  33. The UAV flight experience method of claim 27, characterized in that the photographing device comprises a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal; and the UAV flight experience method further comprises:
    acquiring the posture information of the photographing device; and
    receiving the posture information of a wearable display device from the receiving end, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  34. A UAV, characterized by comprising:
    a photographing device for shooting a multi-view stereo video file;
    an image processor, connected to the photographing device, for acquiring the multi-view stereo video file shot by the photographing device, compression-encoding the multi-view stereo video file, and generating a continuous video stream; and
    an image transmission device, connected to the image processor, for transmitting the encoded multi-view stereo video file to a receiving end.
  35. The UAV of claim 34, characterized in that the image processor is further used to perform video smoothing processing on the multi-view stereo video file.
  36. The UAV of claim 35, characterized in that, when performing video smoothing processing on the multi-view stereo video file, the image processor is specifically used to:
    acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from the posture information of the photographing device;
    filter the motion trajectory of the photographing device, and fit a smoothly changing virtual trajectory; and
    map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  37. The UAV of claim 34, characterized in that the image processor is further used to calculate the distance between the photographing device and an obstacle from the captured multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the multi-view stereo video file so that both are compression-encoded together.
  38. The UAV of claim 34, characterized in that the image transmission device transmits the multi-view stereo video file using high-definition transmission technology.
  39. The UAV of claim 34, characterized in that the image transmission device and another image transmission device at the receiving end exchange data over a wireless network, the wireless network comprising at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.
  40. The UAV of claim 34, characterized in that the image processor compression-encodes the multi-view stereo video file using the multi-view video coding standard.
  41. The UAV of claim 34, characterized in that the photographing device is a multi-view stereo vision camera or camera module.
  42. The UAV of claim 34, characterized in that the photographing device comprises a gimbal and an image acquisition device, the image acquisition device being mounted on the UAV through the gimbal.
  43. The UAV of claim 42, characterized in that the UAV further comprises:
    a posture acquiring unit for detecting the posture information of the photographing device; and
    a controller for receiving the posture information of a wearable display device from the receiving end, and controlling the rotation of the gimbal according to the posture information of the photographing device and the posture information of the wearable display device, so as to adjust the shooting angle of the image acquisition device.
  44. A UAV flight experience method, characterized by comprising the following steps:
    receiving a compression-encoded multi-view stereo video file transmitted by a UAV;
    decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and
    displaying the decoded multi-view stereo video file.
  45. The UAV flight experience method of claim 44, characterized in that, before the step of displaying the decoded multi-view stereo video file, the method further comprises: performing video smoothing processing on the decoded multi-view stereo video file.
  46. The UAV flight experience method of claim 45, characterized in that the multi-view stereo video file is shot by a photographing device mounted on the UAV; and the step of performing video smoothing processing on the decoded multi-view stereo video file specifically comprises:
    acquiring the posture information of the photographing device associated with the multi-view stereo video file, and solving for the motion trajectory of the photographing device from the posture information of the photographing device;
    filtering the motion trajectory of the photographing device, and fitting a smoothly changing virtual trajectory; and
    mapping the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  47. The UAV flight experience method of claim 44, characterized in that the multi-view stereo video file is shot by a photographing device mounted on the UAV; and, before the step of displaying the decoded multi-view stereo video file, the method further comprises: calculating the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and loading the visual depth information into the decoded multi-view stereo video file.
  48. The UAV flight experience method of claim 47, characterized in that the UAV flight experience method further comprises the step of: displaying the visual depth information.
  49. The UAV flight experience method of claim 44, characterized in that the multi-view stereo video file is transmitted using high-definition transmission technology.
  50. The UAV flight experience method of claim 44, characterized in that the multi-view stereo video file is decoded using the multi-view video coding standard.
  51. The UAV flight experience method of claim 44, characterized in that the UAV flight experience method displays the decoded multi-view stereo video file through a wearable display device; and the UAV flight experience method further comprises:
    acquiring the posture information of the wearable display device, and sending the posture information of the wearable display device to the UAV, so that the shooting angle of the photographing device on the UAV is adjusted according to the posture information.
  52. A UAV flight experience apparatus, characterized by comprising:
    an image transmission device for receiving a compression-encoded multi-view stereo video file transmitted by a UAV;
    an image processor, connected to the image transmission device, for decoding the received multi-view stereo video file to obtain a decoded multi-view stereo video file; and
    a display device for displaying the decoded multi-view stereo video file.
  53. The UAV flight experience apparatus of claim 52, characterized in that the UAV flight experience apparatus is wearable glasses or a remote controller.
  54. The UAV flight experience apparatus of claim 52, characterized in that the image processor is further used to perform video smoothing processing on the decoded multi-view stereo video file.
  55. The UAV flight experience apparatus of claim 52, characterized in that the multi-view stereo video file is shot by a photographing device mounted on the UAV; and, when performing video smoothing processing on the decoded multi-view stereo video file, the image processor is specifically used to:
    acquire the posture information of the photographing device associated with the multi-view stereo video file, and solve for the motion trajectory of the photographing device from the posture information of the photographing device;
    filter the motion trajectory of the photographing device, and fit a smoothly changing virtual trajectory; and
    map the video frames of the multi-view stereo video file according to the virtual trajectory, so as to smooth the video.
  56. The UAV flight experience apparatus of claim 52, characterized in that the multi-view stereo video file is shot by a photographing device mounted on the UAV; and the image processor is further used to calculate the distance between the photographing device and an obstacle from the decoded multi-view stereo video file to obtain visual depth information, and to load the visual depth information into the decoded multi-view stereo video file.
  57. The UAV flight experience apparatus of claim 56, characterized in that the display device is further used to display the visual depth information.
  58. The UAV flight experience apparatus of claim 52, characterized in that the image transmission device transmits the multi-view stereo video file using high-definition transmission technology.
  59. The UAV flight experience apparatus of claim 52, characterized in that the image transmission device and another image transmission device on the UAV exchange data over a wireless network, the wireless network comprising at least one of: HD image transmission, Bluetooth, WIFI, 2G, 3G, 4G, and 5G networks.
  60. The UAV flight experience apparatus of claim 52, characterized in that the display device is connected to the image processor, and the image processor is further used to transmit the decoded multi-view stereo video file to the display device for display; or
    the image transmission device communicates with the display device over a wireless network, and the image transmission device is further used to transmit the decoded multi-view stereo video file to the display device for display over the wireless network, the wireless network comprising at least one of: Bluetooth, infrared, WIFI, Zwave, and ZigBee.
  61. The UAV flight experience apparatus of claim 52, characterized in that the image processor decodes the multi-view stereo video file using the multi-view video coding standard.
  62. The UAV flight experience apparatus of claim 52, characterized in that the display device is a wearable display device.
  63. The UAV flight experience apparatus of claim 52, characterized in that the UAV flight experience apparatus further comprises:
    a posture acquiring unit, disposed on the wearable display device, for detecting the posture information of the wearable display device; and
    a wireless transmission device for sending the posture information of the wearable display device to the UAV, so that the shooting angle of the photographing device on the UAV is adjusted according to the posture information.
PCT/CN2015/099852 2015-12-30 2015-12-30 UAV flight experience method, apparatus and system, and UAV WO2017113183A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201580065834.3A 2015-12-30 2015-12-30 UAV flight experience method, apparatus and system, and UAV
PCT/CN2015/099852 WO2017113183A1 (zh) 2015-12-30 2015-12-30 UAV flight experience method, apparatus and system, and UAV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/099852 WO2017113183A1 (zh) 2015-12-30 2015-12-30 UAV flight experience method, apparatus and system, and UAV

Publications (1)

Publication Number Publication Date
WO2017113183A1 (zh)

Family

Family ID: 59224128

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099852 WO2017113183A1 (zh) 2015-12-30 2015-12-30 UAV flight experience method, apparatus and system, and UAV

Country Status (2)

Country Link
CN (1) CN107005687B (zh)
WO (1) WO2017113183A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107360413A (zh) * 2017-08-25 2017-11-17 秦山 Multi-view stereo image signal transmission method and system
EP3664442A4 (en) * 2017-09-12 2020-06-24 SZ DJI Technology Co., Ltd. IMAGE TRANSMISSION METHOD AND DEVICE, MOBILE PLATFORM, MONITORING DEVICE, AND SYSTEM
CN110326283B (zh) * 2018-03-23 2021-05-28 深圳市大疆创新科技有限公司 Imaging system
CN108769531B (zh) * 2018-06-21 2020-10-23 深圳市道通智能航空技术有限公司 Method for controlling the shooting angle of a photographing device, control apparatus, and remote controller
CN111912298B (zh) * 2020-06-30 2021-04-06 日照幕天飞行器开发有限公司 Intelligent anti-swarm UAV method based on 5G network


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202075794U (zh) * 2011-05-24 2011-12-14 段连飞 UAV aerial-photography stereo imaging processing device
CN104035446B (zh) * 2014-05-30 2017-08-25 深圳市大疆创新科技有限公司 Heading generation method and system for a UAV
CN105141807B (zh) * 2015-09-23 2018-11-30 北京二郎神科技有限公司 Video signal image processing method and apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110085034A1 (en) * 2009-10-14 2011-04-14 Harris Corporation Surveillance system for transcoding surveillance image files while retaining geospatial metadata and associated methods
CN104219492A (zh) * 2013-11-14 2014-12-17 成都时代星光科技有限公司 UAV image transmission system
CN103905790A (zh) * 2014-03-14 2014-07-02 深圳市大疆创新科技有限公司 Video processing method, apparatus, and system
CN104811615A (zh) * 2015-04-17 2015-07-29 刘耀 Somatosensory-controlled camera system and method
CN104902263A (zh) * 2015-05-26 2015-09-09 深圳市圆周率软件科技有限责任公司 Image information presentation system and method
CN105141895A (zh) * 2015-08-06 2015-12-09 广州飞米电子科技有限公司 Video processing method, apparatus, and system, and quadcopter

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109931909A (zh) * 2019-03-29 2019-06-25 大连理工大学 UAV-based method and apparatus for inspecting the state of offshore wind turbine towers
CN109931909B (zh) * 2019-03-29 2023-07-18 大连理工大学 UAV-based method and apparatus for inspecting the state of offshore wind turbine towers
CN114185320A (zh) * 2020-09-15 2022-03-15 中国科学院软件研究所 Evaluation method, apparatus, and system for unmanned-system clusters, and storage medium
CN114185320B (zh) * 2020-09-15 2023-10-24 中国科学院软件研究所 Evaluation method, apparatus, and system for unmanned-system clusters, and storage medium
CN113691867A (zh) * 2021-10-27 2021-11-23 北京创米智汇物联科技有限公司 Motion analysis method and apparatus, electronic device, and storage medium
CN113691867B (zh) * 2021-10-27 2022-01-18 北京创米智汇物联科技有限公司 Motion analysis method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN107005687B (zh) 2019-07-26
CN107005687A (zh) 2017-08-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15911812

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15911812

Country of ref document: EP

Kind code of ref document: A1