WO2019227333A1 - Group photo shooting method and device - Google Patents

Group photo shooting method and device

Info

Publication number
WO2019227333A1
WO2019227333A1 · PCT/CN2018/088997 · CN2018088997W
Authority
WO
WIPO (PCT)
Prior art keywords
cluster
drone
target
shooting
camera
Prior art date
Application number
PCT/CN2018/088997
Other languages
English (en)
French (fr)
Inventor
钱杰
刘政哲
邬奇峰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201880012007.1A priority Critical patent/CN110337806A/zh
Priority to PCT/CN2018/088997 priority patent/WO2019227333A1/zh
Publication of WO2019227333A1 publication Critical patent/WO2019227333A1/zh
Priority to US17/106,995 priority patent/US20210112194A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106 Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00 Type of UAV
    • B64U10/10 Rotorcrafts
    • B64U10/13 Flying platforms
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 UAVs specially adapted for particular uses or applications
    • B64U2101/30 UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2201/00 UAVs characterised by their flight controls
    • B64U2201/20 Remote controls

Definitions

  • the present invention relates to the field of photography, and in particular, to a group photo shooting method and device.
  • the invention provides a method and a device for shooting group photos.
  • a group photo shooting method includes: entering a group photo shooting mode based on a trigger instruction; identifying multiple targets in the current shooting frame in the group photo shooting mode; and, when it is determined that the multiple targets meet a shooting trigger condition, triggering a camera mounted on the drone to shoot.
  • a group photographing device including: a storage device and a processor;
  • the storage device is configured to store program instructions
  • the processor calls the program instructions and, when the program instructions are executed, is configured to:
  • multiple targets in the current shooting frame are identified; when it is determined that the multiple targets meet a shooting trigger condition, a camera mounted on the drone is triggered to shoot.
  • a computer-readable storage medium stores program instructions.
  • when the program instructions are executed by a processor, they are used to perform the following steps:
  • multiple targets in the current shooting frame are identified; when it is determined that the multiple targets meet a shooting trigger condition, a camera mounted on the drone is triggered to shoot.
  • when multiple targets in the shooting frame meet the shooting trigger condition, the drone automatically triggers the camera to shoot, thereby obtaining a group photo of the multiple targets and realizing automatic group photo shooting.
  • the shooting process is convenient, the shooting efficiency is improved, and the labor cost is saved.
  • FIG. 1 is an application scenario diagram of a group photo shooting method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a group photo shooting method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a group photo shooting method according to another embodiment of the present invention.
  • FIG. 4 is another application scenario diagram of a group photo shooting method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a group photo shooting method according to another embodiment of the present invention.
  • FIG. 6 is a structural block diagram of a group photographing device according to an embodiment of the present invention.
  • the drone 100 may include a carrier 102 and a load 104.
  • the load 104 may be located directly on the drone 100 without the need for the carrier 102.
  • the carrier 102 is a gimbal, for example, a two-axis gimbal or a three-axis gimbal.
  • the load 104 may be an image capturing or imaging device (such as a camera, a camcorder, an infrared camera, or an ultraviolet camera), an audio capturing device (for example, a parabolic reflector microphone), and so on.
  • the load 104 can provide static sensing data (such as pictures) or dynamic sensing data (such as videos).
  • the load 104 is mounted on the carrier 102, so that the load 104 is controlled to rotate by the carrier 102.
  • the carrier 102 is a gimbal and the load is a camera.
  • the drone 100 may include a power mechanism 106, a sensing system 108, and a communication system 110.
  • the power mechanism 106 may include one or more rotating bodies, propellers, blades, motors, electronic governors, and the like.
  • the rotating body of the power mechanism may be a self-tightening rotating body, a rotating body assembly, or another rotating body power unit.
  • the drone 100 may have one or more power mechanisms. All power mechanisms can be of the same type. Optionally, one or more power mechanisms may be of different types.
  • the power mechanism 106 may be mounted on the drone by suitable means, such as by a supporting element (such as a drive shaft).
  • the power mechanism 106 can be installed in any suitable position of the drone 100, such as the top, bottom, front, rear, side, or any combination thereof. The flight of the drone 100 is controlled by controlling one or more of the power mechanisms 106.
  • the sensing system 108 may include one or more sensors to sense the spatial orientation, velocity, and / or acceleration of the drone 100 (such as rotation and translation relative to up to three degrees of freedom).
  • the one or more sensors may include a GPS sensor, a motion sensor, an inertial sensor, a proximity sensor, or an image sensor.
  • the sensing data provided by the sensing system 108 can be used to track the spatial orientation, velocity, and / or acceleration of the target (as described below, using a suitable processing unit and / or control unit).
  • the sensing system 108 may be used to collect environmental data of the drone, such as climatic conditions, potential obstacles to be approached, locations of geographical features, locations of man-made structures, and the like.
  • the communication system 110 can communicate with a terminal 112 having a communication system 114 through a wireless signal 116.
  • the communication system 110, 114 may include any number of transmitters, receivers, and / or transceivers for wireless communication.
  • the communication may be one-way, so that data is sent in only one direction.
  • one-way communication may include that only the drone 100 transmits data to the terminal 112, or vice versa.
  • the communication may be two-way communication, so that data can be transmitted in two directions between the drone 100 and the terminal 112.
  • Two-way communication includes that one or more transmitters of the communication system 110 can send data to one or more receivers of the communication system 114, and vice versa.
  • the terminal 112 may provide control data to one or more of the drone 100, the carrier 102, and the load 104, and receive information from one or more of them (such as the position and/or motion information of the drone, the carrier, or the load, and data sensed by the load, such as image data captured by the camera).
  • the drone 100 may communicate with other remote devices other than the terminal 112, and the terminal 112 may also communicate with other remote devices other than the drone 100.
  • the drone and / or terminal 112 may communicate with another drone or another carrier or load of the drone.
  • the additional remote device may be a second terminal or other computing device (such as a computer, desktop computer, tablet computer, smartphone, or other mobile device).
  • the remote device may transmit data to the drone 100, receive data from the drone 100, transmit data to the terminal 112, and / or receive data from the terminal 112.
  • the remote device may be connected to the Internet or other telecommunication networks, so that the data received from the drone 100 and / or the terminal 112 is uploaded to a website or a server.
  • the movement of the drone 100, the movement of the carrier 102 and the movement of the load 104 relative to a fixed reference object (such as the external environment), and / or the movement between each other, may be controlled by the terminal 112.
  • the terminal 112 may be a remote control terminal located far from the drone, the carrier and / or the load.
  • the terminal 112 may be located on or affixed to the supporting platform.
  • the terminal 112 may be handheld or wearable.
  • the terminal 112 may include a smart phone, a tablet computer, a desktop computer, a computer, glasses, gloves, a helmet, a microphone, or any combination thereof.
  • the terminal 112 may include a user interface, such as a keyboard, a mouse, a joystick, a touch screen, or a display. Any suitable user input may interact with the terminal 112, such as manually input instructions, sound control, gesture control, or position control (such as movement, position, or tilt of the terminal 112).
  • FIG. 2 is a flowchart of a group photo shooting method according to an embodiment of the present invention.
  • the method may include the following steps:
  • Step S201: Enter a group photo shooting mode based on a trigger instruction.
  • step S201 is performed before the drone 100 flies.
  • the user may send the trigger instruction to the drone 100 by operating the terminal, or generate the trigger instruction by operating a button provided on the drone 100.
  • the trigger instruction triggers the drone 100 to enter a group photo shooting mode.
  • step S201 is performed during the flight of the drone 100, and the trigger instruction may be determined by the target recognized by the drone 100 and the attitude (such as a gesture) of the target.
  • switching the drone 100 to the group photo shooting mode during flight includes two cases:
  • In the first case, the trigger instruction is determined by a gesture of the target, and the drone 100 receives the trigger instruction when it recognizes that the gesture of the target is a specific gesture, such as a "Yeah" (V-sign) gesture or a "Like" (thumbs-up) gesture.
  • the target may include a gesture controller of the drone 100, or the first target captured by the camera after the drone 100 is powered on.
  • In the second case, the trigger instruction is determined by the target and the gesture of the target together.
  • the trigger instruction may be generated when the drone 100 recognizes a cluster based on the target and the number of targets making the specific gesture in the cluster is greater than or equal to a preset number.
  • Step S202: Identify multiple targets in the current shooting frame in the group photo shooting mode.
  • step S202 specifically includes: identifying a cluster in the current shooting frame based on image recognition and a clustering algorithm.
  • here, a cluster refers to a group of targets whose mutual distances are relatively close (the distance threshold can be determined empirically) and whose speeds (that is, movement speeds) and directions (which can include the targets' face orientations, movement directions, etc.) are approximately the same.
  • In some embodiments, the cluster is the cluster where a specific target is located.
  • the specific target may be the first target captured by the camera after the drone 100 is powered on.
  • With this first identified target as the main target, the first captured target is tracked based on image recognition, and other targets that are close to the first target and have approximately the same speed and direction are automatically included based on the clustering algorithm to form a cluster.
  • the specific target may also be a gesture controller of the drone 100.
  • In this case, the gesture controller is the main target: the gesture controller is tracked based on image recognition, and other targets that are close to the gesture controller and have approximately the same speed and direction are automatically included based on the clustering algorithm to form a cluster.
  • In other embodiments, the terminal receives the shooting frame transmitted by the drone 100, and the user can directly select a certain target in the shooting frame as the specific target by operating the terminal.
  • With the specific target as the main target, the specific target is tracked based on image recognition, and other targets that are close to the specific target and have approximately the same speed and direction are automatically included based on the clustering algorithm to form a cluster.
  • the user may also directly select multiple targets in the shooting frame as the cluster by operating the terminal.
  • any existing image recognition algorithm can be used to identify the target, for example, a face recognition algorithm.
  • the target may also be identified by a two-dimensional code, GPS, infrared light, or the like.
  • the cluster in this embodiment is dynamic: its membership can change.
  • Based on the coordinates of the cluster (that is, the coordinates of the cluster in the shooting frame, which may be the average of the coordinates of the targets in the cluster or the coordinates of the main target) and the speed of the cluster, targets that are close to the cluster and whose speed and direction are approximately the same as the cluster's are included.
  • Likewise, targets in the current cluster that are far away from the remaining targets, or whose speed and direction differ greatly from those of the remaining targets, can be automatically eliminated.
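The cluster-maintenance logic just described (include nearby targets with matching velocity, drop outliers) can be sketched as a simple distance-and-velocity grouping. The `Target` fields and both thresholds below are illustrative assumptions; the patent only says they are determined empirically.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    x: float; y: float    # position in the shooting frame
    vx: float; vy: float  # velocity components

# Illustrative thresholds, not values from the patent.
DIST_THRESH = 2.0
SPEED_THRESH = 0.5

def in_cluster(candidate: Target, cluster: list) -> bool:
    """A candidate belongs to the cluster when it is close to the cluster's
    mean position and its velocity roughly matches the mean velocity."""
    cx = sum(t.x for t in cluster) / len(cluster)
    cy = sum(t.y for t in cluster) / len(cluster)
    cvx = sum(t.vx for t in cluster) / len(cluster)
    cvy = sum(t.vy for t in cluster) / len(cluster)
    near = math.hypot(candidate.x - cx, candidate.y - cy) <= DIST_THRESH
    similar = math.hypot(candidate.vx - cvx, candidate.vy - cvy) <= SPEED_THRESH
    return near and similar

def update_cluster(cluster: list, candidates: list) -> list:
    """Drop members that have drifted away, then add matching candidates."""
    kept = [t for t in cluster if in_cluster(t, cluster)]
    added = [t for t in candidates if in_cluster(t, kept or cluster)]
    return kept + added
```

The same test decides both inclusion and elimination, which mirrors the symmetric description above.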
  • Step S203: When it is determined that multiple targets meet a shooting trigger condition, a camera mounted on the drone 100 is triggered to shoot.
  • This embodiment uses image recognition to trigger the group photo shooting. Compared with existing methods of triggering group photo shooting using voice, a mechanical switch, a hand-held light, and the like, the composition of the image captured in this embodiment is richer and more professional.
  • determining that multiple targets meet the shooting trigger condition specifically includes: determining that the number of targets in a specific posture in the cluster is greater than or equal to a preset number.
  • the preset number may be a fixed value, such as three or five, or may be set to a certain ratio of the number of targets in the cluster, such as 1/2.
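The trigger check above reduces to a count. A minimal sketch, assuming some pose classifier has already labeled each target as being in the specific posture or not (the function name and default values are illustrative):

```python
def meets_trigger(poses, preset=3, ratio=None):
    """poses: list of booleans, True if that target is in the specific posture.
    The threshold is either a fixed count (e.g. 3) or a ratio of the cluster
    size (e.g. 0.5), mirroring the two options described above."""
    count = sum(poses)
    if ratio is not None:
        return count >= ratio * len(poses)
    return count >= preset
```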
  • the specific gesture may be of multiple types. For example, in some embodiments, determining that the target is in a specific posture includes: determining that the gesture of the target is of a specific shape, such as a "Yeah" or "Like" gesture shape. A gesture of a specific shape triggers the drone 100 to shoot automatically, making shooting more convenient and interesting and saving labor costs.
  • determining that the target is in a specific posture includes: determining that the target is in a jumping state.
  • This embodiment triggers the automatic shooting of the drone 100 based on the jumping of the target, which improves the fun and convenience of shooting, and reduces the labor cost.
  • determining that the target is in a jumping state includes: determining that a change in the distance between the target and the UAV 100 in a vertical direction satisfies a specific condition.
  • the distance between the target and the drone 100 in the vertical direction refers to the vertical distance between the top of the target and the drone 100.
  • the camera may have three shooting modes: a downward (high-angle) shot, a level shot, and an upward (low-angle) shot.
  • When the camera shoots downward, if the distance between the target and the drone 100 in the vertical direction decreases momentarily or continuously and the target has a velocity of change in the vertical direction, it is determined that the target is in a jumping state.
  • When the camera shoots level or upward, if the distance between the target and the drone 100 in the vertical direction increases momentarily or continuously and the target has a velocity of change in the vertical direction, it is determined that the target is in a jumping state.
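The jump test can be sketched from a short series of vertical-distance measurements between the target's top and the drone. The sampling interval and minimum speed below are illustrative assumptions, not values from the patent:

```python
def is_jumping(vertical_dists, dt=0.1, min_speed=0.3, camera_down=True):
    """vertical_dists: successive vertical distances (m) between the target's
    top and the drone. For a downward-shooting camera the distance shrinks
    during a jump; for a level or upward shot it grows."""
    deltas = [b - a for a, b in zip(vertical_dists, vertical_dists[1:])]
    speed_ok = any(abs(d) / dt >= min_speed for d in deltas)
    if camera_down:
        return all(d < 0 for d in deltas) and speed_ok
    return all(d > 0 for d in deltas) and speed_ok
```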
  • In other embodiments, determining that the target is in a specific posture includes: determining that the target is in an extended state (in this embodiment this mainly means the human limbs are in an extended state). Triggering the automatic shooting of the drone 100 based on the stretching of the target improves the fun and convenience of shooting and reduces labor costs.
  • the method of triggering the automatic shooting of the drone 100 based on the stretching of the target is suitable for the camera shooting downward.
  • Before this determination, the method may further include: controlling the drone 100 to be located directly above the cluster and controlling the camera to shoot downward, so that the camera performs an overhead shot.
  • determining that at least some of the targets in the cluster are in an extended state specifically includes: obtaining the joint point positions of each target in the shooting frame according to a human joint point model, and determining, based on the joint point positions of a target, that the target is stretched.
  • the human joint point model is obtained based on deep learning. Specifically, a large number of target images are collected and, based on deep learning, the collected images are classified to train a human joint point model.
  • This embodiment uses deep learning technology to train a human joint point model, and determines whether the target is in an extended state according to the human joint point model, and the recognition result has high accuracy.
  • determining that the target is in an extended state based on the positions of the target's joint points specifically includes: determining that the target is stretched based on the positional relationship between at least one of the target's elbow joints, wrist joints, knee joints, and ankles and the target's torso.
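One plausible geometric reading of that joint-to-torso relationship: a limb counts as extended when its end joint (e.g. the wrist) lies well outside the corresponding base joint (the shoulder) relative to the torso center. This is a hedged sketch under that assumption; the `margin` factor is invented for illustration, and the joint coordinates would come from the human joint point model:

```python
import math

def limb_extended(torso_center, shoulder, wrist, margin=1.3):
    """Return True when the wrist-to-torso distance exceeds the
    shoulder-to-torso distance by the given margin (illustrative)."""
    d_wrist = math.dist(wrist, torso_center)
    d_shoulder = math.dist(shoulder, torso_center)
    return d_wrist >= margin * d_shoulder
```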
  • In still other embodiments, determining that at least some of the targets in the cluster are in a specific posture includes: determining that at least some of the targets in the cluster are in an unconventional posture. Triggering the automatic shooting of the drone 100 based on a special posture of the target can improve the fun and convenience of shooting and reduce labor costs.
  • determining that at least some of the targets in the cluster are in an unconventional posture specifically includes: determining, according to a conventional pose model, that at least some of the targets in the cluster are in an unconventional posture.
  • the conventional pose model is trained based on deep learning. Specifically, a large number of target images in conventional poses are collected and, based on deep learning, the collected images are classified to train a conventional pose model.
  • This embodiment uses deep learning to train a conventional pose model and determines whether the target is in an unconventional pose according to that model; the recognition result has high accuracy. Of course, other methods can be used to identify whether the target is in an unconventional posture; the method is not limited to the deep learning technique of this embodiment.
  • In some embodiments, determining that multiple targets meet the shooting trigger condition further includes: determining that the average speed of the cluster is less than a preset speed threshold.
  • the average speed of the cluster refers to the average of the movement speeds of all targets in the cluster. Ideally, the camera mounted on the drone 100 would be triggered when the movement speed of every target in the cluster is 0; in practice, however, it is difficult for all targets in the cluster to be absolutely stationary. Therefore, in this embodiment, the cluster is considered stationary when its average speed is less than a preset speed threshold.
  • the preset speed threshold can be set according to the sharpness of the shooting picture or other requirements.
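The stillness condition above is a one-line average. A minimal sketch; the 0.2 m/s default is an illustrative placeholder for the preset threshold:

```python
def cluster_is_still(speeds, threshold=0.2):
    """speeds: movement speed (m/s) of each target in the cluster.
    The cluster counts as stationary when the average speed is below
    the preset threshold."""
    return sum(speeds) / len(speeds) < threshold
```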
  • In some embodiments, triggering the camera mounted on the drone 100 to shoot specifically includes: determining a focal length of the camera according to a preset strategy.
  • the method for determining the focal length of the camera may be set according to shooting requirements.
  • In some embodiments, the target closest to the camera in the cluster is determined according to the cluster in the current shooting frame; the focal length of the camera is then determined based on the horizontal distance between that closest target and the camera, so as to focus and expose on the target closest to the camera.
  • In some embodiments, the target closest to the camera in the cluster is determined according to the size of each target in the cluster. Specifically, the bounding box of each target of the cluster in the current shooting frame is determined based on image recognition, and the target closest to the camera is determined according to the size of each target's bounding box.
  • In other embodiments, the target closest to the camera in the cluster is determined from a depth map corresponding to the current shooting frame.
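The bounding-box variant rests on a standard pinhole-camera observation: among targets of similar real-world size, the nearest one projects to the largest box. A minimal sketch of that selection (the data shape is an assumption):

```python
def closest_target_by_bbox(bboxes):
    """bboxes: {target_id: (width, height)} bounding-box sizes in the frame.
    Under a pinhole-camera assumption the nearest target of similar real
    size has the largest box, so pick the box with the largest area."""
    return max(bboxes, key=lambda tid: bboxes[tid][0] * bboxes[tid][1])
```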
  • In other embodiments, a facial attractiveness score is calculated for each target in the cluster; the focal length of the camera is determined according to the horizontal distance between the target with the highest score and the camera, so that the highest-scoring target is focused and exposed.
  • any existing facial attractiveness scoring algorithm can be used.
  • In still other embodiments, the focal length of the camera is determined according to the horizontal distance between a specific target in the cluster and the camera, so as to focus and expose on the specific target.
  • the specific target in this embodiment may be the first target in the cluster captured by the camera after the drone 100 is powered on, or a gesture controller of the drone 100; for details, refer to the description of the specific target in step S202, which is not repeated here.
  • the shooting mode of the camera can also be set as required.
  • the camera can be set to slow-motion shooting, so as to obtain a shooting picture similar to the bullet time.
  • In this embodiment, by setting a group photo shooting mode on the drone 100, when multiple targets in the shooting frame meet the shooting trigger condition, the drone 100 automatically triggers the camera to shoot, thereby obtaining a group photo of the multiple targets.
  • Automatic group photo shooting is thus realized; the shooting process is convenient, the shooting efficiency is improved, and labor costs are saved.
  • the method may further include the following steps:
  • Step S501: Control the drone 100 to fly to a specific camera position according to the cluster in the current shooting frame.
  • the specific camera position is the next camera position relative to the current position of the drone 100.
  • the specific camera position may be set in a manner selected according to requirements.
  • In some embodiments, the specific camera position is located within the obstacle avoidance field of view of the drone 100 at its current position.
  • For example, the observation range of the binocular FOV (field of view) is 30 degrees up and down and 60 degrees left and right.
  • the line connecting the specific camera position and the drone 100's current position needs to be kept within the binocular FOV observation range to ensure the safety of the drone 100.
  • In other embodiments, the specific camera position is an empirically classic camera position.
  • For example, the specific camera position may be 3 meters high relative to the target with a 45-degree inclination, or 10 meters high relative to the target with a 70-degree inclination.
  • the position 3 meters high at 45 degrees relative to the target may be set as the first specific camera position,
  • and the position 10 meters high at 70 degrees relative to the target may be set as the second specific camera position.
  • the first specific camera position precedes the second specific camera position.
  • In still other embodiments, in order to obtain a three-dimensional image of the cluster, the specific camera position may be selected at the same height but a different angle relative to the cluster.
  • step S501 may also be implemented in different ways.
  • In some embodiments, step S501 specifically includes: controlling the drone 100 to fly to the specific camera position on a flight plane, where the flight plane is perpendicular to the horizontal plane, the line connecting the current position of the drone 100 and the cluster lies on the flight plane, and the specific camera position is located on the flight plane.
  • In some embodiments, the drone 100 is preset, for the group photo shooting mode, with the distance from the drone 100 to the cluster when the drone 100 is at the specific camera position; the method further includes: flying on the flight plane to the specific camera position according to the preset distance from the cluster, so as to meet the shooting requirements.
  • In other embodiments, the drone 100 is preset, for the group photo shooting mode, with the area occupied by the cluster in the shooting frame when the drone 100 is at the specific camera position; the method further includes: flying on the flight plane to the specific camera position according to the area occupied by the cluster in the shooting frame, so as to meet the shooting requirements.
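The area-based variant has a simple geometric reading: under a pinhole-camera assumption, the group's apparent linear size scales roughly as 1/distance, so the occupied frame area scales as 1/distance². A hedged sketch of solving for the distance that yields a preset coverage (the function and parameter names are illustrative, not from the patent):

```python
def distance_for_coverage(current_dist, current_area, target_area):
    """If the cluster occupies current_area (fraction of the frame) at
    current_dist, area scales ~1/distance**2, so solve for the distance
    that gives target_area. A geometric sketch, not the patent's formula."""
    return current_dist * (current_area / target_area) ** 0.5
```

For example, a group filling 10% of the frame at 10 m would fill 40% at 5 m.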
  • step S501 specifically includes: taking the center of the settlement as the circle center, controlling the drone 100 to fly around the settlement at a specific height and with a specific radius, and setting a designated position on this flight path as the specific camera position.
  • in one example, the center of the settlement is taken as the circle center, and the drone 100 is controlled to fly a full circle around the settlement at the specific height and with the specific radius.
  • in another example, the center of the settlement is taken as the circle center, and the drone 100 is controlled to fly an arc segment around the settlement at the specific height and with the specific radius.
  • the designated position may be a front, two sides, or a back of a specific target in the settlement, and may be specifically selected according to needs.
  • a specific height and a specific radius may also be set according to shooting requirements.
  • the specific height and the specific radius are, respectively, the height of the drone 100 and its distance from the settlement when it enters the group photo shooting mode.
  • the specific height and the specific radius may also be preset default values, or may be input by a user in advance.
  • step S502: the camera mounted on the drone 100 is triggered to shoot again.
  • after step S502 is performed, multiple images captured of the same settlement are obtained.
  • for the manner of triggering the camera mounted on the drone 100 to shoot, refer to the description of step S203 above, which is not repeated here.
  • suppose three group photos need to be taken of a certain settlement, with the specific camera positions at coordinates (x 1 , y 1 , z 1 ), (x 2 , y 2 , z 2 ), (x 3 , y 3 , z 3 ); in the navigation coordinate system, when the drone 100 enters the group photo shooting mode based on the trigger instruction, its yaw angle relative to the settlement is a and its distance from the target settlement is d.
  • the formulas for calculating the coordinates of the camera positions are:
  • x i = sin(a) * x g + cos(a) * y g ;
  • y i = sin(a) * x g + cos(a) * y g ;
  • z i = z g + cos(60°) * d;
  • where i = 1, 2, or 3, and (x g , y g , z g ) are the real-time coordinates of the settlement.
  • the first specific camera position may be a position obliquely above the settlement at 60°, at the same distance and direction from the settlement as when the drone 100 entered the group photo shooting mode based on the trigger instruction.
  • PID control can be performed in three directions of x, y, and z respectively, so as to control the drone 100 to reach three specific aircraft positions in sequence.
  • the method may further include the steps of: obtaining the images captured by the drone 100 at two or more camera positions, and generating a three-dimensional image of the settlement from the images captured at those positions.
  • the settlements in the images obtained on at least two aircraft positions are at least partially overlapped to realize a three-dimensional composition of the settlements.
  • the drone 100 is preset with at least two scene modes, for example, an alpine scene mode, a plain scene mode, and an ocean scene mode.
  • different specific camera positions are preset in different scene modes.
  • the method further includes: determining a specific camera position corresponding to the scene mode according to the currently set scene mode.
  • the method may further include: adjusting a shooting angle of the camera mounted on the drone 100 according to a settlement in a current shooting frame to meet a shooting requirement.
  • the camera shooting angle can be set by the user in advance, or it can be set according to the composition.
  • the optimal shooting angle of the camera is set according to the composition, and the composition strategy can be set as needed.
  • adjusting the shooting angle of the camera mounted on the drone 100 according to the expected position of the settlement in the shooting frame is described below.
  • the expected position may be a position where the center point of the settlement is 1/3 of the pixel height from the bottom of the shooting frame (1/3 pixel height being the frame's pixel height divided by 3), a position where the distance between the settlement's center point and a certain position of the shooting frame equals a preset distance,
  • or a position where the distance between another point of the settlement and a certain position of the shooting frame equals a preset distance.
  • other composition strategies may also be used to adjust the shooting angle of the camera mounted on the drone 100 to meet actual shooting needs, for example, by segmenting the scene of the shooting frame and placing the settlement at a certain position relative to the scene, or at a certain proportion relative to the scene.
  • a scene of a shooting picture may be segmented based on deep learning.
  • the method may further include: controlling the drone 100 to stay at the current camera position for a preset duration, ensuring the drone 100 is stable before the camera is controlled to shoot, so as to obtain higher-quality images.
  • the size of the preset duration in this embodiment may be set according to requirements, for example, it may be 1 second, 2 seconds, or other durations.
  • the drone 100 may have an automatic reset function; specifically, after the camera mounted on the drone 100 is triggered to shoot, the method further includes: when it is determined that the number of images captured by the camera reaches a preset number, controlling the drone 100 to return to the camera position at which the settlement was first photographed.
  • the preset number of sheets can be preset by the user.
  • an embodiment of the present invention further provides a group photographing device.
  • the device may include a storage device 210 and a processor 220.
  • the storage device 210 may include volatile memory, such as random-access memory (RAM); the storage device 210 may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage device 210 may also include a combination of the above types of memory.
  • the processor 220 may be a central processing unit (CPU).
  • the processor 220 may further include a hardware chip.
  • the above hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the storage device 210 is further configured to store program instructions.
  • the processor 220 may call the program instructions to implement the corresponding methods shown in the embodiments of FIG. 2, FIG. 3, and FIG. 5.
  • the processor 220 calls the program instructions, and when the program instructions are executed, the processor 220 is configured to: enter a group photo shooting mode based on a trigger instruction; in the group photo shooting mode, identify multiple targets in the current shooting frame; and when it is determined that the multiple targets meet a shooting trigger condition, trigger a camera mounted on the drone 100 to shoot.
  • the processor 220 is configured to identify a settlement in a currently captured picture based on an image recognition and clustering algorithm.
  • the settlement is the settlement where a specific target is located; the specific target is the first target in the settlement captured by the camera after the drone 100 is powered on, or the specific target is the gesture controller of the drone 100.
  • the processor 220 determining that multiple targets meet the shooting trigger condition includes: determining that the number of targets in the settlement in a specific posture is greater than or equal to a preset number, or determining that the ratio of the number of targets in a specific posture to the total number of targets is greater than a preset ratio.
  • the processor 220 determining that the target is in a specific state includes: determining that the gesture of the target is a specific shape.
  • the processor 220 determining that the target is in a specific posture includes: determining that the target is in a jumping state.
  • the processor 220 determining that the target is in a jumping state includes: determining that a change in the distance between the target and the drone 100 in a vertical direction satisfies a specific condition.
  • the processor 220 determining that the target is in a specific posture includes: determining that the target is in an extended state.
  • the processor 220 is further configured to: control the drone 100 to be located directly above the settlement, and control the camera to shoot downward.
  • the processor 220 is configured to obtain the joint point positions of the target in the shooting frame according to a human joint point model, and determine that the target is in an extended state based on those joint point positions.
  • the processor 220 is configured to determine that the target is in an extended state based on a positional relationship between at least one of an elbow joint, a wrist joint, a knee joint, and an ankle of the target and a torso of the target.
  • the processor 220 determining that at least part of the targets in the settlement are in a specific posture includes: determining that at least part of the targets in the settlement are in a non-conventional posture.
  • the processor 220 is configured to determine, according to a conventional attitude model, that at least part of the targets in the settlement are in an unusual attitude.
  • the processor 220 determines that a plurality of the targets meet the shooting trigger condition further includes: determining that the average speed of the settlement is less than a preset speed threshold.
  • after triggering the camera mounted on the drone 100 to shoot upon determining that multiple targets meet the shooting trigger condition, the processor 220 is further configured to: control the drone 100 to fly to a specific camera position according to the settlement in the current shooting frame, and trigger the camera mounted on the drone 100 to shoot again.
  • the specific camera position is located within the obstacle-avoidance field of view of the drone 100 at its current position.
  • the processor 220 controlling the drone 100 to fly to the specific camera position according to the settlement in the current shooting frame includes: controlling the drone 100 to fly to the specific camera position on a flight plane, where the flight plane is perpendicular to the horizontal plane and contains both the line connecting the current position of the drone 100 with the settlement and the specific camera position.
  • the drone 100 is preset, for the group photo shooting mode, with the distance between the drone 100 and the settlement when the drone 100 is at the specific camera position, or with the area occupied by the settlement in the shooting frame; the processor 220 is further configured to fly on the flight plane to the specific camera position according to that distance or that area.
  • the processor 220 is configured to: take the center of the settlement as the circle center, control the drone 100 to fly around the settlement at a specific height and with a specific radius, and set a designated position on this flight path as the specific camera position.
  • the specific height and the specific radius are the height when the drone 100 enters the group photo shooting mode and the distance from the settlement.
  • the processor 220 is further configured to obtain the images captured by the drone 100 at two or more camera positions, in which the settlement at least partially overlaps, and to generate a three-dimensional image of the settlement from the images captured at those positions.
  • the drone 100 is preset with at least two scene modes, each preset with its corresponding specific camera positions; before controlling the drone 100 to fly to the specific camera position according to the settlement in the current shooting frame, the processor 220 is further configured to determine the specific camera position corresponding to the currently set scene mode.
  • the processor 220 before triggering the camera mounted on the drone 100 to perform shooting, is further configured to adjust a shooting angle of the camera mounted on the drone 100 according to the settlement in the current shooting frame.
  • the processor 220 is configured to adjust a shooting angle of a camera mounted on the drone 100 according to an expected position of the settlement in the shooting frame.
  • the expected position refers to a position where the center point of the settlement is 1/3 pixel height from the bottom of the shooting screen.
  • the processor 220 is further configured to control the drone 100 to return to the camera position at which the settlement was first photographed when the number of images captured by the camera reaches a preset number.
  • the processor 220 is configured to determine a focal length of the camera according to a preset policy.
  • the processor 220 is configured to determine the target in the settlement nearest the camera according to the settlement in the current shooting frame, and to determine the focal length of the camera based on the horizontal distance between that nearest target and the camera.
  • the processor 220 is configured to determine a target closest to the camera in the settlement according to the size of each target in the settlement.
  • the processor 220 is configured to calculate a facial attractiveness score for each target in the settlement according to a scoring algorithm, and to use the distance between the highest-scoring target and the camera as the focal length of the camera.
  • the processor 220 is configured to use a distance between a specific target in the settlement and the camera as a focal length of the camera.
  • the specific target is the first target in the settlement captured by the camera after the drone 100 is powered on, or the specific target is the gesture controller of the drone 100.
  • for the specific implementation of the processor 220 according to the embodiments of the present invention, reference may be made to the descriptions of the corresponding content in the foregoing embodiments, and details are not repeated here.
  • An embodiment of the present invention further provides a computer-readable storage medium.
  • the computer-readable storage medium stores program instructions. When the program instructions are executed by the processor 220, the program instructions are used to execute the group photo shooting method of the foregoing embodiment.
  • the program can be stored in a computer-readable storage medium.
  • when the program is executed, it may include the processes of the method embodiments described above.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM).

Abstract

A group photo shooting method and device, the method comprising: entering a group photo shooting mode based on a trigger instruction (S201); in the group photo shooting mode, identifying multiple targets in the current shooting frame (S202); and when it is determined that the multiple targets meet a shooting trigger condition, triggering a camera mounted on a drone to shoot (S203). By providing a group photo shooting mode on the drone, the drone automatically triggers the camera to shoot when the multiple targets in the shooting frame meet the shooting trigger condition, thereby obtaining a group photo of the multiple targets and realizing automatic group photo shooting; the shooting process is convenient, shooting efficiency is high, and labor costs are saved.

Description

Group Photo Shooting Method and Device
Technical Field
The present invention relates to the field of photography, and in particular to a group photo shooting method and device.
Background
At present, taking a group photo requires a photographer who must constantly adjust position to obtain a reasonably good group photo; this approach is cumbersome, and the shooting angle is rather limited. With the development of aerial drone technology, drone-based shooting can replace existing manual shooting and offers richer shooting angles. In the prior art, however, there has been little research on taking group photos with drones.
Summary of the Invention
The present invention provides a group photo shooting method and device.
According to a first aspect of the present invention, a group photo shooting method is provided, the method comprising:
entering a group photo shooting mode based on a trigger instruction;
in the group photo shooting mode, identifying multiple targets in the current shooting frame;
when it is determined that the multiple targets meet a shooting trigger condition, triggering a camera mounted on a drone to shoot.
According to a second aspect of the present invention, a group photo shooting device is provided, comprising: a storage device and a processor;
the storage device is configured to store program instructions;
the processor calls the program instructions and, when the program instructions are executed, is configured to:
enter a group photo shooting mode based on a trigger instruction;
in the group photo shooting mode, identify multiple targets in the current shooting frame; and when it is determined that the multiple targets meet a shooting trigger condition, trigger a camera mounted on a drone to shoot.
According to a third aspect of the present invention, a computer-readable storage medium is provided, the computer-readable storage medium storing program instructions which, when run by a processor, are used to perform the following steps:
entering a group photo shooting mode based on a trigger instruction;
in the group photo shooting mode, identifying multiple targets in the current shooting frame;
when it is determined that the multiple targets meet a shooting trigger condition, triggering a camera mounted on a drone to shoot.
As can be seen from the technical solutions provided by the above embodiments of the present invention, by providing a group photo shooting mode on a drone, when multiple targets in the shooting frame meet the shooting trigger condition, the drone automatically triggers the camera to shoot, thereby obtaining a group photo of the multiple targets and realizing automatic group photo shooting; the shooting process is convenient, shooting efficiency is improved, and labor costs are saved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is an application scenario diagram of a group photo shooting method in an embodiment of the present invention;
FIG. 2 is a flowchart of a group photo shooting method in an embodiment of the present invention;
FIG. 3 is a flowchart of a group photo shooting method in another embodiment of the present invention;
FIG. 4 is another application scenario diagram of a group photo shooting method in an embodiment of the present invention;
FIG. 5 is a flowchart of a group photo shooting method in yet another embodiment of the present invention;
FIG. 6 is a structural block diagram of a group photo shooting device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The group photo shooting method and device of the present invention are described in detail below with reference to the drawings. In the absence of conflict, the features of the following embodiments and implementations may be combined with one another.
The group photo shooting method of the present invention is applied to a drone. Referring to FIG. 1, the drone 100 may include a carrier 102 and a payload 104. In some embodiments, the payload 104 may be located directly on the drone 100 without the carrier 102. In this embodiment, the carrier 102 is a gimbal, for example, a two-axis or three-axis gimbal. The payload 104 may be an image capture device or imaging device (such as a camera, camcorder, infrared imaging device, ultraviolet imaging device, or similar device), an audio capture device (for example, a parabolic reflector microphone), an infrared imaging device, and so on; the payload 104 may provide static sensing data (such as pictures) or dynamic sensing data (such as video). The payload 104 is mounted on the carrier 102, so that rotation of the payload 104 is controlled through the carrier 102. This embodiment is described taking the carrier 102 being a gimbal and the payload being a camera as an example.
Further, the drone 100 may include a power mechanism 106, a sensing system 108, and a communication system 110. The power mechanism 106 may include one or more rotating bodies, propellers, blades, motors, electronic speed controllers, and the like. For example, the rotating body of the power mechanism may be a self-tightening rotating body, a rotating body assembly, or another rotating-body power unit. The drone 100 may have one or more power mechanisms. All the power mechanisms may be of the same type; optionally, one or more power mechanisms may be of different types. The power mechanism 106 may be mounted on the drone by suitable means, such as via a support element (such as a drive shaft), and may be mounted at any suitable location on the drone 100, such as the top, bottom, front, rear, sides, or any combination thereof. The flight of the drone 100 is controlled by controlling one or more of the power mechanisms 106.
The sensing system 108 may include one or more sensors to sense the spatial orientation, velocity, and/or acceleration of the drone 100 (e.g., rotation and translation with respect to up to three degrees of freedom). The one or more sensors may include a GPS sensor, a motion sensor, an inertial sensor, a proximity sensor, or an image sensor. The sensing data provided by the sensing system 108 may be used to track the spatial orientation, velocity, and/or acceleration of a target (using a suitable processing unit and/or control unit, as described below). Optionally, the sensing system 108 may be used to collect environmental data of the drone, such as weather conditions, potential obstacles to be approached, locations of geographic features, locations of man-made structures, and so on.
The communication system 110 enables communication with a terminal 112 having a communication system 114 via wireless signals 116. The communication systems 110, 114 may include any number of transmitters, receivers, and/or transceivers for wireless communication. The communication may be one-way, so that data can be sent in one direction only. For example, one-way communication may involve only the drone 100 transmitting data to the terminal 112, or vice versa; one or more transmitters of the communication system 110 may send data to one or more receivers of the communication system 114, and vice versa. Optionally, the communication may be two-way, so that data can be transmitted in both directions between the drone 100 and the terminal 112; two-way communication involves one or more transmitters of the communication system 110 sending data to one or more receivers of the communication system 114, and vice versa.
In some embodiments, the terminal 112 may provide control data to one or more of the drone 100, the carrier 102, and the payload 104, and receive information from one or more of the drone 100, the carrier 102, and the payload 104 (such as position and/or motion information of the drone, carrier, or payload, and data sensed by the payload, such as image data captured by the camera).
In some embodiments, the drone 100 may communicate with remote devices other than the terminal 112, and the terminal 112 may also communicate with remote devices other than the drone 100. For example, the drone and/or the terminal 112 may communicate with another drone or with the carrier or payload of another drone. When required, the additional remote device may be a second terminal or another computing device (such as a computer, desktop computer, tablet, smartphone, or other mobile device). The remote device may transmit data to the drone 100, receive data from the drone 100, transmit data to the terminal 112, and/or receive data from the terminal 112. Optionally, the remote device may be connected to the Internet or another telecommunication network so that data received from the drone 100 and/or the terminal 112 can be uploaded to a website or server.
In some embodiments, the motion of the drone 100, the motion of the carrier 102, and the motion of the payload 104 relative to a fixed reference (such as the external environment), and/or relative to one another, may all be controlled by the terminal 112. The terminal 112 may be a remote control terminal located away from the drone, the carrier, and/or the payload. The terminal 112 may be placed on or affixed to a support platform. Optionally, the terminal 112 may be handheld or wearable; for example, the terminal 112 may include a smartphone, tablet, desktop computer, computer, glasses, gloves, helmet, microphone, or any combination thereof. The terminal 112 may include a user interface, such as a keyboard, mouse, joystick, touchscreen, or display. Any suitable user input may interact with the terminal 112, such as manually entered commands, voice control, gesture control, or position control (e.g., through the motion, position, or tilt of the terminal 112).
FIG. 2 is a flowchart of the group photo shooting method of an embodiment of the present invention. Referring to FIG. 2, the method may include the following steps:
Step S201: entering a group photo shooting mode based on a trigger instruction.
This step may be performed before the drone 100 takes off or during the flight of the drone 100. For example, in one embodiment, step S201 is performed before flight: the user may operate a terminal to send a trigger instruction to the drone 100, or operate a button provided on the drone 100 to generate the trigger instruction, thereby triggering the drone 100 to enter the group photo shooting mode.
In another embodiment, step S201 is performed during the flight of the drone 100, and the trigger instruction may be determined by a target recognized by the drone 100 and the target's posture (such as a gesture). Taking gestures as an example, switching to the group photo shooting mode during flight covers two cases:
First, when the distance from the drone 100 to the target is less than or equal to a preset distance (for example, 5 m), the trigger instruction may be determined by the target's gesture; the drone 100 receiving the trigger instruction may mean the drone 100 recognizing a specific gesture of the target, such as a "V" gesture or a "thumbs-up" gesture. The target may include the gesture controller of the drone 100, the first target captured by the camera after the drone 100 is powered on for flight, and a cluster identified based on the gesture controller of the drone 100 or that first target.
Second, when the distance from the drone 100 to the target is greater than the preset distance, the trigger instruction is jointly determined by the target and the target's gesture; optionally, the trigger instruction may mean that the drone 100 identifies a cluster based on the target and that the number of targets in the cluster holding a specific gesture is greater than or equal to a preset number.
Step S202: in the group photo shooting mode, identifying multiple targets in the current shooting frame.
In this embodiment, the drone 100 may use existing algorithms to identify multiple targets in the current shooting frame. In a feasible implementation, referring to FIG. 3, step S202 specifically includes: identifying a cluster in the current shooting frame based on image recognition and a clustering algorithm. It should be noted that in this embodiment, a cluster refers to a group formed by multiple targets that are close to one another (the distance may be determined empirically) and approximately consistent in speed (i.e., movement speed) and direction (which may include the targets' facing direction, movement direction, and so on).
In this embodiment, the cluster is the cluster where a specific target is located. In one embodiment, the specific target may be the first target in the cluster captured by the camera after the drone 100 is powered on for flight. In this embodiment, the first target recognized by the camera serves as the primary target; the first captured target is tracked based on image recognition, and other targets that are close to the first target and approximately consistent with it in speed and direction are automatically included based on the clustering algorithm, forming a cluster.
In another embodiment, the specific target may also be the gesture controller of the drone 100. In this embodiment, the gesture controller serves as the primary target; the gesture controller is tracked based on image recognition, and other targets that are close to the gesture controller and approximately consistent with it in speed and direction are automatically included based on the clustering algorithm, forming a cluster.
In yet another embodiment, the terminal receives the shooting frame transmitted by the drone 100, and the user may directly select a target in the shooting frame as the specific target by operating the terminal. After the user selects the specific target, it serves as the primary target and is tracked based on image recognition, and other targets that are close to the specific target and approximately consistent with it in speed and direction are automatically included based on the clustering algorithm, forming a cluster. Of course, the user may also directly select multiple targets in the shooting frame as the cluster by operating the terminal.
In this embodiment, any existing image recognition algorithm may be used to identify targets, for example, a face recognition algorithm. Of course, in other embodiments, targets may also be identified by means of QR codes, GPS, infrared light, and the like.
In addition, the cluster in this embodiment is dynamic. For example, after a cluster is generated, targets that are close to the cluster and approximately consistent with it in speed and direction may be included in the cluster according to the cluster's coordinates (i.e., the cluster's coordinates in the shooting frame, which may be the average of the coordinates of the targets in the cluster, or the coordinates of the primary target) and speed. Likewise, targets in the current cluster that are far from all the other targets and differ significantly from them in speed and direction may be automatically removed according to the cluster's coordinates and speed.
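The grouping rule described above (include targets near the primary target whose speed and direction roughly match) can be sketched as follows; the distance and velocity thresholds and the dictionary layout of a detection are illustrative assumptions, not values from the specification:

```python
import math

def cluster_targets(targets, max_dist=3.0, max_vel_diff=0.5):
    """Greedy grouping seeded on the primary (first) target: include any
    detection whose position is within max_dist metres of the primary
    target and whose velocity differs by at most max_vel_diff m/s."""
    primary = targets[0]
    cluster = [primary]
    for t in targets[1:]:
        close = math.dist(primary['pos'], t['pos']) <= max_dist
        same_motion = math.dist(primary['vel'], t['vel']) <= max_vel_diff
        if close and same_motion:
            cluster.append(t)
    return cluster
```

A fuller implementation would re-run this check every frame, matching the dynamic inclusion and removal of targets described above.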
Step S203: when it is determined that the multiple targets meet a shooting trigger condition, triggering the camera mounted on the drone 100 to shoot.
This embodiment uses image recognition to trigger group photo shooting. Compared with existing approaches that trigger group photo shooting by voice, mechanical switches, or handheld lights, the images captured in this embodiment are richer in composition and more professional.
In step S203, determining that the multiple targets meet the shooting trigger condition specifically includes: determining that the number of targets in the cluster in a specific posture is greater than or equal to a preset number. The preset number may be a fixed value, such as 3 or 5, or may be set as a proportion of the number of targets in the cluster, such as 1/2. There may be multiple types of specific posture; for example, in some embodiments, determining that a target is in a specific state includes: determining that the target's gesture is a specific shape, such as a "V" or "thumbs-up" gesture. Triggering the drone 100 to shoot automatically based on a specifically shaped gesture makes shooting more convenient and fun, and saves labor costs.
In some embodiments, as shown in FIG. 4, determining that a target is in a specific posture includes: determining that the target is in a jumping state. This embodiment triggers the drone 100 to shoot automatically based on the target's jump, improving the fun and convenience of shooting and reducing labor costs. In this embodiment, determining that the target is in a jumping state includes: determining that the change in the vertical distance between the target and the drone 100 satisfies a specific condition. It should be noted that in this embodiment, the vertical distance between the target and the drone 100 refers to the vertical distance between the top of the target's head and the drone 100. Further, the camera may shoot in three ways: downward (high-angle), level, or upward (low-angle). When the camera shoots downward, the target is determined to be jumping when the vertical distance between the target and the drone 100 decreases momentarily or continuously and the target has a vertical speed. When the camera shoots level or upward, the target is determined to be jumping when the vertical distance between the target and the drone 100 increases momentarily or continuously and the target has a vertical speed.
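For the downward-shooting case, the jump check amounts to watching the head-to-drone vertical distance shrink at a sufficient rate; a minimal sketch, where the sampling period and speed threshold are assumed values:

```python
def is_jumping(vertical_distances, dt=0.1, min_speed=0.3):
    """Downward-shooting case: the head-to-drone vertical distance shrinks
    while the target moves upward. Returns True when the latest change in
    distance implies an upward speed above min_speed (m/s)."""
    if len(vertical_distances) < 2:
        return False
    delta = vertical_distances[-1] - vertical_distances[-2]
    upward_speed = -delta / dt  # shrinking distance => positive upward speed
    return upward_speed > min_speed
```

The level/upward-shooting case would flip the sign of the distance change, per the description above.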
In some embodiments, determining that a target is in a specific posture includes: determining that the target is in an extended state (in this embodiment, mainly that the person's limbs are extended). Triggering the drone 100 to shoot automatically based on the target's extension improves the fun and convenience of shooting and reduces labor costs. Since triggering automatic shooting based on the target's extension suits downward shooting, in this embodiment, before determining that the number of targets in the cluster in a specific posture is greater than or equal to the preset number, the method may further include: controlling the drone 100 to be located directly above the cluster and controlling the camera to shoot downward, so that the camera takes a high-angle shot.
Further, determining that at least some targets in the cluster are in an extended state specifically includes: obtaining the joint point positions of the target in the shooting frame according to a human joint point model; and determining that the target is in an extended state based on the joint point positions of the target in the shooting frame. In this embodiment, the human joint point model is obtained based on deep learning; specifically, a large number of target images are collected and classified based on deep learning to train the human joint point model. Determining whether a target is extended according to a human joint point model trained with deep learning yields highly accurate recognition results. Of course, other methods may also be used to recognize whether the target is extended, not limited to the deep learning of this embodiment. Still further, determining that the target is in an extended state based on the target's joint point positions specifically includes: determining that the target is in an extended state based on the positional relationship between at least one of the target's elbow joints, wrist joints, knee joints, and ankles and the target's torso.
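One plausible reading of the joint-point test above is to compare limb-joint positions against the torso; the joint names, normalised image coordinates, and threshold below are all assumptions for illustration:

```python
import math

def is_extended(joints, limb_keys=('wrist_l', 'wrist_r'), threshold=0.6):
    """Treat the target as 'extended' when every listed limb joint lies
    farther from the torso centre than a threshold, in normalised image
    coordinates (0..1). Joint names and 0.6 threshold are assumed."""
    tx, ty = joints['torso']
    for key in limb_keys:
        jx, jy = joints[key]
        if math.hypot(jx - tx, jy - ty) < threshold:
            return False
    return True
```

A production version would take the joints from the trained joint-point model and likely also consider knees and ankles, as the specification allows.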
In some embodiments, determining that at least some targets in the cluster are in a specific posture includes: determining that at least some targets in the cluster are in an unconventional posture. Triggering the drone 100 to shoot automatically based on targets' unusual postures improves the fun and convenience of shooting and reduces labor costs.
In this embodiment, determining that at least some targets in the cluster are in an unconventional posture specifically includes: determining, according to a conventional posture model, that at least some targets in the cluster are in an unconventional posture. In this embodiment, the conventional posture model is trained based on deep learning; specifically, a large number of images of targets in conventional postures are collected and classified based on deep learning to train the conventional posture model. Determining whether a target is in an unconventional posture according to a conventional posture model trained with deep learning yields highly accurate recognition results. Of course, other methods may also be used to recognize whether the target is in an unconventional posture, not limited to the deep learning of this embodiment.
In some embodiments, determining that the multiple targets meet the shooting trigger condition further includes: determining that the average speed of the cluster is less than a preset speed threshold. It should be noted that in this embodiment, the average speed of the cluster refers to the average of the movement speeds of all targets in the cluster. Ideally, the camera mounted on the drone 100 would be triggered when the movement speed of every target in the cluster is 0; in practice, however, absolute stillness of all targets in the cluster is difficult to achieve, so this embodiment regards the cluster as still when its average speed is less than the preset speed threshold. The preset speed threshold may be set according to the required sharpness of the shooting frame or other needs.
Further, in step S203, triggering the camera mounted on the drone 100 to shoot specifically includes: determining the focal length of the camera according to a preset policy. When taking a group photo, because the camera's focusing and metering are weighted, among multiple targets there may be only some that are suitable for focusing or exposure; determining the camera's focal length according to a preset policy therefore filters, from the multiple targets, those suitable for focusing or exposure, meeting the shooting requirements. The way the camera's focal length is determined may be set according to shooting needs. For example, in some examples, the target in the cluster nearest the camera is determined according to the cluster in the current shooting frame, and the camera's focal length is determined based on the horizontal distance between that nearest target and the camera, so that focusing and exposure emphasize the target nearest the camera. Optionally, the target in the cluster nearest the camera is determined according to the size of each target in the cluster; specifically, the bounding box of each target of the cluster in the current shooting frame is determined based on image recognition, and the target nearest the camera is determined according to the size of each target's bounding box. Optionally, the target in the cluster nearest the camera is determined on the depth map corresponding to the current shooting frame.
In other examples, a facial attractiveness score is calculated for each target in the cluster according to a scoring algorithm, and the camera's focal length is determined according to the horizontal distance between the highest-scoring target and the camera, so that focusing and exposure emphasize the higher-scoring target. An existing scoring algorithm may be used.
In still other examples, the camera's focal length is determined according to the horizontal distance between a specific target in the cluster and the camera, so that focusing and exposure emphasize the specific target. The specific target of this embodiment may be the first target in the cluster captured by the camera after the drone 100 is powered on for flight, or the gesture controller of the drone 100; see the description of the specific target in step S202, which is not repeated here.
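The bounding-box heuristic above (the largest box is assumed to belong to the nearest target, whose distance then drives focusing) might look like this; the detection layout is an assumed structure, not an API from the specification:

```python
def pick_focus_target(detections):
    """Assume the target with the largest bounding box is nearest the
    camera; return it so its horizontal distance can drive the focus
    setting. Each detection: {'id': ..., 'bbox': (x, y, w, h)}."""
    return max(detections, key=lambda d: d['bbox'][2] * d['bbox'][3])
```

When a depth map is available, the same selection could instead read per-target depth directly, as the alternative in the text suggests.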
The camera's shooting mode may also be set as required; for example, the camera may be set to slow-motion photography to obtain footage similar to the bullet-time effect.
In the embodiments of the present invention, by providing a group photo shooting mode on the drone 100, when multiple targets in the shooting frame meet the shooting trigger condition, the drone 100 automatically triggers the camera to shoot, thereby obtaining a group photo of the multiple targets and realizing automatic group photo shooting; the shooting process is convenient, shooting efficiency is improved, and labor costs are saved.
Referring to FIG. 5, after step S203 the method may further include the following steps:
Step S501: controlling the drone 100 to fly to a specific camera position according to the cluster in the current shooting frame.
In this step, the specific camera position is the next position relative to the position of the drone 100 at the current moment.
The specific camera position may be chosen as needed. For example, in some embodiments, the specific camera position lies within the obstacle-avoidance field of view of the drone 100 at its current position. For instance, if the observation range of the binocular FOV (camera field of view) is 30 degrees up-down and 60 degrees left-right, the line connecting the specific camera position and the drone 100's current position must be kept within that binocular FOV range to ensure the safety of the drone 100.
In some embodiments, the specific camera position is an empirically proven classic position; for example, it may be 3 meters high relative to the target at a 45-degree inclination, or 10 meters high relative to the target at a 70-degree inclination. In some embodiments, the position 3 meters high at 45 degrees relative to the target may be set as the first specific camera position, and the position 10 meters high at 70 degrees as the second, the first being the position immediately preceding the second.
In some embodiments, to obtain a three-dimensional image of the cluster, the specific camera positions may be chosen at the same height but different angles relative to the cluster.
Step S501 may also be implemented in different ways to achieve different shooting effects. For example, in some embodiments, step S501 specifically includes: controlling the drone 100 to fly to the specific camera position on a flight plane, where the flight plane is perpendicular to the horizontal plane and contains both the line connecting the drone 100's current position with the cluster and the specific camera position. Further, in some examples, the drone 100 is preset, for the group photo shooting mode, with the distance of the drone 100 relative to the cluster at the specific camera position, and the method further includes: flying on the flight plane to the specific camera position according to that distance, so as to meet the shooting requirements. In other examples, the drone 100 is preset, for the group photo shooting mode, with the area occupied by the cluster in the shooting frame at the specific camera position, and the method further includes: flying on the flight plane to the specific camera position according to that area, so as to meet the shooting requirements.
In some embodiments, step S501 specifically includes: taking the center of the cluster as the circle center, controlling the drone 100 to fly around the cluster at a specific height and with a specific radius; and setting a designated position on this flight path as the specific camera position. In some examples, the drone 100 is controlled to fly a full circle around the cluster at the specific height and radius; in other examples, only an arc segment. The designated position may be in front of, to either side of, or behind a specific target in the cluster, chosen as needed. Further, the specific height and specific radius may also be set according to shooting requirements; for example, in one embodiment, the specific height and specific radius are, respectively, the drone 100's height and its distance from the cluster when it entered the group photo shooting mode. In another embodiment, they may be preset default values or entered in advance by the user.
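The orbiting flight just described can be discretised into waypoints on a circle around the cluster center; the waypoint count is an assumption for illustration:

```python
import math

def orbit_waypoints(center, radius, height, n=8):
    """Waypoints evenly spaced on a circle of the given radius around the
    cluster center, all at the same height, for the orbiting shot."""
    cx, cy = center
    pts = []
    for k in range(n):
        a = 2 * math.pi * k / n
        pts.append((cx + radius * math.cos(a), cy + radius * math.sin(a), height))
    return pts
```

Flying only an arc segment, as the text also allows, would simply take a contiguous slice of these waypoints.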
Step S502: triggering the camera mounted on the drone 100 to shoot again.
After step S502 is performed, multiple images captured of the same cluster are obtained. For the manner of triggering the camera mounted on the drone 100 to shoot, see the description of step S203 above, which is not repeated here.
In a specific implementation, three group photos are to be taken of a certain cluster, with the specific camera positions at coordinates (x 1 , y 1 , z 1 ), (x 2 , y 2 , z 2 ), (x 3 , y 3 , z 3 ). In the navigation coordinate system, when the drone 100 enters the group photo shooting mode based on the trigger instruction, its yaw angle relative to the cluster is a and its distance from the target cluster is d. The coordinates of the specific camera positions are calculated as:
x i = sin(a) * x g + cos(a) * y g ;
y i = sin(a) * x g + cos(a) * y g ;
z i = z g + cos(60°) * d;
where i = 1, 2, or 3, and (x g , y g , z g ) are the real-time coordinates of the cluster.
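A literal transcription of the published formulas follows; note that the specification prints identical right-hand sides for x_i and y_i, and this sketch reproduces them as printed rather than guessing at a different intended rotation:

```python
import math

def camera_position(cluster_xyz, yaw_a, d):
    """Published formulas, transcribed as printed:
        x_i = sin(a) * x_g + cos(a) * y_g
        y_i = sin(a) * x_g + cos(a) * y_g
        z_i = z_g + cos(60 deg) * d
    """
    xg, yg, zg = cluster_xyz
    xi = math.sin(yaw_a) * xg + math.cos(yaw_a) * yg
    yi = math.sin(yaw_a) * xg + math.cos(yaw_a) * yg
    zi = zg + math.cos(math.radians(60)) * d
    return xi, yi, zi
```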
Optionally, the first specific camera position is a position obliquely above the cluster at 60°, at the same distance and direction from the cluster as when the drone 100 entered the group photo shooting mode based on the trigger instruction.
After the specific camera positions are obtained, PID control may be performed separately in the x, y, and z directions, so as to control the drone 100 to reach the three specific camera positions in sequence.
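The per-axis PID control mentioned above could be sketched as one loop per axis; the gains below are illustrative, not values from the specification:

```python
class AxisPID:
    """One PID loop per axis (x, y, z) driving the drone toward a target
    camera position. Gains are illustrative assumptions."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Three such controllers, one per coordinate, would run in parallel until each axis error falls below a tolerance, then the next specific camera position becomes the setpoint.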
In this embodiment, after step S502 the method may further include the following steps: obtaining the images captured by the drone 100 at two or more camera positions; and generating a three-dimensional image of the cluster from the images captured by the drone 100 at the two or more positions. The cluster in the images captured at the two or more positions at least partially overlaps, enabling three-dimensional composition of the cluster.
Further, the drone 100 is preset with at least two scene modes, for example, an alpine scene mode, a plain scene mode, and an ocean scene mode, with corresponding specific camera positions preset for the different scene modes. To adapt to different scene modes and obtain more professional images, before step S501 the method further includes: determining the specific camera position corresponding to the currently set scene mode.
In addition, before triggering the camera mounted on the drone 100 to shoot, the method may further include: adjusting the shooting angle of the camera mounted on the drone 100 according to the cluster in the current shooting frame, so as to meet the shooting requirements. The camera's shooting angle may be preset by the user or set according to composition. In this embodiment, the optimal shooting angle of the camera is set according to composition, and the composition strategy may be set as needed. For example, in one embodiment, the shooting angle of the camera mounted on the drone 100 is adjusted according to the expected position of the cluster in the shooting frame. The expected position may be a position where the center point of the cluster is 1/3 of the pixel height from the bottom of the shooting frame (1/3 pixel height being the frame's pixel height divided by 3), a position where the distance between the cluster's center point and a certain position of the shooting frame equals a preset distance, or a position where the distance between another point of the cluster and a certain position of the shooting frame equals a preset distance.
Of course, in other embodiments, other composition strategies may be used to adjust the shooting angle of the camera mounted on the drone 100 to meet actual shooting needs, for example, by segmenting the scene of the shooting frame and placing the cluster at a certain position relative to the scene, or at a certain proportion relative to the scene. In this embodiment, the scene of the shooting frame may be segmented based on deep learning.
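Under a linear pixel-to-angle approximation, the gimbal pitch correction needed to bring the cluster center to the expected 1/3-height position could be estimated as follows; the vertical field of view is an assumed parameter:

```python
def pitch_correction(cluster_center_y, frame_height, vfov_deg=60.0):
    """Approximate extra gimbal pitch (degrees) needed to move the cluster
    center to 1/3 of the frame height from the bottom, assuming pixels map
    linearly onto the vertical FOV (image y grows downward)."""
    desired_y = frame_height - frame_height / 3.0
    error_px = cluster_center_y - desired_y
    return error_px / frame_height * vfov_deg
```

In practice this correction would feed the gimbal controller iteratively until the cluster center settles at the expected position.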
Still further, before triggering the camera mounted on the drone 100 to shoot, the method may further include: controlling the drone 100 to stay at the current camera position for a preset duration, ensuring the drone 100 has stabilized before the camera is controlled to shoot, so as to obtain higher-quality images. The preset duration of this embodiment may be set as required; for example, it may be 1 second, 2 seconds, or another duration.
In this embodiment, the drone 100 may have an automatic reset function. Specifically, after triggering the camera mounted on the drone 100 to shoot, the method further includes: when it is determined that the number of images captured by the camera reaches a preset number, controlling the drone 100 to return to the camera position at which the cluster was first photographed. The preset number may be preset by the user.
Referring to FIG. 6, an embodiment of the present invention further provides a group photo shooting device, which may include a storage device 210 and a processor 220.
The storage device 210 may include volatile memory, such as random-access memory (RAM); the storage device 210 may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage device 210 may also include a combination of the above types of memory.
The processor 220 may be a central processing unit (CPU). The processor 220 may further include a hardware chip, which may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
Optionally, the storage device 210 is further configured to store program instructions, and the processor 220 may call the program instructions to implement the corresponding methods shown in the embodiments of FIG. 2, FIG. 3, and FIG. 5.
The processor 220 calls the program instructions, and when the program instructions are executed, the processor 220 is configured to: enter a group photo shooting mode based on a trigger instruction; in the group photo shooting mode, identify multiple targets in the current shooting frame; and when it is determined that the multiple targets meet a shooting trigger condition, trigger the camera mounted on the drone 100 to shoot.
In one embodiment, the processor 220 is configured to identify a cluster in the current shooting frame based on image recognition and a clustering algorithm.
In one embodiment, the cluster is the cluster where a specific target is located; the specific target is the first target in the cluster captured by the camera after the drone 100 is powered on for flight, or the specific target is the gesture controller of the drone 100.
In one embodiment, the processor 220 determining that the multiple targets meet the shooting trigger condition includes: determining that the number of targets in the cluster in a specific posture is greater than or equal to a preset number, or determining that the proportion of the number of targets in the cluster in a specific posture to the total number of targets is greater than a preset ratio.
In one embodiment, the processor 220 determining that a target is in a specific state includes: determining that the target's gesture is a specific shape.
In one embodiment, the processor 220 determining that a target is in a specific posture includes: determining that the target is in a jumping state.
In one embodiment, the processor 220 determining that a target is in a jumping state includes: determining that the change in the vertical distance between the target and the drone 100 satisfies a specific condition.
In one embodiment, the processor 220 determining that a target is in a specific posture includes: determining that the target is in an extended state.
In one embodiment, before determining that the multiple targets meet the shooting trigger condition, the processor 220 is further configured to: control the drone 100 to be located directly above the cluster, and control the camera to shoot downward.
In one embodiment, the processor 220 is configured to: obtain the joint point positions of the target in the shooting frame according to a human joint point model; and determine that the target is in an extended state based on the joint point positions of the target in the shooting frame.
In one embodiment, the processor 220 is configured to determine that the target is in an extended state based on the positional relationship between at least one of the target's elbow joints, wrist joints, knee joints, and ankles and the target's torso.
In one embodiment, the processor 220 determining that at least some targets in the cluster are in a specific posture includes: determining that at least some targets in the cluster are in an unconventional posture.
In one embodiment, the processor 220 is configured to determine, according to a conventional posture model, that at least some targets in the cluster are in an unconventional posture.
In one embodiment, the processor 220 determining that the multiple targets meet the shooting trigger condition further includes: determining that the average speed of the cluster is less than a preset speed threshold.
In one embodiment, after triggering the camera mounted on the drone 100 to shoot upon determining that the multiple targets meet the shooting trigger condition, the processor 220 is further configured to: control the drone 100 to fly to a specific camera position according to the cluster in the current shooting frame, and trigger the camera mounted on the drone 100 to shoot again.
In one embodiment, the specific camera position lies within the obstacle-avoidance field of view of the drone 100 at its current position.
In one embodiment, the processor 220 controlling the drone 100 to fly to the specific camera position according to the cluster in the current shooting frame includes: controlling the drone 100 to fly to the specific camera position on a flight plane, where the flight plane is perpendicular to the horizontal plane, the line connecting the current position of the drone 100 and the cluster lies on the flight plane, and the specific camera position lies on the flight plane.
In one embodiment, the drone 100 is preset, for the group photo shooting mode, with the distance of the drone 100 relative to the cluster at the specific camera position, or with the area occupied by the cluster in the shooting frame; the processor 220 is further configured to fly on the flight plane to the specific camera position according to the drone 100's distance relative to the cluster or the area occupied by the cluster in the shooting frame.
In one embodiment, the processor 220 is configured to: take the center of the cluster as the circle center, control the drone 100 to fly around the cluster at a specific height and with a specific radius; and set a designated position on the drone 100's circling flight path as the specific camera position.
In one embodiment, the specific height and the specific radius are, respectively, the drone 100's height and its distance from the cluster when it entered the group photo shooting mode.
In one embodiment, after triggering the camera mounted on the drone 100 to shoot again, the processor 220 is further configured to: obtain the images captured by the drone 100 at two or more camera positions, where the cluster in the images captured at the two or more positions at least partially overlaps; and generate a three-dimensional image of the cluster from the images captured by the drone 100 at the two or more positions.
In one embodiment, the drone 100 is preset with at least two scene modes, with corresponding specific camera positions preset for the different scene modes; before controlling the drone 100 to fly to the specific camera position according to the cluster in the current shooting frame, the processor 220 is further configured to determine the specific camera position corresponding to the currently set scene mode.
In one embodiment, before triggering the camera mounted on the drone 100 to shoot, the processor 220 is further configured to adjust the shooting angle of the camera mounted on the drone 100 according to the cluster in the current shooting frame.
In one embodiment, the processor 220 is configured to adjust the shooting angle of the camera mounted on the drone 100 according to the expected position of the cluster in the shooting frame.
In one embodiment, the expected position is a position where the center point of the cluster is 1/3 of the pixel height from the bottom of the shooting frame.
In one embodiment, after triggering the camera mounted on the drone 100 to shoot, the processor 220 is further configured to: when it is determined that the number of images captured by the camera reaches a preset number, control the drone 100 to return to the camera position at which the cluster was first photographed.
In one embodiment, the processor 220 is configured to determine the focal length of the camera according to a preset policy.
In one embodiment, the processor 220 is configured to: determine the target in the cluster nearest the camera according to the cluster in the current shooting frame; and determine the focal length of the camera based on the horizontal distance between the nearest target and the camera.
In one embodiment, the processor 220 is configured to determine the target in the cluster nearest the camera according to the size of each target in the cluster.
In one embodiment, the processor 220 is configured to: calculate a facial attractiveness score for each target in the cluster according to a scoring algorithm; and use the distance between the highest-scoring target and the camera as the focal length of the camera.
In one embodiment, the processor 220 is configured to use the distance between a specific target in the cluster and the camera as the focal length of the camera.
In one embodiment, the specific target is the first target in the cluster captured by the camera after the drone 100 is powered on for flight; or, the specific target is the gesture controller of the drone 100.
It should be noted that for the specific implementation of the processor 220 of the embodiments of the present invention, reference may be made to the descriptions of the corresponding content in the foregoing embodiments, which is not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing program instructions which, when run by the processor 220, are used to execute the group photo shooting method of the foregoing embodiments.
A person of ordinary skill in the art can understand that all or part of the processes of the above method embodiments may be completed by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and when executed may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
What is disclosed above is merely some embodiments of the present invention, which certainly cannot be used to limit the scope of rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (65)

  1. A group photo shooting method, characterized in that the method comprises:
    entering a group photo shooting mode based on a trigger instruction;
    in the group photo shooting mode, identifying multiple targets in the current shooting frame;
    when it is determined that the multiple targets meet a shooting trigger condition, triggering a camera mounted on a drone to shoot.
  2. The method according to claim 1, characterized in that identifying multiple targets in the current shooting frame comprises:
    identifying a cluster in the current shooting frame based on image recognition and a clustering algorithm.
  3. The method according to claim 2, characterized in that the cluster is the cluster where a specific target is located;
    the specific target is the first target in the cluster captured by the camera after the drone is powered on for flight; or,
    the specific target is the gesture controller of the drone.
  4. The method according to claim 2, characterized in that determining that the multiple targets meet the shooting trigger condition comprises:
    determining that the number of targets in the cluster in a specific posture is greater than or equal to a preset number, or determining that the proportion of the number of targets in the cluster in a specific posture to the total number of targets is greater than a preset ratio.
  5. The method according to claim 4, characterized in that determining that a target is in a specific state comprises: determining that the target's gesture is a specific shape.
  6. The method according to claim 4, characterized in that determining that a target is in a specific posture comprises:
    determining that the target is in a jumping state.
  7. The method according to claim 6, characterized in that determining that a target is in a jumping state comprises:
    determining that the change in the vertical distance between the target and the drone satisfies a specific condition.
  8. The method according to claim 4, characterized in that determining that a target is in a specific posture comprises:
    determining that the target is in an extended state.
  9. The method according to claim 8, characterized in that before determining that the multiple targets meet the shooting trigger condition, the method further comprises:
    controlling the drone to be located directly above the cluster;
    and controlling the camera to shoot downward.
  10. The method according to claim 8, characterized in that determining that the target is in an extended state comprises:
    obtaining the joint point positions of the target in the shooting frame according to a human joint point model;
    determining that the target is in an extended state based on the joint point positions of the target in the shooting frame.
  11. The method according to claim 10, characterized in that determining that the target is in an extended state based on the target's joint point positions comprises:
    determining that the target is in an extended state based on the positional relationship between at least one of the target's elbow joints, wrist joints, knee joints, and ankles and the target's torso.
  12. The method according to claim 4, characterized in that determining that at least some targets in the cluster are in a specific posture comprises:
    determining that at least some targets in the cluster are in an unconventional posture.
  13. The method according to claim 12, characterized in that determining that at least some targets in the cluster are in an unconventional posture comprises:
    determining, according to a conventional posture model, that at least some targets in the cluster are in an unconventional posture.
  14. The method according to claim 4, characterized in that determining that the multiple targets meet the shooting trigger condition further comprises:
    determining that the average speed of the cluster is less than a preset speed threshold.
  15. 根据权利要求2所述的方法,其特征在于,所述在确定出多个所述目标满足拍摄触发条件时,触发无人机搭载的相机进行拍摄之后,还包括:
    根据当前拍摄画面中的聚落,控制所述无人机飞行至特定机位;
    再次触发所述无人机搭载的相机进行拍摄。
  16. 根据权利要求15所述的方法,其特征在于,所述特定机位位于所述无人机在当前机位时的避障视场范围内。
  17. 根据权利要求15所述的方法,其特征在于,所述根据当前拍摄画面中的聚落,控制所述无人机飞行至特定机位,包括:
    控制所述无人机在飞行平面上飞行至特定机位,其中所述飞行平面垂直于水平面,并且所述无人机当前机位和所述聚落的连线位于所述飞行平面上,所述特定机位位于所述飞行平面上。
  18. 根据权利要求17所述的方法,其特征在于,所述无人机预设有在所述集体照拍摄模式中,所述无人机在所述特定机位时所述无人机相对所述聚落的距离或者所述聚落在所述拍摄画面中所占面积;
    所述方法还包括:在所述飞行平面上根据所述无人机相对所述聚落的距离或者所述聚落在所述拍摄画面中所占面积飞行至特定机位。
  19. 根据权利要求15所述的方法,其特征在于,所述根据当前拍摄画面中的聚落,控制所述无人机的飞行,包括:
    以所述聚落的中心为圆心,控制所述无人机在特定高度下以特定半径绕所述聚落飞行;
    设定所述无人机绕飞行过程中的指定位置为所述特定机位。
  20. 根据权利要求19所述的方法,其特征在于,所述特定高度和所述特定半径分别为所述无人机进入所述集体照拍摄模式时的高度和与所述聚落的距离。
  21. 根据权利要求18所述的方法,其特征在于,所述再次触发所述无人机搭载的相机进行拍摄之后,还包括:
    获得所述无人机在至少两个机位上所获得的图像,其中至少两个机位上所获得的图像中的所述聚落至少部分重合;
    根据所述无人机在至少两个机位上所获得的图像,生成所述聚落的三维图像。
  22. 根据权利要求15所述的方法,其特征在于,所述无人机预设有至少两个场景模式,其中不同的场景模式中分别预设有对应的特定机位;所述根据当前拍摄画面中的聚落,控制所述无人机飞行至特定机位之前,还包括:
    根据当前设定的场景模式,确定所述场景模式对应的特定机位。
  23. 根据权利要求1或15所述的方法,其特征在于,所述触发无人机搭载的相机进行拍摄之前,还包括:
    根据当前拍摄画面中的聚落,调整所述无人机搭载的相机的拍摄角度。
  24. 根据权利要求23所述的方法,其特征在于,所述根据当前拍摄画面中的聚落,调整所述无人机搭载的相机的拍摄角度,包括:
    根据所述聚落在所述拍摄画面中的预期位置,调整所述无人机搭载的相机的拍摄角度。
  25. 根据权利要求24所述的方法,其特征在于,所述预期位置是指所述聚落的中心点距离所述拍摄画面底部1/3像素高度的位置。
  26. The method according to claim 15, wherein, after triggering the camera carried by the UAV to shoot, the method further comprises:
    when it is determined that the number of images captured by the camera reaches a preset number, controlling the UAV to return to the camera position at which the cluster was first shot.
  27. The method according to claim 2, wherein triggering the camera carried by the UAV to shoot comprises:
    determining the focus distance of the camera according to a preset strategy.
  28. The method according to claim 27, wherein determining the focus distance of the camera according to the preset strategy comprises:
    determining, from the cluster in the current captured image, the target in the cluster closest to the camera; and
    determining the focus distance of the camera based on the horizontal distance between the closest target and the camera.
  29. The method according to claim 28, wherein determining, from the cluster in the current captured image, the target in the cluster closest to the camera comprises:
    determining the target in the cluster closest to the camera according to the size of each target in the cluster.
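Claim 29's size heuristic (the largest target in frame is taken to be the nearest) can be sketched as a one-liner over per-target detections; the detection tuple layout is an assumption.

```python
def nearest_target(detections):
    """Pick the target closest to the camera as the one with the largest
    bounding box in the frame, using apparent size as a proxy for distance.
    Each detection is (target_id, (x, y, w, h)) in pixels."""
    return max(detections, key=lambda d: d[1][2] * d[1][3])[0]
```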
  30. The method according to claim 27, wherein determining the focus distance of the camera according to the preset strategy comprises:
    computing a face score for each target in the cluster according to a face-scoring algorithm; and
    using the distance between the highest-scoring target and the camera as the focus distance of the camera.
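Claim 30 leaves the face-scoring algorithm open, so the sketch below injects it as a callable; the function name, tuple layout, and scorer are all assumptions.

```python
def focus_distance_by_face_score(targets, score_fn):
    """Choose the focus distance as the camera distance of the target whose
    face score (from the supplied scorer `score_fn`) is highest.
    Each target is (face_image, distance_to_camera_m)."""
    best_face, best_dist = max(targets, key=lambda t: score_fn(t[0]))
    return best_dist
```

In practice `score_fn` would wrap a learned facial-attractiveness model; any scorer with the same signature slots in.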
  31. The method according to claim 27, wherein determining the focus distance of the camera according to the preset strategy comprises:
    using the distance between a specific target in the cluster and the camera as the focus distance of the camera.
  32. The method according to claim 31, wherein the specific target is the first target in the cluster captured by the camera after the UAV is powered on for flight; or
    the specific target is the gesture controller of the UAV.
  33. A group photo shooting apparatus, comprising: a storage device and a processor;
    the storage device is configured to store program instructions; and
    the processor invokes the program instructions and, when the program instructions are executed, is configured to:
    enter a group photo shooting mode based on a trigger instruction;
    in the group photo shooting mode, identify a plurality of targets in a current captured image; and
    when it is determined that the plurality of targets satisfy a shooting trigger condition, trigger a camera carried by a UAV to shoot.
  34. The apparatus according to claim 33, wherein the processor is configured to:
    identify a cluster in the current captured image based on image recognition and a clustering algorithm.
  35. The apparatus according to claim 34, wherein the cluster is the cluster in which a specific target is located;
    the specific target is the first target in the cluster captured by the camera after the UAV is powered on for flight; or
    the specific target is the gesture controller of the UAV.
  36. The apparatus according to claim 34, wherein the processor determining that the plurality of targets satisfy the shooting trigger condition comprises:
    determining that the number of targets in the cluster in a specific posture is greater than or equal to a preset number, or determining that the proportion of targets in the cluster in the specific posture relative to the total number of targets is greater than a preset proportion.
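The count-or-ratio trigger in claim 36 can be sketched as a single predicate over per-target posture flags; the default count and ratio are assumed values.

```python
def trigger_met(posture_flags, min_count=3, min_ratio=0.5):
    """Shooting trigger per the count-or-ratio test: enough targets hold the
    specific posture, either in absolute number or as a fraction of the
    cluster. `posture_flags` holds one boolean per detected target."""
    if not posture_flags:
        return False
    n_pose = sum(posture_flags)
    return n_pose >= min_count or n_pose / len(posture_flags) >= min_ratio
```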
  37. The apparatus according to claim 36, wherein the processor determining that a target is in a specific posture comprises: determining that a gesture of the target is a specific shape.
  38. The apparatus according to claim 36, wherein the processor determining that a target is in a specific posture comprises:
    determining that the target is in a jumping state.
  39. The apparatus according to claim 38, wherein the processor determining that the target is in a jumping state comprises:
    determining that a change in the vertical distance between the target and the UAV satisfies a specific condition.
  40. The apparatus according to claim 36, wherein the processor determining that a target is in a specific posture comprises:
    determining that the target is in a stretched state.
  41. The apparatus according to claim 40, wherein, before determining that the plurality of targets satisfy the shooting trigger condition, the processor is further configured to:
    control the UAV to hover directly above the cluster; and
    control the camera to shoot downward.
  42. The apparatus according to claim 40, wherein the processor is configured to:
    obtain joint-point positions of the target in the captured image according to a human joint-point model; and
    determine, based on the joint-point positions of the target in the captured image, that the target is in a stretched state.
  43. The apparatus according to claim 42, wherein the processor is configured to:
    determine that the target is in a stretched state based on a positional relationship between the target's torso and at least one of the target's elbows, wrists, knees, and ankles.
  44. The apparatus according to claim 36, wherein the processor determining that at least some of the targets in the cluster are in a specific posture comprises:
    determining that at least some of the targets in the cluster are in an unusual posture.
  45. The apparatus according to claim 44, wherein the processor is configured to:
    determine, according to a normal-posture model, that at least some of the targets in the cluster are in an unusual posture.
  46. The apparatus according to claim 36, wherein the processor determining that the plurality of targets satisfy the shooting trigger condition further comprises:
    determining that the average speed of the cluster is less than a preset speed threshold.
  47. The apparatus according to claim 34, wherein, after triggering the camera carried by the UAV to shoot when it is determined that the plurality of targets satisfy the shooting trigger condition, the processor is further configured to:
    control the UAV to fly to a specific camera position according to the cluster in the current captured image; and
    trigger the camera carried by the UAV to shoot again.
  48. The apparatus according to claim 47, wherein the specific camera position is within the obstacle-avoidance field of view of the UAV at its current camera position.
  49. The apparatus according to claim 47, wherein the processor controlling the UAV to fly to the specific camera position according to the cluster in the current captured image comprises:
    controlling the UAV to fly to the specific camera position on a flight plane, wherein the flight plane is perpendicular to the horizontal plane, the line connecting the UAV's current camera position and the cluster lies on the flight plane, and the specific camera position lies on the flight plane.
  50. The apparatus according to claim 49, wherein the UAV is preset, for the group photo shooting mode, with the distance of the UAV relative to the cluster, or the area occupied by the cluster in the captured image, when the UAV is at the specific camera position; and
    the processor is further configured to: control flight on the flight plane to the specific camera position according to the distance of the UAV relative to the cluster or the area occupied by the cluster in the captured image.
  51. The apparatus according to claim 47, wherein the processor is configured to:
    control the UAV to fly around the cluster at a specific height and with a specific radius, taking the center of the cluster as the circle center; and
    set a designated position along the UAV's orbit as the specific camera position.
  52. The apparatus according to claim 51, wherein the specific height and the specific radius are, respectively, the height of the UAV and its distance from the cluster when the UAV enters the group photo shooting mode.
  53. The apparatus according to claim 50, wherein, after triggering the camera carried by the UAV to shoot again, the processor is further configured to:
    obtain images captured by the UAV at at least two camera positions, wherein the cluster in the images captured at the at least two camera positions at least partially overlaps; and
    generate a three-dimensional image of the cluster from the images captured by the UAV at the at least two camera positions.
  54. The apparatus according to claim 47, wherein the UAV is preset with at least two scene modes, each scene mode having a corresponding preset specific camera position; and, before controlling the UAV to fly to the specific camera position according to the cluster in the current captured image, the processor is further configured to:
    determine the specific camera position corresponding to the currently set scene mode.
  55. The apparatus according to claim 33 or 47, wherein, before triggering the camera carried by the UAV to shoot, the processor is further configured to:
    adjust the shooting angle of the camera carried by the UAV according to the cluster in the current captured image.
  56. The apparatus according to claim 55, wherein the processor is configured to:
    adjust the shooting angle of the camera carried by the UAV according to an expected position of the cluster in the captured image.
  57. The apparatus according to claim 56, wherein the expected position is the position at which the center point of the cluster is one third of the image height, in pixels, above the bottom of the captured image.
  58. The apparatus according to claim 47, wherein, after triggering the camera carried by the UAV to shoot, the processor is further configured to:
    when it is determined that the number of images captured by the camera reaches a preset number, control the UAV to return to the camera position at which the cluster was first shot.
  59. The apparatus according to claim 34, wherein the processor is configured to:
    determine the focus distance of the camera according to a preset strategy.
  60. The apparatus according to claim 59, wherein the processor is configured to:
    determine, from the cluster in the current captured image, the target in the cluster closest to the camera; and
    determine the focus distance of the camera based on the horizontal distance between the closest target and the camera.
  61. The apparatus according to claim 60, wherein the processor is configured to:
    determine the target in the cluster closest to the camera according to the size of each target in the cluster.
  62. The apparatus according to claim 59, wherein the processor is configured to:
    compute a face score for each target in the cluster according to a face-scoring algorithm; and
    use the distance between the highest-scoring target and the camera as the focus distance of the camera.
  63. The apparatus according to claim 59, wherein the processor is configured to:
    use the distance between a specific target in the cluster and the camera as the focus distance of the camera.
  64. The apparatus according to claim 63, wherein the specific target is the first target in the cluster captured by the camera after the UAV is powered on for flight; or
    the specific target is the gesture controller of the UAV.
  65. A computer-readable storage medium storing program instructions which, when run by a processor, perform the group photo shooting method according to any one of claims 1 to 32.
PCT/CN2018/088997 2018-05-30 2018-05-30 Group photo shooting method and device WO2019227333A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201880012007.1A CN110337806A (zh) 2018-05-30 2018-05-30 Group photo shooting method and device
PCT/CN2018/088997 WO2019227333A1 (zh) 2018-05-30 2018-05-30 Group photo shooting method and device
US17/106,995 US20210112194A1 (en) 2018-05-30 2020-11-30 Method and device for taking group photo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/088997 WO2019227333A1 (zh) 2018-05-30 2018-05-30 Group photo shooting method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/106,995 Continuation US20210112194A1 (en) 2018-05-30 2020-11-30 Method and device for taking group photo

Publications (1)

Publication Number Publication Date
WO2019227333A1 true WO2019227333A1 (zh) 2019-12-05

Family

ID=68139431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/088997 WO2019227333A1 (zh) 2018-05-30 2018-05-30 Group photo shooting method and device

Country Status (3)

Country Link
US (1) US20210112194A1 (zh)
CN (1) CN110337806A (zh)
WO (1) WO2019227333A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110677592B (zh) * 2019-10-31 2022-06-10 Oppo广东移动通信有限公司 Subject focusing method and apparatus, computer device, and storage medium
CN112752016B (zh) * 2020-02-14 2023-06-16 腾讯科技(深圳)有限公司 Shooting method and apparatus, computer device, and storage medium
CN111770279B (zh) * 2020-08-03 2022-04-08 维沃移动通信有限公司 Shooting method and electronic device
CN112511743B (zh) * 2020-11-25 2022-07-22 南京维沃软件技术有限公司 Video shooting method and apparatus
WO2022213311A1 (en) * 2021-04-08 2022-10-13 Qualcomm Incorporated Camera autofocus using depth sensor
CN114779816B (zh) * 2022-05-17 2023-03-24 成都工业学院 Search-and-rescue UAV for taking off and landing in post-earthquake rubble environments, and system thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130253733A1 (en) * 2012-03-26 2013-09-26 Hon Hai Precision Industry Co., Ltd. Computing device and method for controlling unmanned aerial vehicle in flight space
JP2017065467A (ja) * 2015-09-30 2017-04-06 キヤノン株式会社 Unmanned aircraft and control method therefor
CN106586011A (zh) * 2016-12-12 2017-04-26 高域(北京)智能科技研究院有限公司 Alignment method for an aerial photography unmanned aerial vehicle, and the aerial photography unmanned aerial vehicle
CN107087427A (zh) * 2016-11-30 2017-08-22 深圳市大疆创新科技有限公司 Control method, apparatus and equipment for an aircraft, and the aircraft
CN107505950A (zh) * 2017-08-26 2017-12-22 上海瞬动科技有限公司合肥分公司 Fully automatic intelligent group photo shooting method for an unmanned aerial vehicle
CN107703962A (zh) * 2017-08-26 2018-02-16 上海瞬动科技有限公司合肥分公司 Group photo shooting method for an unmanned aerial vehicle
CN107835371A (zh) * 2017-11-30 2018-03-23 广州市华科尔科技股份有限公司 Gesture-based selfie method for a multi-rotor unmanned aerial vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4896116B2 (ja) * 2008-11-21 2012-03-14 三菱電機株式会社 Automatic tracking photographing device from an aerial mobile body
CN104427238A (zh) * 2013-09-06 2015-03-18 联想(北京)有限公司 Information processing method and electronic device
CN104519261B (zh) * 2013-09-27 2020-01-31 联想(北京)有限公司 Information processing method and electronic device
CN107370946A (zh) * 2017-07-27 2017-11-21 高域(北京)智能科技研究院有限公司 Flying photographing apparatus and method for automatically adjusting the photographing position
CN107566741B (zh) * 2017-10-26 2020-04-14 Oppo广东移动通信有限公司 Focusing method and apparatus, computer-readable storage medium, and computer device


Also Published As

Publication number Publication date
US20210112194A1 (en) 2021-04-15
CN110337806A (zh) 2019-10-15

Similar Documents

Publication Publication Date Title
US11797009B2 (en) Unmanned aerial image capture platform
US11644832B2 (en) User interaction paradigms for a flying digital assistant
WO2019227333A1 (zh) Group photo shooting method and device
US10979615B2 (en) System and method for providing autonomous photography and videography
US11573562B2 (en) Magic wand interface and other user interaction paradigms for a flying digital assistant
US10969784B2 (en) System and method for providing easy-to-use release and auto-positioning for drone applications
CN108476288B (zh) Shooting control method and device
US20230280742A1 (en) Magic Wand Interface And Other User Interaction Paradigms For A Flying Digital Assistant
WO2022109860A1 (zh) Method for tracking a target object, and gimbal
WO2022056683A1 (zh) Field-of-view determination method, apparatus and system, and medium
WO2022000211A1 (zh) Control method and device for a shooting system, movable platform, and storage medium
CN116762354A (zh) Image shooting method, control apparatus, movable platform, and computer storage medium
WO2023123254A1 (zh) Control method and apparatus for an unmanned aerial vehicle, unmanned aerial vehicle, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18921109; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18921109; Country of ref document: EP; Kind code of ref document: A1)