CN112702532A - Control method and device for autonomous image acquisition of unmanned vehicle - Google Patents

Control method and device for autonomous image acquisition of unmanned vehicle Download PDF

Info

Publication number
CN112702532A
CN112702532A (application CN202011596853.8A)
Authority
CN
China
Prior art keywords
unmanned vehicle
angle
camera
axis
holder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011596853.8A
Other languages
Chinese (zh)
Other versions
CN112702532B (en)
Inventor
陈荟慧
钟委钊
林怡斌
潘芷欣
郑春弟
王爱国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN202011596853.8A priority Critical patent/CN112702532B/en
Publication of CN112702532A publication Critical patent/CN112702532A/en
Application granted granted Critical
Publication of CN112702532B publication Critical patent/CN112702532B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of image acquisition, and in particular to a control method and device for autonomous image acquisition by an unmanned vehicle. The method comprises the following steps: upon acquiring a photo taken by a smart device, a cloud server determines the positioning coordinates and device pointing of the smart device at the moment the photo was taken; the cloud server issues an image acquisition instruction to the unmanned vehicle, the instruction comprising the positioning coordinates and device pointing of the smart device when the photo was taken, the time at which the unmanned vehicle is to acquire image data, and the data type, where the data type is either video or photo; after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, the pan-tilt head mounted on the unmanned vehicle is adjusted so that its instant attitude coincides with the device pointing; the unmanned vehicle acquires image data of the specified type at the specified time and uploads the acquired image data to the cloud server. The invention enables long-duration automatic image acquisition.

Description

Control method and device for autonomous image acquisition of unmanned vehicle
Technical Field
The invention relates to the technical field of image acquisition, in particular to a control method and a control device for autonomous image acquisition of an unmanned vehicle.
Background
With the development of sensing and control technology, unmanned vehicles have been widely applied in many scenarios, and image acquisition is one of their common functions.
In the prior art, image data is mainly collected with an unmanned vehicle by remote control: a person remotely drives the unmanned vehicle to a specific place, points its camera at a specific target, and then remotely triggers a photo or video recording. Such image acquisition has poor autonomy and cannot meet the requirement of long-duration automatic image acquisition.
Disclosure of Invention
The invention provides a control method and device for autonomous image acquisition by an unmanned vehicle, so as to solve one or more technical problems in the prior art and at least provide a beneficial alternative.
In order to achieve the purpose, the invention provides the following technical scheme:
a control method for autonomously acquiring images by an unmanned vehicle comprises the following steps:
the method comprises the steps that when a picture shot by the intelligent equipment is obtained, the cloud server determines the positioning coordinate and the equipment direction of the intelligent equipment when the picture is shot; the device is directed to include: a heading angle, a tilt angle, and a rotation angle of the smart device;
the cloud server issues an image acquisition instruction to the unmanned vehicle, and the image acquisition instruction comprises: the method comprises the steps that when an intelligent device shoots a photo, the positioning coordinate and the device direction of the intelligent device, and the time and the data type of image data collected by an unmanned vehicle are determined, wherein the data type comprises any one of a video or a photo;
after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, adjusting a holder arranged on the unmanned vehicle to enable the instant posture of the holder to be consistent with the pointing direction of the equipment; and acquiring image data consistent with the data type at the time of acquiring the data by the unmanned vehicle, and uploading the acquired image data to a cloud server.
Further, determining the positioning coordinates and device pointing of the smart device when taking the photo includes:
step S110, when the smart device takes a photo, acquiring 8-dimensional context information of the smart device and sending it to the cloud server, the 8-dimensional context information comprising: the positioning coordinates of the smart device when the photo was taken, the readings of a 3-axis accelerometer, and the readings of a 3-axis magnetometer;
step S111, the cloud server calculates the device pointing of the smart device when the photo was taken from the readings of the 3-axis accelerometer and the 3-axis magnetometer;
alternatively,
step S120, when the smart device takes a photo, acquiring 8-dimensional context information of the smart device, the 8-dimensional context information comprising: the positioning coordinates of the smart device when the photo was taken, the readings of the 3-axis accelerometer, and the readings of the 3-axis magnetometer;
step S121, the smart device calculates its device pointing when the photo was taken from the readings of the 3-axis accelerometer and the 3-axis magnetometer, and uploads the obtained positioning coordinates and device pointing to the cloud server.
Further, the unmanned vehicle is provided with a controller and, connected to the controller, a second communication module, a second positioning module, a driving motor, a pan-tilt head, a camera, a second accelerometer and a second magnetometer. The second communication module establishes the communication connection with the cloud server, and the second positioning module determines the positioning coordinates of the unmanned vehicle. The pan-tilt head is a two-degree-of-freedom camera pan-tilt head comprising a first steering engine (servo) that rotates about a vertical axis and a second steering engine that rotates about a horizontal axis; the camera is fixedly mounted on the second steering engine, a second camera is arranged at the rear of the camera, and the second accelerometer and second magnetometer are mounted on the pan-tilt base.
Adjusting the pan-tilt head mounted on the unmanned vehicle after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, so that the instant attitude of the pan-tilt head coincides with the device pointing, includes:
after receiving the image acquisition instruction, the controller determines a navigation path from the current position of the unmanned vehicle and the positioning coordinates, and controls the driving motor so that the unmanned vehicle drives to the position of the positioning coordinates along the navigation path;
after the unmanned vehicle navigates to the positioning coordinates, the instant attitude of the pan-tilt head is calculated from the second accelerometer and the second magnetometer, the attitude adjustment angle of the pan-tilt head is determined from the instant attitude and the device pointing, and the controller adjusts the attitude of the pan-tilt head through the first steering engine and/or the second steering engine according to the attitude adjustment angle, so that the instant attitude of the pan-tilt head coincides with the device pointing.
Further, before determining the attitude adjustment angle of the pan-tilt head from the instant attitude and the device pointing, the method further includes:
establishing a world coordinate system XYZ and determining the value range of the device pointing in the world coordinate system through data interval conversion.
Further, establishing the world coordinate system XYZ and determining the value range of the device pointing in the world coordinate system through data interval conversion includes:
establishing a world coordinate system XYZ in which the X axis points due west, the Y axis points due north, and the Z axis points toward the sky;
the second camera at the rear of the camera shoots transversely; when the smart terminal is in the initial position, the lens of the second camera lies on the positive X axis and faces the Y axis;
the device pointing is recorded as D = (azimuth, roll), where azimuth denotes the azimuth angle of the camera and roll denotes the rotation angle of the camera; the value range of the azimuth angle is [0, 360°] and the value range of the rotation angle is [0, 180°];
acquiring the azimuth angle of the second camera calculated from the measurements of the second accelerometer and the second magnetometer, and converting its value range into [0, 360°] through data interval conversion;
acquiring the rotation angle calculated from the measurements of the second accelerometer and the second magnetometer, and converting its value range into [0, 180°] through data interval conversion.
Further, converting the value range of the rotation angle into [0, 180°] through data interval conversion is specifically:
performing data interval conversion on the rotation angle through a conversion formula, the conversion formula being:
roll′ = roll, if a_z^c ≥ 0
roll′ = 180° − roll, if a_z^c < 0
where roll′ represents the rotation angle after conversion, roll represents the rotation angle obtained before conversion, and a_z^c denotes the acceleration of the camera in the Z-axis direction.
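As a minimal sketch, the interval conversion of the rotation angle can be written as a piecewise function. The branch condition on the Z-axis acceleration (non-negative meaning the camera is tilted toward the ground) is an assumption made for illustration, since the patent's formula is only described in prose here.

```python
def convert_roll(roll: float, acc_z: float) -> float:
    """Map a sensor rotation angle in [0, 90] degrees into the unified
    [0, 180] degree range, using the sign of the camera's Z-axis
    acceleration to distinguish 'facing the ground' from 'facing the sky'.

    The sign convention (acc_z >= 0 selects the facing-ground branch)
    is an assumption for illustration."""
    if acc_z >= 0:
        return roll          # camera tilted toward the ground: keep as-is
    return 180.0 - roll      # camera tilted toward the sky: mirror the angle
```

Under this convention, 0° means the lens faces the ground, 90° the horizon, and 180° the sky.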
Further, determining the attitude adjustment angle of the pan-tilt head from the instant attitude and the device pointing includes:
determining the instant attitude C_r of the pan-tilt head, expressed as C_r = (angle_ch_r, angle_cv_r), where angle_cv_r represents the instant rotation angle of the second steering engine and angle_ch_r the instant rotation angle of the first steering engine; the range of angle_ch_r is [0, 360°] and the range of angle_cv_r is [0, 180°];
calculating the attitude adjustment angle C_a = (α, β) according to the following formulas:
α = [(90° + azimuth) − angle_ch_r + 360°] mod 360°, β = roll − angle_cv_r;
where (90° + azimuth) represents the azimuth angle of the lens; α represents the rotation angle value of the first steering engine, with value range [0°, 360°]; β represents the rotation angle value of the second steering engine, with value range [−180°, 180°].
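The two adjustment formulas translate directly into code. The sketch below mirrors the symbols above; the function itself is illustrative, not taken from the patent.

```python
def attitude_adjustment(azimuth: float, roll: float,
                        angle_ch_r: float, angle_cv_r: float):
    """Compute C_a = (alpha, beta) from the device pointing (azimuth, roll)
    and the pan-tilt head's instant attitude (angle_ch_r, angle_cv_r)."""
    # First steering engine: lens azimuth is (90 + azimuth); wrap into [0, 360).
    alpha = ((90.0 + azimuth) - angle_ch_r + 360.0) % 360.0
    # Second steering engine: signed difference, lies in [-180, 180].
    beta = roll - angle_cv_r
    return alpha, beta
```

For example, a target azimuth of 300° and roll of 90°, with the head currently at angle_ch_r = 45° and angle_cv_r = 100°, yields α = 345° and β = −10°.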
Further, acquiring image data consistent with the data type at the time the unmanned vehicle is to acquire the data includes:
when the unmanned vehicle has reached the position of the positioning coordinates and the instant attitude of the pan-tilt head coincides with the device pointing, the controller controls the second camera to acquire image data at the set time, where the set time is either a time point or a time period, and the data type is either video or photo.
A control device for autonomous image acquisition of an unmanned vehicle, the device comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is enabled to implement the control method for autonomously acquiring the image by the unmanned vehicle.
The invention has the following beneficial effects: the invention discloses a control method and device for autonomous image acquisition by an unmanned vehicle. The method effectively combines human intelligence with the fatigue resistance of machines, is flexible to deploy, and has wide application scenarios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a control method for autonomous image acquisition of an unmanned vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system architecture in an embodiment of the invention;
FIG. 3 is a schematic structural diagram of an unmanned vehicle control system in an embodiment of the invention;
FIG. 4 is a schematic diagram of the smart device during photographing according to the embodiment of the present invention;
FIG. 5 is a schematic view of the pan-tilt head structure in an embodiment of the invention;
FIG. 6 is a schematic diagram of the coordinate system of the pan-tilt head in an embodiment of the present invention;
fig. 7 is a schematic coordinate system diagram of the second camera in the embodiment of the present invention.
Detailed Description
The conception, specific structure and technical effects of the present application will be described clearly and completely with reference to the following embodiments and the accompanying drawings, so that the purpose, scheme and effects of the present application can be fully understood. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Referring to fig. 1, fig. 1 shows a control method for autonomously acquiring an image by an unmanned vehicle according to an embodiment of the present application, where the method includes the following steps:
s100, after acquiring a picture shot by first intelligent equipment, a cloud server determines a positioning coordinate and equipment orientation of the intelligent equipment when the picture is shot; the device is directed to include: a heading angle, a tilt angle, and a rotation angle of the smart device;
In an improved embodiment, determining the positioning coordinates and device pointing of the smart device when taking the photo includes:
step S110, when the smart device takes a photo, acquiring 8-dimensional context information and sending it to the cloud server, the 8-dimensional context information comprising: the positioning coordinates of the smart device when the photo was taken, the readings of a 3-axis accelerometer, and the readings of a 3-axis magnetometer;
step S111, the cloud server calculates the device pointing of the smart device when the photo was taken from the readings of the 3-axis accelerometer and the 3-axis magnetometer;
alternatively,
step S120, when the smart device takes a photo, acquiring 8-dimensional context information, the 8-dimensional context information comprising: the positioning coordinates of the smart device when the photo was taken, the readings of the 3-axis accelerometer, and the readings of the 3-axis magnetometer;
step S121, the smart device calculates its device pointing when the photo was taken from the readings of the 3-axis accelerometer and the 3-axis magnetometer, and uploads the obtained positioning coordinates and device pointing to the cloud server.
In this embodiment, the 8-dimensional context information is: sensor = (glon, glat, ax, ay, az, mx, my, mz), where (glon, glat) are the positioning coordinates of the smart device at the time of taking the photo, (ax, ay, az) are the readings of the 3-axis accelerometer, and (mx, my, mz) are the readings of the 3-axis magnetometer.
The device pointing is represented as a direction vector (azimuth, pitch, roll), where azimuth denotes the heading (yaw) angle of the smart device, pitch its tilt (pitch) angle, and roll its rotation (roll) angle. The direction vector can also be calculated by the smart device itself and then uploaded to the cloud server.
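As a rough illustration of how the direction vector can be derived from the 8-dimensional context information, the sketch below computes (azimuth, pitch, roll) with a standard tilt-compensated-compass formulation. The patent does not fix exact formulas or axis conventions, so the conventions here (phone-style axes, gravity along +Z when the device lies flat) are assumptions.

```python
import math

def device_orientation(ax, ay, az, mx, my, mz):
    """Estimate (azimuth, pitch, roll) in degrees from 3-axis accelerometer
    readings (ax, ay, az) and 3-axis magnetometer readings (mx, my, mz).
    Illustrative sketch under assumed phone-style axis conventions."""
    # Pitch and roll from the gravity vector.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    # Tilt-compensate the magnetometer to obtain the heading (azimuth).
    pr, rr = math.radians(pitch), math.radians(roll)
    mx2 = mx * math.cos(pr) + mz * math.sin(pr)
    my2 = (mx * math.sin(rr) * math.sin(pr) + my * math.cos(rr)
           - mz * math.sin(rr) * math.cos(pr))
    azimuth = (math.degrees(math.atan2(-my2, mx2)) + 360.0) % 360.0
    return azimuth, pitch, roll
```

In practice, mobile platforms expose equivalent helpers (e.g., a rotation-matrix-plus-orientation API), which is likely what the smart device would use.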
Step S200, the cloud server issues an image acquisition instruction to the unmanned vehicle, the image acquisition instruction comprising: the positioning coordinates and device pointing of the smart device when the photo was taken, and the time and data type of the image data to be acquired by the unmanned vehicle, where the data type is either video or photo.
The image acquisition instruction is represented by the 6-dimensional data (time, type, glon, glat, azimuth, roll), where time represents the set time for acquiring the image data and type represents the data type of the image data, either video or photo.
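The 6-dimensional instruction can be modelled as a small record. The field names and the JSON wire format below are illustrative assumptions; the patent fixes only the six dimensions, not an encoding.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CaptureInstruction:
    """6-dimensional image acquisition instruction
    (time, type, glon, glat, azimuth, roll) as described above."""
    time: str        # set time: a time point or a time period
    type: str        # data type: "photo" or "video"
    glon: float      # longitude of the shooting position
    glat: float      # latitude of the shooting position
    azimuth: float   # device azimuth in [0, 360)
    roll: float      # device rotation angle in [0, 180]

# A hypothetical instruction as the cloud server might serialize it.
cmd = CaptureInstruction("2021-01-01T08:00:00", "photo",
                         113.12, 23.02, 90.0, 90.0)
payload = json.dumps(asdict(cmd))
```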
Step S300, after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, the pan-tilt head mounted on the unmanned vehicle is adjusted so that its instant attitude coincides with the device pointing; the unmanned vehicle then acquires image data consistent with the data type at the time set for acquiring the data, and uploads the acquired image data to the cloud server.
As shown in fig. 2, it should be noted that in the embodiment provided by the present invention, the smart device has a built-in first camera, a first positioning module (GPS and/or BDS module), a first accelerometer and a first magnetometer. These positioning and sensor modules allow the smart device to calculate its positioning coordinates and device pointing when taking a photo, and these data are used by the cloud server to issue the image acquisition instruction to the unmanned vehicle.
Referring to fig. 3, the embodiment of the present invention further provides a wireless communication network connecting the smart device, the cloud server and the unmanned vehicle. The smart device is carried and operated by a person. The unmanned vehicle is provided with a controller and, connected to the controller, a second communication module, a second positioning module, a driving motor, a pan-tilt head, a camera, a second accelerometer and a second magnetometer. The second communication module establishes the communication connection with the cloud server, and the second positioning module determines the positioning coordinates of the unmanned vehicle. The pan-tilt head is a two-degree-of-freedom camera pan-tilt head comprising a first steering engine that rotates about a vertical axis and a second steering engine that rotates about a horizontal axis; the camera is fixedly mounted on the second steering engine, a second camera is arranged at the rear of the camera, and the second accelerometer and second magnetometer are mounted on the pan-tilt base. The unmanned vehicle has basic positioning and navigation, wireless communication, control and data processing capabilities.
Specifically, a person first selects a suitable position and angle and shoots the target with the smart device; when the photo is taken, the 8-dimensional context information is acquired, the positioning coordinates (i.e., GPS or BDS coordinates) are obtained by the smart device, and the device pointing is calculated by the smart device or the cloud server. The cloud server then transmits the positioning coordinates and device pointing (as the 6-dimensional data) to the unmanned vehicle in the form of an image acquisition instruction. After driving to the specified coordinates, the unmanned vehicle adjusts the two-degree-of-freedom camera pan-tilt head according to the instruction so that the second camera points at the target, takes the photo (or records the video), and uploads it to the cloud server.
The invention adopts a human-machine cooperative data acquisition method: a person searches for and photographs the target to be monitored, and an instruction for further monitoring the target is calculated from the context information at the time of photographing, so that the unmanned vehicle can be directed to monitor the target at any time. The method effectively combines human intelligence with the fatigue resistance of machines, is flexible to deploy and widely applicable, and can monitor fixed targets such as equipment, roads, buildings and plants in a factory.
The mode in which the unmanned vehicle autonomously acquires image data, as provided by this embodiment, requires a high-quality instruction for the unmanned vehicle to accurately capture the target image; such a high-quality instruction is the key to the unmanned vehicle successfully acquiring the expected data. For this reason, the image acquisition function of the unmanned vehicle needs further improvement.
As a further improvement of the above embodiment, the structure and the work flow of the image capturing module are described in detail below.
For ease of presentation, the present invention defines the following rules and assumptions:
(1) the smart device is exemplified by a mobile phone; the phone takes the photo with its rear-mounted first camera, and the photographing posture is transverse (landscape);
(3) the installation directions of the camera's second camera, the second magnetometer and the second accelerometer are consistent with that of the mobile phone;
(4) an orientation sensor may be used instead of the accelerometer-magnetometer combination.
1. Image acquisition module structure and data definition:
As shown in fig. 5, the pan-tilt head is a two-degree-of-freedom camera pan-tilt head comprising a first steering engine in the horizontal direction and a second steering engine in the vertical direction; the first steering engine rotates about a vertical axis and the second steering engine about a horizontal axis. The camera is fixedly mounted on the second steering engine, and the second accelerometer and second magnetometer are mounted on the pan-tilt base; the second magnetometer and second accelerometer are used to identify the instant attitude of the pan-tilt head.
In a preferred embodiment, adjusting the pan-tilt head mounted on the unmanned vehicle after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, so that the instant attitude of the pan-tilt head coincides with the device pointing, includes:
after receiving the image acquisition instruction, the controller determines a navigation path from the current position of the unmanned vehicle and the positioning coordinates, and controls the driving motor so that the unmanned vehicle drives to the position of the positioning coordinates along the navigation path;
after the unmanned vehicle navigates to the positioning coordinates, the instant attitude of the pan-tilt head is calculated from the second accelerometer and the second magnetometer, the attitude adjustment angle of the pan-tilt head is determined from the instant attitude and the device pointing, and the controller adjusts the attitude of the pan-tilt head through the first steering engine and/or the second steering engine according to the attitude adjustment angle, so that the instant attitude of the pan-tilt head coincides with the device pointing.
In this embodiment, the assembly positions of the first steering engine, the second steering engine and the camera on the pan-tilt head are fixed, the second accelerometer and second magnetometer are mounted on the pan-tilt base, and the accurate direction value of the second camera can be calculated from the readings of the accelerometer and magnetometer.
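The alignment step described above can be sketched as a single routine. The two hardware callbacks (`read_pose`, `drive_servos`) are hypothetical hooks standing in for the steering-engine and sensor interfaces, which the patent does not specify.

```python
def align_gimbal(device_dir, read_pose, drive_servos):
    """One alignment step for the two-degree-of-freedom pan-tilt head:
    read the instant attitude, compute the adjustment angles, and drive
    the first (horizontal) and second (vertical) steering engines.
    device_dir: (azimuth, roll) target pointing from the instruction.
    read_pose: callable returning (angle_ch_r, angle_cv_r) from sensors.
    drive_servos: callable commanding (alpha, beta) to the two servos."""
    azimuth, roll = device_dir
    angle_ch_r, angle_cv_r = read_pose()  # instant attitude from accel + mag
    alpha = ((90.0 + azimuth) - angle_ch_r + 360.0) % 360.0
    beta = roll - angle_cv_r
    drive_servos(alpha, beta)
    return alpha, beta
```

Dependency injection of the two callbacks keeps the geometry testable without real hardware.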
Referring to fig. 6, in a preferred embodiment, before determining the attitude adjustment angle of the pan-tilt head from the instant attitude and the device pointing, the method further includes:
establishing a world coordinate system XYZ and determining the value range of the device pointing in the world coordinate system through data interval conversion.
Specifically, establishing the world coordinate system XYZ and determining the value range of the device pointing in the world coordinate system through data interval conversion includes:
establishing a world coordinate system XYZ in which the X axis points due west, the Y axis points due north, and the Z axis points toward the sky;
the second camera at the rear of the camera shoots transversely; when the smart terminal is in the initial position, the lens of the second camera lies on the positive X axis and faces the Y axis;
the device pointing is recorded as D = (azimuth, roll), where azimuth denotes the azimuth angle of the smart device and roll denotes its rotation angle; the value range of the azimuth angle is [0, 360°] and the value range of the rotation angle is [0, 180°];
acquiring the azimuth angle of the second camera calculated from the measurements of the second accelerometer and the second magnetometer, and converting its value range into [0, 360°] through data interval conversion;
acquiring the rotation angle calculated from the measurements of the second accelerometer and the second magnetometer, and converting its value range into [0, 180°] through data interval conversion.
Referring to fig. 4, in order to better explain the steering of the steering engines under different conditions, in the embodiment provided by the present invention the steering-engine direction C is recorded as C = (angle_cv, angle_ch), where angle_cv represents the angle through which the second steering engine rotates and angle_ch the angle through which the first steering engine rotates. When angle_cv > 0, the second steering engine rotates counterclockwise about the OX axis (the positive direction in the figure); when angle_ch > 0, the first steering engine rotates clockwise about the OZ axis (the positive direction in the figure). When the second camera rotates clockwise about the OX axis, the acceleration of the camera in the Z-axis direction satisfies a_z^c ≥ 0; when the second camera rotates counterclockwise about the OX axis, a_z^c < 0.
Taking fig. 4 as the initial state: when a_z^c ≥ 0, a rotation angle of 0 means the second camera faces directly at the ground, and a rotation angle increasing from 0 to 90° means the second camera rotates about the OX axis from the OY axis toward the OZ axis into the state shown in fig. 4; when a_z^c < 0, a rotation angle of 0 means the second camera faces directly at the sky, and a rotation angle increasing from 0 to 90° means the second camera rotates about the OX axis from the minus OY axis toward the OZ axis into the state of fig. 4.
As shown in fig. 4, in this embodiment the direction value of the second camera may be calculated from the readings of the second accelerometer and the second magnetometer, where the direction value comprises an azimuth angle and a rotation angle. The azimuth angle of the second camera calculated from these measurements has the value range [-180°, 180°], where 0 indicates that the lens of the second camera faces due north, 90° due east, 180° or -180° due south, and -90° due west. In this embodiment the value range of the azimuth angle is converted into [0, 360°], where 0 or 360° indicates that the lens of the second camera faces due north, 90° due east, 180° due south, and 270° due west.
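The [-180°, 180°] to [0, 360°) interval conversion just described amounts to a modulo operation; a minimal sketch (the function name is an assumption):

```python
def to_0_360(azimuth_deg):
    """Map an azimuth in [-180, 180] onto [0, 360).

    Python's % returns a non-negative result for a positive modulus,
    so -90 (due west) maps to 270 and 0 (due north) stays 0.
    """
    return azimuth_deg % 360.0
```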
In another embodiment, the azimuth angle and the rotation angle of the second camera are directly obtained by a direction sensor arranged on the holder.
Referring again to fig. 4, in a preferred embodiment, the rotation angle is converted into [0, 180°], specifically:
performing data interval conversion on the rotation angle through a conversion formula, the conversion formula being:
roll' = 180° - roll, if a_z ≥ 0; roll' = roll, if a_z < 0;
wherein roll' represents the rotation angle after conversion, roll represents the rotation angle obtained before conversion, and a_z represents the acceleration of the camera in the Z-axis direction.
Specifically, the value range of the rotation angle of the smart device is [0, 90°], and the value range of the rotation angle of the second camera is converted into [0, 180°] through the formula. After conversion, a rotation angle of 0 indicates that the second camera faces the sky, 90° indicates the state shown in fig. 4, and 180° indicates that the second camera faces the ground. As shown in fig. 7, the rotation angle before conversion has a value range of [0, 90°].
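The interval conversion for the rotation angle can be sketched as a small helper. The function name is illustrative, and the sign convention (a_z ≥ 0 when the lens has rotated toward the ground) is inferred from the embodiment described here:

```python
def convert_roll(roll_deg, a_z):
    """Map the measured rotation angle in [0, 90] deg onto [0, 180] deg.

    roll_deg: rotation angle reported by the device, in [0, 90].
    a_z: camera acceleration along the Z axis; a_z >= 0 is taken to
    mean the lens has rotated toward the ground, a_z < 0 toward the
    sky (sign convention inferred from the text, an assumption).
    """
    return 180.0 - roll_deg if a_z >= 0 else roll_deg
```

With this mapping, 0 faces the sky, 90° matches the state of fig. 4, and 180° faces the ground, as the embodiment requires.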
In a preferred embodiment, the determining the attitude adjustment angle of the pan/tilt head according to the instant attitude and the device orientation includes:
determining an instant attitude C_r of the pan-tilt, said instant attitude C_r being represented as: C_r = (angle_ch_r, angle_cv_r), where angle_cv_r represents the instantaneous rotation angle of the second steering engine, angle_ch_r represents the instantaneous rotation angle of the first steering engine, the value range of angle_ch_r is [0, 360°], and the value range of angle_cv_r is [0, 180°];
in this embodiment, when angle_ch_r is 0 or 360°, the lens of the second camera faces due north; 90° indicates due east, 180° due south, and 270° due west. When angle_cv_r is 0, the lens of the second camera faces the sky; when angle_cv_r is 90°, the lens is in the state shown in fig. 4; and when angle_cv_r is 180°, the lens faces the ground. A schematic diagram of the coordinate system of the lens is shown in fig. 7.
Calculating the attitude adjustment angle, taken as C_a = (α, β), according to the following formulas:
α = [(90° + azimuth) - angle_ch_r + 360°] mod 360°, β = roll - angle_cv_r;
wherein (90° + azimuth) represents the azimuth angle of the lens, α represents the rotation angle value of the first steering engine and its value range is [0°, 360°]; β represents the rotation angle value of the second steering engine and its value range is [-180°, 180°].
Specifically, since azimuth represents the azimuth angle of the smart device, and the direction of the lens of the second camera is obtained by rotating the smart device clockwise by 90° around the OX axis, the azimuth angle of the lens of the second camera equals the azimuth angle of the smart device plus 90°;
because the value range of azimuth is [0°, 360°], the value range of (90° + azimuth) is [90°, 450°]; and because the value range of angle_ch_r is [0°, 360°], the value range of [(90° + azimuth) - angle_ch_r + 360°] is [90°, 810°], so the value range obtained after taking the modulus 360° is [0°, 360°];
since the value range of roll is [0°, 180°] and the value range of angle_cv_r is [0°, 180°], the value range of β calculated by the formula β = roll - angle_cv_r is [-180°, 180°].
Referring to fig. 5, in the present embodiment, when α > 0 the first steering engine rotates clockwise (the positive direction in the figure) around the OZ axis, and when β > 0 the second steering engine rotates counterclockwise (the positive direction in the figure) around the OX axis;
in a specific embodiment, assuming the instant attitude is C_r = (0, 90°), the attitude adjustment angle becomes:
α = [(90° + azimuth) - 0 + 360°] mod 360° = (90° + azimuth) mod 360°, β = roll - 90°.
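The α/β computation above can be sketched as a small helper; the function and parameter names are illustrative:

```python
def attitude_adjustment(azimuth, roll, angle_ch_r, angle_cv_r):
    """Attitude adjustment angles C_a = (alpha, beta), in degrees.

    azimuth, roll: converted device orientation (azimuth in [0, 360),
    roll in [0, 180]). angle_ch_r, angle_cv_r: instant rotation angles
    of the first and second steering engines, per the embodiment.
    """
    # Lens azimuth is device azimuth + 90 deg; the +360 then mod 360
    # keeps the result in [0, 360) for any valid inputs.
    alpha = ((90.0 + azimuth) - angle_ch_r + 360.0) % 360.0
    # beta falls in [-180, 180]; its sign selects the rotation sense.
    beta = roll - angle_cv_r
    return alpha, beta
```

For the worked case C_r = (0, 90°) this reduces to α = (90° + azimuth) mod 360° and β = roll - 90°, matching the specific embodiment above.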
in a preferred embodiment, the acquiring of image data consistent with the data type at the time for the unmanned vehicle to acquire data comprises:
when the unmanned vehicle reaches the position of the positioning coordinate and the instant posture of the holder is consistent with the pointing direction of the equipment, the controller controls the second camera to acquire image data at set time, wherein the set time comprises any one of a time point or a time period, and the data type comprises any one of a video or a photo.
In one embodiment, the controller controls the second camera to capture images at a set time, which may be a time point or a time period. When the set time is a time point and the data type is a photo, the second camera takes a photo at that moment; when the set time is a time period and the data type is a video, the second camera records video during that period.
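The time-point/time-period dispatch just described could look like the following sketch; the function name, the tuple encoding of a time period, and the returned action tuples are all assumptions for illustration:

```python
def plan_capture(set_time, data_type):
    """Decide the camera action from the set time and data type.

    set_time: a float timestamp (time point) or a (start, end) tuple
    (time period). data_type: "photo" or "video".
    Returns ("photo", t) or ("video", start, end).
    """
    if isinstance(set_time, tuple):
        # A time period pairs with video recording per the embodiment.
        if data_type != "video":
            raise ValueError("a time period implies recording a video")
        start, end = set_time
        return ("video", start, end)
    # A time point pairs with taking a single photo.
    if data_type != "photo":
        raise ValueError("a time point implies taking a photo")
    return ("photo", set_time)
```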
Corresponding to the method in fig. 1, an embodiment of the present invention further provides a control device for autonomously acquiring an image by an unmanned vehicle, where the device includes:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is enabled to implement the control method for autonomously acquiring images by an unmanned vehicle according to any one of the above embodiments.
The contents of the above method embodiments are all applicable to the present system embodiment; the functions specifically implemented by the present system embodiment are the same as those of the above method embodiments, and the beneficial effects achieved are likewise the same.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the control system for the unmanned vehicle to autonomously acquire images, and uses various interfaces and lines to connect the parts of the whole device operated by that control system.
The memory can be used to store the computer program and/or modules, and the processor implements the various functions of the control system for the unmanned vehicle to autonomously acquire images by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
While the description of the present application has been made in considerable detail and with particular reference to a few illustrated embodiments, it is not intended to be limited to any such details or embodiments or any particular embodiments, but it is to be construed that the present application effectively covers the intended scope of the application by reference to the appended claims, which are interpreted in view of the broad potential of the prior art. Further, the foregoing describes the present application in terms of embodiments foreseen by the inventor for which an enabling description was available, notwithstanding that insubstantial changes from the present application, not presently foreseen, may nonetheless represent equivalents thereto.

Claims (9)

1. A control method for automatically acquiring images by an unmanned vehicle is characterized by comprising the following steps:
when a picture shot by the intelligent device is obtained, the cloud server determines the positioning coordinates and the device orientation of the intelligent device at the time the picture was shot; the device orientation comprises: a heading angle, a tilt angle and a rotation angle of the smart device;
the cloud server issues an image acquisition instruction to the unmanned vehicle, the image acquisition instruction comprising: the positioning coordinates and the device orientation of the intelligent device at the time the picture was shot, and the time and data type for the unmanned vehicle to acquire image data, wherein the data type comprises either a video or a photo;
after the unmanned vehicle navigates to the positioning coordinates according to the image acquisition instruction, adjusting a holder arranged on the unmanned vehicle to enable the instant posture of the holder to be consistent with the pointing direction of the equipment; and acquiring image data consistent with the data type at the time of acquiring the data by the unmanned vehicle, and uploading the acquired image data to a cloud server.
2. The control method for the unmanned vehicle to autonomously acquire the image according to claim 1, wherein the determining of the positioning coordinates and the device orientation of the intelligent device when taking the picture comprises:
step S110, when the intelligent device takes a picture, obtaining 8-dimensional context information of the intelligent device and sending the 8-dimensional context information to a cloud server, wherein the 8-dimensional context information comprises: the positioning coordinates of the intelligent device when the picture is shot, the readings of a 3-axis accelerometer, and the readings of a 3-axis magnetometer;
step S111, the cloud server calculates the device orientation of the intelligent device when the intelligent device takes a picture according to the reading value of the 3-axis accelerometer and the reading value of the 3-axis magnetometer;
alternatively,
step S120, when the intelligent device takes a picture, obtaining 8-dimensional context information of the intelligent device, wherein the 8-dimensional context information comprises: the positioning coordinates of the intelligent device when the picture is shot, the readings of a 3-axis accelerometer, and the readings of a 3-axis magnetometer;
and S121, the intelligent device calculates the device orientation of the intelligent device when the intelligent device takes a picture according to the reading value of the 3-axis accelerometer and the reading value of the 3-axis magnetometer, and uploads the obtained positioning coordinates and the device orientation to a cloud server.
3. The control method for the unmanned vehicle to autonomously acquire images according to claim 1, wherein the unmanned vehicle is provided with a controller, a second communication module, a second positioning module, a driving motor, a holder, a camera, a second accelerometer and a second magnetometer, wherein the second communication module, the second positioning module, the driving motor, the holder, the camera, the second accelerometer and the second magnetometer are connected with the controller; the second communication module is used for establishing communication connection with the cloud server, the second positioning module is used for determining the positioning coordinates of the unmanned vehicle, the cradle head is a two-degree-of-freedom camera cradle head, the cradle head comprises a first steering engine and a second steering engine, the first steering engine rotates around a vertical shaft, the second steering engine rotates around a horizontal shaft, the camera is fixedly installed on the second steering engine, a second camera is arranged behind the camera, and the second accelerometer and the second magnetometer are installed on a cradle head base;
the unmanned vehicle navigates to the positioning coordinate according to the image acquisition instruction, and then adjusts the holder arranged on the unmanned vehicle, so that the instant posture of the holder is consistent with the pointing direction of the equipment, and the method comprises the following steps:
after receiving an image acquisition instruction, the controller determines a navigation path according to the current position of the unmanned vehicle and the positioning coordinates, and controls a driving motor to make the unmanned vehicle drive to the position of the positioning coordinates according to the navigation path;
after the unmanned vehicle navigates to the positioning coordinates, the instant attitude of the holder is calculated according to the second accelerometer and the second magnetometer, the attitude adjustment angle of the holder is determined according to the instant attitude and the equipment direction, and the controller adjusts the attitude of the holder according to the attitude adjustment angle and the first steering engine and/or the second steering engine, so that the instant attitude of the holder is consistent with the equipment direction.
4. The control method for the unmanned vehicle to autonomously acquire images according to claim 3, wherein before determining the attitude adjustment angle of the pan/tilt head according to the instant attitude and the device orientation, the method further comprises:
establishing a world coordinate system XYZ, and determining a value range of the equipment pointing in the world coordinate system through data interval conversion.
5. The control method for the unmanned vehicle to autonomously acquire images according to claim 4, wherein the establishing of the world coordinate system XYZ, the determining of the value range of the device orientation in the world coordinate system through data interval conversion, comprises:
establishing a world coordinate system XYZ, wherein the X axis faces the positive west direction, the Y axis faces the positive north direction, and the Z axis faces the sky direction;
a second camera on the back of the camera is used for landscape (transverse) shooting; when the intelligent terminal is at the initial position, the lens of the second camera lies on the positive X-axis and points along the Y-axis;
the device orientation is recorded as: D = (azimuth, roll), where azimuth represents the azimuth angle of the camera, roll represents the rotation angle of the camera, the value range of the azimuth angle is [0, 360°], and the value range of the rotation angle is [0, 180°];
acquiring an azimuth angle of the second camera calculated from the measurements of the second accelerometer and the second magnetometer, and converting the value range of the azimuth angle into [0, 360°] through data interval conversion;
and acquiring a rotation angle calculated from the measurements of the second accelerometer and the second magnetometer, and converting the value range of the rotation angle into [0, 180°] through data interval conversion.
6. The control method for the unmanned vehicle to autonomously acquire images according to claim 5, wherein the rotation angle is converted into [0, 180°] through data interval conversion, specifically:
performing data interval conversion on the rotation angle through a conversion formula, wherein the conversion formula is as follows:
roll' = 180° - roll, if a_z ≥ 0; roll' = roll, if a_z < 0;
wherein roll' represents the rotation angle after conversion, roll represents the rotation angle obtained before conversion, and a_z represents the acceleration of the camera in the Z-axis direction.
7. The control method for the unmanned vehicle to autonomously acquire images according to claim 6, wherein the determining of the attitude adjustment angle of the pan/tilt head according to the instant attitude and the device orientation comprises:
determining an instant attitude C_r of the pan-tilt, said instant attitude C_r being represented as: C_r = (angle_ch_r, angle_cv_r), where angle_cv_r represents the instantaneous rotation angle of the second steering engine, angle_ch_r represents the instantaneous rotation angle of the first steering engine, the value range of angle_ch_r is [0, 360°], and the value range of angle_cv_r is [0, 180°];
calculating the attitude adjustment angle, taken as C_a = (α, β), according to the following formulas:
α = [(90° + azimuth) - angle_ch_r + 360°] mod 360°, β = roll - angle_cv_r;
wherein (90° + azimuth) represents the azimuth angle of the lens, α represents the rotation angle value of the first steering engine and its value range is [0°, 360°]; β represents the rotation angle value of the second steering engine and its value range is [-180°, 180°].
8. The method for controlling the unmanned vehicle to autonomously acquire images according to claim 1, wherein the acquiring of the image data in accordance with the data type at the time of acquiring the data by the unmanned vehicle comprises:
when the unmanned vehicle reaches the position of the positioning coordinate and the instant posture of the holder is consistent with the pointing direction of the equipment, the controller controls the second camera to acquire image data at set time, wherein the set time comprises any one of a time point or a time period, and the data type comprises any one of a video or a photo.
9. A control device for autonomous image acquisition of an unmanned vehicle, the device comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor may implement the method of controlling the autonomous acquisition of images by an unmanned vehicle according to any one of claims 1 to 8.
CN202011596853.8A 2020-12-29 2020-12-29 Control method and device for autonomous image acquisition of unmanned vehicle Active CN112702532B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011596853.8A CN112702532B (en) 2020-12-29 2020-12-29 Control method and device for autonomous image acquisition of unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011596853.8A CN112702532B (en) 2020-12-29 2020-12-29 Control method and device for autonomous image acquisition of unmanned vehicle

Publications (2)

Publication Number Publication Date
CN112702532A true CN112702532A (en) 2021-04-23
CN112702532B CN112702532B (en) 2022-07-15

Family

ID=75512009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011596853.8A Active CN112702532B (en) 2020-12-29 2020-12-29 Control method and device for autonomous image acquisition of unmanned vehicle

Country Status (1)

Country Link
CN (1) CN112702532B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538594A (en) * 2021-06-30 2021-10-22 东风汽车集团股份有限公司 Vehicle-mounted camera calibration method based on direction sensor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104808675A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Intelligent terminal-based somatosensory flight operation and control system and terminal equipment
CN106325290A (en) * 2016-09-30 2017-01-11 北京奇虎科技有限公司 Monitoring system and device based on unmanned aerial vehicle
CN106742003A (en) * 2015-11-20 2017-05-31 广州亿航智能技术有限公司 Unmanned plane cloud platform rotation control method based on intelligent display device
CN107343153A (en) * 2017-08-31 2017-11-10 王修晖 A kind of image pickup method of unmanned machine, device and unmanned plane
KR20180068508A (en) * 2016-12-14 2018-06-22 이광섭 Capturing drone system using hand gesture recognition
WO2019127395A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Image capturing and processing method and device for unmanned aerial vehicle

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104808675A (en) * 2015-03-03 2015-07-29 广州亿航智能技术有限公司 Intelligent terminal-based somatosensory flight operation and control system and terminal equipment
CN105573330A (en) * 2015-03-03 2016-05-11 广州亿航智能技术有限公司 Aircraft control method based on intelligent terminal
CN106742003A (en) * 2015-11-20 2017-05-31 广州亿航智能技术有限公司 Unmanned plane cloud platform rotation control method based on intelligent display device
CN106325290A (en) * 2016-09-30 2017-01-11 北京奇虎科技有限公司 Monitoring system and device based on unmanned aerial vehicle
KR20180068508A (en) * 2016-12-14 2018-06-22 이광섭 Capturing drone system using hand gesture recognition
CN107343153A (en) * 2017-08-31 2017-11-10 王修晖 A kind of image pickup method of unmanned machine, device and unmanned plane
WO2019127395A1 (en) * 2017-12-29 2019-07-04 深圳市大疆创新科技有限公司 Image capturing and processing method and device for unmanned aerial vehicle

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538594A (en) * 2021-06-30 2021-10-22 东风汽车集团股份有限公司 Vehicle-mounted camera calibration method based on direction sensor

Also Published As

Publication number Publication date
CN112702532B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US10648809B2 (en) Adaptive compass calibration based on local field conditions
US11480291B2 (en) Camera system using stabilizing gimbal
CN108184061B (en) Tracking control method and device for handheld cloud deck, handheld cloud deck and storage medium
US9560274B2 (en) Image generation apparatus and image generation method
US9729788B2 (en) Image generation apparatus and image generation method
CN107065898B (en) Navigation control method and system for underwater unmanned ship
US10284776B2 (en) Image generation apparatus and image generation method
US9894272B2 (en) Image generation apparatus and image generation method
WO2019000325A1 (en) Augmented reality method for aerial photography of unmanned aerial vehicle, processor, and unmanned aerial vehicle
JP6430661B2 (en) Stable platform and tracking control system and method thereof
WO2022077296A1 (en) Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium
WO2021217371A1 (en) Control method and apparatus for movable platform
CN109814588A (en) Aircraft and object tracing system and method applied to aircraft
US20200304719A1 (en) Control device, system, control method, and program
CN111247389B (en) Data processing method and device for shooting equipment and image processing equipment
CN112702532B (en) Control method and device for autonomous image acquisition of unmanned vehicle
CN110337668B (en) Image stability augmentation method and device
US20230359204A1 (en) Flight control method, video editing method, device, uav and storage medium
CN113875222A (en) Shooting control method and device, unmanned aerial vehicle and computer readable storage medium
CN111213365A (en) Shooting control method and controller
WO2019189381A1 (en) Moving body, control device, and control program
JP2018201119A (en) Mobile platform, flying object, support apparatus, portable terminal, method for assisting in photography, program, and recording medium
CN111665870B (en) Track tracking method and unmanned aerial vehicle
JP4999647B2 (en) Aerial photography system and image correction method for aerial photography
CN117412161A (en) Trolley tracking method and device, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant