WO2023077421A1 - Movable platform control method and apparatus, and movable platform and storage medium - Google Patents

Movable platform control method and apparatus, and movable platform and storage medium

Info

Publication number
WO2023077421A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
movable platform
exposure parameter
exposure
view
Prior art date
Application number
PCT/CN2021/128983
Other languages
French (fr)
Chinese (zh)
Inventor
李号
徐佐腾
郭阳阳
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202180101633.XA
Priority to PCT/CN2021/128983
Publication of WO2023077421A1

  • The movable platform is equipped with various sensors that collect data from the surrounding environment, and the platform can control its own movement based on the collected data. How to control a movable platform so that it moves safely in space has long been a technical concern in this field.
  • An embodiment of the present application provides a method for controlling a movable platform.
  • The movable platform includes a first sensor, a second sensor and a third sensor; the first sensor and the third sensor have an overlapping first partial field of view; the second sensor and the third sensor have an overlapping second partial field of view; the first partial field of view is used to observe the scene in a first direction of the movable platform, and the second partial field of view is used to observe the scene in a second direction of the movable platform.
  • The method includes:
  • using the images collected by the sensors to acquire the depth information of the scene in the second direction, and controlling the movable platform to move in space according to the depth information.
  • An embodiment of the present application provides a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the steps of the movable platform control method described in the aforementioned first aspect are implemented.
  • The third sensor can form a binocular vision system with each of the first sensor and the second sensor, and the first partial field of view and the second partial field of view observe the different first and second directions respectively. Therefore, the first brightness information can be obtained for the first partial field of view alone, and the first exposure parameter can be determined on that basis. The first image captured by the third sensor under the first exposure parameter thus matches the brightness of the first partial field of view, and its quality is high, so the depth information of the scene in the first direction can be obtained from the first image and the image collected by the first sensor.
  • Likewise, the second brightness information can be obtained for the second partial field of view alone, and the second exposure parameter can be determined on that basis; the second image collected by the third sensor under the second exposure parameter matches the brightness of the second partial field of view, and its quality is high, so the depth information of the scene in the second direction can be obtained from the second image and the image collected by the second sensor.
  • Because the movable platform can obtain reliable depth information for the scenes in both the first direction and the second direction through the above processing, it can safely control its own movement in space.
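The depth-recovery step described above is standard binocular triangulation. As a minimal sketch (the function name, the pinhole-camera model, and a rectified stereo pair are our assumptions, not details from this application), depth follows from disparity as Z = f·B/d:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified binocular pair.

    disparity_px: pixel disparity d between the left and right images
    focal_px:     focal length f in pixels
    baseline_m:   baseline B between the two sensors, in meters
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# e.g. a 25 px disparity with f = 500 px and a 10 cm baseline
print(depth_from_disparity(25.0, 500.0, 0.10))  # 2.0 (meters)
```

In the embodiment, one such pair is formed by the first and third sensors (first direction) and another by the second and third sensors (second direction).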
  • Fig. 1A is a schematic diagram showing the installation structure of sensors on a movable platform and their fields of view according to an embodiment of the present application;
  • Fig. 2 is a schematic flowchart of a method for controlling a movable platform according to an embodiment of the present application;
  • Fig. 3 is a schematic flowchart of some steps in a method for controlling a movable platform according to an embodiment of the present application;
  • Fig. 5 is a schematic flowchart of a method for acquiring images by a sensor on a movable platform according to an embodiment of the present application;
  • Fig. 7 is a schematic flowchart of a method for outputting images by a sensor on a movable platform according to an embodiment of the present application;
  • Fig. 8 is a schematic diagram of a first state in a movable platform control method according to an embodiment of the present application;
  • Fig. 10 is a schematic block diagram of a control device for a movable platform according to an embodiment of the present application;
  • Fig. 11 is a schematic block diagram showing a hardware structure of a device in a movable platform according to an embodiment of the present application.
  • The steps of the corresponding methods are not necessarily performed in the order shown and described in this specification.
  • A method may include more or fewer steps than those described in this specification.
  • A single step described in this specification may be decomposed into multiple steps in other embodiments; multiple steps described in this specification may also be combined into a single step in other embodiments.
  • The movable platform in the related art usually uses at least eight independently controllable cameras.
  • In this application, the movable platform is instead designed to be equipped with sensors having large fields of view.
  • One sensor's field of view can overlap those of at least two other sensors; that is, one sensor can cover at least two directions. One sensor can therefore form a binocular vision system with each of at least two other sensors, reducing the number of vision sensors on the movable platform.
  • Exposure parameters include the ideal exposure time, the exposure time, the analog gain and the digital gain, and the relationship between these parameters is shown in the following formula:
  • The sensor can determine appropriate exposure parameters by sensing the brightness information of the environment within its field of view and applying the above formula; how accurately the sensor perceives that brightness information therefore affects how accurately the exposure parameters are determined.
  • The photosensitive element perceives light and forms an image.
  • The brightness of the environment can therefore be determined from the brightness information of the image.
  • Light intensity represents brightness, and different photosensitive elements have different abilities to perceive light intensity; the range of light intensity that a photosensitive element can perceive is the brightness dynamic range of the sensor.
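The formula itself is not reproduced in this text. One common form of the relationship among these four parameters, offered here only as an assumption consistent with the surrounding description, treats the ideal exposure time as the physical exposure time scaled by both gains:

```python
def ideal_exposure_time(exposure_time_s, analog_gain, digital_gain):
    """Assumed relation: ideal exposure = exposure time x analog gain x digital gain.
    This is a common auto-exposure convention, not necessarily this patent's formula."""
    return exposure_time_s * analog_gain * digital_gain

# a 10 ms exposure with 2x analog gain and 1.5x digital gain
print(round(ideal_exposure_time(0.010, 2.0, 1.5), 4))  # 0.03
```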
  • In the related art, the field of view of each sensor carried on the movable platform is relatively small, each sensor forms a binocular vision system with only one other sensor, and at least eight cameras are carried on the movable platform; because each field of view is small, the related art basically does not face the problem of large variations in environmental information across a large field of view.
  • Another solution is to improve the sensor itself and increase the dynamic range of ambient brightness it can perceive, but sensors that can perceive a larger dynamic range are currently expensive, so this solution increases the manufacturing cost of the movable platform.
  • The movable platform is equipped with sensors; a sensor in this embodiment refers to a visual sensor that can collect images.
  • The visual sensor has a certain field of view that can overlap with the fields of view of at least two other sensors; for example, it may be a wide-angle camera or a fisheye camera.
  • The number of sensors mounted on the movable platform is not limited to three; moreover, one sensor may have overlapping fields of view with at least two other sensors, and in practical applications this may be configured as required.
  • Three sensors are taken as an example to illustrate the specific process of applying the control method of this embodiment when one sensor has overlapping fields of view with two other sensors and the overlapping fields of view face different directions.
  • The design of the three sensors conforms to the principle of this embodiment: one sensor has overlapping fields of view with two other sensors, and the overlapping fields of view face different directions.
  • In such cases the solution provided in this embodiment can be applied.
  • The scheme of this embodiment can be applied to any sensor that performs exposure processing to collect images.
  • A sensor may have overlapping fields of view not only with two sensors but with multiple sensors; two of the overlapping fields of view may face different directions, or all of them may. It is understandable that in this case the solution provided by this embodiment can still be applied.
  • The field of view of the second sensor overlaps with that of the third sensor, that is, the intersection of F2 and C2 shown in FIG. 1A; in other words, the second sensor and the third sensor have an overlapping second partial field of view.
  • The second partial field of view may refer to the entire field of view shared by the second sensor and the third sensor, or to a part of that overlapping field of view; it is used to observe the scene in the second direction of the movable platform.
  • The first direction and the second direction in this embodiment are expressed in a coordinate system fixed on the movable platform (the body frame), such as a three-dimensional orthogonal rectangular coordinate system following the right-hand rule, whose origin is at the movable platform, for example at its center of mass.
  • Referring to FIG. 2, which is a flowchart of a control method for a movable platform according to an exemplary embodiment of the present application, the method may include the following steps:
  • The third sensor forms a binocular vision system with each of the first sensor and the second sensor, and the first partial field of view and the second partial field of view observe the different first and second directions respectively.
  • The first brightness information of the first partial field of view is acquired for that field of view alone, and the first exposure parameter is determined on that basis, so the first image collected by the third sensor under the first exposure parameter matches the brightness of the first partial field of view; the quality of the first image is relatively high, so the depth information of the scene in the first direction can be obtained from the first image and the image collected by the first sensor.
  • Likewise, the second brightness information of the second partial field of view can be obtained for that field of view alone, and the second exposure parameter determined on that basis; the second image collected by the third sensor under the second exposure parameter matches the brightness of the second partial field of view, and its relatively high quality allows the depth information of the scene in the second direction to be obtained from the second image and the image collected by the second sensor.
  • Because the movable platform can obtain reliable depth information for the scenes in both directions through the above processing, it can safely control its own movement in space.
  • One or more images can be collected by a sensor, and the brightness of the environment can be estimated from the brightness information of those images. Because the first partial field of view in this embodiment is covered by the fields of view of the first sensor and the third sensor, the area corresponding to the first partial field of view can be determined from the image collected by the first sensor and/or the third sensor, and the first brightness information of the first partial field of view is determined by computing statistics of the image brightness in that area.
  • Similarly, because the second partial field of view is covered by the fields of view of the second sensor and the third sensor, the area corresponding to it can be determined from the images collected by the second sensor and/or the third sensor, and the second brightness information of the second partial field of view is determined by computing statistics of the image brightness in that area.
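The region statistics described above can be sketched as follows; the rectangular-region representation and the use of a simple mean are illustrative assumptions, since the text only says brightness statistics are computed over the area:

```python
def region_mean_brightness(image, region):
    """Mean brightness of a rectangular region of a 2-D image given as a
    list of pixel rows; region = (row0, row1, col0, col1), half-open."""
    r0, r1, c0, c1 = region
    pixels = [p for row in image[r0:r1] for p in row[c0:c1]]
    return sum(pixels) / len(pixels)

# a 4x8 test image: left half dark (0), right half bright (200)
img = [[0] * 4 + [200] * 4 for _ in range(4)]
print(region_mean_brightness(img, (0, 4, 0, 4)))  # 0.0   (e.g. first partial field of view)
print(region_mean_brightness(img, (0, 4, 4, 8)))  # 200.0 (e.g. second partial field of view)
```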
  • Determining the first exposure parameter of the third sensor based at least on the first brightness information may mean determining it based only on the first brightness information, or in combination with other information as required, such as other environmental parameters in the first partial field of view, environmental parameters in areas of the field of view other than the first partial field of view, data collected by other sensors on the movable platform, or flight status information of the movable platform.
  • Likewise, determining the second exposure parameter of the third sensor based at least on the second brightness information may mean determining it based only on the second brightness information, or in combination with other information as required, such as other environmental parameters in the second partial field of view, environmental parameters in areas of the field of view other than the second partial field of view, data collected by other sensors on the movable platform, or flight status information of the movable platform.
  • Step S202 and step S203 are not necessarily executed in the order shown and described in the embodiment of FIG. 2.
  • In some time periods, the movable platform may perform only step S202 and then control its own movement; in other time periods, it may perform only step S203 and then control its own movement; or it may control its own movement after both steps S202 and S203 are executed.
  • The movable platform can be configured with multiple schemes for setting the exposure parameters of a sensor, such as the aforementioned global average exposure scheme, with the scheme of this embodiment as one of them; the timing for executing the scheme of this embodiment can therefore be determined as needed. For example, when the movable platform is in a scene with a small dynamic range of ambient brightness, it can adopt another scheme for setting exposure parameters; when it recognizes that it is in a scene with a large dynamic range of ambient brightness, it can trigger execution of the scheme of this embodiment.
  • For example, the movable platform may compare the image information of different areas in the image collected by the sensor and determine that the brightness of the areas differs considerably, e.g., the difference between the first brightness information of the first partial field of view and the second brightness information of the second partial field of view satisfies a set condition, which triggers execution of the solution of this embodiment. The set condition represents a large difference in image brightness and can be set as required, for example, the brightness difference being greater than a set brightness threshold.
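The trigger condition above can be sketched as a simple threshold test (the function name and the absolute-difference form of the condition are our assumptions; the text only requires a condition representing a large brightness difference):

```python
def should_use_local_exposure(brightness_1, brightness_2, diff_threshold):
    """Trigger the per-region exposure scheme when the two partial fields of
    view differ in brightness by more than a set threshold."""
    return abs(brightness_1 - brightness_2) > diff_threshold

print(should_use_local_exposure(40.0, 180.0, 60.0))   # True  -> large dynamic range
print(should_use_local_exposure(100.0, 120.0, 60.0))  # False -> global scheme suffices
```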
  • The solution of this embodiment involves the exposure control of the third sensor.
  • It does not adopt the idea of global average exposure but determines the exposure parameters based on the brightness information of a local field of view; it is therefore not affected by the brightness of other partial fields of view and ensures that the image information of the scene in the local field of view is collected.
  • The solution also significantly reduces the influence of a large dynamic range of light intensity across different areas of the field of view, and it places low requirements on the sensor's ability to perceive a large dynamic range; the movable platform can thus carry lower-cost sensors rather than hardware with a larger dynamic range, and the quality of the collected images is guaranteed without increasing the hardware cost.
  • The movable platform determines two sets of exposure parameters for the sensor, according to the first brightness information of the first partial field of view and the second brightness information of the second partial field of view respectively, in order to collect different images.
  • The above embodiment does not limit the timing of determining the exposure parameters, of controlling the sensor to collect images, of controlling the movement of the movable platform based on the collected images, or of executing each step; in practical applications these can be designed as needed, for example considering one or more factors such as the type of movable platform, the way it moves, and the application scenarios it faces. These possible different treatments all fall within the scope of the solution of this application.
  • Two embodiments are provided below to illustrate the control scheme of the movable platform of the present application.
  • The image quality of the scene in the moving direction of the movable platform can be given priority, and the exposure parameters can be set based on the scene in the moving direction, so as to prioritize the obstacle-avoidance function of the movable platform in that direction.
  • When the movable platform moves forward, the image quality of the forward field of view is the most important, determining whether functions such as obstacle avoidance and circumvention can work normally; when it moves to the left, the image quality of the left field of view is the most important; when it moves toward the front-left at 45 degrees, both the left and forward directions must be taken into account. The same holds for the other moving directions of the movable platform, which will not be repeated here.
  • Therefore, the exposure parameter may be determined based at least on the brightness information and the moving direction.
  • Referring to FIG. 3, which is a flowchart of determining the first exposure parameter according to an embodiment of the present application, the process may include the following steps:
  • Step S301: Obtain the moving direction of the movable platform.
  • Step S302: Determine a first exposure parameter of the third sensor based at least on the first brightness information and the moving direction of the movable platform.
  • The moving direction of the movable platform can be obtained in various ways.
  • For example, the movable platform can carry a variety of sensors, such as inertial sensors (accelerometers and gyroscopes), from whose data the moving direction can be determined.
  • How to determine the first exposure parameter based on the first brightness information and the moving direction can be implemented in various ways according to actual needs, which is not limited in this embodiment.
  • The first brightness information corresponds to the first direction.
  • The weight of the first brightness information when determining the first exposure parameter can be determined according to the relationship between the moving direction and the first direction.
  • The specific weight can be chosen flexibly as needed.
  • The weight of the first brightness information may be determined according to the relative orientation between the moving direction of the movable platform and the first direction of the third sensor, where the weight reflects, for the current moving direction, the importance of the brightness information in the first direction when the third sensor collects images.
  • The relative orientation between the moving direction of the movable platform and the first direction of the third sensor can be divided into two situations: the moving direction coincides with the first direction, or it deviates from the first direction.
  • Priority is given to the moving direction of the movable platform: the closer the first or second direction is to the moving direction, the more important the scene in that direction, and its image quality can be prioritized when collecting images. For example, if the moving direction coincides with the first direction, the first exposure parameter of the third sensor may be determined based only on the first brightness information; other directions within the third sensor's field of view need not be considered.
  • Otherwise, the first exposure parameter can be determined by also considering the second brightness information in the second direction; if the moving direction is biased toward the first direction, the weight of the first brightness information is greater than the weight of the second brightness information, and the first exposure parameter is determined on that basis.
  • In this way, the image quality of the scene in the moving direction of the movable platform is given priority: the exposure parameters are set based on the brightness information in the moving direction, and the rich image information in that direction enables the movable platform to obtain the depth information of the scene there, prioritizing the obstacle-avoidance function in the moving direction and making movement safer.
  • Similarly, the moving direction of the movable platform may be obtained, and the second exposure parameter of the third sensor determined based at least on the second brightness information and the moving direction. In some examples, if the moving direction falls within the second direction, the second exposure parameter of the third sensor is determined based on the second brightness information; if the moving direction is biased toward the second direction relative to the first direction, the second exposure parameter is determined based on the first brightness information and the second brightness information, where the weight of the second brightness information is greater than the weight of the first brightness information.
  • The above embodiment gives priority to images of the scene in the moving direction of the movable platform, reduces the influence of ambient brightness in other, non-moving directions, and solves the problem that the sensors' fields of view are coupled to each other when the dynamic range of parameters in the environment is large.
  • The image collected by the sensor corresponds to the first direction and the second direction; that is, the collected image can be divided into two parts corresponding to the two directions.
  • The following explanation takes the case where the weights of the two parts sum to 1 as an example.
  • When the movable platform moves forward or backward, its moving angle is 0° or 180° (−180°); the weight of the image information of area 1 obtained by the third sensor is 1, and the weight of the image information of area 2 is 0.
  • Conversely, the weight of the image information of area 1 may be 0 and the weight of area 2 may be 1.
  • If the movable platform moves forward, the weight of area 1 (corresponding to the forward direction) in the collected image is 1 and the weight of area 2 (corresponding to the left direction) is 0; that is, the first brightness information is determined only from the part of the image in area 1. If the movable platform moves to the left, similarly, the second brightness information is determined only from the part of the image in area 2.
  • In some embodiments, the weights of the images of the two regions of the third sensor may be a set function of the moving angle of the movable platform.
  • The calculation of the weight w_area_1 of the image of region 1 of the third sensor is shown in formula (2), where w_area_1 is the weight of the image of region 1 of the third sensor, w_area_2 is the weight of the image of region 2, and theta is the moving angle of the movable platform.
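Formula (2) itself is missing from this text. One instantiation consistent with the stated endpoint cases (weights summing to 1; (1, 0) at 0°/±180°, (0, 1) at 90°) is w_area_1 = cos²(theta), shown below purely as an assumption:

```python
import math

def area_weights(theta_deg):
    """Assumed weight-vs-angle relation: w_area_1 = cos^2(theta),
    w_area_2 = sin^2(theta). The weights sum to 1 and match the endpoint
    cases in the text; the patent's actual formula (2) may differ."""
    theta = math.radians(theta_deg)
    w_area_1 = math.cos(theta) ** 2
    return w_area_1, 1.0 - w_area_1

print(area_weights(0))                # (1.0, 0.0): forward, area 1 only
print(round(area_weights(45)[0], 3))  # 0.5: front-left 45 deg, equal weights
```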
  • After the weight of the first brightness information and the weight of the second brightness information are obtained, the average brightness of the image of the third sensor can be computed by weighting, and an automatic exposure algorithm can then be executed to determine the first exposure parameter of the third sensor.
  • In some examples, the first exposure parameter includes the exposure time and the analog gain.
  • The exposure time and analog gain of the third sensor are calculated by the automatic exposure algorithm from the computed average brightness of the image.
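A minimal sketch of this weighting-then-auto-exposure step (the proportional update rule, the target brightness, and the limits are illustrative assumptions; the text does not specify the automatic exposure algorithm):

```python
def weighted_average_brightness(b1, w1, b2, w2):
    """Weighted average of the two regions' brightness values."""
    return (w1 * b1 + w2 * b2) / (w1 + w2)

def auto_exposure(avg_brightness, target=110.0, exposure_time=0.010,
                  analog_gain=1.0, max_exposure_time=0.020, max_gain=8.0):
    """Proportional auto-exposure sketch: scale the current exposure so the
    measured brightness approaches the target, spending exposure time first
    and analog gain second. All constants are illustrative."""
    scale = target / max(avg_brightness, 1e-6)
    total = exposure_time * analog_gain * scale
    new_time = min(total, max_exposure_time)
    new_gain = min(max(total / new_time, 1.0), max_gain)
    return new_time, new_gain

avg = weighted_average_brightness(40.0, 0.5, 70.0, 0.5)  # 55.0
t, g = auto_exposure(avg)  # scene half as bright as the target
print(round(t, 4), round(g, 2))  # 0.02 1.0
```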
  • The collected image can then be input into a later-stage perception algorithm.
  • Some perception algorithms that take binocular images as input, such as semi-global matching and visual odometry, place certain requirements on the binocular pair: for example, the brightness of the left and right images must not differ too much, or the exposure times of the left and right images must not differ too much.
  • However, for two adjacent sensors, even if one piece of brightness information is the same, the other piece of brightness information and the corresponding weights may differ greatly; this causes the exposure parameters of the two adjacent sensors to differ, so the left and right images may violate the brightness or exposure-time requirements.
  • Therefore, the first exposure parameter of the third sensor can also be adjusted based on the first exposure parameter of the first sensor.
  • The adjustment method can be determined by the requirements the perception algorithm places on binocular images, and the specific requirements depend on the perception algorithm actually used.
  • A difference threshold can also be preset as required: if the difference between the first exposure parameter of the first sensor and the first exposure parameter of the third sensor exceeds this threshold, the first exposure parameter of the third sensor is adjusted so that the difference becomes less than or equal to the threshold. The threshold can be set flexibly as needed, for example as an empirical value.
  • The adjustment may consist of reducing the difference between the first exposure parameter of the first sensor and that of the third sensor, so as to ensure that the images captured by the two sensors satisfy preset brightness conditions and/or preset image content conditions and thus meet the requirements of the later-stage perception algorithm.
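The threshold-based adjustment can be sketched as follows (pulling the third sensor's value to the edge of the allowed band around the first sensor's value is one simple choice; the text leaves the exact adjustment rule open):

```python
def match_binocular_exposure(exposure_first, exposure_third, diff_threshold):
    """If the two exposure parameters differ by more than diff_threshold,
    move the third sensor's value to the nearest edge of the band
    [exposure_first - diff_threshold, exposure_first + diff_threshold]."""
    if exposure_third > exposure_first + diff_threshold:
        return exposure_first + diff_threshold
    if exposure_third < exposure_first - diff_threshold:
        return exposure_first - diff_threshold
    return exposure_third

print(round(match_binocular_exposure(0.010, 0.018, 0.005), 3))  # 0.015 (clamped)
print(match_binocular_exposure(0.010, 0.012, 0.005))            # 0.012 (unchanged)
```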
  • In some embodiments, a value range for the exposure parameter of the third sensor can be determined according to the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, and the exposure parameter of the third sensor can be limited to that range.
  • By controlling the exposure parameters of the two adjacent sensors within the value range of the exposure parameter in the flying direction of the movable platform, the difference between them is kept small, which ensures that the brightness of the left and right images is similar or that their exposure times are similar.
  • The first exposure parameter of the third sensor may be limited as follows: when it lies within the value range of the exposure parameter, it remains unchanged; when it exceeds the value range, it is adjusted to a boundary value of the range.
  • Adjusting the first exposure parameter of the third sensor to a boundary value covers two cases: if it is greater than the maximum of the value range, it is set to that maximum; if it is smaller than the minimum of the value range, it is set to that minimum.
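The two-case boundary adjustment above is an ordinary clamp; a minimal sketch (the names are illustrative):

```python
def clamp_exposure(value, range_min, range_max):
    """Keep the third sensor's exposure parameter inside the value range:
    unchanged if already in range, otherwise set to the nearer boundary."""
    return min(max(value, range_min), range_max)

print(clamp_exposure(0.012, 0.005, 0.020))  # 0.012 (already in range)
print(clamp_exposure(0.030, 0.005, 0.020))  # 0.02  (set to the maximum)
print(clamp_exposure(0.001, 0.005, 0.020))  # 0.005 (set to the minimum)
```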
  • the value range of the exposure parameter is jointly determined based on the first exposure parameter of the first sensor and its weight, and the first exposure parameter of the third sensor and its weight.
  • the scene information in the moving direction is more important, if the moving direction of the movable platform is biased towards the first direction relative to the second direction, the scene in the first direction will have a greater influence on the scene in the moving direction than The scene in the second direction, so the weight of the first exposure parameter of the third sensor can be controlled to be greater than the weight of the first exposure parameter of the first sensor.
• Controlling the movement of the movable platform based on the captured images is therefore safer.
• In order to determine the exposure parameter more quickly while taking the moving direction into account, a parameter can first be defined that represents the moving direction of the movable platform.
  • the exposure parameter is a virtual exposure parameter, not an actual physical parameter, which can be obtained by weighting the exposure parameters of two adjacent sensors determined based on the moving direction. Therefore, the boundary value of the value range of the exposure parameter is a multiple of the exposure parameter in the moving direction of the movable platform.
  • the multiple of the exposure parameter in the moving direction of the movable platform may be a value determined according to the actual use scene and experience of the movable platform.
  • an optional specific embodiment is provided below.
  • the exposure parameters in the moving direction of the movable platform are obtained by weighting the exposure parameters of the front left and front right sensors;
• the clockwise direction is preferred, and the two sensors determined according to the direction of movement are front right and front left respectively; when the movable platform flies forward-left at 45 degrees, the clockwise direction again takes priority, and the two sensors determined according to the direction of movement are front right and rear left.
• The method for determining the sensors is the same when the moving direction of the movable platform is any other direction, so the details are not repeated here. Clearly, whichever direction the movable platform moves in, two adjacent sensors on the left and right can be determined according to the moving direction.
  • the weights of the exposure parameters of the left and right adjacent sensors can be determined according to the moving direction of the movable platform.
  • An optional embodiment is provided below.
• the weights of the exposure parameters of the two sensors may have a set functional relationship with the moving angle of the movable platform, defined piecewise over ranges of the moving angle (for example, over angles of 45° and beyond).
• w_left refers to the weight of the exposure parameter of the sensor adjacent to the left of the moving direction of the movable platform; w_right refers to the weight of the exposure parameter of the sensor adjacent to the right of the moving direction; theta refers to the moving angle of the movable platform.
  • the weights of the exposure parameters of the two sensors are both 0.5;
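The exact piecewise relationship is not fully legible in the source. One plausible reading, consistent with both weights being 0.5 when the moving direction bisects the two adjacent sensors, is a linear interpolation over the 90-degree quadrant between them; the sketch below makes that assumption, and `lr_weights` is a hypothetical name:

```python
def lr_weights(theta_deg):
    """Hypothetical weighting of the two adjacent sensors' exposure
    parameters by the moving angle.

    The angle's offset into its 90-degree quadrant is mapped linearly:
    at the diagonal (45 degrees into the quadrant) both weights are
    0.5, matching the text above."""
    t = theta_deg % 90.0       # offset into the current quadrant
    w_left = t / 90.0          # weight of the left-adjacent sensor
    w_right = 1.0 - w_left     # weight of the right-adjacent sensor
    return w_left, w_right
```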
  • exposure parameters include ideal exposure time and exposure time. If the determined ideal exposure time and exposure time of the third sensor are not within the value range of the exposure parameter, adjust the ideal exposure time and exposure time of the third sensor to be a boundary value of the value range of the exposure parameter.
  • the exposure parameters also include analog gain and digital gain. Based on the adjusted exposure time and the ideal exposure time, the analog gain and the digital gain of the third sensor are adjusted.
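A hedged sketch of the gain adjustment just described: once the exposure time has been clamped, the leftover exposure ratio (ideal exposure time over actual exposure time) is made up first by the analog gain and then by the digital gain. The `split_gain` name and the analog-gain ceiling are assumptions, not from the source:

```python
def split_gain(icit, cit, a_gain_max=16.0):
    """Compensate a clamped exposure time with gain.

    The remaining exposure factor icit / cit is assigned to the analog
    gain up to an assumed ceiling a_gain_max; any remainder goes to
    the digital gain."""
    total = icit / cit if cit > 0 else 1.0
    a_gain = min(total, a_gain_max)
    d_gain = total / a_gain
    return a_gain, d_gain
```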
• The exposure parameters are determined taking into account the moving direction of the movable platform, and the scene the movable platform is in may change greatly during movement. Based on this, this embodiment can also obtain the change in the moving speed; as an example, the moving speed can be obtained from an inertial measurement unit mounted on the movable platform.
• The change of the moving speed is positively correlated with any of the following: the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameter, or the update frequency of the weight of the brightness information. That is, the faster the movable platform moves, the faster it updates the exposure parameters, so as to adapt to the rapid scene changes it may face under high-speed movement and keep control of the movable platform safe.
  • various motion parameters of the movable platform can be determined based on any one of the following coordinate systems: sensor coordinate system (sensor system), body coordinate system (body system), local coordinate system (local system), global coordinate system (global system) and so on.
  • each fisheye camera is divided into left and right parts from the middle, and the part located in the front and rear directions of the drone is named area 1, and the part located in the left and right directions of the drone is named area 2.
  • the drone looks around in four directions and is divided into eight fields of vision, which are denoted by symbols: C1 and C2, D1 and D2, E1 and E2, and F1 and F2.
  • C1 and D1 form the front-view binocular of the drone
  • D2 and E2 form the right-view binocular of the drone
  • E1 and F1 form the rear-view binocular of the drone
  • F2 and C2 form the left-view binocular of the drone.
  • the exposure times of C1 and C2, D1 and D2, E1 and E2, F1 and F2 are the same.
• The relationship between the flight angle (theta) and the flying direction of the drone is as follows:
• 0° means the drone is flying forward; 90° means the drone is flying to the left; 180° (or -180°) means the drone is flying backward; -90° means the drone is flying to the right.
  • the unit of the flying speed of the UAV is m/s.
• The ideal exposure time of a fisheye camera of the drone is recorded as icit, the exposure time as cit, the analog gain as a_gain, the brightness of area 1 as Lum_area_1, and the brightness of area 2 as Lum_area_2. Use icit_c to represent the ideal exposure time of fisheye camera C and cit_c to represent its exposure time; the exposure parameters of the other fisheye cameras are denoted analogously.
  • the current flight speed and flight direction of the UAV based on the body coordinate system can be calculated through the information of the inertial measurement unit of the UAV.
• The body coordinate system refers to a three-dimensional orthogonal rectangular coordinate system fixed on the aircraft and following the right-hand rule, with its origin located at the aircraft's center of mass.
• If the flying speed of the UAV is lower than the set threshold, the flying direction of the UAV remains the same as in the previous frame and is not updated; otherwise, the flight direction of the drone is updated.
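The speed gate above can be sketched as follows (the function name and the threshold value are illustrative):

```python
def update_direction(prev_dir, new_dir, speed, speed_threshold=1.0):
    """Below the speed threshold, keep the flight direction from the
    previous frame; otherwise adopt the newly measured direction."""
    return prev_dir if speed < speed_threshold else new_dir
```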
  • the calculation method is as follows:
• w_area_1 refers to the weight of the image of area 1 (i.e. C1) of fisheye camera C; w_area_2 refers to the weight of the image of area 2 (i.e. C2); theta refers to the flight angle of the drone.
• Lum_avg = w_area_1 * Lum_area_1 + w_area_2 * Lum_area_2    (10)
• Lum_avg refers to the average brightness of fisheye camera C; w_area_1 and Lum_area_1 refer to the weight and brightness of the image of area 1 (C1) of fisheye camera C; w_area_2 and Lum_area_2 refer to the weight and brightness of the image of area 2 (C2) of fisheye camera C.
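Equation (10) is straightforward to express in code (the function name is assumed):

```python
def avg_brightness(w_area_1, lum_area_1, w_area_2, lum_area_2):
    """Equation (10): weighted average brightness of a fisheye camera,
    combining the area-1 and area-2 brightness by their
    flight-angle-dependent weights."""
    return w_area_1 * lum_area_1 + w_area_2 * lum_area_2
```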
• The exposure time and analog gain of the four fisheye cameras on the UAV are obtained. Then, the exposure time and analog gain calculated above are further restricted.
  • the weights of the exposure parameters of the two fisheye cameras are determined according to the flying direction of the drone.
  • Let w left represent the weight of the exposure parameters of the fisheye camera adjacent to the left side of the UAV's flight direction; w right represents the weight of the exposure parameters of the fisheye camera adjacent to the right side of the UAV's flight direction. Its calculation method is:
• w_left refers to the weight of the exposure parameters of the fisheye camera adjacent to the left of the drone's flight direction; w_right refers to the weight of the exposure parameters of the fisheye camera adjacent to the right of the drone's flight direction; theta refers to the flight angle of the drone.
  • the exposure parameters in the flight direction of the UAV are calculated.
• The ideal exposure time in the flying direction of the UAV is recorded as icit_fly_dir, and the exposure time as cit_fly_dir. The calculation methods of these two parameters are shown in formula (15) and formula (16):
• icit_fly_dir = w_left * icit_left + w_right * icit_right    (15)
• cit_fly_dir = w_left * cit_left + w_right * cit_right    (16)
• icit_fly_dir refers to the ideal exposure time in the flying direction of the drone; cit_fly_dir refers to the exposure time in the flying direction; w_left and w_right refer to the weights of the exposure parameters of the fisheye cameras adjacent to the left and right of the flying direction; icit_left and icit_right refer to the ideal exposure times of those left- and right-adjacent fisheye cameras; cit_left and cit_right refer to their exposure times.
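Equations (15) and (16) can be sketched together (the function name is assumed):

```python
def flight_dir_exposure(w_left, w_right,
                        icit_left, icit_right,
                        cit_left, cit_right):
    """Equations (15) and (16): exposure parameters in the flying
    direction as a weighted blend of the two adjacent fisheye cameras."""
    icit_fly_dir = w_left * icit_left + w_right * icit_right
    cit_fly_dir = w_left * cit_left + w_right * cit_right
    return icit_fly_dir, cit_fly_dir
```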
• icit_x = min(icit_fly_dir * ratio_icit, icit_x), x ∈ {C, D, E, F}    (17)
• x refers to any one of the four fisheye cameras C, D, E and F on the drone; icit_x refers to the ideal exposure time of fisheye camera x; icit_fly_dir refers to the ideal exposure time in the flying direction of the drone; ratio_icit refers to a parameter set according to experience.
• Similarly, x refers to any one of the four fisheye cameras C, D, E and F on the drone; cit_x refers to the exposure time of fisheye camera x; cit_fly_dir refers to the exposure time in the flying direction of the drone; ratio_cit_up and ratio_cit_low refer to two parameters set according to experience, where the value of ratio_cit_up is greater than that of ratio_cit_low.
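Constraint (17), together with a plausible two-sided bound on the exposure time using ratio_cit_up and ratio_cit_low (the exact cit formula is elided in the source, so that part is an assumption, as are the default ratio values), can be sketched as:

```python
def constrain_exposure(icit_x, cit_x, icit_fly_dir, cit_fly_dir,
                       ratio_icit=2.0, ratio_cit_up=2.0, ratio_cit_low=0.5):
    """Restrict a camera's exposure parameters relative to the
    flying-direction values.

    Equation (17) caps the ideal exposure time; the exposure time is
    assumed to be clamped between the low and high multiples of the
    flying-direction exposure time."""
    icit_x = min(icit_fly_dir * ratio_icit, icit_x)
    cit_x = min(cit_fly_dir * ratio_cit_up,
                max(cit_fly_dir * ratio_cit_low, cit_x))
    return icit_x, cit_x
```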
  • the exposure parameters are determined in consideration of the moving direction.
• Step S202 and/or step S203 may also be implemented in other ways, without introducing the moving direction.
• Under the condition that the fields of view of the sensors are coupled to each other and it is impossible to look around in four directions at the same time, the sensors can be controlled to perform time-division multiplexing, focusing on different field-of-view ranges under different time slices, so as to ensure that the images in different directions acquired by the sensors all meet the quality requirements. The movable platform can then use high-quality images for depth perception, and controlling its movement based on the obtained depth information is safer.
  • Step S401 Determine a first exposure parameter of the third sensor based on the first brightness information; determine a second exposure parameter of the third sensor based on the second brightness information.
  • Step S402 Control the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter.
• By controlling the exposure parameter of the third sensor of the movable platform to switch between the first exposure parameter and the second exposure parameter, even when the brightness information in the first direction and the second direction differ greatly, the brightness information in the two directions can be obtained independently by the third sensor, without being affected by the ambient brightness in the other direction. Therefore, the movable platform can obtain accurate depth information based on the above scheme, and the movable platform can move safely.
• This solution does not need a sensor capable of sensing a large dynamic range, so the cost is low; at the same time, it solves the problem of being unable to look around in four directions simultaneously, caused by the coupling of sensor fields of view when the environment has a large dynamic range; and it guarantees the image quality of the scenery in all directions acquired by the movable platform's sensors, enabling the normal operation of perception algorithms such as omnidirectional obstacle avoidance.
• There are multiple ways to control the third sensor to switch between the first exposure parameter and the second exposure parameter; they can be configured as needed in practical applications, which is not limited in this embodiment.
  • the number of frames whose exposure parameter of the third sensor is the first exposure parameter may be the same as or different from the number of frames whose exposure parameter of the third sensor is the second exposure parameter.
• The influence of the moving direction of the movable platform can also be added; for example, when the movable platform moves forward, the number of frames for which the exposure parameter of the third sensor is the first exposure parameter is increased, so that it is greater than the number of frames for which the exposure parameter of the third sensor is the second exposure parameter.
  • the time when the third sensor collects a round of images of the first partial field of view and images of the second partial field of view is set as an image collection period of the third sensor.
  • the exposure parameter of the third sensor is controlled to switch between the first exposure parameter and the second exposure parameter.
• The preset condition that the image acquisition period of the third sensor satisfies may be dividing the image acquisition period into two time slices: the exposure parameter of the third sensor under the first time slice is configured as the first exposure parameter, which is used to control the third sensor to acquire the image of the first partial field of view; the exposure parameter of the third sensor under the second time slice is configured as the second exposure parameter, which is used to control the third sensor to acquire the image of the second partial field of view.
• If the current image acquisition period belongs to the first time slice, the third sensor is controlled to acquire the image of the scene in the first direction under the first exposure parameter; if the current image acquisition period belongs to the second time slice, the third sensor is controlled to acquire the image of the scene in the second direction under the second exposure parameter.
  • the durations of the first time slice and the second time slice of the image acquisition cycle of the third sensor are both equal to the duration of acquiring one frame of image by the third sensor.
  • one of the image sequences acquired by the third sensor in the first time slice and the image sequence acquired in the second time slice is an odd-numbered frame sequence, and the other is an even-numbered frame sequence.
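The odd/even alternation can be sketched as follows (which parity maps to which field of view is an assumption):

```python
def exposure_for_frame(frame_idx, first_exposure, second_exposure):
    """Time-division multiplexing sketch: odd-numbered frames use the
    first exposure parameter (first partial field of view), even-numbered
    frames the second, so each acquisition period covers both views."""
    return first_exposure if frame_idx % 2 == 1 else second_exposure
```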
  • each sensor of the movable platform corresponds to a physical channel and executes the same set of automatic exposure algorithms.
• The sensors on the movable platform are configured with different exposure parameters under different time slices, which can be understood as dividing each sensor on the movable platform into multiple virtual sensors.
  • each physical channel also becomes multiple virtual channels, and the automatic exposure algorithms executed by each virtual channel are independent of each other.
  • a physical channel corresponding to the third sensor can be divided into two virtual channels according to the two time slices divided by the third sensor; Images are acquired independently between the channels, as shown in Figure 5.
  • the steps of acquiring an image based on the above method include:
  • Step S501 Calculate the number of the virtual channel to which each frame of image in the image sequence acquired by the third sensor belongs.
  • Step S502 Set the exposure parameter of the third sensor as the exposure parameter corresponding to the virtual channel number.
  • Step S503 The movable platform acquires an image based on the exposure parameters set by the third sensor.
• The number of the virtual channel to which the current frame belongs is calculated, as well as the number of the virtual channel to which the next frame will belong.
  • all image sequences on the movable platform share and synchronize an image frame number sequence, so that the frame number sequence of the image sequence acquired by the third sensor is the same as the frame number sequence of the image sequence output by the third sensor.
  • the third sensor of the movable platform executes the automatic exposure algorithm and sets the timing diagram of the register, as shown in FIG. 6 .
  • the flow of the method includes:
• vchn_aec_calc refers to the virtual channel number to which the frame number of the current image belongs; frame_idx refers to the frame number of the current image; vchn_num refers to the number of virtual channels.
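The source elides the actual formula relating vchn_aec_calc, frame_idx and vchn_num; the named quantities suggest a simple modulo, which is assumed here:

```python
def virtual_channel(frame_idx, vchn_num=2):
    """Hypothetical reconstruction of the virtual-channel assignment:
    the frame number modulo the number of virtual channels."""
    return frame_idx % vchn_num
```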
  • the step of dividing the image sequence of the third sensor includes: obtaining the image sequence output by the third sensor, each image in the image sequence corresponds to an image frame number; obtaining the corresponding relationship between the image frame number and the time slice; using the image frame number and the corresponding relationship , acquire the first image acquired under the first exposure parameter from the image sequence.
  • the movable platform includes the following steps:
  • Step S702 Calculate the number of the virtual channel to which each frame of image in the image sequence output by the third sensor belongs.
• Step S703 Input the images in the image sequence whose virtual channel number matches the one corresponding to the algorithm into that algorithm.
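Steps S702 and S703 can be sketched as a demultiplexing pass over the output sequence (the function name is assumed):

```python
def split_by_channel(frames, vchn_num=2):
    """Group an output image sequence by the virtual channel its frame
    number maps to, so each perception algorithm only consumes images
    taken under its own exposure parameter."""
    channels = {c: [] for c in range(vchn_num)}
    for idx, frame in enumerate(frames):
        channels[idx % vchn_num].append(frame)
    return channels
```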
• This guarantees functions such as obstacle avoidance and tracking: when the movable platform is in a maneuvering state, such as sudden braking or a sudden change of direction, this application can well ensure that the exposure of each sensor is appropriate; when the movable platform is in a stable state, such as hovering, unstable critical states such as exposure jumps will not occur.
  • this specification also provides some other embodiments of the control method of the mobile platform.
• The first state of the movable platform is defined as: the first partial field of view of the third sensor is blocked and the second partial field of view is not blocked, as shown in FIG. 8. The second state of the movable platform is defined as: the first partial field of view of the third sensor is not blocked and the second partial field of view is not blocked, as shown in FIG. 9.
• Black tape can be used to cover part of the sensor lens area, or other obstructions such as baffles can block the partial field of view of the sensor.
• If the solution in the related art is adopted while the first partial field of view of the movable platform is blocked, the automatic exposure algorithm of the sensor in the movable platform will inevitably behave abnormally, resulting in the failure of the obstacle avoidance function of the movable platform.
  • identifying the scene includes identifying depth information and texture features of the scene, which is not limited in this application.
  • the recognition result of the scene in the second direction based on the image collected by the third sensor is the same as the recognition result of the scene in the second direction when the movable platform moves in the second direction.
• The recognition results of the scene in the second direction obtained based on the image collected by the third sensor are inconsistent between the two cases. For example, when the movable platform moves toward the first direction in the first state, the recognition result of the scenery in the second direction is unstable; when it moves toward the second direction, a correct recognition result can be obtained.
• The correct identification of the scene by the movable platform includes any of the following: outputting information of the recognized scene in the user interface; the movable platform successfully avoiding obstacles; or the correct recognition rate of the scene being higher than the preset first threshold. It can be understood that other expressions also fall within the protection scope of the present application, which is not limited herein.
  • the first state is defined as the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked, as shown in FIG. 8 .
  • the device includes a processor, a memory, and a computer program stored on the memory that can be executed by the processor.
  • the processor executes the computer program, the following methods are implemented:
  • the movable platform is controlled to move in space according to the depth information.
  • the depth information of the scene in the second direction obtained based on the images respectively collected by the third sensor is consistent
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked;
  • the second state includes: the first partial field of view of the third sensor is not blocked, and the second partial field of view is not blocked.
  • the light intensity reflected by the scene in the second partial field of view is higher than the maximum light intensity sensed by the third sensor.
  • the method further includes: acquiring the moving direction of the movable platform;
  • the first exposure parameter of the third sensor is determined based at least on the first brightness information of the third sensor and the moving direction.
  • the third sensor when the third sensor is in the first state, when the movable platform moves toward the first direction, the depth of the scene in the second direction obtained based on the image collected by the third sensor The information is inconsistent with the depth information of the scene in the second direction obtained based on the image collected by the third sensor when the movable platform moves in the second direction;
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked.
  • the first exposure parameter is determined based on the first brightness information and the second brightness information; wherein, the first brightness information The weight of is greater than the weight of the second brightness information.
  • the first exposure parameters include exposure time and analog gain
  • the determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
  • An exposure time and an analog gain of the third sensor are calculated by using an automatic exposure algorithm at least according to the first brightness information.
  • the first exposure parameter of the third sensor is adjusted in the following manner:
• the first exposure parameter of the first sensor and the first exposure parameter of the third sensor determine the value range of the exposure parameter of the third sensor, and the first exposure parameter of the third sensor is controlled to be within the value range of the exposure parameter.
  • the weight of the first exposure parameter of the third sensor is greater than the weight of the first exposure parameter of the first sensor.
  • the adjusting the first exposure parameter of the third sensor based on the value range of the exposure parameter includes:
  • the exposure parameters also include: analog gain and digital gain;
  • the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameters or the update frequency of the weight of the brightness information is not limited.
  • the third sensor when the third sensor is in the first state, when the movable platform moves toward the first direction, the depth of the scene in the second direction obtained based on the image collected by the third sensor The information is consistent with the depth information of the scene in the second direction obtained based on the image collected by the third sensor when the movable platform moves in the second direction;
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked.
  • controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter is performed when the image acquisition cycle of the third sensor satisfies a preset condition .
  • controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter includes:
  • the duration of the time slice is the duration of one frame of image collected by the third sensor.
  • the method also includes:
  • the first image collected under the first exposure parameter is acquired from the image sequence.
  • the first sensor, the second sensor and the third sensor are fisheye cameras.
  • the movable platform includes a drone.
• The embodiment of the present application also provides a movable platform 1100, which includes a first sensor 1101, a second sensor 1102 and a third sensor 1103;
  • the first sensor 1101 and the third sensor 1103 have overlapping first partial fields of view;
  • the second sensor 1102 and the third sensor 1103 have overlapping second partial fields of view
  • the first partial field of view is used to observe the scene in the first direction of the movable platform 1100;
  • the second partial field of view is used to observe the scene in the second direction of the movable platform 1100;
  • the movable platform also includes a power system 1106 for driving the movable platform to move in space.
  • the movable platform is controlled to move in space according to the depth information.
  • the depth information of the scene in the second direction obtained based on the images respectively collected by the third sensor is consistent
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked;
  • the second state includes: the first partial field of view of the third sensor is not blocked, and the second partial field of view is not blocked.
  • the light intensity reflected by the scene in the second partial field of view is higher than the maximum light intensity sensed by the third sensor.
  • the processor further executes: acquiring a moving direction of the movable platform;
  • the first exposure parameter of the third sensor is determined based at least on the first brightness information of the third sensor and the moving direction.
  • the third sensor when the third sensor is in the first state, when the movable platform moves toward the first direction, the depth of the scene in the second direction obtained based on the image collected by the third sensor The information is inconsistent with the depth information of the scene in the second direction obtained based on the image collected by the third sensor when the movable platform moves in the second direction;
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked.
  • the determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
  • the first exposure parameter is determined based on the first brightness information and the second brightness information; wherein, the first brightness information The weight of is greater than the weight of the second brightness information.
  • the first exposure parameters include exposure time and analog gain
  • the determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
  • An exposure time and an analog gain of the third sensor are calculated by using an automatic exposure algorithm at least according to the first brightness information.
  • the method further includes:
  • the first exposure parameter of the third sensor is adjusted.
  • the processor adjusts the first exposure parameter of the third sensor by:
  • the adjusting the first exposure parameter of the third sensor includes:
• the first exposure parameter of the first sensor and the first exposure parameter of the third sensor determine the value range of the exposure parameter of the third sensor, and the first exposure parameter of the third sensor is controlled to be within the value range of the exposure parameter.
  • the value range of the exposure parameter is determined based on the first exposure parameter of the first sensor and its weight, and the first exposure parameter of the third sensor and its weight;
  • the weight of the first exposure parameter of the third sensor is greater than the weight of the first exposure parameter of the first sensor.
  • the exposure parameters include ideal exposure time and exposure time
  • the adjusting the first exposure parameter of the third sensor based on the value range of the exposure parameter includes:
  • the exposure parameters also include: analog gain and digital gain;
  • the adjusting the exposure parameter of the third sensor based on the value range of the exposure parameter further includes:
  • the processor further executes: acquiring a change in the moving speed of the movable platform; wherein, the change in the moving speed is positively correlated with any of the following:
  • the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameters or the update frequency of the weight of the brightness information is not limited.
  • the processor further executes: controlling an exposure parameter of a third sensor to switch between the first exposure parameter and the second exposure parameter.
  • when the third sensor is in the first state and the movable platform moves in the first direction, the depth information of the scene in the second direction obtained from the images captured by the third sensor is consistent with the depth information of the scene in the second direction obtained from the images captured by the third sensor when the movable platform moves in the second direction;
  • the first state includes: the first partial field of view of the third sensor is blocked, and the second partial field of view is not blocked.
  • switching the exposure parameter of the third sensor between the first exposure parameter and the second exposure parameter is performed when the image acquisition cycle of the third sensor satisfies a preset condition.
  • controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter includes:
  • the duration of the time slice is the duration of one frame of image collected by the third sensor.
  • the processor also performs:
  • the first image collected under the first exposure parameter is acquired from the image sequence.
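The time-sliced switching and demultiplexing described in the bullets above can be sketched as follows. This is a minimal illustration: the strict frame-by-frame alternation and the function names are assumptions, not the patent's implementation:

```python
def exposure_schedule(n_frames, first_exposure, second_exposure):
    """Alternate between the two exposure parameters, one frame per time
    slice, since a time slice lasts one frame of the third sensor."""
    params = (first_exposure, second_exposure)
    return [params[i % 2] for i in range(n_frames)]

def frames_under_first(frames, schedule, first_exposure):
    """Demultiplex the interleaved image sequence: keep only the frames
    that were captured under the first exposure parameter."""
    return [f for f, p in zip(frames, schedule) if p == first_exposure]
```

The frames kept by `frames_under_first` would then be paired with the first sensor's images, while the discarded alternate frames would be paired with the second sensor's images.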
  • the first sensor, the second sensor and the third sensor are fisheye cameras.
  • the movable platform includes a drone.
  • the movable platform may also include other hardware as required.
  • the device may include: a processor, memory, input/output interfaces, communication interfaces, and a bus.
  • the processor, the memory, the input/output interface and the communication interface are connected to each other within the device through the bus.
  • the processor can be implemented by a general-purpose CPU (central processing unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute related programs so as to realize the technical solutions provided by the embodiments of this specification.
  • the processor can also include a graphics card, and the graphics card can be an Nvidia titan X graphics card or a 1080Ti graphics card.
  • the memory can be implemented in the form of ROM (Read Only Memory, read-only memory), RAM (Random Access Memory, random access memory), static storage device, and dynamic storage device.
  • the memory 1105 can store operating systems and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1105 and invoked by the processor for execution.
  • the input/output interface is used to connect the input/output module to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, mouse, touch screen, microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface is used to connect the communication module to realize the communication interaction between the device and other devices.
  • the communication module can communicate by wired means (such as USB or network cable) or by wireless means (such as a mobile network, Wi-Fi, or Bluetooth).
  • a bus includes a pathway that carries information between various components of a device such as processors, memory, input/output interfaces, and communication interfaces.
  • the above device only shows a processor, a memory, an input/output interface, a communication interface, and a bus, in a specific implementation process, the device may also include other components necessary for normal operation.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
  • the embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the steps of the method for controlling the mobile platform described in any of the foregoing embodiments are implemented.
  • computer-readable media, including both permanent and non-permanent, removable and non-removable media, can store information by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.
  • a typical implementing device is a computer, which may take the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, desktop, tablet, wearable device, or any combination of these devices.
  • each embodiment in this specification is described in a progressive manner; the same or similar parts of the embodiments can be referred to mutually, and each embodiment focuses on its differences from the others.
  • the description is relatively brief; for relevant parts, refer to the corresponding description of the method embodiment.
  • the device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution, which those skilled in the art can understand and implement without creative effort.

Abstract

A movable platform control method and apparatus, a movable platform, and a storage medium, which can control a movable platform to move safely in a space. In the movable platform, a first sensor and a second sensor each have a local field of view that overlaps with that of a third sensor, each local field of view being used to observe the scene in a corresponding direction. The control method comprises: acquiring first brightness information of a first local field of view of the third sensor and second brightness information of a second local field of view of the third sensor (S201); determining a first exposure parameter of the third sensor at least on the basis of the first brightness information, controlling the third sensor to collect a first image under the first exposure parameter, and acquiring, on the basis of the first image and an image collected by the first sensor, depth information of the scene in a first direction (S202); determining a second exposure parameter of the third sensor at least on the basis of the second brightness information, controlling the third sensor to collect a second image under the second exposure parameter, and acquiring, on the basis of the second image and an image collected by the second sensor, depth information of the scene in a second direction (S203); and controlling, according to the depth information, the movable platform to move in the space (S204).

Description

Control method and apparatus for a movable platform, movable platform, and storage medium
Technical Field
The present application relates to the technical field of movable platforms, and in particular to a control method and apparatus for a movable platform, a movable platform, and a computer-readable storage medium.
Background
As technology develops, movable platforms such as drones, autonomous vehicles, unmanned logistics vehicles, and automatic cleaning equipment are increasingly put into use. Typically, a movable platform carries a variety of sensors that collect data about the surrounding environment, and the platform controls its own movement based on that data. How to control a movable platform so that it moves safely through space has long been a technical concern in this field.
Summary of the Invention
To solve the above technical problem of controlling the safe movement of a movable platform, the present application provides a control method and apparatus for a movable platform, a movable platform, and a computer-readable storage medium.
In a first aspect, an embodiment of the present application provides a control method for a movable platform.
The movable platform includes a first sensor, a second sensor, and a third sensor; the first sensor and the third sensor have an overlapping first partial field of view; the second sensor and the third sensor have an overlapping second partial field of view; the first partial field of view is used to observe the scene in a first direction of the movable platform; and the second partial field of view is used to observe the scene in a second direction of the movable platform.
The method includes:
acquiring first brightness information of the first partial field of view of the third sensor and second brightness information of the second partial field of view;
determining a first exposure parameter of the third sensor based at least on the first brightness information; controlling the third sensor to capture a first image under the first exposure parameter; and acquiring depth information of the scene in the first direction based on the first image and an image captured by the first sensor;
determining a second exposure parameter of the third sensor based at least on the second brightness information; controlling the third sensor to capture a second image under the second exposure parameter; acquiring depth information of the scene in the second direction based on the second image and an image captured by the second sensor; and controlling the movable platform to move in space according to the depth information.
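The steps of the claimed method can be illustrated as a single control cycle. The toy auto-exposure rule, the injected stubs, and all names below are illustrative assumptions rather than the patent's actual algorithms:

```python
def auto_exposure(brightness, target=0.5):
    """Toy automatic-exposure rule: exposure inversely proportional to the
    measured brightness (a placeholder for the AE algorithm in the text)."""
    return target / max(brightness, 1e-6)

def control_cycle(first_brightness, second_brightness,
                  capture, stereo_depth, first_ref, second_ref):
    """One cycle of the method: determine a per-direction exposure
    parameter, capture an image under it, and estimate depth against the
    matching reference sensor's image. `capture` and `stereo_depth` are
    injected stubs standing in for the camera and the stereo matcher."""
    e1 = auto_exposure(first_brightness)           # first exposure parameter
    depth_first = stereo_depth(capture(e1), first_ref)
    e2 = auto_exposure(second_brightness)          # second exposure parameter
    depth_second = stereo_depth(capture(e2), second_ref)
    return depth_first, depth_second
```

The key point the sketch captures is that the two directions are metered and exposed independently before each stereo-depth step, rather than sharing one global exposure.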
In a second aspect, an embodiment of the present application provides a control apparatus for a movable platform.
The movable platform includes a first sensor, a second sensor, and a third sensor; the first sensor and the third sensor have an overlapping first partial field of view; the second sensor and the third sensor have an overlapping second partial field of view; the first partial field of view is used to observe the scene in a first direction of the movable platform; and the second partial field of view is used to observe the scene in a second direction of the movable platform.
The apparatus includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor, when executing the computer program, implements the control method for a movable platform described in the first aspect.
In a third aspect, an embodiment of the present application provides a movable platform.
The movable platform includes a first sensor, a second sensor, and a third sensor; the first sensor and the third sensor have an overlapping first partial field of view; the second sensor and the third sensor have an overlapping second partial field of view; the first partial field of view is used to observe the scene in a first direction of the movable platform; and the second partial field of view is used to observe the scene in a second direction of the movable platform.
The movable platform includes a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the processor, when executing the computer program, implements the control method for a movable platform described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having computer instructions stored thereon, which, when executed, implement the steps of the control method for a movable platform described in the first aspect.
In the solutions of the embodiments of the present application, the third sensor can form a binocular vision system with the first sensor and with the second sensor, while the first and second partial fields of view observe different directions. Therefore, the first brightness information can be acquired for the first partial field of view alone and the first exposure parameter determined from it, so that the first image captured by the third sensor under the first exposure parameter matches the brightness of the first partial field of view; because the first image is of high quality, depth information of the scene in the first direction can be obtained from the first image and the image captured by the first sensor. Likewise, since the second brightness information can be acquired for the second partial field of view alone and the second exposure parameter determined from it, the second image captured by the third sensor under the second exposure parameter matches the brightness of the second partial field of view, and depth information of the scene in the second direction can be obtained from the second image and the image captured by the second sensor. Furthermore, since the movable platform can obtain reliable depth information of the scene in both the first and second directions through the above processing, it can safely control its own movement in space.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1A is a schematic diagram of the mounting structure of sensors on a movable platform and the fields of view of the sensors according to an embodiment of the present application;
Fig. 1B is a schematic diagram of the fields of view of two sensors on a movable platform according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a control method for a movable platform according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of some steps in a control method for a movable platform according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of some steps in another control method for a movable platform according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a method for acquiring images with a sensor on a movable platform according to an embodiment of the present application;
Fig. 6 is a schematic timing diagram in a control method for a movable platform according to an embodiment of the present application;
Fig. 7 is a schematic flowchart of a method for outputting images from a sensor on a movable platform according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a first state in a control method for a movable platform according to an embodiment of the present application;
Fig. 9 is a schematic diagram of a second state in a control method for a movable platform according to an embodiment of the present application;
Fig. 10 is a schematic block diagram of a control apparatus for a movable platform according to an embodiment of the present application;
Fig. 11 is a schematic block diagram of the hardware structure of a device in a movable platform according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application shall fall within the protection scope of this application.
It should be noted that in other embodiments the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, a method may include more or fewer steps than those described here. In addition, a single step described in this specification may be decomposed into multiple steps in other embodiments, and multiple steps described here may be combined into a single step in other embodiments.
To control a movable platform so that it moves safely through space, the platform can observe the scene in its surroundings through onboard sensors, including lidar, millimeter-wave radar, vision sensors, infrared sensors, TOF (time-of-flight) sensors, and so on.
Taking vision sensors as an example, a binocular vision system can be used in some scenarios. A binocular vision system uses two cameras as a stereo pair: based on the parallax principle, the imaging devices acquire two images of a measured object from different positions, and the three-dimensional geometric information of the object is obtained by computing the positional disparity between corresponding points in the two images. That is, two cameras form a stereo pair to perceive the depth of the scene in a certain direction, and the perceived depth information supports the platform's obstacle-avoidance function so that the platform can move safely. Because the field of view of an ordinary camera is limited, using binocular vision to obtain depth information in four surrounding directions requires a stereo pair per direction, i.e. two cameras per direction with overlapping fields of view, each pair independently controlled; therefore, a movable platform in the related art usually needs at least eight independently controllable cameras.
In some scenarios, movable platforms such as drones or automatic cleaning equipment must be small and inexpensive, so the inventors of the present application sought to meet these requirements by reducing the number of vision sensors. To this end, in the solutions of this embodiment, the movable platform carries sensors with large fields of view, so that one sensor's field of view can overlap with those of at least two other sensors, i.e. one sensor can cover at least two directions. One sensor can therefore form a binocular vision system with each of at least two other sensors, reducing the number of vision sensors carried on the movable platform.
Fig. 1A is a schematic diagram of the sensors carried by a movable platform according to an exemplary embodiment of the present application. In the embodiment shown in Fig. 1A, by way of example, the platform body of the movable platform is rectangular, four sensors are mounted at its four corners, each sensor has a 180° field of view, and each sensor forms a binocular vision system with each of two other sensors. It will be appreciated that in practical applications the movable platform can take many forms, and the field of view, number, and mounting positions of its sensors can be implemented in many ways; one sensor can also form multiple binocular vision systems with more than two other sensors. For example, the design can take into account the configuration of the movable platform, the field of view of the sensors, the directions the platform needs to observe, and other factors, which this embodiment does not limit.
As shown in Fig. 1A, the four sensors carried on the movable platform are sensor C, sensor D, sensor E, and sensor F. Taking sensor C as an example, it has a 180° field of view consisting of region C1 and region C2 together. The fields of view of the other three sensors are shown in Fig. 1A in the same way.
Sensor C and sensor D form a binocular vision system; the region where C1 (in sensor C's field of view) intersects D1 (in sensor D's field of view), i.e. their overlapping field of view, is indicated by hatching in Fig. 1A.
Sensor C and sensor F also form a binocular vision system; the region where C2 (in sensor C's field of view) intersects F2 (in sensor F's field of view), i.e. their overlapping field of view, is likewise indicated by hatching in Fig. 1A.
It can thus be seen that sensor C can form a binocular vision system with sensor D and with sensor F: the parts of the images captured by sensors C and D that face the same direction can be used for binocular vision processing, and likewise for the parts of the images captured by sensors C and F that face the same direction. In other words, sensor C covers two directions, so the image it produces can be divided into two parts (the specific division can be chosen as needed; in the example of Fig. 1A it can be an even split into left and right halves), one part forming a binocular image with part of the image captured by sensor D, and the other part forming a binocular image with part of the image captured by sensor F.
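The even left/right split mentioned for the example of Fig. 1A can be sketched as follows. This is a toy illustration on a nested-list "image"; the midpoint split and the function name are assumptions about one possible division, not the only one the text allows:

```python
def split_view(image):
    """Split a wide-field image into left and right halves, one half per
    observed direction (the even-split case described for Fig. 1A).
    `image` is a row-major nested list of pixel values."""
    half = len(image[0]) // 2
    left = [row[:half] for row in image]    # paired with sensor D's image
    right = [row[half:] for row in image]   # paired with sensor F's image
    return left, right
```

Each half would then be matched against the overlapping portion of the corresponding neighbor sensor's image for stereo processing.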
In this way, by designing the movable platform to carry sensors with large fields of view, so that one sensor shares an overlapping field of view with each of at least two other sensors, the number and cost of the sensors carried on the movable platform can be significantly reduced.
Although the above design saves cost and power consumption on the movable platform, other technical problems remain to be solved. Specifically, one sensor forms binocular vision with at least two other sensors, i.e. a single image output by the sensor may be combined with images captured by different sensors for binocular vision processing, and how the image is captured affects the subsequent binocular vision processing and, in turn, the safe movement of the movable platform.
For example, the sensor must consider how to set appropriate exposure parameters when capturing images. The related art uses an automatic exposure (AE) algorithm to compute and adjust exposure parameters automatically, so that the midtones of the photographed scene match those of the resulting image as closely as possible. Inappropriate exposure leads to overexposed or underexposed images, and both cause loss of image information; since the movable platform relies on the captured images to obtain the depth of the scene in view, such loss prevents it from obtaining rich depth information and may cause obstacle avoidance to fail.
Generally speaking, the exposure parameters include the ideal exposure time, the exposure time, the analog gain, and the digital gain, which are related as shown in the following formula:
icit = cit * a_gain * d_gain      (1)
where icit is the ideal exposure time, cit is the exposure time, a_gain is the analog gain, and d_gain is the digital gain.
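Formula (1) can be exercised directly. The gain-allocation helper below reflects a common AE strategy (prefer analog gain up to a hardware cap, put the remainder in digital gain), stated here as an assumption rather than the patent's own procedure:

```python
def ideal_exposure_time(cit, a_gain, d_gain):
    """icit = cit * a_gain * d_gain, i.e. formula (1) in the description."""
    return cit * a_gain * d_gain

def solve_gains(icit, cit, a_gain_max):
    """Given a target ideal exposure time and a fixed sensor exposure time,
    split the remaining factor between analog gain (preferred, capped at
    a_gain_max) and digital gain. An assumed allocation, not the patent's."""
    total_gain = icit / cit
    a_gain = min(total_gain, a_gain_max)
    d_gain = total_gain / a_gain
    return a_gain, d_gain
```

For example, a target icit of 30 with cit fixed at 10 and an analog-gain cap of 2.0 yields a_gain = 2.0 and d_gain = 1.5, which recompose to the same icit via formula (1).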
In some scenarios, the sensor can determine appropriate exposure parameters by perceiving the brightness information of the environment in its field of view and applying the above formula. How accurately the sensor perceives that brightness information therefore affects how accurately the exposure parameters can be determined.
Light in the environment, reflected by the scene, enters the sensor's photosensitive element through its lens; the element senses the light and forms an image, and the brightness information of the environment can be determined from the brightness information of the image. Light intensity characterizes brightness, and different photosensitive elements perceive light intensity with different capabilities; this capability is the sensor's luminance dynamic range, i.e. the range over which luminance can vary between its maximum and minimum values. In some scenes, the luminance dynamic range of the actual environment where the movable platform is located may be very large, while a sensor's luminance dynamic range is always limited; it is impossible to image the brightest and darkest objects in nature clearly at the same time. Moreover, for cost reasons, carrying sensors with a small luminance dynamic range on a movable platform raises many problems in practical applications.
For example, because the sensor has a large field of view, it can capture images over a wide range; in some scenarios, the environmental conditions may therefore vary greatly across the observed range, e.g. the dynamic range of light intensity may be large. Still taking Fig. 1A as an example, suppose the direction between sensors C and D faces strong light, such as directly toward the sun, while the direction between sensors C and F faces a forest, whose ambient brightness is clearly much lower. Which exposure parameters sensor C is controlled to use will then affect the quality of the captured image and the subsequent binocular vision processing.
In the related art, the field of view of the sensors carried on a movable platform is relatively small, each sensor forms a binocular vision system with only one other sensor, and at least eight cameras are carried on the movable platform; therefore, because the field of view of each sensor is small, the related art basically does not face the problem that the environmental information within the field of view varies greatly as a result of a large field of view.
In addition, because the field of view of the sensor is small and the environmental information within the field of view varies little, the automatic exposure algorithm usually adopts a globally averaged exposure processing approach. The input of the algorithm is usually the average brightness information of the image; that is, the sensor uses the image information of the captured image to compute average brightness statistics, which are taken as an estimate of the average brightness of the environment within the field of view, and the exposure parameters are determined from that average brightness. In the example scenario above, the field of view of sensor C contains both strong light and low brightness. If the globally averaged exposure approach is adopted and the exposure parameters are determined from the average brightness computed over the whole captured image, the quality of the captured image will be poor: the image may match neither the strong-light environment nor the low-brightness environment within the field of view. However, a part of the image captured by sensor C needs to be segmented out and subjected to binocular vision processing together with the image captured by sensor D; if this segmented part does not match the brightness of the actual environment, the image will be underexposed or overexposed, and reliable scene information cannot be obtained. Similarly, another part of the image captured by sensor C needs to be segmented out and subjected to binocular vision processing together with the image captured by sensor F, and likewise reliable scene information cannot be obtained. Reliable scene information is the key to controlling the safe movement of the movable platform, so the movable platform will ultimately face a considerable safety risk while moving.
Another solution is to improve the quality of the sensor and increase the dynamic range of ambient brightness that the sensor can perceive; however, sensors capable of perceiving a large dynamic range are currently expensive, so this solution increases the manufacturing cost of the movable platform.
Therefore, an embodiment of the present application proposes a control solution for a movable platform to solve the above problems. The movable platform in this embodiment may refer to any device capable of moving. The movable platform may include, but is not limited to, land vehicles, water vehicles, air vehicles, and other types of motor vehicles. As an example, the movable platform may include a passenger vehicle and/or an unmanned aerial vehicle (UAV), and the movement of the movable platform may include flying.
The movable platform carries sensors. A sensor in this embodiment refers to a visual sensor capable of capturing images; the visual sensor has a certain field of view, which can overlap with the fields of view of at least two other sensors, and may be, for example, a wide-angle camera or a fisheye camera. In practical applications, the number of sensors carried on the movable platform is not limited to three; moreover, one sensor may have overlapping fields of view with at least two other sensors, and the configuration may be set as required in practical applications.
In the subsequent embodiments, three sensors are taken as an example to describe the specific process of applying the control method for a movable platform of the embodiments of the present application, in the case where one sensor has overlapping fields of view with two other sensors respectively and the overlapping fields of view face different directions. It can be understood that, when the movable platform carries more than three sensors, the solution provided by this embodiment can be applied as long as three of the sensors are designed so that one sensor has overlapping fields of view with two other sensors respectively and the overlapping fields of view face different directions. For example, in the embodiment shown in FIG. 1A, the movable platform carries four sensors, and any three of the four sensors conform to the above design, so the solution of this embodiment can be applied to perform exposure processing on any one of the sensors to capture images.
In other scenarios, one sensor may have overlapping fields of view with more than two sensors; among the multiple overlapping fields of view, two may face different directions, or all of them may face different directions. It can be understood that the solution provided by this embodiment can still be applied in such cases.
As an example, the movable platform includes a first sensor, a second sensor, and a third sensor. Still taking FIG. 1A as an example, the first sensor, the second sensor, and the third sensor may be sensor D, sensor F, and sensor C, respectively.
The field of view of the first sensor overlaps with that of the third sensor; the intersection area of C1 and D1 shown in FIG. 1A is the overlapping field of view of the first sensor and the third sensor. The first sensor and the third sensor have an overlapping first partial field of view. In some examples, the first partial field of view may refer to the entire overlapping field of view of the first sensor and the third sensor; in other examples, the first partial field of view may refer to a part of that overlapping field of view. For example, considering that the larger the field of view of a camera, the greater the distortion at the edges of the captured image may be, the first partial field of view may be chosen as a part of the overlapping field of view of the first sensor and the third sensor, for example a central area with the edges removed, thereby reducing the influence of image distortion. FIG. 1B shows the field of view of sensor C and the field of view of sensor D in this embodiment; the field of view of sensor C and the field of view of sensor D overlap, and within the overlapping field of view there is a partial field of view, that is, the first sensor and the third sensor of this embodiment have an overlapping first partial field of view, and the first partial field of view is used to observe the scene in the first direction of the movable platform.
Similarly, the field of view of the second sensor overlaps with that of the third sensor; the intersection area of F2 and C2 shown in FIG. 1A is the overlapping field of view of the second sensor and the third sensor. The second sensor and the third sensor have an overlapping second partial field of view; the second partial field of view may refer to the entire overlapping field of view of the second sensor and the third sensor, or to a part of that overlapping field of view, and the second partial field of view is used to observe the scene in the second direction of the movable platform. The first direction and the second direction in this embodiment are expressed in a coordinate system fixed to the movable platform (the body frame), for example a three-dimensional orthogonal rectangular coordinate system fixed on the movable platform and following the right-hand rule, whose origin is located at the movable platform or at the center of mass of the movable platform.
As an example, the ranges of the first partial field of view and the second partial field of view may be different; there may be no overlap between the two, or, of course, there may be a partially overlapping range.
With the above design of the sensors carried on the movable platform, correspondingly, FIG. 2 is a flowchart of a control method for a movable platform according to an exemplary embodiment of the present application; the method may include the following steps:
Step S201: obtaining first brightness information of the first partial field of view and second brightness information of the second partial field of view of the third sensor.
Step S202: determining a first exposure parameter of the third sensor at least based on the first brightness information; controlling the third sensor to capture a first image under the first exposure parameter; and obtaining depth information of the scene in the first direction based on the first image and an image captured by the first sensor.
Step S203: determining a second exposure parameter of the third sensor at least based on the second brightness information; controlling the third sensor to capture a second image under the second exposure parameter; and obtaining depth information of the scene in the second direction based on the second image and an image captured by the second sensor.
Step S204: controlling the movable platform to move in space according to the depth information.
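The four steps above can be sketched as one control cycle. The following is a minimal illustration only: the brightness estimation, exposure determination, capture, stereo matching, and motion control are all hypothetical placeholder callables, not part of the disclosed implementation.

```python
def control_cycle(get_brightness, determine_exposure, capture, stereo_depth, move):
    # Step S201: brightness of the two overlapping partial fields of view.
    b1 = get_brightness("first_partial_fov")
    b2 = get_brightness("second_partial_fov")

    # Step S202: expose for the first partial field of view, capture the
    # first image, and obtain depth in the first direction.
    img1 = capture(determine_exposure(b1))
    depth1 = stereo_depth(img1, "image_from_first_sensor")

    # Step S203: likewise for the second partial field of view / direction.
    img2 = capture(determine_exposure(b2))
    depth2 = stereo_depth(img2, "image_from_second_sensor")

    # Step S204: move according to the obtained depth information.
    return move(depth1, depth2)
```

Steps S202 and S203 are written sequentially here only for readability; the embodiment does not fix their execution order.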
In this embodiment, the third sensor forms a binocular vision system with the first sensor and with the second sensor respectively, and the first partial field of view and the second partial field of view observe the different first and second directions respectively. Therefore, the first brightness information of the first partial field of view can be obtained for the first partial field of view alone, and the first exposure parameter can be determined on that basis, so that the first image captured by the third sensor under the first exposure parameter matches the brightness of the first partial field of view and is of high quality; the depth information of the scene in the first direction can thus be obtained based on the first image and the image captured by the first sensor. Similarly, since the second brightness information of the second partial field of view can be obtained for the second partial field of view alone and the second exposure parameter can be determined on that basis, the second image captured by the third sensor under the second exposure parameter matches the brightness of the second partial field of view and is of high quality, so that the depth information of the scene in the second direction can be obtained based on the second image and the image captured by the second sensor. Furthermore, since the movable platform can obtain reliable depth information of the scene in the first direction and of the scene in the second direction based on the above processing, it can safely control its own movement in space.
In practical applications, the brightness information can be obtained in various ways. As an example, one or more images may be captured by a sensor, and the brightness information of the environment may be estimated from the brightness information of the images. For the first brightness information of the first partial field of view in this embodiment, since the fields of view of both the first sensor and the third sensor cover it, the area corresponding to the first partial field of view can be determined from an image captured by the first sensor and/or the third sensor, and the first brightness information of the first partial field of view can be determined from statistics of the image brightness information of that area. Similarly, for the second brightness information of the second partial field of view, since the fields of view of both the second sensor and the third sensor cover it, the area corresponding to the second partial field of view can be determined from an image captured by the second sensor and/or the third sensor, and the second brightness information of the second partial field of view can be determined from statistics of the image brightness information of that area.
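As a concrete illustration of the statistics described above, the sketch below computes the mean brightness of the rectangular image area corresponding to a partial field of view; the grayscale image is represented as a plain list of pixel rows, and the region bounds are hypothetical.

```python
def region_mean_brightness(image, row_range, col_range):
    """Mean pixel value over a rectangular region of a grayscale image
    given as a list of rows of pixel values."""
    r0, r1 = row_range
    c0, c1 = col_range
    pixels = [px for row in image[r0:r1] for px in row[c0:c1]]
    return sum(pixels) / len(pixels)

# Toy image: left half bright (e.g. toward the sun), right half dark.
image = [[200, 200, 50, 50],
         [200, 200, 50, 50]]
bright = region_mean_brightness(image, (0, 2), (0, 2))  # mean of left area
dark = region_mean_brightness(image, (0, 2), (2, 4))    # mean of right area
```

In this toy example the two region means (200.0 and 50.0) would serve as the first and second brightness information respectively.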
In some examples, determining the first exposure parameter of the third sensor at least based on the first brightness information may mean determining the first exposure parameter of the third sensor based only on the first brightness information, or determining it in combination with other information as required, for example other environmental parameters within the first partial field of view, environmental parameters of areas of the field of view other than the first partial field of view, data collected by other sensors on the movable platform, or flight status information of the movable platform, and so on. Similarly, determining the second exposure parameter of the third sensor at least based on the second brightness information may mean determining the second exposure parameter of the third sensor based only on the second brightness information, or determining it in combination with other information as required, for example other environmental parameters within the second partial field of view, environmental parameters of areas of the field of view other than the second partial field of view, data collected by other sensors on the movable platform, or flight status information of the movable platform, and so on.
In practical applications, step S202 and step S203 are not necessarily performed in the order shown and described in the embodiment of FIG. 2. In addition, while controlling the movable platform to move in space, during some periods the movable platform may perform only step S202 and then control its own movement; during other periods the movable platform may perform only step S203 and then control its own movement; alternatively, the movable platform may control its own movement after performing both step S202 and step S203.
In practical applications, the movable platform may be configured with multiple schemes for setting the exposure parameters of the sensors, for example the aforementioned globally averaged exposure scheme, and the scheme of this embodiment may be one of them; therefore, the timing at which the scheme of this embodiment is executed can be determined as required. For example, when the movable platform is in a scene with a small dynamic range of ambient brightness, the movable platform may adopt another scheme for setting the exposure parameters; when the movable platform recognizes that it is in a scene with a large dynamic range of ambient brightness, it may trigger the execution of the scheme of this embodiment. For example, the movable platform may compare the image information of different areas in an image captured by a sensor and determine that the brightness information of the different areas differs greatly, for example that the difference between the first brightness information of the first partial field of view and the second brightness information of the second partial field of view satisfies a set condition, which may trigger the execution of the scheme of this embodiment. The set condition refers to a condition representing a large difference in image brightness information, which can be set as required, for example that the difference between the two brightness values is greater than a set brightness threshold, and so on.
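One possible form of such a set condition can be sketched as follows; the threshold value is purely illustrative and not taken from the disclosure.

```python
def needs_per_region_exposure(b1, b2, threshold=60.0):
    """Trigger per-region exposure when the brightness of the two
    partial fields of view differs by more than a threshold
    (illustrative condition; the threshold is an assumed value)."""
    return abs(b1 - b2) > threshold
```

A large difference (e.g. sun versus woods) would trigger the scheme of this embodiment, while a small difference could fall back to the globally averaged exposure scheme.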
As can be seen from the above embodiment, the scheme of this embodiment involves the exposure control of the third sensor. The scheme does not adopt the globally averaged exposure approach, but determines the exposure parameters based on the brightness information of a partial field of view; therefore, this embodiment is not affected by the brightness of other partial fields of view, and it can be ensured that the image information of the scene within that partial field of view is captured. In addition, since the sensor can be controlled to capture images under the first exposure parameter and the second exposure parameter respectively, the scheme of this embodiment can also significantly reduce the influence of a large dynamic range of light intensity across different areas of the field of view, and does not impose high requirements on the sensor's ability to perceive a large dynamic range. As a result, the movable platform can carry lower-cost sensors without having to select hardware with a large dynamic range, so that the image quality of the captured images can be ensured without adding much hardware cost.
As can be seen from the above embodiment, the movable platform determines two sets of exposure parameters for the sensor, according to the first brightness information of the first partial field of view and the second brightness information of the second partial field of view respectively, to capture different images. The above embodiment does not limit the timing of determining the exposure parameters, the timing of controlling the sensor to capture images, the timing of controlling the movement of the movable platform based on the captured images, the timing of executing each step, and so on. It can be understood that in practical applications the design can be made as required, for example considering one or more factors such as the type of the movable platform, the way the movable platform moves, and the application scenarios faced; these possibly different processing approaches all fall within the scope of the solution of the present application. Next, two embodiments are provided, for different scenarios and requirements, to describe the control solution for the movable platform of the present application.
In some examples, considering that the primary requirement of the movable platform is to move in a certain direction in space, an embodiment that takes the moving direction of the movable platform into account is provided. As an example, priority can be given to the image quality of the scene in the moving direction of the movable platform, and the exposure parameters of the movable platform can be set based on the scene in the moving direction, so as to preferentially guarantee the obstacle avoidance function of the movable platform in that moving direction.
As an example, when the movable platform moves forward, the image quality of the forward field of view is the most important; it determines whether functions such as obstacle avoidance and detouring of the movable platform can be realized normally. When moving to the left, the image quality of the leftward field of view is the most important. When moving toward the upper left at 45 degrees, both the leftward field of view and the forward field of view need to be taken into account; likewise, when moving toward the upper left at 30 degrees, both the leftward field of view and the forward field of view need to be taken into account. The same applies to other moving directions of the movable platform, which will not be repeated here.
Therefore, an exposure parameter can be determined at least based on the brightness information and the moving direction.
As an example, taking the determination of the first exposure parameter as an example, FIG. 3 is a flowchart of the movable platform determining the first exposure parameter according to an embodiment; the process may include the following steps:
Step S301: obtaining the moving direction of the movable platform.
Step S302: determining the first exposure parameter of the third sensor at least based on the first brightness information and the moving direction of the movable platform.
The moving direction of the movable platform can be obtained in various ways. As an example, the movable platform may carry various sensors, for example inertial sensors such as accelerometers and gyroscopes, through which the moving direction of the movable platform can be obtained.
As to how the first exposure parameter is determined based on the first brightness information and the moving direction, there may be multiple implementations in practical applications as required, which is not limited in this embodiment. For example, since the first brightness information corresponds to the first direction, the weight of the first brightness information in determining the first exposure parameter can be determined according to the relationship between the moving direction and the first direction; of course, the specific weight can be chosen flexibly as required.
For example, the weight of the first brightness information may be determined according to the relative orientation relationship between the moving direction of the movable platform and the first direction of the third sensor, where the weight reflects, for the current moving direction of the movable platform, the importance of the brightness information within the field of view in the first direction when the third sensor captures images.
As an example, the relative orientation relationship between the moving direction of the movable platform and the first direction of the third sensor can be divided into two cases: coinciding with the first direction, and deviating from the first direction. In this embodiment, priority is given to the moving direction of the movable platform: of the first direction and the second direction, the closer a direction is to the moving direction of the movable platform, the more important the scene in that direction is, and the image quality in that direction can be prioritized when capturing images. For example, if the moving direction of the movable platform coincides with the first direction, the first exposure parameter of the third sensor can be determined based only on the first brightness information, i.e., the other directions within the field of view of the third sensor need not be considered. In the other case, if the moving direction of the movable platform deviates from the first direction, i.e., does not fall entirely within the first direction, the first exposure parameter can be determined by also taking into account the second brightness information of the second direction. When the moving direction of the movable platform deviates from the first direction but, relative to the second direction, is biased toward the first direction, the weight of the first brightness information of the third sensor is greater than the weight of the second brightness information, and the first exposure parameter can be determined on this basis.
Since the first exposure parameter takes the moving direction of the movable platform into account, the image quality of the scene in the moving direction of the movable platform can be prioritized. Setting the exposure parameters of the movable platform based on the brightness information in the moving direction allows rich image information in the moving direction to be captured, so that the movable platform can obtain depth information of the scene in the moving direction, thereby preferentially guaranteeing the obstacle avoidance function of the movable platform in that moving direction and making the movement of the movable platform safer.
The above embodiment is described by taking the determination of the first exposure parameter as an example; in specific implementations, the determination of the second exposure parameter is analogous. For example, the moving direction of the movable platform may be obtained, and the second exposure parameter of the third sensor may be determined at least based on the second brightness information and the moving direction of the movable platform. In some examples, if the moving direction falls within the second direction, the second exposure parameter of the third sensor is determined based on the second brightness information; if, relative to the first direction, the moving direction is biased toward the second direction, the second exposure parameter is determined based on the first brightness information and the second brightness information, where the weight of the second brightness information is greater than the weight of the first brightness information.
The above embodiment prioritizes the image of the scene in the moving direction of the movable platform and reduces the influence of ambient brightness information in other, non-moving directions, which solves the problem that the movable platform cannot work normally when the dynamic range of parameters in the environment is large because the fields of view of the sensors are coupled with each other. There is no need to use sensors capable of perceiving a large dynamic range, so the cost is low; at the same time, the image quality of the scene captured by the sensor is guaranteed without reducing the frame rate of the images perceived by the sensor.
Next, some implementation processes for determining the exposure parameters are described through some embodiments.
In this embodiment, taking the movable platform being a UAV as an example, theta (θ) denotes the angle of the moving direction of the UAV, where theta = 0° denotes flying forward; theta = 90° denotes flying to the left; theta = -90° denotes flying to the right; and theta = 180° (or -180°) denotes flying backward.
In this embodiment, the image captured by the sensor corresponds to the first direction and the second direction, i.e., the captured image can be segmented into two parts corresponding to the two different directions; in this embodiment, the weights of the two parts of the image summing to 1 is taken as an example. When the movable platform moves forward or backward, its moving angle is 0° or 180° (-180°), the weight of the image information of area 1 obtained by the third sensor is 1, and the weight of the image information of area 2 is 0. Similarly, when the movable platform moves to the left or to the right, the weight of the image information of area 1 of the sensor is 0, and the weight of the image information of area 2 is 1. For example, when the movable platform moves forward, taking sensor C as an example, the weight of area 1 of its captured image, which corresponds to the forward direction, is 1, and the weight of area 2 of its captured image, which corresponds to the leftward direction, is 0; that is, the first brightness information is determined based only on the part of the image in area 1 corresponding to the forward direction. If the movable platform moves to the left, similarly, the second brightness information is determined based only on the part of the image in area 2 corresponding to the leftward direction.
If the movable platform moves toward the front-left, the brightness can be determined from both the first brightness information and the second brightness information, weighted according to the movement direction. In this embodiment, for ease of calculation, the weights of the images of the two areas of the third sensor may have a set functional relationship with the movement angle of the movable platform. For example, the weight w_area_1 of the image of area 1 of the third sensor is calculated as shown in formula (2):
w_area_1 = cos²(theta)    (2)
where w_area_1 is the weight of the image of area 1 of the third sensor, and theta is the movement angle of the movable platform.
The weight w_area_2 of the image of area 2 is calculated as shown in formula (3):
w_area_2 = sin²(theta)    (3)
where w_area_2 is the weight of the image of area 2 of the third sensor, and theta is the movement angle of the movable platform.
Through the above calculation, the weights of the first brightness information and the second brightness information are obtained. The average brightness information of the third sensor's image can then be computed by weighting, after which an automatic exposure algorithm is executed to determine the first exposure parameter of the third sensor.
Optionally, the first exposure parameter includes an exposure time and an analog gain; based on the calculated average brightness information of the image, the exposure time and analog gain of the third sensor are obtained using the automatic exposure algorithm.
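The step above (run automatic exposure on a weighted average luminance) can be sketched as follows. The patent does not disclose its AE algorithm, so this is a hypothetical minimal AE update that scales the total exposure toward a target luminance, spending exposure time first and analog gain for the remainder; the function name, `target_lum`, and the shutter cap are illustrative assumptions.

```python
def auto_exposure_update(avg_lum, target_lum, exposure_time, analog_gain,
                         max_exposure_time=0.02):
    """Minimal AE sketch: scale total exposure (time * gain) so the
    measured average luminance approaches the target, preferring a
    longer exposure time over a higher analog gain."""
    if avg_lum <= 0:
        return exposure_time, analog_gain  # no light statistics, keep as-is
    total = exposure_time * analog_gain * (target_lum / avg_lum)
    new_time = min(total, max_exposure_time)   # cap the shutter first
    new_gain = max(total / new_time, 1.0)      # remainder goes to gain
    return new_time, new_gain
```

For example, an image metered at half the target luminance doubles the total exposure; once the shutter cap is reached, further increases go to analog gain.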
After a sensor of the movable platform acquires an image, the image can be fed into a downstream perception algorithm. Some perception algorithms that take binocular (stereo) images as input, such as semi-global matching and visual odometry, impose requirements on the image pair: for example, the brightness of the left and right images must not differ too much, or their exposure times must not differ too much. However, since two adjacent sensors share only one piece of brightness information, while the other piece and the weights may differ greatly, the exposure parameters of the two adjacent sensors can end up different, causing the brightness or exposure times of the left and right images to differ by more than these algorithms allow.
Therefore, after determining the first exposure parameter of the third sensor, the scheme of this embodiment may further adjust it based on the first exposure parameter of the first sensor. In practice, multiple adjustment methods can be configured as needed; for example, the adjustment method can be determined by the requirements that the perception algorithm places on the binocular images, the specific requirements depending on the algorithm actually used. Alternatively, a difference threshold can be preset as needed: if the difference between the first exposure parameter of the first sensor and that of the third sensor exceeds the threshold, the first exposure parameter of the third sensor is adjusted so that the difference becomes less than or equal to the threshold. The threshold can be set flexibly, for example as an empirical value.
In some examples, the adjustment may include reducing the difference between the first exposure parameter of the first sensor and that of the third sensor, thereby ensuring that the images collected by the two sensors satisfy a preset brightness condition and/or a preset image content condition, so as to meet the requirements of the downstream perception algorithm.
In some embodiments, a value range for the exposure parameter of the third sensor can be determined from the first exposure parameters of the first sensor and the third sensor, and the first exposure parameter of the third sensor is kept within that range. By controlling the exposure parameters of both adjacent sensors to lie within the value range of the exposure parameter along the flight direction of the movable platform, so that their difference is smaller than that range, the brightness and exposure times of the left and right images can be kept close.
Based on the determined value range of the exposure parameter of the third sensor, its first exposure parameter can be limited as follows: when the first exposure parameter of the third sensor lies within the value range, it remains unchanged; when it exceeds the range, it is adjusted to a boundary value of the range. Adjusting to a boundary value covers two cases: if the first exposure parameter of the third sensor is greater than the maximum of the range, it is set to the maximum; if it is smaller than the minimum of the range, it is set to the minimum.
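The boundary-clamping rule above can be sketched as a small helper; the function name and the use of a generic scalar are illustrative, as the same rule applies to exposure time or gain alike.

```python
def clamp_exposure(value, lower, upper):
    """Keep a sensor's exposure parameter inside [lower, upper]:
    unchanged if already inside, otherwise snapped to the violated bound."""
    if value > upper:
        return upper
    if value < lower:
        return lower
    return value
```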
In some embodiments, the value range of the exposure parameter is jointly determined by the first exposure parameter of the first sensor and its weight, together with the first exposure parameter of the third sensor and its weight. Because the scene information along the movement direction is the most important, when the movement direction of the movable platform leans toward the first direction rather than the second, the scene in the first direction influences the scene along the movement direction more than the scene in the second direction does; the weight of the first exposure parameter of the third sensor can therefore be set greater than that of the first sensor. With this adjustment scheme, controlling the movement of the movable platform based on images collected with the resulting exposure parameters is safer.
There are many ways to determine the value range of the exposure parameter. In some embodiments, to determine it more quickly while taking the movement direction into account, a parameter can first be defined that characterizes the exposure parameter along the movement direction of the movable platform. This is a virtual exposure parameter rather than an actually existing physical one; it can be obtained by weighting the exposure parameters of the two adjacent sensors determined from the movement direction. The boundary values of the value range are then multiples of this exposure parameter along the movement direction, where the multiple can be a value determined from the actual usage scenarios of the movable platform and from experience.
An optional specific embodiment is provided below, taking a movable platform with four sensors as an example, located at the front-left, front-right, rear-right and rear-left. When the movable platform moves forward, the exposure parameter along the movement direction is obtained by weighting the exposure parameters of the front-left and front-right sensors. When the movable platform flies 30 degrees toward the front-left, with clockwise priority, the two sensors determined from the movement direction are the front-right and front-left; when it flies 45 degrees toward the front-left, with clockwise priority, the two sensors are the front-right and rear-left. The method for determining the sensors for other movement directions is the same and is not repeated here. Clearly, whichever direction the movable platform moves in, the adjacent sensors on the left and right of that direction can be determined from the movement direction.
Optionally, the weights of the exposure parameters of the left and right adjacent sensors can be determined from the movement direction of the movable platform. In an optional embodiment provided below, these weights may have a set functional relationship with the movement angle of the movable platform. For example, if the movement angle satisfies 45° < |theta| ≤ 135°, the weights w_left and w_right of the exposure parameters of the two adjacent sensors can be calculated as shown in formulas (4) and (5):
w_left = sin²(theta - 45)    (4)
w_right = cos²(theta - 45)    (5)
where w_left is the weight of the exposure parameter of the sensor adjacent to the left of the movement direction of the movable platform, w_right is the weight of the exposure parameter of the sensor adjacent to the right, and theta is the movement angle of the movable platform.
If the movement angle of the movable platform does not satisfy 45° < |theta| ≤ 135°, the weights w_left and w_right of the exposure parameters of the two adjacent sensors are calculated as shown in formulas (6) and (7):
w_left = cos²(theta - 45)    (6)
w_right = sin²(theta - 45)    (7)
where w_left is the weight of the exposure parameter of the sensor adjacent to the left of the movement direction of the movable platform, w_right is the weight of the exposure parameter of the sensor adjacent to the right, and theta is the movement angle of the movable platform.
Of the two sensors, the one closer to the movement direction of the movable platform receives the larger exposure-parameter weight. Applying the above weight calculation: when the movement angle theta = 0, the two adjacent sensors on the left and right of the movement direction are the front-left and front-right, and the weights of their exposure parameters are both 0.5; when theta = 46, the two adjacent sensors are the rear-left and front-left, with weights of 0.0003 and 0.9997 respectively.
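Formulas (4) through (7) can be sketched as follows, reproducing the two worked examples above (theta = 0 and theta = 46); the function name is an assumption, and angles are in degrees as in the text.

```python
import math

def direction_weights(theta_deg):
    """Weights of the left- and right-adjacent sensors' exposure
    parameters as a function of the movement angle theta (degrees),
    following formulas (4)-(7)."""
    t = math.radians(theta_deg - 45)
    if 45 < abs(theta_deg) <= 135:
        w_left, w_right = math.sin(t) ** 2, math.cos(t) ** 2  # (4), (5)
    else:
        w_left, w_right = math.cos(t) ** 2, math.sin(t) ** 2  # (6), (7)
    return w_left, w_right
```

Since sin² + cos² = 1, the two weights always sum to 1, so the flight-direction exposure parameter is a convex combination of the two sensors' parameters.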
In some embodiments, the exposure parameters include an ideal exposure time and an exposure time. If the determined ideal exposure time or exposure time of the third sensor is not within the value range of the exposure parameter, it is adjusted to a boundary value of the range.
Optionally, the exposure parameters further include an analog gain and a digital gain. Based on the adjusted exposure time and the ideal exposure time, the analog gain and digital gain of the third sensor are adjusted.
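One plausible way to realize this gain adjustment, assuming the convention that the shortfall between the ideal and limited exposure times is made up by gain, analog first and digital for the remainder; the function name and the cap of 16x on analog gain are illustrative assumptions, not from the patent.

```python
def split_gains(icit, cit, max_analog_gain=16.0):
    """After cit has been limited, make up the remaining exposure
    (icit / cit) with gain: analog gain up to its cap, digital gain
    for whatever is left over."""
    total_gain = max(icit / cit, 1.0)          # never darken below unity gain
    a_gain = min(total_gain, max_analog_gain)  # analog gain first
    d_gain = total_gain / a_gain               # digital gain covers the rest
    return a_gain, d_gain
```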
This embodiment determines the exposure parameters in consideration of the movement direction of the movable platform; during movement, the scene the movable platform faces may change considerably. Accordingly, this embodiment may also obtain the change in the movement speed of the movable platform; as an example, the movement speed can be obtained through an inertial measurement unit mounted on the movable platform. The change in movement speed is positively correlated with any of the following: the acquisition frequency of the movement direction of the movable platform, the update frequency of the exposure parameters, or the update frequency of the weights of the brightness information. That is, the faster the movement speed changes, the higher these frequencies are set, so that the movable platform updates its exposure parameters faster and adapts to the rapidly changing scenes it may face when moving at high speed, thereby providing safe control of the movable platform.
It can be understood that the various motion parameters of the movable platform can be determined in any of the following coordinate systems: the sensor coordinate system, the body coordinate system, the local coordinate system, the global coordinate system, and so on. Of course, there are many other methods of obtaining the movement speed and movement direction; the movable platform may also obtain them based on other coordinate systems and approaches, which this application does not limit.
The overall flow of the above method steps is described below through another embodiment:
Take a UAV fitted with four fisheye cameras as an example, as shown in FIG. 1A, with the head of the UAV pointing up. The four fisheye cameras are mounted at the front-left, front-right, rear-right and rear-left, labeled C, D, E and F respectively. At the same time, each fisheye camera's view is split down the middle into two parts: the part facing the fore-aft direction of the UAV is named area 1, and the part facing the left-right direction is named area 2. The UAV's surround view in the four directions is thus divided into eight fields of view, labeled C1 and C2, D1 and D2, E1 and E2, and F1 and F2.
Clearly, C1 and D1 form the UAV's front-view stereo pair, D2 and E2 its right-view stereo pair, E1 and F1 its rear-view stereo pair, and F2 and C2 its left-view stereo pair. Meanwhile, C1 and C2, D1 and D2, E1 and E2, and F1 and F2 each share the same exposure time.
First, the relationship between the flight angle (theta) and the flight direction of the UAV: 0° means the UAV flies forward, 90° means it flies to the left, 180° (or -180°) means it flies backward, and -90° means it flies to the right. The unit of the UAV's flight speed is m/s.
Denote the ideal exposure time of a fisheye camera of the UAV as icit, the exposure time as cit, the analog gain as a_gain, the brightness of area 1 as Lum_area_1, and the brightness of area 2 as Lum_area_2. icit_C denotes the ideal exposure time of fisheye camera C and cit_C denotes its exposure time; the exposure parameters of the other fisheye cameras are denoted in the same way.
The flow of the above method steps is described below using one fisheye camera on the above UAV:
Take fisheye camera C as an example. Clearly, C1 and C2 share the same exposure parameters. When the scenes within the fields of view captured by C1 and C2 differ greatly, exceeding the dynamic range that fisheye camera C can perceive, at least one of the two images will be overexposed or underexposed; in this situation the fisheye camera cannot balance the quality of both images.
First, the current flight speed and flight direction of the UAV in the body coordinate system can be calculated from the information of the UAV's inertial measurement unit. The body coordinate system is a three-dimensional orthogonal rectangular coordinate system fixed to the aircraft and following the right-hand rule, with its origin at the aircraft's center of mass.
If the flight speed of the UAV is lower than a set threshold, the flight direction of the UAV is kept the same as in the previous frame and is not updated; otherwise, the flight direction is updated.
Then, the flight angle of the UAV is determined from its flight direction, and the brightness weights of the two images obtained by fisheye camera C are calculated as follows:
The weight of the image of area 1 is calculated as shown in formula (8):
w_area_1 = cos²(theta)    (8)
where w_area_1 is the weight of the image of area 1 (i.e., C1) of fisheye camera C, and theta is the flight angle of the UAV.
The weight of the image of area 2 is calculated as shown in formula (9):
w_area_2 = sin²(theta)    (9)
where w_area_2 is the weight of the image of area 2 (i.e., C2) of fisheye camera C, and theta is the flight angle of the UAV.
Then, the average brightness of fisheye camera C's image is calculated from the brightness values and weights of its two images. Weighting the average brightness of the left and right parts of fisheye camera C yields the average brightness Lum_avg that is finally input to the automatic exposure algorithm, as shown in formula (10):
Lum_avg = w_area_1 * Lum_area_1 + w_area_2 * Lum_area_2    (10)
where Lum_avg is the average brightness of fisheye camera C, w_area_1 and Lum_area_1 are the weight and brightness of the image of area 1 (C1) of fisheye camera C, and w_area_2 and Lum_area_2 are the weight and brightness of the image of area 2 (C2) of fisheye camera C.
After the average brightness of fisheye camera C's image is obtained, the automatic exposure algorithm is executed to calculate the exposure time and analog gain of fisheye camera C. The exposure times and analog gains of the other sensors are calculated in the same way and are not repeated here.
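Formulas (8) through (10) for one camera can be sketched as follows (the function name is an assumption; angles are in degrees as in the text):

```python
import math

def camera_average_luminance(theta_deg, lum_area_1, lum_area_2):
    """Average luminance fed to the AE algorithm for one fisheye camera,
    per formulas (8)-(10): area 1 (fore-aft view) weighted by cos²(theta),
    area 2 (lateral view) weighted by sin²(theta)."""
    w1 = math.cos(math.radians(theta_deg)) ** 2  # formula (8)
    w2 = math.sin(math.radians(theta_deg)) ** 2  # formula (9)
    return w1 * lum_area_1 + w2 * lum_area_2     # formula (10)
```

When flying forward (theta = 0) only area 1 contributes; when flying sideways (theta = ±90°) only area 2 does; diagonal flight blends the two.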
The above calculation yields the exposure times and analog gains of the four fisheye cameras on the UAV. These values are then further limited. First, the two fisheye cameras needed to calculate the exposure parameter along the UAV's flight direction are determined from the flight direction. For example, when the angle theta = 0, the two adjacent cameras on the left and right of the flight direction are C and D; when theta = 46, the two adjacent cameras are F and C.
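One way to select the two bracketing cameras from the flight angle, consistent with the two examples just given (theta = 0 gives C and D; theta = 46 gives F and C). The camera azimuth values and the tie-breaking behavior at exact diagonals are assumptions for illustration.

```python
# Assumed camera azimuths in the theta convention of the text
# (0° = forward, positive = toward the left): C front-left, D front-right,
# E rear-right, F rear-left.
CAMERA_AZIMUTHS = {"C": 45.0, "D": -45.0, "E": -135.0, "F": 135.0}

def adjacent_cameras(theta_deg):
    """Return (left, right) cameras bracketing the flight direction:
    the left camera is the nearest one counter-clockwise of theta,
    the right camera the nearest one clockwise."""
    left = min(CAMERA_AZIMUTHS,
               key=lambda c: (CAMERA_AZIMUTHS[c] - theta_deg) % 360.0)
    right = min(CAMERA_AZIMUTHS,
                key=lambda c: (theta_deg - CAMERA_AZIMUTHS[c]) % 360.0)
    return left, right
```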
After the two fisheye cameras are determined, the weights of their exposure parameters are determined from the UAV's flight direction. Let w_left denote the weight of the exposure parameters of the fisheye camera adjacent to the left of the flight direction, and w_right the weight of the fisheye camera adjacent to the right. They are calculated as follows:
If the flight angle of the UAV satisfies 45° < |theta| ≤ 135°, the weights w_left and w_right of the exposure parameters of the two adjacent fisheye cameras are calculated as shown in formulas (11) and (12):
w_left = sin²(theta - 45)    (11)
w_right = cos²(theta - 45)    (12)
where w_left is the weight of the exposure parameters of the fisheye camera adjacent to the left of the UAV's flight direction, w_right is the weight of the exposure parameters of the fisheye camera adjacent to the right, and theta is the flight angle of the UAV.
If the flight angle of the UAV does not satisfy 45° < |theta| ≤ 135°, the weights w_left and w_right are calculated as shown in formulas (13) and (14):
w_left = cos²(theta - 45)    (13)
w_right = sin²(theta - 45)    (14)
where w_left is the weight of the exposure parameters of the fisheye camera adjacent to the left of the UAV's flight direction, w_right is the weight of the exposure parameters of the fisheye camera adjacent to the right, and theta is the flight angle of the UAV.
From the exposure parameters and weights of the two adjacent fisheye cameras determined by the above calculation, the exposure parameters along the UAV's flight direction are computed. The ideal exposure time along the flight direction is denoted icit_fly_dir and the exposure time cit_fly_dir; these two parameters are calculated as shown in formulas (15) and (16):
icit_fly_dir = w_left * icit_left + w_right * icit_right    (15)
cit_fly_dir = w_left * cit_left + w_right * cit_right    (16)
where icit_fly_dir is the ideal exposure time along the UAV's flight direction, cit_fly_dir is the exposure time along the flight direction, w_left and w_right are the weights of the exposure parameters of the fisheye cameras adjacent to the left and right of the flight direction, icit_left and icit_right are the ideal exposure times of the left- and right-adjacent fisheye cameras, and cit_left and cit_right are their exposure times.
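Formulas (15) and (16) as code (the function name is an assumption):

```python
def flight_direction_exposure(w_left, w_right, icit_left, icit_right,
                              cit_left, cit_right):
    """Virtual exposure parameters along the flight direction, per
    formulas (15)-(16): weighted sums of the two adjacent cameras'
    ideal exposure time (icit) and exposure time (cit)."""
    icit_fly_dir = w_left * icit_left + w_right * icit_right  # (15)
    cit_fly_dir = w_left * cit_left + w_right * cit_right     # (16)
    return icit_fly_dir, cit_fly_dir
```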
After the exposure parameters along the flight direction are calculated, they are used to limit the exposure parameters of every fisheye camera on the UAV. To ensure that the brightness difference between the two images of a fisheye camera stays within the allowable range, the ideal exposure time of each fisheye camera is limited using the computed ideal exposure time icit_fly_dir along the flight direction, as shown in formula (17):
icit_x = min(icit_fly_dir * ratio_icit, icit_x), x ∈ {C, D, E, F}    (17)
where x is any one of the four fisheye cameras C, D, E and F on the UAV, icit_x is the ideal exposure time of fisheye camera x, icit_fly_dir is the ideal exposure time along the UAV's flight direction, and ratio_icit is a parameter set empirically.
Then, to ensure that the content difference between the two images of a fisheye camera stays within the allowable range, the exposure time along the flight direction is used to limit the exposure time of each fisheye camera. Using the exposure time cit_fly_dir obtained above, each fisheye camera is limited as shown in formula (18):
cit_x = min(cit_fly_dir * ratio_cit_up, max(cit_fly_dir * ratio_cit_low, icit_x)), x ∈ {C, D, E, F}    (18)
where x is any one of the four fisheye cameras C, D, E and F on the UAV, cit_x is the exposure time of fisheye camera x, cit_fly_dir is the exposure time along the UAV's flight direction, and ratio_cit_up and ratio_cit_low are two parameters set empirically, with the value of ratio_cit_up greater than that of ratio_cit_low.
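Formulas (17) and (18) for one camera x, sketched with illustrative ratio values; the patent only states that the ratios are set empirically, with ratio_cit_up > ratio_cit_low, so the defaults below are placeholders.

```python
def limit_camera_exposure(icit_x, cit_fly_dir, icit_fly_dir,
                          ratio_icit=2.0, ratio_cit_up=1.5,
                          ratio_cit_low=0.7):
    """Clamp one camera's exposure per formulas (17)-(18)."""
    # (17): cap the ideal exposure time relative to the flight direction
    icit_x = min(icit_fly_dir * ratio_icit, icit_x)
    # (18): keep the actual exposure time within a band around cit_fly_dir
    cit_x = min(cit_fly_dir * ratio_cit_up,
                max(cit_fly_dir * ratio_cit_low, icit_x))
    return icit_x, cit_x
```

The band in (18) keeps every camera's exposure time within a fixed multiple of the flight-direction exposure time, which is what bounds the brightness and content differences between the two halves of each stereo pair.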
Finally, the analog gain and digital gain of each fisheye camera are adjusted according to the limited ideal exposure time and exposure time of that camera. After the exposure parameters of the fisheye cameras are set and adjusted based on the above scheme, fisheye cameras capable of sensing a large dynamic range are not required: even when the dynamic range of the environment exceeds the sensing range of the fisheye cameras, the UAV can still perform various tasks, such as obstacle avoidance, normally.
It can be seen from the above embodiment that no high-dynamic-range sensor is needed; the perception camera of a general consumer-grade product can meet the requirements, which reduces the cost of the movable platform. When calculating the exposure time of each sensor, the moving state of the movable platform, the brightness of each region seen by each sensor, and so on are considered comprehensively, so that the imaging brightness of the movable platform in every direction is optimal under the various constraints, improving the effect of functions such as obstacle avoidance and tracking. When the movable platform is maneuvering, for example braking sharply or changing direction suddenly, this embodiment ensures that each sensor is properly exposed; when the movable platform is in a stable state, for example hovering, unstable critical states such as jumps do not occur.
The above embodiments determine the exposure parameters in consideration of the moving direction. As another optional way to implement step S202 and/or step S203, this can also be achieved without introducing the moving direction. For example, under the condition that the sensors' fields of view are coupled to each other so that all four surrounding directions cannot be covered at the same time, the sensor can be controlled to perform time-division multiplexing, emphasizing different field-of-view ranges in different time slices, so that the images acquired by the sensor in different directions all meet the quality requirements. The movable platform can then use high-quality images to perceive depth information, and controlling its movement based on the obtained depth information is safer. To achieve this, the exposure parameter of the sensor of the movable platform can be controlled to switch among the exposure parameters corresponding to the brightness information obtained from the different fields of view. As an example, FIG. 4 is a schematic flowchart of some steps in another control method for a movable platform according to an embodiment of the present application, which may include the following steps:
Step S401: determine a first exposure parameter of the third sensor based on the first brightness information; determine a second exposure parameter of the third sensor based on the second brightness information.
Step S402: control the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter.
In this embodiment, the brightness information of the scenes in the two directions of the movable platform can be acquired separately, and two sets of exposure parameters for the third sensor can be determined accordingly. The exposure parameter of the third sensor is then switched continuously between the two sets: when the exposure parameter of the third sensor is the first exposure parameter, the third sensor acquires an image of the scene in the first direction of the movable platform; when it is the second exposure parameter, the third sensor acquires an image of the scene in the second direction. Since the images of the scenes in the two directions are acquired separately, they do not affect each other.
Since the exposure parameter of the third sensor of the movable platform is controlled to switch between the first and second exposure parameters, even when the brightness information in the first direction and the second direction differs greatly, the third sensor can obtain the brightness information of each direction independently, unaffected by the ambient brightness of the other direction. Therefore, based on the above scheme, the movable platform can obtain accurate depth information and move safely. This scheme does not require a sensor capable of sensing a large dynamic range, so its cost is low. At the same time, it solves the problem that, when the dynamic range in the environment is large, the mutual coupling of the sensors' fields of view makes it impossible to cover all four surrounding directions simultaneously, and it guarantees the quality of the images of scenes in all directions acquired by the sensors of the movable platform, so that perception algorithms such as obstacle avoidance can run normally in every direction.
In this embodiment, there are many ways to control the third sensor to switch between the first and second exposure parameters, which can be configured as needed in practical applications and are not limited here. For example, the number of frames for which the exposure parameter of the third sensor is the first exposure parameter may be the same as, or different from, the number of frames for which it is the second exposure parameter. Optionally, the influence of the moving direction of the movable platform can also be added; for example, when the movable platform moves forward, the number of frames captured under the first exposure parameter can be increased, so that it is greater than the number of frames captured under the second exposure parameter.
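One possible switching schedule reflecting the moving direction, as described above, is sketched below; the 2:1 frame allocation is an assumption made purely for illustration.

```python
def build_switch_schedule(moving_forward, fwd_frames=2, other_frames=1):
    # One acquisition cycle as a list of exposure-parameter indices:
    # 0 = first exposure parameter (first direction),
    # 1 = second exposure parameter (second direction).
    if moving_forward:
        # Bias the schedule toward the direction of motion.
        return [0] * fwd_frames + [1] * other_frames
    return [0, 1]  # equal split when not moving forward

# Moving forward: two frames under the first parameter per cycle.
schedule = build_switch_schedule(moving_forward=True)
```

Repeating the whole schedule frame after frame yields the continuous switching of step S402, with more frames devoted to the forward direction.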
In some embodiments, the time in which the third sensor collects one round of images of the first partial field of view and the second partial field of view is taken as the image acquisition cycle of the third sensor. When the image acquisition cycle of the third sensor satisfies a preset condition, the exposure parameter of the third sensor is controlled to switch between the first and second exposure parameters. The preset condition may be that the image acquisition cycle is divided into two time slices: the exposure parameter of the third sensor in the first time slice is configured as the first exposure parameter, which is used to control the third sensor to acquire an image of the first partial field of view; the exposure parameter in the second time slice is configured as the second exposure parameter, which is used to control the third sensor to acquire an image of the second partial field of view.
Therefore, if the current image acquisition cycle belongs to the first time slice, the third sensor is controlled to capture an image of the scene in the first direction under the first exposure parameter; if it belongs to the second time slice, the third sensor is controlled to capture an image of the scene in the second direction under the second exposure parameter.
In some embodiments, the durations of the first and second time slices of the image acquisition cycle are both equal to the time the third sensor takes to acquire one frame. In this case, of the image sequences acquired by the third sensor in the first time slice and the second time slice, one is the odd-frame sequence and the other is the even-frame sequence.
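With one-frame time slices, the slice, and therefore the exposure parameter, simply alternates with frame parity. A minimal sketch (which parity maps to which slice is an assumption here):

```python
def time_slice_of(frame_idx):
    # Two one-frame time slices per acquisition cycle:
    # 0 -> first time slice (first exposure parameter),
    # 1 -> second time slice (second exposure parameter).
    return frame_idx % 2

frames = list(range(6))
first_slice = [f for f in frames if time_slice_of(f) == 0]   # even frames
second_slice = [f for f in frames if time_slice_of(f) == 1]  # odd frames
```

The two resulting sub-sequences are exactly the even-frame and odd-frame sequences described above.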
In general, each sensor of the movable platform corresponds to one physical channel and executes one identical automatic exposure algorithm. After the above scheme is implemented, the sensors of the movable platform are configured with different exposure parameters in different time slices, which can be understood as splitting each sensor of the movable platform into several virtual sensors. Correspondingly, each physical channel also becomes several virtual channels, and the automatic exposure algorithms executed on the virtual channels are independent of each other.
In some embodiments, the physical channel corresponding to the third sensor can be divided into two virtual channels according to the two time slices of the third sensor; based on the two virtual channels, the third sensor acquires images on each virtual channel independently, as shown in FIG. 5. The steps of acquiring images based on the above method include:
Step S501: calculate the virtual channel number to which each frame in the image sequence acquired by the third sensor belongs.
Step S502: set the exposure parameter of the third sensor to the exposure parameter corresponding to the virtual channel number.
Step S503: the movable platform acquires an image based on the exposure parameter set for the third sensor.
In some embodiments, at the frame time corresponding to the current image, in addition to calculating the virtual channel number to which the current frame belongs, the virtual channel number to which the next frame belongs is also calculated, and the exposure parameter corresponding to the virtual channel number of the next frame is stored.
Optionally, the exposure parameter corresponding to the virtual channel number of the next frame is stored in a register of the third sensor on the movable platform.
Optionally, all image sequences on the movable platform share and synchronize one sequence of image frame numbers, so that the frame number sequence of the image sequence captured by the third sensor is the same as the frame number sequence of the image sequence output by the third sensor.
The flow of the method of steps S501 to S503 is illustrated below through another embodiment:
With the time slices divided into three equal parts, the third sensor of the movable platform executes the automatic exposure algorithm and sets the registers according to the timing diagram shown in FIG. 6. Taking a current frame number of 5 as an example, the flow of the method includes:
Calculate the virtual channel number vchn_aec_calc on which the automatic exposure algorithm is executed for the current frame number frame_idx, as shown in formula (19):
vchn_aec_calc = frame_idx % vchn_num              (19)
where vchn_aec_calc denotes the virtual channel number to which the frame number of the current image belongs, frame_idx denotes the frame number of the current image, and vchn_num denotes the number of virtual channels.
The virtual channel number corresponding to the current frame number is obtained: vchn_aec_calc = 2. The exposure parameter of the third sensor is set to the exposure parameter corresponding to virtual channel number 2, after which the third sensor executes the automatic exposure algorithm and captures the scene in the corresponding direction of the movable platform.
At the same time, calculate the virtual channel number vchn_aec_set corresponding to the frame after the current frame, as shown in formula (20):
vchn_aec_set = (frame_idx + 1) % vchn_num             (20)
where vchn_aec_set denotes the virtual channel number to which the next frame after the current frame belongs, frame_idx denotes the frame number of the current image, and vchn_num denotes the number of virtual channels.
The virtual channel number corresponding to the next frame is obtained: vchn_aec_set = 0, and the exposure parameter corresponding to it is set into the register.
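Formulas (19) and (20), together with the worked example above (three virtual channels, current frame number 5), can be sketched as:

```python
def vchn_aec_calc(frame_idx, vchn_num):
    # Formula (19): virtual channel whose AE algorithm runs
    # for the current frame.
    return frame_idx % vchn_num

def vchn_aec_set(frame_idx, vchn_num):
    # Formula (20): virtual channel whose exposure parameters are
    # written to the sensor register ready for the next frame.
    return (frame_idx + 1) % vchn_num

# Three virtual channels, current frame number 5:
current_vchn = vchn_aec_calc(5, 3)  # 2: run AE with channel 2's parameters
next_vchn = vchn_aec_set(5, 3)      # 0: preload channel 0's parameters
```

Computing the next frame's channel one frame ahead gives the sensor time to latch the new exposure parameters before that frame is captured.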
For example, before the next frame is captured, the exposure parameter corresponding to the virtual channel number of the next frame, temporarily stored at the current frame, is read and set into the sensor's register; after the next frame is captured, the virtual channel number calculated from that frame is the same as the virtual channel number set into the fisheye camera's register above.
Since the exposure parameter of the third sensor switches between two sets of exposure parameters, the image sequence acquired by the third sensor may fluctuate and jump when the surrounding environment varies greatly. However, the image sequence acquired by the third sensor under the same exposure parameter is smooth and uniform. After the images of the scenes in all directions of the movable platform are acquired, the images captured under different exposure parameters need to be correctly separated from the image sequence acquired by the third sensor. This can be done according to the correspondence between the frame numbers of the image sequence and the time slices or virtual channels. The steps of dividing the image sequence of the third sensor include: obtaining the image sequence output by the third sensor, each image in the sequence corresponding to an image frame number; obtaining the correspondence between the image frame numbers and the time slices; and using the image frame numbers and the correspondence to obtain, from the image sequence, the first image captured under the first exposure parameter.
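The separation step can be sketched as follows; the (frame number, image) pairs are assumed inputs, and the frame-number-to-channel rule follows formula (19):

```python
def split_by_virtual_channel(frames, vchn_num):
    # Separate an interleaved image sequence into per-channel sequences;
    # each per-channel sequence was captured under a single exposure
    # parameter and is therefore smooth and uniform.
    channels = {v: [] for v in range(vchn_num)}
    for frame_idx, image in frames:
        channels[frame_idx % vchn_num].append(image)
    return channels

# Assumed interleaved stream of (frame number, image) pairs:
stream = [(0, "img0"), (1, "img1"), (2, "img2"), (3, "img3")]
parts = split_by_virtual_channel(stream, vchn_num=2)
# parts[0]: images under the first exposure parameter; parts[1]: the second.
```

With two virtual channels this simply recovers the even-frame and odd-frame sub-sequences.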
Another embodiment is provided below for illustration, involving steps S401 to S403 and steps S501 to S503, as shown in FIG. 7. In this embodiment, the method performed by the movable platform includes the following steps:
Step S701: determine the virtual channel number corresponding to an algorithm according to the correspondence between the exposure parameters of the third sensor and the scenes in different directions of the movable platform.
Step S702: calculate the virtual channel number to which each frame in the image sequence output by the third sensor belongs.
Step S703: input, into the algorithm, the images in the image sequence whose virtual channel number is the same as the virtual channel number corresponding to the algorithm.
In the surround-view perception solution for the movable platform, the embodiments of the present application do not require a camera with an extremely high dynamic range; the sensor of a general consumer-grade product can meet the requirements, which greatly reduces the cost. When calculating the exposure time of each sensor, the moving state of the movable platform, the brightness of each region seen by each sensor, and so on are considered comprehensively, so that the imaging brightness of the movable platform in every direction is optimal under the various constraints, improving the effect of functions such as obstacle avoidance and tracking. When the movable platform is maneuvering, for example braking sharply or changing direction suddenly, the present application ensures that each sensor is properly exposed; when the movable platform is in a stable state, for example hovering, unstable critical states such as jumps do not occur.
With respect to the control method for a movable platform in this embodiment, this specification also provides other embodiments of control methods for a movable platform.
For the embodiment described in FIG. 2, first, define the first state of the movable platform as: the first partial field of view of the third sensor is blocked and the second partial field of view is not blocked, as shown in FIG. 8; and the second state of the movable platform as: the first partial field of view of the third sensor is not blocked and the second partial field of view is not blocked, as shown in FIG. 9. In practical applications, there are various ways of blocking; for example, a piece of black tape can cover part of the sensor lens, or another obstruction such as a baffle can block part of the sensor's field of view.
If the movable platform applies the embodiment shown in FIG. 2, when the third sensor on the movable platform is in the first state and in the second state, the recognition results of the scene in the second direction obtained from the images respectively captured by the third sensor are consistent. That is, whether the movable platform is in the first state or in the second state, when it moves in space it obtains correct recognition results for the scene in the second direction.
If the solution in the related art is adopted, when the first partial field of view of the movable platform is blocked, the automatic exposure algorithm of the sensor in the movable platform will inevitably become abnormal, so that the obstacle avoidance function of the movable platform cannot work normally.
In particular, even with low-cost sensors with a small dynamic range, the movable platform of this embodiment can still be controlled to move safely. For example, in the first state, the intensity of the light reflected by the scene in the second partial field of view of the third sensor can be set higher than the maximum light intensity the third sensor can sense. As an example, the second partial field of view of the third sensor can be pointed toward strong light; since the first partial field of view is blocked, the third sensor is in an extremely high-dynamic scene. With the above scheme, when the movable platform moves in space, it still obtains correct recognition results for the scene in the second direction.
Optionally, a correct recognition result of the scene by the movable platform includes any of the following: information about the recognized scene is output in the user interface; the movable platform successfully avoids an obstacle; the correct recognition rate of the movable platform for the scene in the partial field of view is higher than a preset first threshold. It can be understood that other forms also fall within the protection scope of the present application, which is not limited in this regard.
Optionally, recognition of the scene includes recognition of the depth information and texture features of the scene, which is not limited in the present application.
For the embodiment shown in FIG. 3, first, define the first state as: the first partial field of view of the third sensor is blocked and the second partial field of view is not blocked, as shown in FIG. 8.
When the third sensor on the movable platform is in the first state, the recognition result of the scene in the second direction obtained from the images captured by the third sensor while the movable platform moves in the first direction is inconsistent with the recognition result of the scene in the second direction obtained from the images captured by the third sensor while the movable platform moves in the second direction. For example, in the first state, when the movable platform moves in the first direction, the recognition result of the scene in the second direction is unstable; when it moves in the second direction, it obtains correct recognition results for the scene in the second direction.
Optionally, a correct recognition result of the scene by the movable platform includes any of the following: information about the recognized scene is output in the user interface; the movable platform successfully avoids an obstacle; the correct recognition rate of the movable platform for the scene in the partial field of view is higher than a preset first threshold. It can be understood that other forms also fall within the protection scope of the present application, which is not limited in this regard.
Optionally, an unstable recognition result of the scene by the movable platform includes any of the following: a prompt message indicating that the recognition result is unstable, or that there is no recognition result, is output in the user interface; the movable platform fails to avoid an obstacle; the correct recognition rate of the movable platform for the scene in the partial field of view is lower than a preset second threshold. It can be understood that other forms also fall within the protection scope of the present application, which is not limited in this regard.
For the embodiment shown in FIG. 4, first, define the first state as: the first partial field of view of the third sensor is blocked and the second partial field of view is not blocked, as shown in FIG. 8.
When the third sensor on the movable platform is in the first state, the recognition result of the scene in the second direction obtained from the images captured by the third sensor while the movable platform moves in the first direction is consistent with the recognition result obtained while the movable platform moves in the second direction. For example, in the first state, when the movable platform moves in the first direction and in the second direction respectively, it obtains correct recognition results for the scene in the second direction.
Optionally, a correct recognition result of the scene includes any of the following: information about the recognized scene is output in the user interface; the movable platform successfully avoids an obstacle; the correct recognition rate of the movable platform for the scene in the partial field of view is higher than a preset first threshold. It can be understood that other forms also fall within the protection scope of the present application, which is not limited in this regard.
For the embodiment shown in FIG. 4, another embodiment is provided below; this embodiment does not require the movable platform to move. First, define the first state as: the first partial field of view of the third sensor is blocked and the second partial field of view is not blocked, as shown in FIG. 8.
In this embodiment, when the third sensor on the movable platform is in the first state, correct recognition results are obtained for the scene in the second direction based on the images captured by the third sensor.
Optionally, a correct recognition result of the scene by the movable platform includes any of the following: information about the recognized scene is output in the user interface; the movable platform successfully avoids an obstacle; the correct recognition rate of the movable platform for the scene in the partial field of view is higher than a preset first threshold. It can be understood that other forms also fall within the protection scope of the present application, which is not limited in this regard.
Optionally, recognition of the scene includes recognition of the depth information and texture features of the scene, which is not limited in the present application.
It can be understood from the above embodiments that the movable platform can take the depth information obtained by two sensors as a whole and then, together with a third sensor, form the above method and steps to solve the exposure problem caused by the mutual coupling of the fields of view of multiple sensors. Therefore, the present application is not limited to the problem between two sensors; it is equally applicable to solving the exposure problem caused by the coupling of more sensors, which is not described in detail here.
The present application also provides a control apparatus for a movable platform. The movable platform includes a first sensor, a second sensor and a third sensor; the first sensor and the third sensor have an overlapping first partial field of view; the second sensor and the third sensor have an overlapping second partial field of view; the first partial field of view is used to observe the scene in a first direction of the movable platform; and the second partial field of view is used to observe the scene in a second direction of the movable platform, as shown in FIG. 10.
The apparatus includes a processor, a memory, and a computer program stored on the memory and executable by the processor. When the processor executes the computer program, the following method is implemented:
acquiring first brightness information of the first partial field of view of the third sensor and second brightness information of the second partial field of view;
determining a first exposure parameter of the third sensor based at least on the first brightness information; controlling the third sensor to capture a first image under the first exposure parameter; and acquiring depth information of the scene in the first direction based on the first image and an image captured by the first sensor;
determining a second exposure parameter of the third sensor based at least on the second brightness information; controlling the third sensor to capture a second image under the second exposure parameter; and acquiring depth information of the scene in the second direction based on the second image and an image captured by the second sensor;
controlling the movable platform to move in space according to the depth information.
In some examples, when the third sensor is in a first state and in a second state, the depth information of the scene in the second direction obtained from the images respectively captured by the third sensor is consistent;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded;
the second state includes: neither the first partial field of view nor the second partial field of view of the third sensor is occluded.
In some examples, the intensity of the light reflected by the scene in the second partial field of view is higher than the maximum light intensity that the third sensor can sense.
In some examples, the method further includes: acquiring a moving direction of the movable platform;
the first exposure parameter of the third sensor is determined based at least on the first brightness information of the third sensor and the moving direction.
In some examples, when the third sensor is in the first state, the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the first direction is inconsistent with the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the second direction;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
In some examples, determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
if the moving direction falls within the first direction, determining the first exposure parameter of the third sensor based on the first brightness information;
if the moving direction is biased toward the first direction relative to the second direction, determining the first exposure parameter based on the first brightness information and the second brightness information, wherein the weight of the first brightness information is greater than the weight of the second brightness information.
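The weighted determination described above can be illustrated as follows; `fused_brightness` and the specific weight values are hypothetical, chosen only to show the first brightness information dominating when the moving direction is biased toward the first direction:

```python
def fused_brightness(b1, b2, w1, w2):
    """Weighted fusion of the two partial-FOV brightness values.

    When the moving direction is biased toward the first direction, the
    weight of the first brightness information (w1) exceeds that of the
    second (w2), so the fused value leans toward b1.
    """
    assert w1 > w2 >= 0
    return (w1 * b1 + w2 * b2) / (w1 + w2)
```

For example, with brightness values 100 and 40 and weights 0.75 / 0.25, the fused brightness is 85, closer to the first field of view's value than a plain average would be.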
In some examples, the first exposure parameter includes an exposure time and an analog gain;
determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
calculating the exposure time and the analog gain of the third sensor with an automatic exposure algorithm, based at least on the first brightness information.
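A toy automatic-exposure calculation in the spirit of the step above, shown only as an assumed illustration (the target brightness, base exposure, and saturation limit are invented constants, not values from the present application): exposure time is scaled toward a target mean brightness, and analog gain absorbs the remainder once the exposure time saturates.

```python
def auto_exposure(mean_brightness, target=128.0, base_exposure_us=1000.0,
                  max_exposure_us=8000.0):
    # Scale exposure so the mean brightness would reach the target.
    ratio = target / max(mean_brightness, 1e-6)
    exposure = base_exposure_us * ratio
    gain = 1.0
    if exposure > max_exposure_us:
        # Exposure time saturated: recover the rest via analog gain.
        gain = exposure / max_exposure_us
        exposure = max_exposure_us
    return exposure, gain
```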
In some examples, after the first exposure parameter of the third sensor is determined, the method further includes:
adjusting the first exposure parameter of the third sensor based on a first exposure parameter of the first sensor.
In some examples, the first exposure parameter of the third sensor is adjusted by:
reducing the difference between the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, so that the image captured by the first sensor and the image captured by the third sensor satisfy a preset brightness condition and/or a preset image content condition.
In some examples, adjusting the first exposure parameter of the third sensor includes:
determining a value range for the exposure parameter of the third sensor according to the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, and constraining the first exposure parameter of the third sensor to lie within that value range.
In some examples, the value range of the exposure parameter is determined based on the first exposure parameter of the first sensor and its weight, and the first exposure parameter of the third sensor and its weight;
wherein the weight of the first exposure parameter of the third sensor is greater than the weight of the first exposure parameter of the first sensor.
In some examples, the exposure parameter includes an ideal exposure time and an exposure time;
adjusting the first exposure parameter of the third sensor based on the value range of the exposure parameter includes:
if the determined ideal exposure time and exposure time of the third sensor are not within the value range of the exposure parameter, setting the ideal exposure time and the exposure time of the third sensor to a boundary value of the value range.
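The range construction and boundary clamping described above might be sketched as follows; the weighted-mean center and the ±25% spread are assumptions for illustration, not the patented formula:

```python
def exposure_range(p_first, w_first, p_third, w_third, spread=0.25):
    # Hypothetical range: centered on the weighted mean of the two sensors'
    # first exposure parameters, with the third sensor weighted more heavily.
    assert w_third > w_first >= 0
    center = (w_first * p_first + w_third * p_third) / (w_first + w_third)
    return center * (1.0 - spread), center * (1.0 + spread)


def constrain_exposure(ideal_time_us, time_us, lo, hi):
    # Values falling outside the allowed range snap to its boundary value.
    def clamp(v):
        return min(max(v, lo), hi)
    return clamp(ideal_time_us), clamp(time_us)
```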
In some examples, the exposure parameter further includes an analog gain and a digital gain;
adjusting the exposure parameter of the third sensor based on the value range of the exposure parameter further includes:
adjusting the analog gain and the digital gain of the third sensor based on the adjusted exposure time and the ideal exposure time.
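One possible way to carry out the gain adjustment described above, under the assumption (ours, not the application's) that the ratio of ideal to clamped exposure time is recovered first through analog gain up to an invented cap, then through digital gain:

```python
def compensate_gains(ideal_exposure_us, actual_exposure_us,
                     max_analog_gain=16.0):
    # The clamped exposure time falls short of the ideal by this factor.
    shortfall = ideal_exposure_us / actual_exposure_us
    # Prefer analog gain (less quantization noise), cap it, and let
    # digital gain make up whatever remains.
    analog = min(shortfall, max_analog_gain)
    digital = shortfall / analog
    return analog, digital
```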
In some examples, the method further includes: acquiring a change in the moving speed of the movable platform, wherein the change in the moving speed is positively correlated with any of the following:
the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameters, or the update frequency of the weights of the brightness information.
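The stated positive correlation can be illustrated with an assumed linear model; the function name and the constant `k` are hypothetical, and the only property carried over from the text is that a larger speed change yields a higher update frequency:

```python
def update_frequency(base_hz, speed_change, k=0.5):
    # Positively correlated with the magnitude of the speed change:
    # the faster the platform's speed varies, the more often direction,
    # exposure parameters, or brightness weights are refreshed.
    return base_hz * (1.0 + k * abs(speed_change))
```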
In some examples, the method further includes: controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter.
In some examples, when the third sensor is in the first state, the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the first direction is consistent with the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the second direction;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
In some examples, the switching of the exposure parameter of the third sensor between the first exposure parameter and the second exposure parameter is performed when the image acquisition cycle of the third sensor satisfies a preset condition.
In some examples, controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter includes:
if the current image acquisition cycle belongs to a first time slice, controlling the third sensor to capture the first image under the first exposure parameter;
if the current image acquisition cycle belongs to a second time slice, controlling the third sensor to capture the second image under the second exposure parameter.
In some examples, the duration of a time slice equals the time taken by the third sensor to capture one frame of image.
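A minimal sketch of the per-frame time-slice alternation, assuming (for illustration only) that even-numbered frames fall in the first time slice and odd-numbered frames in the second, with one time slice per frame as stated above:

```python
def exposure_for_frame(frame_idx):
    # One time slice per frame: even frames use the first exposure
    # parameter, odd frames the second (an assumed, illustrative scheme).
    return "first" if frame_idx % 2 == 0 else "second"
```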
In some examples, the method further includes:
acquiring an image sequence output by the third sensor, wherein each image in the image sequence corresponds to an image frame number;
acquiring a correspondence between image frame numbers and time slices;
retrieving, from the image sequence and by using the image frame numbers together with the correspondence, the first image captured under the first exposure parameter.
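Selecting the first images from the output sequence via the frame-number-to-time-slice correspondence might look like this; `slice_of` and its even/odd mapping are illustrative assumptions, not the application's actual correspondence:

```python
def slice_of(frame_no):
    # Illustrative correspondence: even frame numbers -> first time slice.
    return "first" if frame_no % 2 == 0 else "second"


def first_exposure_images(image_sequence, slice_of):
    # Keep only frames whose frame number maps to the first time slice,
    # i.e. frames captured under the first exposure parameter.
    return [img for img in image_sequence
            if slice_of(img["frame_no"]) == "first"]
```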
In some examples, the first sensor, the second sensor, and the third sensor are fisheye cameras.
In some examples, the movable platform includes an unmanned aerial vehicle.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present application may be used to execute the methods described in the foregoing method embodiments; for their specific implementation, reference may be made to the descriptions of those method embodiments, which are not repeated here for brevity.
As shown in FIG. 11, an embodiment of the present application further provides a movable platform 1100, which includes a first sensor 1101, a second sensor 1102, and a third sensor 1103;
the first sensor 1101 and the third sensor 1103 have an overlapping first partial field of view;
the second sensor 1102 and the third sensor 1103 have an overlapping second partial field of view;
the first partial field of view is used to observe a scene in a first direction of the movable platform 1100; the second partial field of view is used to observe a scene in a second direction of the movable platform 1100.
The movable platform further includes a processor 1104, a memory 1105, and a computer program stored on the memory and executable by the processor.
The movable platform also includes a power system 1106 for driving the movable platform to move in space.
When executing the computer program, the processor implements the following method:
acquiring first brightness information of the first partial field of view of the third sensor and second brightness information of the second partial field of view;
determining a first exposure parameter of the third sensor based at least on the first brightness information; controlling the third sensor to capture a first image under the first exposure parameter; and acquiring depth information of the scene in the first direction based on the first image and an image captured by the first sensor;
determining a second exposure parameter of the third sensor based at least on the second brightness information; controlling the third sensor to capture a second image under the second exposure parameter; and acquiring depth information of the scene in the second direction based on the second image and an image captured by the second sensor;
controlling the movable platform to move in space according to the depth information.
In some examples, when the third sensor is in a first state and in a second state, the depth information of the scene in the second direction obtained from the images respectively captured by the third sensor is consistent;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded;
the second state includes: neither the first partial field of view nor the second partial field of view of the third sensor is occluded.
In some examples, the intensity of the light reflected by the scene in the second partial field of view is higher than the maximum light intensity that the third sensor can sense.
In some examples, the processor further executes: acquiring a moving direction of the movable platform;
the first exposure parameter of the third sensor is determined based at least on the first brightness information of the third sensor and the moving direction.
In some examples, when the third sensor is in the first state, the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the first direction is inconsistent with the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the second direction;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
In some examples, determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
if the moving direction falls within the first direction, determining the first exposure parameter of the third sensor based on the first brightness information;
if the moving direction is biased toward the first direction relative to the second direction, determining the first exposure parameter based on the first brightness information and the second brightness information, wherein the weight of the first brightness information is greater than the weight of the second brightness information.
In some examples, the first exposure parameter includes an exposure time and an analog gain;
determining the first exposure parameter of the third sensor based at least on the first brightness information includes:
calculating the exposure time and the analog gain of the third sensor with an automatic exposure algorithm, based at least on the first brightness information.
In some examples, after the first exposure parameter of the third sensor is determined, the method further includes:
adjusting the first exposure parameter of the third sensor based on a first exposure parameter of the first sensor.
In some examples, the processor adjusts the first exposure parameter of the third sensor by:
reducing the difference between the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, so that the image captured by the first sensor and the image captured by the third sensor satisfy a preset brightness condition and/or a preset image content condition.
In some examples, adjusting the first exposure parameter of the third sensor includes:
determining a value range for the exposure parameter of the third sensor according to the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, and constraining the first exposure parameter of the third sensor to lie within that value range.
In some examples, the value range of the exposure parameter is determined based on the first exposure parameter of the first sensor and its weight, and the first exposure parameter of the third sensor and its weight;
wherein the weight of the first exposure parameter of the third sensor is greater than the weight of the first exposure parameter of the first sensor.
In some examples, the exposure parameter includes an ideal exposure time and an exposure time;
adjusting the first exposure parameter of the third sensor based on the value range of the exposure parameter includes:
if the determined ideal exposure time and exposure time of the third sensor are not within the value range of the exposure parameter, setting the ideal exposure time and the exposure time of the third sensor to a boundary value of the value range.
In some examples, the exposure parameter further includes an analog gain and a digital gain;
adjusting the exposure parameter of the third sensor based on the value range of the exposure parameter further includes:
adjusting the analog gain and the digital gain of the third sensor based on the adjusted exposure time and the ideal exposure time.
In some examples, the processor further executes: acquiring a change in the moving speed of the movable platform, wherein the change in the moving speed is positively correlated with any of the following:
the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameters, or the update frequency of the weights of the brightness information.
In some examples, the processor further executes: controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter.
In some examples, when the third sensor is in the first state, the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the first direction is consistent with the depth information of the scene in the second direction obtained from an image captured by the third sensor while the movable platform moves in the second direction;
wherein the first state includes: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
In some examples, the switching of the exposure parameter of the third sensor between the first exposure parameter and the second exposure parameter is performed when the image acquisition cycle of the third sensor satisfies a preset condition.
In some examples, controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter includes:
if the current image acquisition cycle belongs to a first time slice, controlling the third sensor to capture the first image under the first exposure parameter;
if the current image acquisition cycle belongs to a second time slice, controlling the third sensor to capture the second image under the second exposure parameter.
In some examples, the duration of a time slice equals the time taken by the third sensor to capture one frame of image.
In some examples, the processor further executes:
acquiring an image sequence output by the third sensor, wherein each image in the image sequence corresponds to an image frame number;
acquiring a correspondence between image frame numbers and time slices;
retrieving, from the image sequence and by using the image frame numbers together with the correspondence, the first image captured under the first exposure parameter.
In some examples, the first sensor, the second sensor, and the third sensor are fisheye cameras.
In some examples, the movable platform includes an unmanned aerial vehicle.
In practical applications, the movable platform may further include other hardware as required. For example, the device may include a processor, a memory, input/output interfaces, a communication interface, and a bus, wherein the processor, the memory, the input/output interfaces, and the communication interface are communicatively connected to one another within the device through the bus.
The processor may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs so as to implement the technical solutions provided in the embodiments of this specification. The processor may further include a graphics card, such as an Nvidia Titan X or a 1080 Ti graphics card.
The memory may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1105 may store an operating system and other application programs; when the technical solutions provided in the embodiments of this specification are implemented through software or firmware, the relevant program code is stored in the memory 1105 and invoked for execution by the processor.
The input/output interface is used to connect an input/output module to enable information input and output. The input/output module may be configured in the device as a component (not shown in the figure) or externally connected to the device to provide the corresponding functions. The input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors; the output devices may include a display, a speaker, a vibrator, an indicator light, and the like.
The communication interface is used to connect a communication module to enable communication between this device and other devices. The communication module may communicate in a wired manner (e.g., USB or network cable) or in a wireless manner (e.g., mobile network, Wi-Fi, or Bluetooth).
The bus includes a pathway that carries information between the components of the device (e.g., the processor, the memory, the input/output interfaces, and the communication interface).
It should be noted that although only the processor, the memory, the input/output interfaces, the communication interface, and the bus are shown for the above device, in a specific implementation the device may further include other components necessary for normal operation. In addition, those skilled in the art will understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the control method for a movable platform described in any of the foregoing embodiments are implemented.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of this specification or in certain parts thereof.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementing device is a computer, which may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the corresponding parts of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and when implementing the solutions of the embodiments of this specification, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solutions of the embodiments; those of ordinary skill in the art can understand and implement them without creative effort.
The above are only specific implementations of the embodiments of this specification. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the embodiments of this specification, and such improvements and refinements shall also fall within the scope of protection of the embodiments of this specification.

Claims (25)

  1. A control method for a movable platform, wherein the movable platform comprises a first sensor, a second sensor and a third sensor;
    the first sensor and the third sensor have an overlapping first partial field of view;
    the second sensor and the third sensor have an overlapping second partial field of view;
    the first partial field of view is used to observe a scene in a first direction of the movable platform, and the second partial field of view is used to observe a scene in a second direction of the movable platform;
    the method comprises:
    acquiring first brightness information of the first partial field of view and second brightness information of the second partial field of view of the third sensor;
    determining a first exposure parameter of the third sensor based at least on the first brightness information; controlling the third sensor to capture a first image under the first exposure parameter; and acquiring depth information of the scene in the first direction based on the first image and an image captured by the first sensor;
    determining a second exposure parameter of the third sensor based at least on the second brightness information; controlling the third sensor to capture a second image under the second exposure parameter; and acquiring depth information of the scene in the second direction based on the second image and an image captured by the second sensor;
    and controlling the movable platform to move in space according to the depth information.
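The capture-and-depth cycle of claim 1 can be sketched as follows. This is an illustrative Python sketch only: all function names, the simple target-brightness exposure formula, and the stub stereo routine are assumptions for illustration, not the claimed implementation.

```python
def exposure_for(brightness, target=0.5, base_ms=10.0,
                 min_ms=0.1, max_ms=30.0):
    # Toy auto-exposure (assumption): scale a nominal exposure time
    # toward a target mean brightness, clamped to a supported range.
    scaled = base_ms * target / max(brightness, 1e-6)
    return min(max(scaled, min_ms), max_ms)


def control_step(b1, b2, capture, stereo_depth):
    # One cycle of the claimed method: meter each overlapping sub-field
    # of the shared (third) sensor, expose it separately per direction,
    # and derive per-direction depth from the matching stereo pair.
    e1 = exposure_for(b1)                       # first exposure parameter
    depth_dir1 = stereo_depth(capture("third", e1), capture("first", e1))
    e2 = exposure_for(b2)                       # second exposure parameter
    depth_dir2 = stereo_depth(capture("third", e2), capture("second", e2))
    return depth_dir1, depth_dir2
```

`capture` and `stereo_depth` stand in for the sensor driver and stereo-matching pipeline, which the claims do not specify.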
  2. The method according to claim 1, wherein when the third sensor is in a first state and when it is in a second state, the depth information of the scene in the second direction obtained based on the images respectively captured by the third sensor is consistent;
    wherein the first state comprises: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded;
    and the second state comprises: the first partial field of view of the third sensor is not occluded, and the second partial field of view is not occluded.
  3. The method according to claim 2, wherein the intensity of light reflected by the scene in the second partial field of view is higher than the maximum light intensity that the third sensor can sense.
  4. The method according to claim 1, further comprising:
    acquiring a moving direction of the movable platform;
    wherein the first exposure parameter of the third sensor is determined based at least on the first brightness information and the moving direction.
  5. The method according to claim 4, wherein when the third sensor is in a first state, the depth information of the scene in the second direction obtained based on images captured by the third sensor while the movable platform moves in the first direction is inconsistent with the depth information of the scene in the second direction obtained based on images captured by the third sensor while the movable platform moves in the second direction;
    wherein the first state comprises: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
  6. The method according to claim 4, wherein the determining a first exposure parameter of the third sensor based at least on the first brightness information comprises:
    if the moving direction falls within the first direction, determining the first exposure parameter of the third sensor based on the first brightness information;
    if the moving direction is biased toward the first direction relative to the second direction, determining the first exposure parameter based on the first brightness information and the second brightness information, wherein the weight of the first brightness information is greater than the weight of the second brightness information.
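The direction-dependent weighting of claim 6 can be illustrated with a minimal sketch. The linear ramp between the two directions, the angle convention, and all names are assumptions; claim 6 only requires that the first brightness weight exceed the second when the heading is biased toward the first direction.

```python
def direction_weighted_brightness(b1, b2, heading_deg,
                                  dir1_deg=0.0, dir2_deg=90.0):
    # Hypothetical weighting: the closer the heading is to the first
    # direction, the more the first sub-field's brightness counts.
    span = abs(dir2_deg - dir1_deg)
    t = min(max(abs(heading_deg - dir1_deg) / span, 0.0), 1.0)
    if t == 0.0:
        # Heading falls within the first direction: use b1 alone.
        return b1, (1.0, 0.0)
    w1, w2 = 1.0 - t, t          # w1 > w2 while biased toward direction 1
    return w1 * b1 + w2 * b2, (w1, w2)
```

The fused brightness would then feed the auto-exposure computation of claim 7.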
  7. The method according to claim 4, wherein the first exposure parameter comprises an exposure time and an analog gain;
    and the determining a first exposure parameter of the third sensor based at least on the first brightness information comprises:
    calculating the exposure time and the analog gain of the third sensor by using an automatic exposure algorithm based at least on the first brightness information.
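A toy auto-exposure split of the kind claim 7 names might look like the following. The patent does not disclose the algorithm; the target brightness, the time-before-gain priority, and the limits below are all assumptions for illustration.

```python
def auto_expose(brightness, target=0.5,
                max_time_ms=20.0, max_analog_gain=8.0):
    # Hypothetical AE split: first extend exposure time up to the
    # frame-rate limit, then make up the remainder with analog gain.
    needed = target / max(brightness, 1e-6)    # total scale factor
    time_ms = min(needed * 10.0, max_time_ms)  # 10 ms nominal exposure
    gain = min(max(needed * 10.0 / time_ms, 1.0), max_analog_gain)
    return time_ms, gain
```

A dim scene thus first lengthens the exposure time and only then raises the analog gain, which keeps noise low when light permits.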
  8. The method according to claim 6, wherein after the first exposure parameter of the third sensor is determined, the method further comprises:
    adjusting the first exposure parameter of the third sensor based on a first exposure parameter of the first sensor.
  9. The method according to claim 8, wherein the first exposure parameter of the third sensor is adjusted by:
    reducing the difference between the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, so that the image captured by the first sensor and the image captured by the third sensor satisfy a preset brightness condition and/or a preset image-content condition.
  10. The method according to claim 8, wherein the adjusting the first exposure parameter of the third sensor comprises:
    determining a value range of the exposure parameter of the third sensor according to the first exposure parameter of the first sensor and the first exposure parameter of the third sensor, and constraining the first exposure parameter of the third sensor to lie within the value range.
  11. The method according to claim 10, wherein the value range of the exposure parameter is determined based on the first exposure parameter of the first sensor and its weight, and the first exposure parameter of the third sensor and its weight;
    wherein the weight of the first exposure parameter of the third sensor is greater than the weight of the first exposure parameter of the first sensor.
  12. The method according to claim 10, wherein the exposure parameter comprises an ideal exposure time and an exposure time;
    and the adjusting the first exposure parameter of the third sensor based on the value range of the exposure parameter comprises:
    if the determined ideal exposure time and exposure time of the third sensor are not within the value range of the exposure parameter, adjusting the ideal exposure time and the exposure time of the third sensor to boundary values of the value range.
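The weighted-range construction of claims 10-12 can be sketched as follows. The specific weights, the multiplicative tolerance, and all names are illustrative assumptions; the claims only require that the range be built from both sensors' parameters with the third sensor's parameter weighted more heavily, and that out-of-range values be moved to a boundary.

```python
def clamp_to_range(e3, e1, w3=0.7, w1=0.3, tolerance=0.2):
    # Build an allowed range around a weighted combination of the two
    # sensors' exposure parameters (shared sensor weighted more, per
    # claim 11), then clamp an out-of-range value to the nearer
    # boundary (claim 12).
    center = w3 * e3 + w1 * e1
    lo, hi = center * (1 - tolerance), center * (1 + tolerance)
    return min(max(e3, lo), hi), (lo, hi)
```

Keeping the third sensor's exposure near the first sensor's makes the overlapping stereo pair easier to match, which is the point of claim 9's brightness condition.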
  13. The method according to claim 12, wherein the exposure parameter further comprises an analog gain and a digital gain;
    and the adjusting the exposure parameter of the third sensor based on the value range of the exposure parameter further comprises:
    adjusting the analog gain and the digital gain of the third sensor based on the adjusted exposure time and the ideal exposure time.
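One plausible reading of claim 13 is that exposure lost to the clamped time is recovered as gain. The analog-first split below and the gain cap are assumptions, not disclosed by the patent.

```python
def redistribute_gain(ideal_time_ms, actual_time_ms,
                      max_analog_gain=8.0):
    # Whatever exposure the clamped time can no longer provide is
    # recovered as gain: analog gain first (less noisy), remainder
    # as digital gain.
    shortfall = max(ideal_time_ms / actual_time_ms, 1.0)
    analog = min(shortfall, max_analog_gain)
    digital = shortfall / analog
    return analog, digital
```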
  14. The method according to any one of claims 4 to 13, further comprising:
    acquiring a change in the moving speed of the movable platform, wherein the change in the moving speed is positively correlated with any one of the following:
    the acquisition frequency of the moving direction of the movable platform, the update frequency of the exposure parameters, or the update frequency of the weights of the brightness information.
  15. The method according to claim 1, further comprising:
    controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter.
  16. The method according to claim 15, wherein when the third sensor is in a first state, the depth information of the scene in the second direction obtained based on images captured by the third sensor while the movable platform moves in the first direction is consistent with the depth information of the scene in the second direction obtained based on images captured by the third sensor while the movable platform moves in the second direction;
    wherein the first state comprises: the first partial field of view of the third sensor is occluded, and the second partial field of view is not occluded.
  17. The method according to claim 15, wherein the controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter is performed when the image acquisition period of the third sensor satisfies a preset condition.
  18. The method according to claim 17, wherein the controlling the exposure parameter of the third sensor to switch between the first exposure parameter and the second exposure parameter comprises:
    if the current image acquisition period belongs to a first time slice, controlling the third sensor to capture the first image under the first exposure parameter;
    if the current image acquisition period belongs to a second time slice, controlling the third sensor to capture the second image under the second exposure parameter.
  19. The method according to claim 18, wherein the duration of a time slice is the duration for the third sensor to capture one frame of image.
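With one-frame time slices (claim 19), the switching of claim 18 reduces to a fixed frame-to-slice mapping. The even/odd assignment below is an assumption; any fixed mapping of frames to the two slices would satisfy the claims.

```python
def exposure_for_frame(frame_no, first_params, second_params):
    # Hypothetical mapping: even frames use the first exposure
    # parameter (first direction), odd frames the second.
    return first_params if frame_no % 2 == 0 else second_params
```

The sensor thus interleaves the two exposure settings frame by frame, serving both observation directions with a single imager.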
  20. The method according to claim 18, further comprising:
    acquiring an image sequence output by the third sensor, wherein each image in the image sequence corresponds to an image frame number;
    acquiring a correspondence between image frame numbers and time slices;
    and acquiring, from the image sequence, the first image captured under the first exposure parameter by using the image frame numbers and the correspondence.
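The frame selection of claim 20 can be sketched in a few lines. `slice_of` is a hypothetical callable standing in for the stored frame-number-to-time-slice correspondence; the claims do not specify how it is represented.

```python
def images_under_first_exposure(sequence, slice_of):
    # Keep only frames whose frame number maps to the first time
    # slice, i.e. frames captured under the first exposure parameter.
    return [img for frame_no, img in sequence if slice_of(frame_no) == 1]
```

The same filter with `== 2` would recover the second-exposure images for the other stereo pair.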
  21. The method according to claim 1, wherein the first sensor, the second sensor and the third sensor are fisheye cameras.
  22. The method according to claim 1, wherein the movable platform comprises an unmanned aerial vehicle.
  23. A control apparatus for a movable platform, wherein the movable platform comprises a first sensor, a second sensor and a third sensor;
    the first sensor and the third sensor have an overlapping first partial field of view;
    the second sensor and the third sensor have an overlapping second partial field of view;
    the first partial field of view is used to observe a scene in a first direction of the movable platform, and the second partial field of view is used to observe a scene in a second direction of the movable platform;
    and the apparatus comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 22.
  24. A movable platform, comprising a first sensor, a second sensor and a third sensor;
    wherein the first sensor and the third sensor have an overlapping first partial field of view;
    the second sensor and the third sensor have an overlapping second partial field of view;
    the first partial field of view is used to observe a scene in a first direction of the movable platform, and the second partial field of view is used to observe a scene in a second direction of the movable platform;
    and the movable platform further comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 22.
  25. A computer-readable storage medium, having stored thereon computer instructions which, when executed, implement the steps of the control method for a movable platform according to any one of claims 1 to 22.
PCT/CN2021/128983 2021-11-05 2021-11-05 Movable platform control method and apparatus, and movable platform and storage medium WO2023077421A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180101633.XA CN117837160A (en) 2021-11-05 2021-11-05 Control method and device for movable platform, movable platform and storage medium
PCT/CN2021/128983 WO2023077421A1 (en) 2021-11-05 2021-11-05 Movable platform control method and apparatus, and movable platform and storage medium


Publications (1)

Publication Number Publication Date
WO2023077421A1 2023-05-11

Family

ID=86240474


Country Status (2)

CN (1) CN117837160A (en)
WO (1) WO2023077421A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194902A1 (en) * 2009-02-05 2010-08-05 National Chung Cheng University Method for high dynamic range imaging
CN105979238A (en) * 2016-07-05 2016-09-28 深圳市德赛微电子技术有限公司 Method for controlling global imaging consistency of multiple cameras
CN107071291A (en) * 2016-12-28 2017-08-18 深圳众思科技有限公司 Image processing method, device and electronic equipment
CN108377345A (en) * 2018-04-11 2018-08-07 浙江大华技术股份有限公司 A kind of exposure parameter values determine method, apparatus, multi-lens camera and storage medium
CN108401457A (en) * 2017-08-25 2018-08-14 深圳市大疆创新科技有限公司 A kind of control method of exposure, device and unmanned plane
CN108933902A (en) * 2018-07-27 2018-12-04 顺丰科技有限公司 Panoramic picture acquisition device builds drawing method and mobile robot
CN109417604A (en) * 2017-11-30 2019-03-01 深圳市大疆创新科技有限公司 Variation calibration method, binocular vision system and computer readable storage medium
CN109698913A (en) * 2018-12-29 2019-04-30 深圳市道通智能航空技术有限公司 A kind of image display method, device and electronic equipment
CN111107303A (en) * 2018-10-25 2020-05-05 中华映管股份有限公司 Driving image system and driving image processing method
CN112004029A (en) * 2019-05-27 2020-11-27 Oppo广东移动通信有限公司 Exposure processing method, exposure processing device, electronic apparatus, and computer-readable storage medium


Also Published As

Publication number Publication date
CN117837160A (en) 2024-04-05
