WO2021195879A1 - Image sensor protection method and apparatus, image acquisition apparatus, and unmanned aerial vehicle - Google Patents

Image sensor protection method and apparatus, image acquisition apparatus, and unmanned aerial vehicle

Info

Publication number
WO2021195879A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
image
target object
image sensor
infrared
Application number
PCT/CN2020/082182
Other languages
English (en)
French (fr)
Inventor
张青涛
赵新涛
庹伟
王浩伟
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080004881.8A (CN112689989A)
Priority to PCT/CN2020/082182 (WO2021195879A1)
Publication of WO2021195879A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N5/00 — Details of television systems
    • H04N5/30 — Transforming light or analogous information into electric information
    • H04N5/33 — Transforming infrared radiation

Definitions

  • This application relates to the technical field of sensor protection, and in particular to an image sensor protection method, device, image acquisition device and unmanned aerial vehicle.
  • Due to their material characteristics, some image sensors are prone to damage when shooting certain high-energy objects. For example, if an infrared sensor points directly at objects such as the sun or a laser for a long time, these objects have a burning effect on the sensor and can cause temporary or permanent damage. Therefore, for image sensors that are easily damaged in this way, it is necessary to determine during use whether such objects are within the sensor's viewing angle, so as to decide whether to take protective measures to prevent the sensor from being damaged.
  • The present application provides an image sensor protection method, device, image acquisition device, and unmanned aerial vehicle.
  • In a first aspect, an image sensor protection method is provided, including: if a first image collected by a first image sensor contains a target object, determining whether the target object is within the viewing angle range of a second image sensor according to the positional relationship between the first image sensor and the second image sensor, where the target object is an object that causes damage to the second image sensor; and determining whether to perform a protection operation for the second image sensor according to the determination result.
  • In another aspect, an image sensor protection device is provided, comprising a processor, a memory, and a computer program stored on the memory. When the processor executes the computer program, the following steps are implemented: if the first image collected by the first image sensor contains a target object, determining whether the target object is within the viewing angle range of the second image sensor according to the positional relationship between the two sensors, where the target object is an object that causes damage to the second image sensor; and determining whether to perform a protection operation for the second image sensor according to the determination result.
  • In another aspect, an image acquisition device is provided, including a first image sensor, a second image sensor, and an image sensor protection device. The image sensor protection device includes a processor, a memory, and a computer program stored on the memory; when the processor executes the computer program, the steps described above are implemented: if the first image contains a target object, determining whether the target object is within the viewing angle range of the second image sensor, and determining whether to perform a protection operation accordingly.
  • In another aspect, an unmanned aerial vehicle is provided, including an image acquisition device with a first image sensor, a second image sensor, and an image sensor protection device. The image sensor protection device includes a processor, a memory, and a computer program stored on the memory; when the processor executes the computer program, the same steps are implemented: if the first image contains a target object, determining whether it is within the viewing angle range of the second image sensor, and determining whether to perform a protection operation accordingly.
  • In another aspect, an infrared sensor protection method is provided, including: performing target object detection on an infrared image collected by the infrared sensor; determining whether the target object is within the viewing angle range of the infrared sensor according to the detection result and sensor statistical information, the sensor statistical information being collected based on at least one sensor other than the infrared sensor; and determining, according to the determination result, whether to perform a protection operation for the infrared sensor.
  • In another aspect, an infrared sensor protection device is provided, including a processor and a memory. The memory is used to store a computer program; when the processor executes the computer program, the following steps are implemented: performing target object detection on the infrared image collected by the infrared sensor; determining whether the target object is within the viewing angle range of the infrared sensor according to the detection result and sensor statistical information, the sensor statistical information being collected based on at least one sensor other than the infrared sensor; and determining, according to the determination result, whether to perform the protection operation of the infrared sensor.
  • In another aspect, an image acquisition device is provided, including an infrared sensor, at least one other sensor other than the infrared sensor, and an infrared sensor protection device. The infrared sensor protection device includes a processor, a memory, and a computer program stored on the memory; when the processor executes the computer program, the steps described above are implemented: detecting the target object on the infrared image, determining whether it is within the viewing angle range of the infrared sensor according to the detection result and the sensor statistical information collected based on the at least one other sensor, and determining whether to perform the protection operation of the infrared sensor accordingly.
  • In another aspect, an unmanned aerial vehicle is provided, including an image acquisition device with an infrared sensor, at least one other sensor other than the infrared sensor, and an infrared sensor protection device. The infrared sensor protection device includes a processor, a memory, and a computer program stored on the memory; when the processor executes the computer program, the same steps are implemented: detecting the target object on the infrared image, determining whether it is within the viewing angle range of the infrared sensor according to the detection result and sensor statistical information collected based on at least one sensor other than the infrared sensor, and determining whether to perform the protection operation of the infrared sensor according to the determination result.
  • In another aspect, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, it implements any of the image sensor protection methods or infrared sensor protection methods in the embodiments of this specification.
  • In this way, the accuracy of target object detection in the image can be improved and misjudgment avoided. The detection result of target object detection on the infrared image can be combined with statistical information collected by other sensors, which assists in determining whether the target object is within the viewing angle range of the infrared sensor; whether the target object is present in the infrared sensor's viewing angle is thus determined comprehensively, and a preset protection operation is performed according to the determination result to protect the infrared sensor from damage by the target object.
  • Fig. 1 is a flowchart of an image sensor protection method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the viewing angle relationship of an image sensor according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the viewing angle relationship of an image sensor according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an application scenario of an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the logical structure of an image sensor protection device according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of the logical structure of an image acquisition device according to an embodiment of the present application.
  • Fig. 7 is a flowchart of an infrared sensor protection method according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a device according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of detecting the sun from an infrared image according to an embodiment of the present application.
  • Fig. 10 is a schematic diagram of detecting the sun from an infrared image according to an embodiment of the present application.
  • Fig. 11 is a schematic diagram of another application scenario of an embodiment of the present application.
  • Fig. 12 is a flowchart of an infrared sensor protection method according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the logical structure of an infrared sensor protection device according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the logical structure of another image acquisition device according to an embodiment of the present application.
  • Due to their material characteristics, some image sensors are prone to damage when shooting certain high-energy objects. For example, if an infrared sensor points directly at objects such as the sun or a laser for a long time, these objects have a burning effect on the sensor and can cause temporary or permanent damage. Therefore, for image sensors that are easily damaged in this way, it is necessary to determine during use whether such objects are within the sensor's viewing angle, so as to decide whether to take protective measures to prevent the sensor from being damaged.
  • In some methods, the determination is made by detecting whether the target object exists in the image collected by the image sensor itself. However, this method has relatively large limitations: it requires the image sensor to be in a working state, and an image sensor in a non-working state cannot collect images, so no determination can be made.
  • Moreover, some sensitive image sensors may be damaged even after working for a short time, so using the image collected by the sensor itself to determine whether the target object is within its angle of view carries certain risks. For example, at noon the temperature of the sun is high; if an infrared sensor shoots the sun even for a relatively short time, the sensor may be damaged.
  • In addition, for some image sensors the captured images have low resolution and high noise. If the target object is detected through these images, the detection results are not very accurate, and missed detections or misjudgments easily occur.
  • Accordingly, the present application provides an image sensor protection method, which can determine, from an image collected by another image sensor, whether a target object that would damage the image sensor to be protected is within that sensor's viewing angle. The processing flow of the image sensor protection method is shown in Fig. 1 and includes the following steps:
  • S102: Acquire a first image collected by a first image sensor and perform target object detection on the first image;
  • S104: If the first image contains a target object, determine whether the target object is within the viewing angle range of a second image sensor according to the positional relationship between the first image sensor and the second image sensor, where the target object is an object that causes damage to the second image sensor;
  • S106: Determine whether to perform a protection operation for the second image sensor according to the determination result.
  • the second image sensor of the present application may be an image sensor that is easily damaged by certain target objects and needs to be protected, such as an infrared sensor.
  • The first image sensor of the present application may be an image sensor that is less likely to be damaged by the target object than the second image sensor, or whose captured images have a higher resolution than those of the second image sensor, which is more beneficial for target object detection. For example, the first image sensor may be a visible light sensor and the second image sensor may be an infrared sensor.
  • the first image sensor and the second image sensor may also be other image sensor combinations, which is not limited in this application.
  • the image sensor protection method of the present application may be executed by an image sensor protection device.
  • The image sensor protection device and the protected second image sensor may be integrated on one device; for example, the second image sensor and the image sensor protection device may be integrated in a camera, a drone, or similar equipment.
  • the image sensor protection device and the second image sensor can also be located in two different devices.
  • For example, the first image sensor and the second image sensor are installed on the drone, and the image sensor protection device is integrated on the drone's control terminal. The drone sends the image collected by the first image sensor to the control terminal; after the control terminal determines whether the target object is within the viewing angle range of the second image sensor and decides whether to perform the protection operation, it sends the corresponding instruction to the UAV.
  • the target object of the present application includes any object that causes damage to the second image sensor.
  • For example, the target object may be the sun, or it may be another object such as a laser; this is not limited in the present application.
  • the first image sensor and the second image sensor in this application can be integrated on one device or located in two independent devices.
  • The relative position of the two image sensors can be fixed or changing, as long as the positional relationship between the two can be determined.
  • During use, the first image sensor may be in a working state, while the second image sensor may be in either a working state or a non-working state.
  • In order to prevent the target object from damaging the second image sensor in the working state as much as possible, the second image sensor can be left in the non-working state until it is determined that the target object is not within its viewing angle range.
  • In some embodiments, the first image collected by the first image sensor can be acquired, and target object detection can then be performed on the first image.
  • A general target detection algorithm can be used to detect the target object in the first image. For example, a deep learning algorithm can be used to train an image classification model that detects whether the image contains the target object. If the target object is the sun, algorithms such as the Hough circle detection algorithm, the center-of-gravity method, or the least squares method can also be used to detect whether the first image contains the sun, as sketched below.
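As an illustration of the shape-based screening just mentioned, the following is a minimal sketch using OpenCV, assuming an 8-bit grayscale input; the thresholds, the circularity test (standing in for the center-of-gravity / circle-fit methods named in the text), and the function name are illustrative rather than taken from the patent:

```python
import cv2
import numpy as np

def contains_sun(gray):
    """Screen a grayscale frame for a bright, circular blob."""
    # Keep only near-saturated pixels (the sun is usually the brightest object).
    _, bright = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:                                      # ignore small specks
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
        if circularity > 0.8:
            return True
    return False
```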
  • When the target object is detected in the first image, it means that the target object is within the viewing angle of the first image sensor. At this time, it can be further determined whether the target object is within the viewing angle range of the second image sensor based on the positional relationship between the first image sensor and the second image sensor, and whether to perform a protection operation on the second image sensor is then decided according to the determination result.
  • With this method, the second image sensor can remain in a non-working state and be controlled into the working state only when it is determined that no target object is within its viewing angle range, which better protects the second image sensor. In addition, the target object can be detected from the higher-resolution image collected by the first image sensor, which improves the accuracy of the final determination result and avoids misjudgments that would affect the user experience.
  • When determining whether the target object is within the viewing angle range of the second image sensor based on the positional relationship between the first image sensor and the second image sensor, the determination can be made simply from the relationship between the viewing angles of the two sensors. For example, the overlapping viewing angle of the first image sensor and the second image sensor can be determined from their positional relationship, and whether the target object is within the viewing angle range of the second image sensor can then be determined from this overlapping viewing angle.
  • Each image sensor has a shooting angle of view, and only scenes within that angle of view can fall into the images it collects. Therefore, the overlapping viewing angle of the two sensors can be determined from the sizes of their viewing angles and their positional relationship, and whether a scene within the viewing angle range of the first image sensor will also be within the viewing angle range of the second image sensor can then be determined from the overlapping viewing angle. For example, as shown in Fig. 2, the viewing angle of the first image sensor 21 is θ1 and the viewing angle of the second image sensor 22 is θ2, and the two viewing angles do not overlap at all, that is, the overlapping viewing angle is 0. A scene within the viewing angle range of the first image sensor 21 therefore cannot fall into the viewing angle range of the second image sensor 22; in other words, if the viewing angles of the two sensors do not overlap and the target object is detected in the first image, the target object is not within the viewing angle range of the second image sensor.
  • As shown in Fig. 3, the viewing angle of the first image sensor 31 is θ1 and the viewing angle of the second image sensor 32 is θ2, and the viewing angle of the first image sensor 31 falls almost completely within the viewing angle range of the second image sensor 32. In this case, if the target object is detected in the first image, the target object is also within the viewing angle range of the second image sensor.
  • If the overlapping viewing angle of the first image sensor and the second image sensor is greater than a preset angle, a target object within the viewing angle range of the first image sensor can be considered to also be within the viewing angle range of the second image sensor. For example, if the overlapping viewing angle reaches more than 95% of the viewing angle of the first image sensor, a target object within the viewing angle range of the first image sensor is, with high probability, also within the viewing angle range of the second image sensor; a minimal sketch of this check follows.
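A minimal 1-D sketch of the overlap test, assuming both optical axes lie in a common plane; the function name, the parameterisation by axis direction and field of view, and the 95% threshold usage are illustrative assumptions:

```python
def fov_overlap_ratio(axis1_deg, fov1_deg, axis2_deg, fov2_deg):
    """Fraction of sensor 1's field of view that also lies inside sensor 2's."""
    lo = max(axis1_deg - fov1_deg / 2, axis2_deg - fov2_deg / 2)
    hi = min(axis1_deg + fov1_deg / 2, axis2_deg + fov2_deg / 2)
    overlap_deg = max(0.0, hi - lo)
    return overlap_deg / fov1_deg

# If most of the first sensor's FOV lies inside the second sensor's FOV,
# a target seen by the first sensor is very likely seen by the second too.
likely_shared = fov_overlap_ratio(0.0, 60.0, 2.0, 90.0) > 0.95
```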
  • In the above manner, a simple determination can be made from the viewing angle relationship of the two sensors. In some scenarios, however, the viewing angle relationship alone cannot accurately determine whether the target object is within the viewing angle range of the second image sensor. In that case, the position information of the target object in the first image can be determined, the mapping relationship between the first image and the second image can be determined from the positional relationship between the first image sensor and the second image sensor, and whether the target object is in the second image can then be determined from the position information and the mapping relationship; if so, the target object is within the viewing angle range of the second image sensor.
  • The mapping relationship may be the mapping relationship between the pixel points of the first image and the second image, or the mapping relationship between the coordinate systems of the two images. Since the positional relationship between the two sensors is known, and the internal parameters of the two sensors can be calibrated, the mapping relationship between the first image and the second image can be determined from the positional relationship and the internal parameters of the two sensors.
  • In some embodiments, the mapping relationship may be the mapping relationship between the coordinate system of the first image and the coordinate system of the second image. Suppose this mapping relationship is represented by a matrix H, P1 represents pixel coordinates in the coordinate system of the first image, and P2 represents pixel coordinates in the coordinate system of the second image. Then H can be determined by formula (1), and the relationship between P1 and P2 is given by formula (2), where K is a matrix representing the positional relationship between the first image sensor and the second image sensor, R1 is the internal parameter matrix of the first image sensor, and R2 is the internal parameter matrix of the second image sensor.
  • The position information may be the coordinates of the target object in the coordinate system of the first image. These coordinates can be mapped into the coordinate system of the second image through the mapping relationship H, and it can then be determined whether the mapped coordinates fall within the second image; if so, the second image also contains the target object. A sketch of this mapping follows.
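The source references formulas (1) and (2) without reproducing them. The sketch below assumes a construction common for rigidly co-mounted camera pairs, H = R2 · K · R1⁻¹ and P2 = H · P1 in homogeneous coordinates; the matrices, image sizes, and sun coordinates are illustrative values, not calibration data from the patent:

```python
import numpy as np

# Internal parameter (intrinsic) matrices, assumed calibrated offline; values illustrative.
R1 = np.array([[1200.0, 0.0, 640.0],   # first (e.g. visible light) sensor
               [0.0, 1200.0, 360.0],
               [0.0, 0.0, 1.0]])
R2 = np.array([[800.0, 0.0, 320.0],    # second (e.g. infrared) sensor
               [0.0, 800.0, 256.0],
               [0.0, 0.0, 1.0]])
K = np.eye(3)  # positional relationship of the two sensors; identity if mounted parallel

# Assumed form of formula (1): H maps first-image pixels to second-image pixels.
H = R2 @ K @ np.linalg.inv(R1)

def map_point(H, p1_xy, size2):
    """Formula (2), P2 = H * P1, in homogeneous coordinates; returns the
    mapped pixel and whether it falls inside the second image."""
    p1 = np.array([p1_xy[0], p1_xy[1], 1.0])
    p2 = H @ p1
    x, y = p2[0] / p2[2], p2[1] / p2[2]
    w, h = size2
    return (x, y), (0 <= x < w and 0 <= y < h)

sun_xy = (1000.0, 80.0)                 # detected sun centre in the first image
(x, y), inside = map_point(H, sun_xy, (640, 512))
print(inside)  # True => target is within the second sensor's angle of view
```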
  • After the determination result of whether the target object is within the viewing angle range of the second image sensor is obtained, whether to perform a corresponding protection operation can be decided according to that result. For example, if it is determined that the target object is not within the viewing angle range of the second image sensor, the second image sensor does not need to be protected: if it is in the working state, it can be kept working, and if it is in a non-working state, it can be switched to the working state.
  • If it is determined that the target object is within the viewing angle range of the second image sensor, the shutter of the second image sensor can be closed to prevent the sun or another target object from directly irradiating the sensor, or the second image sensor can be controlled to rotate so that the target object moves out of its viewing angle range. For example, if the second image sensor is installed on a gimbal (PTZ), it can be driven to rotate by controlling the rotation of the gimbal; if it is installed on a movable device such as a drone, the device can be controlled to rotate, driving the second image sensor with it.
  • The direction of rotation, and an estimate of the angle to rotate so that the target object leaves the viewing angle range of the second image sensor, can be determined from the current position of the target object in the second image and the viewing angle of the second image sensor; a sketch of this estimate follows.
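A minimal sketch of such an estimate, assuming a pinhole camera model and considering only horizontal rotation; the function name, the 2° safety margin, and the example values are illustrative assumptions, not from the patent:

```python
import math

def rotation_to_clear_target(pixel_x, image_width, hfov_deg, margin_deg=2.0):
    """Estimate the yaw needed to move a target out of the horizontal FOV."""
    half_fov = hfov_deg / 2.0
    f_pix = (image_width / 2.0) / math.tan(math.radians(half_fov))
    # Angular offset of the target from the optical axis (positive = right).
    offset_deg = math.degrees(math.atan((pixel_x - image_width / 2.0) / f_pix))
    # Rotating away from the target pushes it out through the nearer image edge.
    direction = "yaw_left" if offset_deg >= 0 else "yaw_right"
    needed_deg = half_fov - abs(offset_deg) + margin_deg
    return direction, needed_deg

print(rotation_to_clear_target(pixel_x=500.0, image_width=640, hfov_deg=50.0))
```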
  • After performing the protection operation on the second image sensor, the sensor cannot be kept in the protected state indefinitely, or its normal operation would be affected. For example, after the shutter of the second image sensor is closed, it cannot be kept closed permanently, since the sensor would then be unable to work. Therefore, a trigger mechanism for canceling the protection operation of the second image sensor needs to be set to ensure that the sensor can resume working. In some embodiments, specified conditions can be preset; when one of these specified conditions is triggered, the protection operation of the second image sensor is cancelled.
  • These specified conditions may include the shutter having been closed for a preset period of time, the rotation angle of the second image sensor being detected to have reached a preset angle, or continued detection on the first image determining that the target object is no longer within the viewing angle range of the second image sensor. When such a condition is met, the protection operation of the second image sensor can be cancelled.
  • For example, if the second image sensor is located on movable equipment that is constantly moving, it can be considered that after the equipment has moved for a period of time, the target object is likely no longer within the viewing angle of the second image sensor, and the protection operation can be cancelled. If the second image sensor is installed on equipment that can rotate, such as a gimbal or a drone, it can be detected whether the rotation angle of the second image sensor reaches a preset threshold; if so, the target object is highly likely to be out of the viewing angle range, and the protection operation can likewise be cancelled. Canceling the protection operation may mean reopening the shutter of the second image sensor, or controlling the second image sensor to rotate back, so that it continues to work.
  • For example, the second image sensor can be installed on a drone that includes a gyroscope. When it is determined that the target object is within the viewing angle range of the second image sensor, the shutter can be closed to protect the second image sensor, and the drone can be controlled to rotate, driving the second image sensor to rotate so that the target object moves outside its viewing angle. After the drone's gyroscope detects that the rotation angle has reached the preset angle, the shutter can be reopened so that the second image sensor continues to work.
  • The UAV 40 is equipped with an infrared sensor 41 and a visible light sensor 42, used to collect infrared images and visible light images respectively. The positional relationship between the visible light sensor and the infrared sensor is known; for example, the two are arranged side by side facing the same direction, in which case the overlapping area of their viewing angles is relatively large. Of course, the visible light sensor and the infrared sensor may also face different directions, with an overlapping area between their viewing angle ranges. When the drone performs reconnaissance and inspection tasks outdoors, it will inevitably capture the sun; direct sunlight easily produces a burning effect on the infrared sensor 41 and can cause irreversible damage, so the infrared sensor needs to be protected from the sun. The infrared sensor 41 may be in a working state or in a non-working state. The specific process of determining whether to protect the infrared sensor is as follows:
  • A deep learning algorithm can be used to detect the sun in the visible light image. For example, a large number of images containing the sun can be used in advance to train a preset neural network model, obtaining an image classification model, and the image classification model is then used to detect the sun in the visible light image. Since the positional relationship and the viewing angles of the visible light sensor and the infrared sensor are known, the overlapping viewing angle of the two sensors can be determined from them.
  • If the viewing angles of the two sensors do not overlap at all, then even when the sun is detected in the visible light image, it can be determined that the sun is not within the viewing angle of the infrared sensor. Conversely, if the viewing angles of the visible light sensor and the infrared sensor almost completely overlap, then when the sun is detected in the visible light image, it can be determined that the sun is also within the viewing angle of the infrared sensor.
  • For cases in between, the position information of the sun in the visible light image, i.e. its coordinates in the visible light image coordinate system, can be further determined. The mapping relationship between the visible light image coordinate system and the infrared image coordinate system is then determined from the internal parameters of the infrared sensor, the internal parameters of the visible light sensor, and the positional relationship between the two. The sun's coordinates in the visible light image coordinate system are mapped into the infrared image coordinate system, and it is determined whether the mapped coordinates fall on the infrared image; if so, the sun is also present on the infrared image, that is, within the viewing angle of the infrared sensor.
  • the protection operation of the infrared sensor can be performed to prevent the infrared sensor from being burned by the sun.
  • the shutter of the infrared sensor can be closed, or the drone can be controlled to rotate so that the sun moves out of the range of the infrared sensor's viewing angle.
  • After the shutter has been closed for a preset period of time, it can be reopened so that the infrared sensor resumes working; alternatively, after the drone's gyroscope detects that the drone's rotation angle has reached the preset value, the shutter can be reopened.
  • In this way, even when the infrared sensor is in a non-working state, it can still be determined whether the sun is within its viewing angle, so that the protection operation can be decided and the infrared sensor protected from being burned by the sun as much as possible. Moreover, because the sun is detected from the higher-resolution visible light image, the accuracy of the detection result is also improved.
  • the present application also provides an image sensor protection device.
  • The device 50 includes a processor 51, a memory 52, and a computer program stored on the memory. When the processor executes the computer program, the following steps are implemented: if the first image collected by the first image sensor contains a target object, determining whether the target object is within the viewing angle range of the second image sensor according to the positional relationship between the first image sensor and the second image sensor, where the target object is an object that causes damage to the second image sensor; and determining whether to perform a protection operation for the second image sensor according to the determination result.
  • When the processor is configured to determine whether the target object is within the viewing angle range of the second image sensor according to the positional relationship between the first image sensor and the second image sensor, it is specifically configured to: determine the overlapping viewing angle of the two sensors according to the positional relationship, and determine whether the target object is within the viewing angle range of the second image sensor based on the overlapping viewing angle.
  • When the processor is configured to determine whether the target object is within the viewing angle range of the second image sensor based on the overlapping viewing angle, it is specifically configured to: if the overlapping viewing angle is greater than a preset angle, determine that the target object is within the viewing angle range of the second image sensor.
  • When the processor is configured to determine whether the target object is within the viewing angle range of the second image sensor according to the positional relationship between the first image sensor and the second image sensor, it is specifically configured to: determine the position information of the target object in the first image; determine the mapping relationship between the first image and the second image according to the positional relationship; and determine, from the position information and the mapping relationship, whether the target object is in the second image, and if so, determine that the target object is within the viewing angle range of the second image sensor. In some embodiments, the mapping relationship is a mapping relationship between the coordinate system of the first image and the coordinate system of the second image, and the position information is the coordinates of the target object in the coordinate system of the first image; determining whether the target object is in the second image then includes: mapping the coordinates into the coordinate system of the second image through the mapping relationship, and determining whether the mapped coordinates are in the second image.
  • When the processor is configured to determine whether to perform a protection operation on the second image sensor according to the determination result, it is specifically configured to: if it is determined that the target object is within the viewing angle range of the second image sensor, perform any one of the following protection operations: closing the shutter of the second image sensor; or controlling the second image sensor to rotate so that the target object is outside the viewing angle range of the second image sensor.
  • The processor is further configured to cancel the protection operation of the second image sensor when a specified condition is met. When the processor is used to cancel the protection operation, it is specifically configured to: reopen the shutter of the second image sensor so that it works again, or control the rotation of the second image sensor so that it works again.
  • The specified conditions include: the shutter having been closed for longer than a preset time; or the rotation angle of the second image sensor reaching a preset threshold.
  • In some embodiments, the first image sensor and the second image sensor are installed on a drone, the drone includes a gyroscope, and the rotation angle of the second image sensor is detected by the gyroscope.
  • the first image sensor is a visible light sensor
  • the second image sensor is an infrared sensor
  • the target object includes the sun.
  • the second image sensor is in a non-working state.
  • the present application also provides an image acquisition device.
  • The image acquisition device 60 includes a first image sensor 61, a second image sensor 62, and an image sensor protection device 63. The image sensor protection device includes a processor 631, a memory 632, and a computer program stored on the memory. When the processor executes the computer program, the following steps are implemented: if the first image collected by the first image sensor contains a target object, determining whether the target object is within the viewing angle range of the second image sensor according to the positional relationship between the two sensors, and determining whether to perform a protection operation for the second image sensor according to the determination result.
  • The present application also provides an unmanned aerial vehicle including an image acquisition device. The image acquisition device includes a first image sensor, a second image sensor, and an image sensor protection device. The image sensor protection device includes a processor, a memory, and a computer program stored on the memory; when the processor executes the computer program, the following steps are implemented: if the first image collected by the first image sensor contains a target object, determining whether the target object is within the viewing angle range of the second image sensor, and determining whether to perform a protection operation accordingly.
  • the image acquisition device may be a camera, or the image acquisition device may be a drone, and the drone is equipped with the first image sensor and the second image sensor.
  • the drone is provided with a pan/tilt, the first image sensor and the second image sensor may be arranged on the pan/tilt, and the pan/tilt may control the image sensor to move in a specific direction.
  • When an infrared sensor shoots certain high-energy objects, the sensor is easily damaged. For objects such as the sun or a laser, appearing within the shooting angle of the infrared sensor for a long time has a burning effect on it, resulting in temporary or permanent damage. Therefore, during use of the infrared sensor, it is necessary to determine whether such objects are within its viewing angle, so as to decide whether to take protective measures to avoid damage.
  • Infrared sensors are usually made of materials such as vanadium oxide or polysilicon. With either material, an infrared sensor suffers a burning effect when shooting high-energy objects such as the sun or a laser for a long time, which easily damages the sensor. For example, when shooting outdoors, if the infrared sensor faces the sun for a long time, it is easily burned by the sun's high temperature, causing irreversible device damage. Therefore, protection operations need to be taken to prevent the infrared sensor from being damaged by such high-energy objects.
  • For this purpose, it must be determined whether a target object that would damage the infrared sensor, such as the sun, is within the infrared sensor's viewing angle, and if so, the corresponding protection operation is taken. Determining whether this type of target object appears within the viewing angle is usually done by detecting the target object in the infrared image collected by the infrared sensor, and taking protective measures if it appears in the infrared image. However, because the resolution of infrared images is often poor, detection based only on infrared images is often not very accurate and easily leads to misjudgment.
  • the present application provides a method for protecting an infrared sensor, which can combine information from sensors other than the infrared sensor to assist in determining whether the target object causing damage to the infrared sensor is within the viewing angle range of the infrared sensor.
  • the method is shown in FIG. 7 and includes the following steps:
  • S702: Perform target object detection on the infrared image collected by the infrared sensor, where the target object is an object that causes damage to the infrared sensor;
  • S704: Determine whether the target object is within the viewing angle range of the infrared sensor according to the detection result and sensor statistical information, where the sensor statistical information is collected based on at least one sensor other than the infrared sensor;
  • S706: Determine whether to perform a protection operation for the infrared sensor according to the determination result.
  • the infrared sensor protection method of the present application can be executed by an infrared sensor protection device.
  • the infrared sensor protection device and the infrared sensor can be integrated on one device.
  • the infrared sensor protection device and the infrared sensor can also be located in two different devices.
  • For example, the infrared sensor is installed on the drone, and the infrared sensor protection device is integrated on the drone's control terminal. The drone sends the infrared images it collects and the statistical information of the other sensors to the control terminal; the control terminal determines whether the target object is within the viewing angle of the infrared sensor, decides whether to perform the protection operation, and then sends the corresponding instructions to the drone.
  • the target object of the present application includes any object that causes damage to the infrared sensor.
  • the target object may be the sun, of course, it may also be other objects, which is not limited in the present application.
  • Specifically, the infrared image collected by the infrared sensor can be acquired first, and target object detection is then performed on the infrared image, yielding a detection result indicating whether or not the infrared image contains the target object. Since the infrared image collected by an infrared sensor usually has a relatively low resolution, if the target object is detected only on the infrared image and that detection result alone is used to decide whether to perform the protection operation, the result may be inaccurate.
  • For example, when the target object is the sun, it usually appears in the infrared image as a circular target (in some scenes it can also be an arc target), and the temperature of the sun is often high, so temperature can be combined with shape in the detection. However, there may be other high-temperature circular targets in the infrared image that would be misjudged as the sun, triggering an unnecessary protection operation and seriously affecting the user experience.
  • the present application can combine the statistical information of at least one sensor other than the infrared sensor to assist in determining whether the sun exists in the infrared image, and to improve the accuracy of the determination result.
  • The other sensors can be other image sensors, such as visible light sensors or ultraviolet sensors, or sensors used to detect the motion state of objects, such as motion state sensors, GPS sensors, gyroscopes, or inertial sensors, as long as the statistical information obtained from them can be used to assist in determining whether the target object is within the viewing angle range of the infrared sensor; this is not limited in this application.
  • the other sensors and infrared sensors can be integrated on one device, or they can be located on different devices.
  • For example, if the other sensor is a visible light sensor, the infrared sensor and the visible light sensor can be installed on the same device; the device can be a dual-lens camera, an unmanned aerial vehicle equipped with an infrared sensor and a visible light sensor, or similar.
  • Fig. 8 shows a device to which the infrared sensor protection method provided in this application is applicable. The device includes both an infrared sensor 80 and a visible light sensor 81, and the statistical information of the visible light sensor 81, such as ambient brightness information and current weather information, can be used.
  • Of course, the infrared sensor and the visible light sensor can also be located in two separate devices; as long as the two devices are in the same environment, their viewing angles may overlap to some degree or not overlap at all.
  • If other sensors are used to detect the motion state, they must be able to detect the motion state of the infrared sensor, for example its current height information and rotation angle information; this statistical information is combined to assist in determining whether the target object is within the viewing angle of the infrared sensor. In this way, the accuracy of the determination result can be improved, and the infrared sensor can be effectively protected without misjudgments causing inconvenience to users.
  • In some embodiments, the infrared image can be down-sampled before target object detection is performed on it, so as to reduce the amount of data. The down-sampling factor must ensure that the target object can still be identified from the down-sampled infrared image; under this premise, the smaller the amount of data after down-sampling, the faster the detection. A minimal example follows.
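A minimal down-sampling sketch using OpenCV; the file name and the 2x factor are illustrative assumptions:

```python
import cv2

ir = cv2.imread("frame_ir.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
# Down-sample 2x per axis; INTER_AREA averages pixels, which preserves a small
# hot blob better than nearest-neighbour. The factor must keep the target
# several pixels wide so it can still be recognised after down-sampling.
small = cv2.resize(ir, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
```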
  • the target object that causes damage to the infrared sensor may be an object whose shape meets a preset shape and whose temperature meets a preset temperature threshold.
  • the target object is the sun, its shape is generally a relatively regular circle, and its temperature is generally higher than other things, and it is often the object with the highest temperature in the infrared image.
  • When detecting the target object in the infrared image, preliminary screening can be performed according to the temperature of the target object followed by further screening according to its shape, or preliminary screening according to the shape followed by further screening according to the temperature, so as to accurately detect the target object in the infrared image.
  • In some embodiments, a first area may first be determined from the infrared image, where the first area is an image area whose shape matches that of the target object; the target object is then detected from this preliminarily screened first area according to its temperature. For example, if the temperature of the target object is higher than a certain threshold, it can be determined whether the first area contains a region above that threshold, and if so, that region can be determined to be the image area corresponding to the target object. Considering that the target object may be a high-temperature object such as the sun, the temperature of the entire target object in the infrared image is not only higher than a certain threshold, but its temperature difference from other objects is also higher than a certain threshold. Therefore, after the first area is determined according to the shape of the target object, it can be determined whether the gray value of each pixel in the first area is higher than a first preset threshold, and whether the difference between the gray values of the edge pixels of the first area and the neighboring pixels outside the first area is higher than a second preset threshold; if both conditions are met, the first area is determined to be the image area corresponding to the target object, as sketched below.
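A minimal sketch of these two gray-value checks, assuming an 8-bit infrared image and a binary mask of the candidate (first) area; the threshold values and helper name are illustrative:

```python
import cv2
import numpy as np

def region_matches_target(gray, mask, t1=200, t2=40):
    """Check (a) every pixel of the candidate area exceeds a first gray
    threshold and (b) its edge pixels exceed the adjacent outside pixels
    by a second threshold. t1 and t2 are illustrative values."""
    kernel = np.ones((3, 3), np.uint8)
    inside = gray[mask > 0]
    if inside.size == 0 or inside.min() <= t1:          # condition (a)
        return False
    eroded = cv2.erode(mask, kernel)
    inner_edge = (mask > 0) & (eroded == 0)             # boundary pixels of the area
    dilated = cv2.dilate(mask, kernel)
    outer_ring = (dilated > 0) & (mask == 0)            # neighbours just outside
    return gray[inner_edge].mean() - gray[outer_ring].mean() > t2  # condition (b)
```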
  • In other embodiments, a second area may first be determined from the infrared image according to the temperature of the target object. For example, if the temperature of the target object is higher than a certain threshold, the second area may be an image area whose temperature is above the threshold; if the target object is the object with the highest temperature in the infrared image, the second area may be determined around the highest-temperature pixel. The target object is then determined from the second area according to its shape, for example by detecting whether the second area contains a region consistent with the shape of the target object; if so, that region is determined to be the image area corresponding to the target object.
  • The following is an example of target object detection. Assuming that the target object is the sun, which usually appears as a relatively regular circle and whose temperature is usually higher than that of other things, the following two methods can be used to detect the sun in the infrared image:
  • Method 2: As shown in Fig. 10, first determine the pixel with the largest gray value in the infrared image (P0 in Fig. 10), then determine an M×N area 101 centered on pixel P0, and perform circular target detection on this M×N area 101. If a circular area is detected, it is determined that the sun is present in the infrared image; if no circular area is detected, it is determined that the sun is not present. Circle detection can use general algorithms such as the Hough circle detection algorithm, the center-of-gravity method, or the least squares method. Through this double screening, the detection efficiency of the target object can be improved, and the accuracy of the detection result is also improved. A sketch of this double screening follows.
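A minimal sketch of Method 2 using OpenCV, assuming an 8-bit infrared image; the window size and Hough parameters are illustrative choices, not values from the patent:

```python
import cv2

def detect_sun_ir(ir_gray, m=96, n=96):
    """Crop an MxN window around the hottest pixel, then run Hough circle
    detection inside it; returns True if a circular target is found."""
    # Hottest pixel (largest gray value) -> candidate sun centre P0.
    _, _, _, (x0, y0) = cv2.minMaxLoc(ir_gray)
    x1, y1 = max(0, x0 - m // 2), max(0, y0 - n // 2)
    roi = ir_gray[y1:y1 + n, x1:x1 + m]
    roi = cv2.GaussianBlur(roi, (5, 5), 0)          # suppress noise before Hough
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=m, param1=120, param2=20,
                               minRadius=3, maxRadius=m // 2)
    return circles is not None                       # True => sun present
```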
  • the statistical information of other sensors can be combined to comprehensively determine whether the target object is within the viewing angle range of the infrared sensor.
  • When determining whether the target object is within the viewing angle range of the infrared sensor in combination with the statistical information of other sensors, a target confidence representing the likelihood that the target object is within the viewing angle range of the infrared sensor can be obtained from the detection result of target object detection on the infrared image together with the sensor statistical information, and whether the target object is within the viewing angle range of the infrared sensor is then determined according to the target confidence. For example, a confidence level A1 can be determined from the detection result on the infrared image and a confidence level A2 from the sensor statistical information; both represent the probability that the target object is within the viewing angle of the infrared sensor. The two are combined to determine the final target confidence; for example, the target confidence can be the product of A1 and A2. Of course, other algorithms can also be used to calculate the target confidence. Whether the target object is within the viewing angle range of the infrared sensor is then determined from the target confidence; for example, when the target confidence is greater than a certain threshold, such as 95%, it is determined that the target object is within the viewing angle range of the infrared sensor.
  • The statistical information of other sensors may be one or more of the following: visible light statistical information obtained based on the visible light sensor, motion state information detected by the gyroscope, motion state information detected by the motion sensor, motion state information detected by the inertial sensor, and altitude information detected by the GPS sensor.
  • the visible light statistics may be automatic exposure statistics and/or automatic white balance statistics.
  • the visible light sensor can be set to automatic exposure (AE: Automatic Exposure) mode or automatic white balance (AWB: Automatic White Balance) mode, and the automatic exposure statistics and automatic white balance statistics of the visible light sensor in these two modes can be obtained.
  • The automatic exposure statistical information may be statistical information used to determine the brightness of the environment where the visible light sensor is currently located. For example, it may be the ambient brightness information of the current scene, calculated from the currently set exposure parameters (aperture, exposure time, and sensitivity) and the calibration data of the camera; it can also be AE statistical information such as overexposure information and histogram statistics of the picture.
  • For example, the average brightness B of the current environment can be determined by formula (1), and the brightness value Bv of the current environment can then be calculated according to formula (2), where A is the aperture value, T is the exposure time, B is the average ambient brightness, Sx is the sensitivity, K is the reflected-light metering correction constant, Bv is the current ambient brightness value, and N is a constant approximately equal to 0.3.
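The source references formulas (1) and (2) without reproducing them. A plausible reconstruction from the quantities listed above, based on the standard reflected-light metering relation and an APEX-style logarithmic brightness value (an assumption, not the patent's verbatim formulas), is:

```latex
% Assumed reconstruction; the source does not show the formulas themselves.
B   = \frac{K \cdot A^{2}}{T \cdot S_{x}}                      \tag{1}
B_v = \log_{2}\!\left(\frac{B}{N}\right), \qquad N \approx 0.3 \tag{2}
```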
  • The ambient brightness is often greater when the sun is present; therefore, whether the sun exists in the current scene can be determined from the calculated ambient brightness value. For example, when the ambient brightness value is greater than a preset threshold, the sun is considered to be present in the current scene. Since the infrared sensor and the visible light sensor are in the same environment, if the sun exists in the current scene, the probability that the circular target detected in the infrared image is the sun is further increased, and the detection result is further corroborated. Conversely, if the ambient brightness is low, the sun detected in the infrared image may not be the real sun; there may be a misjudgment, and the confidence of the detection result is reduced. In some embodiments, the ambient brightness may not be calculated from the exposure parameters; instead, it can be roughly determined from the overexposure information and grayscale histogram statistics of the picture. For example, if the picture is consistently overexposed, the ambient brightness is high and the probability that the sun is present is higher; a rough histogram-based check is sketched below.
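A minimal sketch of such a histogram-based brightness proxy, assuming an 8-bit grayscale frame; both thresholds are illustrative assumptions:

```python
import cv2

def likely_bright_scene(gray, saturation_level=250, overexposed_fraction=0.05):
    """If a sizeable share of pixels is near saturation, the scene is
    probably very bright (and the sun more likely present)."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    share = hist[saturation_level:].sum() / hist.sum()
    return share > overexposed_fraction
```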
  • the automatic white balance statistical information may be used to determine the type of the scene where the visible light sensor is currently located, and the type of the scene includes one or more of day, night, indoor, and outdoor.
  • In some embodiments, the AWB system can classify the current scene (such as scenery, indoor, blue sky, night, etc.), so the probability that the sun is present in the current scene can be determined from the scene type given by AWB. For example, if the current scene is determined to be night or indoor, the probability that the sun is present is very low; if it is determined to be outdoor or daytime, the probability is high. Since the infrared sensor and the visible light sensor are in the same environment, the infrared image detection result can be further verified according to whether the visible light sensor is in a scene where the sun may be present.
  • In some embodiments, sensors such as the GPS sensor, the gyroscope, and the motion sensor can be used to determine the motion state of the infrared sensor and thereby verify the infrared image detection result. For example, the altitude information detected by GPS can be used to determine whether the drone is indoors or outdoors: if the drone's flight altitude has remained below a preset threshold, such as 3 meters, the drone is very likely indoors at the moment, and the probability that the sun is in the current scene is relatively low; if the flight altitude has remained above the preset threshold, the drone is likely outdoors, and the probability that the sun is in the current scene is relatively high.
  • For example, if it has previously been determined that the sun is not within the viewing angle of the infrared sensor, and the gyroscope detects that the UAV's rotation angle is small or unchanged, or the motion state sensor detects that the UAV's direction of movement, position, and attitude have not changed, then the probability that the sun now appears in the viewing angle of the infrared sensor is low. Similarly, if it has been determined that the sun appears in the current viewing angle of the infrared sensor, and the gyroscope and motion sensors detect that the drone's state has not changed, the probability that the sun is still within the viewing angle is also high.
  • When there are multiple other sensors, a confidence level characterizing whether the target object is within the viewing angle range of the infrared sensor can be obtained from the statistical information of each sensor; the final target confidence is then determined by combining these confidence levels, and the final determination result is made according to the target confidence. For example, the infrared sensor, a visible light sensor, a gyroscope, and a GPS sensor may be installed on the drone at the same time. A first confidence level can be determined from the detection result on the infrared image, a second confidence level from the visible light statistical information, a third confidence level (also characterizing whether the target object is within the viewing angle range of the sensor) from the motion state detected by the gyroscope, and a fourth confidence level from the altitude information collected by GPS; the target confidence is then obtained from the first, second, third, and fourth confidence levels. For example, if the four confidence levels are A1, A2, A3, and A4, the target confidence A can be the product of A1, A2, A3, and A4, or a weighted average of them; the specific calculation method can be set according to the actual situation. Whether the target object is within the viewing angle of the infrared sensor is then determined from the target confidence. A small fusion sketch follows.
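A minimal sketch of the confidence fusion just described; the weights, example values, and decision threshold are illustrative assumptions:

```python
def target_confidence(confidences, weights=None):
    """Fuse per-source confidences (IR detection, AE/AWB statistics, gyro,
    GPS altitude, ...) into one score: product fusion by default, or a
    weighted average when weights are given."""
    if weights is None:
        p = 1.0
        for c in confidences:
            p *= c                       # product fusion
        return p
    return sum(w * c for w, c in zip(weights, confidences)) / sum(weights)

A1, A2, A3, A4 = 0.9, 0.8, 0.7, 0.95     # illustrative per-source confidences
A = target_confidence([A1, A2, A3, A4], weights=[2, 1, 1, 1])
perform_protection = A > 0.75            # decision threshold (assumed)
```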
  • When the other sensor is a visible light sensor, the visible light image collected by the visible light sensor may also be combined to further assist the determination. That is, target object detection can also be performed on the visible light image; if the target object is detected, it is within the viewing angle range of the visible light sensor, and whether it is within the viewing angle range of the infrared sensor can then be determined from the positional relationship between the visible light sensor and the infrared sensor, yielding a first determination result. After the first determination result is obtained, the target confidence can be determined from the detection result of target object detection on the infrared image, the statistical information of the visible light sensor, and the first determination result, and the final determination result is then obtained from the target confidence.
  • When determining whether the target object is within the viewing angle range of the infrared sensor based on the positional relationship between the infrared sensor and the visible light sensor, the determination can be made simply from the viewing angle relationship between the two sensors. For example, the overlapping viewing angle of the visible light sensor and the infrared sensor can be determined from their positional relationship, and whether the target object is within the viewing angle range of the infrared sensor can then be determined from this overlapping viewing angle.
  • each image sensor has a shooting angle of view, and only the scene within this angle of view can fall into the image it collects. Therefore, it can be determined according to the size of the angle of view of the visible light sensor and the infrared sensor and the positional relationship between the two sensors.
  • the overlapping viewing angles of the two sensors are then used to determine whether the scene within the viewing angle range of the visible light sensor will also be within the viewing angle range of the infrared sensor based on the overlapping viewing angle.
  • If the viewing angles of the infrared sensor and the visible light sensor do not overlap at all, that is, the overlapping viewing angle is 0, scenery within the visible light sensor's viewing angle cannot fall into the infrared sensor's viewing angle. In other words, if the two viewing angles do not overlap and the target object is detected in the visible light image, the target object is not within the infrared sensor's viewing angle.
  • Conversely, if the viewing angle of the visible light sensor falls entirely within the viewing angle range of the infrared sensor and the target object is detected in the visible light image, the target object is also within the infrared sensor's viewing angle.
  • If the overlapping viewing angle of the visible light sensor and the infrared sensor is greater than a preset angle, a target object within the visible light sensor's viewing angle can be considered to be within the infrared sensor's viewing angle as well. For example, if the overlap reaches more than 95% of the visible light sensor's viewing angle, a target object within the visible light sensor's viewing angle has a high probability of also being within the infrared sensor's viewing angle. A coarse decision function along these lines is sketched below.
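A coarse sketch of the field-of-view test; the 95% ratio is the example from the text, while the function name and the three-way return convention are assumptions:

```python
def target_in_ir_view_by_fov(overlap_angle_deg: float, visible_fov_deg: float,
                             ratio_threshold: float = 0.95):
    """Coarse FOV-based judgment; returns True, False, or None (undecided)."""
    if overlap_angle_deg <= 0.0:
        return False          # disjoint FOVs: the target cannot be in the IR view
    if overlap_angle_deg / visible_fov_deg >= ratio_threshold:
        return True           # FOVs nearly coincide: the target is very likely in the IR view
    return None               # undecided: fall back to the image-to-image mapping below
```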
  • The viewing angle relationship between the two sensors thus supports only a simple determination. If it cannot settle whether the target object is within the infrared sensor's viewing angle, the position of the target object in the visible light image can be determined, the mapping relationship between the visible light image and the infrared image derived from the positional relationship of the two sensors, and the two combined to decide whether the target object is in the infrared image; if it is, the target object is judged to be within the infrared sensor's viewing angle.
  • The mapping relationship may be a mapping between the pixel points of the visible light image and the infrared image, or a mapping between the coordinate systems of the two images. Since the positional relationship between the two sensors is known, and the internal parameters of both sensors can be calibrated, the mapping relationship between the visible light image and the infrared image can be determined from the sensors' positional relationship and internal parameters.
  • In some embodiments, the mapping relationship is between the coordinate system of the visible light image and that of the infrared image. Suppose this mapping is represented by a matrix H, P1 denotes pixel coordinates in the visible light image's coordinate system, and P2 denotes pixel coordinates in the infrared image's coordinate system; then H can be determined by formula (1), H=KR1R2, and the relationship between P1 and P2 is given by formula (2), P2=HP1, where:
  • K is a matrix representing the positional relationship between the visible light sensor and the infrared sensor;
  • R1 represents the internal parameter matrix of the visible light sensor;
  • R2 represents the internal parameter matrix of the infrared sensor.
  • The position information can be the coordinates of the target object in the visible light image's coordinate system. These coordinates can be mapped into the infrared image's coordinate system through the mapping relationship H, after which it is judged whether the mapped coordinates fall inside the infrared image; if they do, the infrared image also contains the target object. A sketch of this mapping is given below.
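A sketch of the mapping test, assuming K, R1, and R2 are available as 3x3 NumPy matrices so that H = K @ R1 @ R2 and P2 = H @ P1 in homogeneous pixel coordinates; the text does not spell out how these matrices are assembled, so this is one plausible reading:

```python
import numpy as np

def map_visible_to_infrared(p1_xy, K, R1, R2, ir_shape):
    """Map a visible-image pixel into the infrared image and report whether
    the mapped point lands inside the infrared frame."""
    H = K @ R1 @ R2                           # formula (1): H = K R1 R2
    p1 = np.array([p1_xy[0], p1_xy[1], 1.0])  # homogeneous pixel coordinate
    p2 = H @ p1                               # formula (2): P2 = H P1
    u, v = p2[0] / p2[2], p2[1] / p2[2]       # back to inhomogeneous pixels
    h, w = ir_shape
    return (u, v), (0.0 <= u < w) and (0.0 <= v < h)
```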
  • To judge more accurately whether the target object is present in the infrared image, it is also possible to check whether the position of the target object detected in the visible light image matches the position of the target object detected in the infrared image; this yields a more accurate judgment result.
  • Specifically, target object detection can be performed on the infrared image; once the target object is detected, its second position information in the infrared image is determined. Target object detection is then performed on the visible light image.
  • If the target object is detected there, its first position information in the visible light image is determined. The first and second position information are then mapped into the same coordinate system according to the mapping relationship between the visible light image and the infrared image (for example, into the coordinate system of the infrared image, or into that of the visible light image), and it is judged whether the two mapped positions are consistent, giving a second determination result. If the two positions mapped into the same coordinate system agree, the probability that the target object exists in the infrared image is very high.
  • The target confidence can then be determined comprehensively from the detection result of target object detection on the infrared image, the statistical information of the visible light sensor, and this second determination result, and the final judgment obtained from the target confidence.
  • The judgment result obtained this way is highly accurate, and the probability of misjudgment is very low. A consistency check of this kind might look like the following.
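One plausible form of the consistency check; the pixel tolerance is hypothetical, and H is the mapping matrix from formula (1) above:

```python
import numpy as np

def detections_consistent(p_visible, p_infrared, H, tol_px: float = 10.0) -> bool:
    """Map the visible-image detection into the infrared frame (P2 = H P1) and
    check that it agrees with the infrared detection within a pixel tolerance."""
    p1 = np.array([p_visible[0], p_visible[1], 1.0])
    p2 = H @ p1
    mapped = np.array([p2[0] / p2[2], p2[1] / p2[2]])
    return bool(np.linalg.norm(mapped - np.asarray(p_infrared, dtype=float)) <= tol_px)
```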
  • When the target object is judged to be within the infrared sensor's viewing angle, the shutter of the infrared sensor can be closed to keep sunlight and similar radiation from striking the infrared sensor directly, or the infrared sensor can be rotated to move the target object out of its viewing angle.
  • If the infrared sensor is mounted on a gimbal, it can be rotated by controlling the rotation of the gimbal; if it is installed on a movable device such as a drone, the movable device can be controlled to rotate and drive the infrared sensor with it.
  • The direction of rotation, and an estimate of how far to rotate so that the target object ends up outside the infrared sensor's viewing angle, can be determined from the target object's current position in the infrared image and the infrared sensor's viewing angle.
  • Certain specified conditions can be preset, and when one of them is triggered, the shutter of the infrared sensor is reopened. These specified conditions may be that the shutter has been closed for a preset duration, or that the detected rotation angle of the infrared sensor has reached a preset angle.
  • For example, if the infrared sensor is on a movable device that is continually in motion, it can be assumed that after the device has moved for a period of time the target object may no longer be within the infrared sensor's viewing angle, and the shutter can be opened at that point.
  • If the infrared sensor is mounted on equipment that can rotate, such as a gimbal or a drone, it can be checked whether the rotation angle of the infrared sensor has reached a preset threshold; once it has, the target object is likely to be out of the infrared sensor's viewing angle, and the shutter can likewise be reopened. A toy controller implementing both triggers is sketched below.
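A toy shutter controller combining both reopen triggers; the class name, the preset duration, and the preset rotation angle are placeholders:

```python
import time

class ShutterGuard:
    """Close the shutter, then reopen it once either specified condition fires:
    the preset closed duration has elapsed, or the accumulated rotation reported
    by the gyroscope has reached the preset angle."""
    def __init__(self, reopen_after_s: float = 5.0, reopen_angle_deg: float = 30.0):
        self.reopen_after_s = reopen_after_s      # hypothetical preset duration
        self.reopen_angle_deg = reopen_angle_deg  # hypothetical preset rotation threshold
        self.closed_at = None

    def close(self):
        self.closed_at = time.monotonic()

    def should_reopen(self, rotated_deg: float) -> bool:
        if self.closed_at is None:
            return False
        elapsed = time.monotonic() - self.closed_at
        return elapsed >= self.reopen_after_s or rotated_deg >= self.reopen_angle_deg
```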
  • In some embodiments, the infrared sensor is installed on a drone that includes a gyroscope. When the target object is detected within the infrared sensor's viewing angle, the shutter can be closed to protect the infrared sensor, or the drone can be controlled to rotate and drive the infrared sensor so that the target object falls outside its viewing angle. When the drone's gyroscope detects that the rotation angle has reached the preset angle, the shutter can be reopened so that the infrared sensor can continue to work.
  • As shown in Figure 11, the drone 110 is equipped with an infrared sensor 111 and a visible light sensor 112, used to collect infrared images and visible light images respectively.
  • The drone also includes a gyroscope (inside the drone, not shown in Figure 11) for detecting the drone's motion attitude, and a GPS sensor (inside the drone, not shown in Figure 11) for positioning the drone.
  • First, the infrared image collected by the infrared sensor 111 is obtained, and it is detected whether the sun is present in the infrared image.
  • To reduce computational complexity, the infrared image can first be down-sampled. Since the sun's temperature is generally higher than that of other objects, the pixel with the largest gray value is found in the down-sampled image, an M×N region centered on that pixel is determined, and circle detection or arc detection is then performed on this M×N region. If a circular or arc-shaped region exists, the sun is judged to be present in the infrared image. One way such a detector could be sketched is shown below.
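A possible sketch of this detector using OpenCV; the downsampling factor, window size, and Hough parameters are illustrative, and the input is assumed to be an 8-bit grayscale infrared frame (cv2.HoughCircles requires a single-channel 8-bit image):

```python
import cv2
import numpy as np

def detect_sun_in_ir(ir_gray: np.ndarray, down: int = 4, m: int = 64, n: int = 64) -> bool:
    """Downsample, take an MxN window around the hottest pixel, then run a
    Hough circle search inside the window."""
    small = cv2.resize(ir_gray, None, fx=1.0 / down, fy=1.0 / down,
                       interpolation=cv2.INTER_AREA)
    y, x = np.unravel_index(int(np.argmax(small)), small.shape)  # hottest pixel
    h, w = small.shape
    y0, y1 = max(0, y - m // 2), min(h, y + m // 2)
    x0, x1 = max(0, x - n // 2), min(w, x + n // 2)
    roi = small[y0:y1, x0:x1]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=float(m),
                               param1=120, param2=15, minRadius=2, maxRadius=m // 2)
    return circles is not None  # a circular hot region is treated as a sun candidate
```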
  • After the sun is judged to be present in the infrared image, a confidence level Sconf1, characterizing that the sun is within the infrared sensor's viewing angle range, can be determined.
  • The visible light sensor 112 can be set to automatic exposure mode and automatic white balance mode. In automatic exposure mode, the brightness of the current environment can be calculated from the set exposure parameters (sensitivity, exposure time, and aperture) together with the calibration data of the visible light sensor, so automatic exposure statistics can be obtained from the visible light sensor to determine the current ambient brightness value. Since ambient brightness when the sun is out is usually higher than on a cloudy day, it can be judged whether the current ambient brightness value exceeds a preset threshold; if it does, the sun is considered present in the current environment, and a second confidence level Sconf2, characterizing that the sun is within the infrared sensor's viewing angle range, can be determined. Alternatively, image overexposure information or histogram statistics can be obtained from the visible light sensor to estimate the current ambient brightness and derive Sconf2. A brightness-based sketch follows.
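The application's formulas for the average brightness B and the brightness value Bv are not reproduced in this translated summary; the sketch below uses the standard APEX-style reflected-light metering relation, which is consistent with the variables the text defines (aperture A, exposure time T, sensitivity Sx, metering constant K, constant N of roughly 0.3). Treat it as a reconstruction under those assumptions, not the application's own formula; the Sconf2 mapping and its threshold are likewise hypothetical:

```python
import math

def ambient_brightness_bv(aperture: float, exposure_s: float, iso: float,
                          K: float = 12.5, N: float = 0.3) -> float:
    """APEX-style reconstruction: B = K*A^2/(T*Sx), then Bv = log2(B/(N*K))."""
    B = K * aperture ** 2 / (exposure_s * iso)  # average scene brightness (assumed form)
    return math.log2(B / (N * K))               # brightness value Bv (assumed form)

def sconf2_from_bv(bv: float, bv_threshold: float = 7.0) -> float:
    """Map Bv to a sun-presence confidence; threshold and values are illustrative."""
    return 0.9 if bv > bv_threshold else 0.2
```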
  • In automatic white balance mode, the current scene can be classified (for example landscape, indoor, blue sky, night). Since the sun can only appear in outdoor or daytime scenes, automatic white balance statistics can be obtained from the visible light sensor; when these statistics classify the current scene as outdoor or daytime, a confidence level Sconf3 that the sun exists in the current scene can be determined.
  • The altitude of the drone determined by its GPS sensor can also be used. If the flying altitude is higher than a certain distance (such as 3 meters), the drone can be considered to be outdoors, where the sun may be present; if the flying altitude has stayed below that distance, the drone can be considered indoors, where the sun cannot be present. A confidence level Sconf4 that the sun exists in the current scene can therefore be determined from the altitude information collected by GPS, as in the sketch below.
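Simple mappings from the AWB scene label and the GPS altitude history to Sconf3 and Sconf4; the 3-meter threshold comes from the text, while the scene labels and all confidence values are illustrative:

```python
def sconf3_from_scene(scene: str) -> float:
    """AWB scene classification -> sun-presence confidence."""
    return {"outdoor": 0.9, "daytime": 0.9, "blue_sky": 0.95,
            "indoor": 0.1, "night": 0.05}.get(scene, 0.5)

def sconf4_from_altitude(altitudes_m, threshold_m: float = 3.0) -> float:
    """GPS altitude history -> confidence; consistently low altitude suggests indoors."""
    if all(a < threshold_m for a in altitudes_m):
        return 0.1    # likely indoors: the sun should not be in view
    if all(a > threshold_m for a in altitudes_m):
        return 0.9    # likely outdoors: the sun may be in view
    return 0.5
```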
  • After Sconf1, Sconf2, Sconf3, and Sconf4 are obtained, they can be combined into a final confidence Sconf; when Sconf exceeds a certain threshold, the sun is considered to be within the infrared sensor's viewing angle, and a corresponding protection measure is taken so that the infrared sensor is not burned. Specifically, the infrared sensor can be controlled to close its shutter, or the drone can be controlled to rotate by a certain angle so that the sun moves out of the infrared sensor's viewing angle. After the shutter is closed, the shutter-closed time can be counted.
  • When the shutter-closed time reaches the preset duration, the shutter can be reopened; likewise, when the gyroscope detects that the drone's rotation angle has reached a certain angle, the shutter can be reopened, ensuring that the infrared sensor continues to work. The overall decision loop might look like the following.
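Putting the pieces together, a hedged end-to-end sketch; fusion by product is one choice among those the text allows, and the callbacks, threshold, and rotation angle are placeholders:

```python
def protect_infrared_sensor(sconfs, threshold: float = 0.6,
                            close_shutter=None, rotate_drone=None) -> float:
    """Combine Sconf1..Sconf4 and trigger one protection action when the fused
    confidence clears the threshold."""
    sconf = 1.0
    for s in sconfs:
        sconf *= s                      # product fusion; see fuse_confidences above
    if sconf > threshold:
        if close_shutter is not None:
            close_shutter()             # e.g. ShutterGuard.close()
        elif rotate_drone is not None:
            rotate_drone(degrees=30.0)  # rotate so the sun leaves the IR field of view
    return sconf
```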
  • the present application also provides an infrared sensor protection device.
  • the infrared sensor protection device 130 includes a processor 131 and a memory 132.
  • The memory 132 is used to store a computer program, and when the processor 131 executes the computer program, the following steps are implemented:
  • target object detection is performed on the infrared image collected by the infrared sensor, the target object being an object that causes damage to the infrared sensor;
  • whether the target object is within the viewing angle range of the infrared sensor is judged according to the detection result and sensor statistical information, the sensor statistical information being collected by at least one sensor other than the infrared sensor;
  • whether to perform a protection operation on the infrared sensor is determined according to the judgment result.
  • In some embodiments, the target object is an object whose shape matches a preset shape and whose temperature meets a preset temperature threshold.
  • In some embodiments, when the processor performs target object detection on the infrared image collected by the infrared sensor, it is configured to either:
  • determine a first area from the infrared image, the first area being an image area of the preset shape, and detect the target object from the first area based on the target object's temperature; or
  • determine a second area from the infrared image according to the target object's temperature, and detect the target object from the second area based on the preset shape.
  • In some embodiments, when the processor detects the target object from the first area based on the target object's temperature, a first area whose pixel gray values are above a first preset threshold, and whose edge pixels differ in gray value from neighboring pixels outside the area by more than a second preset threshold, is determined to be the target object.
  • In some embodiments, the second area is a designated area centered on the pixel with the largest gray value in the infrared image.
  • In some embodiments, before target object detection is performed on the infrared image collected by the infrared sensor, the infrared image is down-sampled.
  • In some embodiments, the sensor statistical information includes one or more of the following: visible light statistical information obtained from a visible light sensor, motion state information detected by a gyroscope, motion state information detected by a motion sensor, motion state information detected by an inertial sensor, and altitude information detected by a GPS sensor.
  • the visible light statistical information includes automatic exposure statistical information and/or automatic white balance statistical information.
  • the automatic exposure statistical information is used to determine the brightness of the environment where the visible light sensor is currently located.
  • the automatic white balance statistical information is used to determine the type of the scene where the visible light sensor is currently located, and the type of the scene includes one or more of day, night, indoor, and outdoor.
  • In some embodiments, when the processor judges whether the target object is within the viewing angle range of the infrared sensor according to the detection result and the sensor statistical information, a target confidence characterizing that the target object is within the infrared sensor's viewing angle is determined from the detection result and the sensor statistical information, and the judgment is made according to that target confidence.
  • In some embodiments, the statistical information includes automatic exposure statistics and automatic white balance statistics obtained from a visible light sensor, and altitude information detected by a GPS sensor; determining the target confidence then includes determining a first confidence from the detection result, a second confidence from the automatic exposure statistics, a third confidence from the automatic white balance statistics, and a fourth confidence from the altitude information, and obtaining the target confidence from the first, second, third, and fourth confidences.
  • In some embodiments, the processor is further configured to: detect the target object in the visible light image collected by the visible light sensor; if the target object is detected, judge whether it is within the infrared sensor's viewing angle based on the positional relationship between the visible light sensor and the infrared sensor, obtaining a first determination result; and determine, from the detection result, the sensor statistical information, and the first determination result, the target confidence characterizing that the target object is within the infrared sensor's viewing angle.
  • In some embodiments, the processor is further configured to: detect the target object in the visible light image; if it is detected, determine its first position information in the visible light image and its second position information in the infrared image; judge whether the target object is within the infrared sensor's viewing angle from the first position information, the second position information, and the mapping relationship between the visible light image and the infrared image, obtaining a second determination result; and determine the target confidence from the detection result, the sensor statistical information, and the second determination result.
  • In some embodiments, when the processor determines according to the judgment result whether to perform a protection operation on the infrared sensor, and the target object is judged to be within the infrared sensor's viewing angle, either of the following protection operations is performed:
  • the shutter of the infrared sensor is closed; or
  • the infrared sensor is controlled to rotate so that the target object is outside the viewing angle range of the infrared sensor.
  • In some embodiments, after the shutter of the infrared sensor is closed, the processor is further configured to reopen the shutter when a specified condition is triggered.
  • The specified conditions include:
  • the shutter has been closed for longer than a preset duration; or
  • the rotation angle of the infrared sensor has reached a preset threshold.
  • the infrared sensor is installed on a drone, the drone includes a gyroscope, and the rotation angle of the infrared sensor is detected by the gyroscope.
  • the target object includes the sun.
  • For the specific implementation details of the infrared sensor protection device when protecting the infrared sensor, refer to the description in the embodiments of the infrared sensor protection method above; they are not repeated here.
  • the present application also provides an image acquisition device.
  • As shown in Figure 14, the image acquisition device 140 includes an infrared sensor 141, at least one other sensor 142 besides the infrared sensor, and an infrared sensor protection device 143.
  • The infrared sensor protection device 143 includes a processor 1431, a memory 1432, and a computer program stored in the memory. When the processor 1431 executes the computer program, the following steps are implemented: target object detection is performed on the infrared image collected by the infrared sensor, the target object being an object that causes damage to the infrared sensor; whether the target object is within the infrared sensor's viewing angle is judged according to the detection result and sensor statistical information collected by at least one sensor other than the infrared sensor; and whether to perform a protection operation on the infrared sensor is determined according to the judgment result.
  • For the specific implementation details of the infrared sensor protection device when protecting the infrared sensor, refer to the description in the embodiments of the infrared sensor protection method above; they are not repeated here.
  • the present application also provides an unmanned aerial vehicle.
  • the unmanned aerial vehicle includes an image acquisition device. Refer to FIG. 14 for a schematic diagram of the image acquisition device.
  • The image acquisition device includes an infrared sensor, at least one other sensor besides the infrared sensor, and an infrared sensor protection device.
  • The infrared sensor protection device includes a processor, a memory, and a computer program stored in the memory. When the processor executes the computer program, the following steps are implemented: target object detection is performed on the infrared image collected by the infrared sensor, the target object being an object that causes damage to the infrared sensor; whether the target object is within the infrared sensor's viewing angle is judged according to the detection result and sensor statistical information collected by at least one sensor other than the infrared sensor; and whether to perform a protection operation on the infrared sensor is determined according to the judgment result.
  • An embodiment of this specification also provides a computer storage medium in which a program is stored; when the program is executed by a processor, the image sensor protection method or the infrared sensor protection method of any of the foregoing embodiments is implemented.
  • The embodiments of this specification may take the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing program code.
  • Computer usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be achieved by any method or technology.
  • the information can be computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts.
  • The device embodiments described above are merely illustrative.
  • The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
  • Some or all of the modules may be selected according to actual needs to achieve the objectives of the embodiments' solutions. Those of ordinary skill in the art can understand and implement them without creative effort.

Abstract

一种图像传感器保护方法、装置、图像采集装置及无人机。所述图像传感器保护方法包括:获取第一图像传感器采集的第一图像;当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,所述目标对象为对所述第二图像传感器造成损伤的对象;根据判定结果确定是否执行对所述第二图像传感器的保护操作。在第二图像传感器处于非工作状态时,也可以确定其视角内是否存在对其造成损伤的目标对象,可以更好的对第二图像传感器进行保护。

Description

图像传感器保护方法、装置、图像采集装置及无人机 技术领域
本申请涉及传感器保护技术领域,具体而言,涉及一种图像传感器保护方法、装置、图像采集装置及无人机。
背景技术
有些图像传感器由于自身的材料特性,在对一些高能量对象进行拍摄时,易造成传感器的损伤,比如,红外传感器在对太阳、激光等一类对象进行拍摄时,如果长时间直视这些对象,会对红外传感器产生灼烧效应,导致红外传感器出现短暂或永久性损伤。因而,对于这类易造成损伤的图像传感器,在使用过程中,有必要判定这些对象是否在图像传感器的视角范围内,从而决定是否对图像传感器采取一定的保护措施,避免被这一类对象损伤。
发明内容
有鉴于此,本申请提供一种图像传感器保护方法、装置、图像采集装置及无人机。
根据本申请的第一方面,提供一种图像传感器保护方法,所述方法包括:
获取第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,所述目标对象为对所述第二图像传感器造成损伤的对象;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
根据本申请的第二方面,提供一种图像传感器保护装置,所述装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
根据本申请的第三方面,提供一种图像采集装置,包括第一图像传感器、第二图像传感器和图像传感器保护装置,所述图像传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
根据本申请的第四方面,提供一种无人机,包括图像采集装置,所述图像采集装置包括第一图像传感器、第二图像传感器和图像传感器保护装置,所述图像传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
根据本申请的第五方面,提供一种红外传感器保护方法,所述方法包括:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
根据本申请的第六方面,提供一种红外传感器保护装置,所述红外传感器保护装置包括:处理器、存储器,所述存储器用于存储计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
根据本申请的第七方面,提供一种图像采集装置,包括红外传感器、除所述红外传感器之外的至少一个其他的传感器和红外传感器保护装置,所述红外传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
根据本申请的第八方面,提供一种无人机,包括图像采集装置,所述图像采集装 置包括红外传感器、除所述红外传感器之外的至少一个其他的传感器和红外传感器保护装置,所述红外传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
根据本申请的第九方面,提供一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现本说明书实施例中任一项图像传感器保护方法或红外传感器保护方法。
应用本申请提供的方案,一方面,当第一图像传感器和第二图像传感器的位置关系已知时,可以检测第一图像传感器采集的第一图像中是否存在对第二图像传感器造成损伤的目标对象,如果存在,则根据第一图像传感器和第二图像传感器的位置关系判定该目标对象是否在第二图像传感器的视角范围内,并根据判定确定是否对第二图像传感器执行保护操作。通过这种方式,在第二图像传感器处于非工作状态时,也可以确定其视角内是否存在对其造成损伤的目标对象,可以更好的对第二图像传感器进行保护。且在第一图像传感器的分辨率高于第二图像传感器的分辨率的场景,也可以提高在图像中进行目标对象检测的准确度,避免误判的现象。另一方面,在确定红外传感器的视角范围内是否存在对红外传感器造成损害的目标对象时,可以结合对红外图像进行目标对象检测的检测结果,以及其他的传感器采集得到的可以辅助判定红外传感器视角范围内是否存在目标对象的统计信息,来综合判定红外传感器视角范围内是否存在目标对象,并根据判定结果执行预设的保护操作,对红外传感器进行保护,避免目标对象对其造成损害。通过结合其他传感器的统计信息来进一步判定,可以提升判定结果的准确度,避免因出现误判,提高了用户的体验。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个实施例的一种图像传感器保护方法的流程图。
图2是本申请一个实施例的一种图像传感器视角关系的示意图。
图3是本申请一个实施例的一种图像传感器视角关系的示意图。
图4是本申请一个实施例的一个应用场景示意图。
图5是本申请一个实施例的一种图像传感器保护装置的逻辑结构示意图。
图6是本申请一个实施例的一种图像采集装置的逻辑结构示意图。
图7是本申请一个实施例的一种红外传感器保护方法的流程图。
图8是本申请一个实施例的一种设备的示意图。
图9是本申请一个实施例的从红外图像检测太阳的示意图。
图10是本申请一个实施例的从红外图像检测太阳的示意图。
图11是本申请一个实施例的另一个应用场景示意图。
图12是本申请一个实施例的一种红外传感器保护方法的流程图。
图13是本申请一个实施例的一种红外传感器保护装置的逻辑结构示意图。
图14是本申请一个实施例的另一种图像采集装置的逻辑结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
有些图像传感器由于自身的材料特性,在对一些高能量对象进行拍摄时,易造成传感器的损伤,比如,红外传感器在对太阳、激光等一类对象进行拍摄时,如果长时间直视这些对象,会对红外传感器产生灼烧效应,导致红外传感器出现短暂或永久性损伤。因而,对于这类易造成损伤的图像传感器,在使用过程中,有必要判定这些对象是否在红外传感器的视角范围内,从而决定是否对图像传感器采取一定的保护措施,避免被这一类对象损伤。
通常在判定图像传感器的视角范围内是否存在会对其造成损伤的目标对象时,可以通过检测图像传感器采集的图像中是否存在目标对象来进行判定。但是,这种方式局限性比较大,要求图像传感器必须处于工作状态,对于非工作状态的图像传感器无法采集图像,因而无法判定。并且,在目标对象能量较大、温度较高的场景,对于某些比较敏感的图像传感器,即便处于短暂的工作状态,也有可能会被损伤,因而通过图像传感器采集的图像来判定其视角是否存在目标对象,也存在一定的风险。举个例子,当中午时分,太阳的温度较高,如果红外传感器对着太阳拍摄,可能在比较短时间的拍摄过程中也会对红外传感器造成损伤。另外,对于有些图像传感器,其采集的图像分辨率较低,噪声较多,如果通过对该图像传感器采集的图像进行目标对象的检测,检测结果也不太准确,易出现漏检或者误判的现象。
基于此,本申请提供了一种图像传感器保护方法,可以通过另一个图像传感器采集的图像来判定需要保护的图像传感器的视角中是否存在对其造成损伤的目标对象,具体的,所述图像传感器保护方法的处理流程如图1所示,包括以下步骤:
S102、获取第一图像传感器采集的第一图像;
S104、当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,所述目标对象为对所述第二图像传感器造成损伤的对象;
S106、根据判定结果确定是否执行对所述第二图像传感器的保护操作。
本申请的第二图像传感器可以是易被某些目标对象损伤,需要对其采取保护措施的图像传感器,比如红外传感器。本申请的第一图像传感器可以是与第二图像传感器相比,更 不易被该目标对象损伤,或者是采集的图像分辨率比第一图像传感器采集的图像的分辨率更高、更有利于目标对象的检测的图像传感器。其中,在某些实施例中,第一图像传感器可以是可见光传感器,第二图像传感器可以是红外传感器。当然,第一图像传感器和第二图像传感器也可以是其他的图像传感器组合,本申请不作限制。
本申请的图像传感器保护方法可以由图像传感保护装置执行,该图像传感器保护装置和被保护的第二图像传感器可以集成在一个设备上,比如,可以是集成了第二图像传感器和图像传感器保护装置的相机、无人机等设备,当然,该图像传感器保护装置和第二图像传感器也可以分别位于两个不同的设备,比如第一图像传感器和第二图像传感器安装于无人机上,而图像传感器保护装置可以集成在无人机的控制终端上,无人机将第一图像传感器采集的图像发送给控制终端,由控制终端判定目标对象是否在第二图像传感器的视角范围内,并做出是否执行保护操作的决策后,再发送相应的指令给无人机。
本申请的目标对象包括对第二图像传感器造成损伤的任一对象,比如,在某些实施例中,该目标对象可以是太阳,当然,也可以是其他的对象,比如激光等,本申请不作限制。
本申请中的第一图像传感器和第二图像传感器可以集成在一个设备上,也可以位于独立的两个设备,两个图像传感器的相对位置可以固定,也可以变化,只要可以确定两者的位置关系即可。
其中,第一图像传感器可以处于工作状态,第二图像传感器可以处于工作状态,也可以处于非工作状态。在某些实施例中,为了尽可能避免目标对象对处于工作状态的第二图像传感器造成损害,在没有确定目标对象不在第二图像传感器的视角范围内之前,可以让第二图像传感器处于非工作状态。
本申请中,对于第一图像传感器采集的图像统称为第一图像,对于第二图像传感器采集的图像统称为第二图像。
由于第一图像传感器处于工作状态,因而可以获取第一传感器采集的第一图像,然后对第一图像进行目标对象的检测。其中,对第一图像进行目标对象的检测可以采用通用的目标检测算法,比如,可以采用深度学习算法,首先用包含目标对象的大量图像对神经网络模型进行训练,得到图像分类模型,然后通过该图像分类模型检测图像中是否包含目标对象。当然,如果目标对象为太阳,由于太阳一般呈现比较规则的圆形,还可以采用hough圆检测算法、重心法或最小二乘法等算法来检测第一图像中是否包含太阳。
当在第一图像中检测到目标对象后,说明目标对象在第一图像传感器的视角范围内,这时,可以进一步根据第一图像传感器和第二图像传感器之间的位置关系来判定目标对象是否在第二图像传感器的视角范围内,然后再根据判定结果来决定是否对第二图像传感器执行保护操作。
通过判定第一图像传感器采集的图像是否存在目标对象,然后再根据第一图像传感器与第二图像传感器的位置关系来判定目标对象是否在第二图像传感器的视角范围内,这种方式对第二图像传感器的工作状态没有限制,第二图像传感器可以处于非工作状态,在判定其视角范围内没有目标对象时,才控制其处于工作状态,可以对第二图像传感器起到更好的保护作用。并且,对于第二图像传感器采集的图像分辨率较低,不利于目标对象的准确检测的场景,也可以通过分辨率更高的第一传感器采集的图像进行目标对象的检测,以提高最终判定结果的准确性,避免出现误判影响用户体验。
在某些实施例中,在根据第一图像传感器和第二图像传感器的位置关系判定目标对象 是否在第二图像传感器的视角范围内时,可以简单的根据第一图像传感器和第二图像传感器之间的视角关系来判定目标对象是否在第二图像传感器的视角范围内。比如,可以根据第一图像传感器与第二图像传感器的位置关系确定第一图像传感器与第二图像传感器的重叠视角,然后再根据两个图像传感器之间的重叠视角来判定目标对象是否在第二图像传感器的视角范围内。比如,每个图像传感器都有一个拍摄视角,该视角范围内的景物才可以落入其采集的图像中,因而可以根据第一图像传感器与第二图像传感器的视角的大小以及两个传感器之间的位置关系确定两个传感器的重叠视角,然后根据重叠视角来判定在第一图像传感器视角范围内的景物是否也会在第二图像传感器的视角范围内。比如,如图2所示,第一图像传感器21的视角为θ1,第二图像传感器22的视角为θ2,两个图像传感器的视角完全不重叠,即重叠视角为0,那么在第一图像传感21视角范围内的景物必然不会落入到第二图像传感器22的视角范围内,也就说,如果两个传感器视角没有重叠,如果在第一图像检测到了目标对象,则说明目标对象不在第二图像传感器的视角范围内。当然,如图3所示,第一图像传感器31的视角为θ1,第一图像传感器32的视角为θ2,第一图像传感器32的视角基本上完全落入第二图像传感器31的视角范围,则在第一图像检测到了目标对象,则说明目标对象也在第二图像传感器的视角范围内。
当然,在某些实施例中,如果第一图像传感器和第二图像传感器的重叠视角大于预设角度,则可以认为目标对象在第一图像传感器的视角范围内,则也在第二图像传感器的视角范围内。比如说,第一图像传感器和第二图像传感器的重叠视角达到了的第一图像传感器视角的95%以上,则认为目标对象在第一图像传感器的视角范围内,那么也有很大概率在第二图像传感器的视角范围内。
当然,通过两个传感器的视角关系可以做一个简单的判定,在某些实施例中,如果根据两个传感器的视角关系没法准确地确定目标对象是否在第二图像传感器的视角范围内时,则可以进一步确定目标对象在第一图像的位置信息,然后根据第一图像传感器和第二图像传感器的位置关系确定第一图像和第二图像的映射关系,然后根据该位置信息和映射关系来确定目标对象是否在第二图像中,如果在,则判定目标对象在第二图像传感器的视角范围内。其中,映射关系可以是第一图像和第二图像的像素点之间的映射关系,也可以是两种图像的坐标系之间映射关系。由于两个传感器之间的位置关系是已知的,并且两个传感器的内参数也可以标定得到,因此,可以根据两个传感器的位置关系和两个传感器的内参数来确定第一图像和第二图像的映射关系。
其中,在某些实施例,该映射关可以是第一图像的坐标系和第二图像的坐标系之间的映射关系,假设第一图像的坐标系和第二图像的坐标系之间的映射关系用矩阵H表示,P1表示第一图像的坐标系下的像素坐标,P2表示第二图像的坐标系下的像素坐标,则H可以通过公式(1)确定,P1和P2之间的关系如公式(2):
H=KR1R2   公式(1)
P2=HP1      公式(2)
其中,K为表示第一图像传感器和第二图像传感器的位置关系的矩阵,R1表示第一图像传感器的内参矩阵,R2表示第二图像传感器的内参矩阵。
该位置信息可以是目标对象在第一图像的坐标系下的坐标,可以将目标对象在第一图像坐标系下的坐标通过该映射关系H映射到第二图像的坐标系下,然后判定该坐标是否落 入第二图像中,如果是,则说明第二图像中也包含目标对象。
在根据第一图像传感器和第二图像传感器的位置关系得到目标对象是否在第二图像传感器的视角范围内的判定结果后,可以根据判定结果确定是否执行相应的保护操作。比如,如果确定目标对象不在第二图像传感器的视角范围内,则可以不用对第二图像传感器采取保护操作,此时,如果第二图像传感器处于工作状态,则可以保持第二图像传感器继续工作,如果第二图像传感器处于非工作状态,则可以将其切换成工作状态。
当然,如果判定目标对象在第二图像传感器的视角范围内,则可以采取相应的保护操作,比如,可以关闭第二图像传感器的快门,避免阳光等直射第二图像传感器,或者控制第二图像传感器转动,以使目标对象移出第二图像传感器的视角范围。如果第二图像传感器安装在云台上,则可以通过控制云台转动来带动第二图像传感器转动,如果第二图像传感器安装于无人机等可移动设备上,则可以控制可移动设备转动以带动第二图像传感器转动。当然,可以根据目标对象当前在第二图像的位置和第二图像传感器的视角来确定转动方向,以及预估应该转动多大角度,才能使目标对象位于第二图像传感器的视角范围之外,当然,也可以每次转动一个预设角度,然后再进行一次目标对象是否位于第二图像传感器的视角范围内的检测操作,直至检测到目标对象不在第二图像传感器的视角范围内则停止转动。
当然,在对第二图像传感器执行保护操作后,不能使第二图像传感器一直处于被保护状态,否则会影响第二图像传感器的正常工作,比如,在关闭第二图像传感器的快门后,不能一直保持第二图像传感器快门处于关闭状态,这样第二图像传感器无法工作,所以,还需设置取消对第二图像传感器的保护操作的触发机制,以保证第二图像传感器重新工作。所以,在某些实施例中,可以预先设置一些指定条件,当这些指定条件被触发后,则可以取消对第二图像传感器的保护操作。其中,这些指定条件可以是当快门关闭时长达到预设的时长,当检测到第二图像传感器转动的角度达到预设角度,或者是继续对第一图像进行目标对象的检测,并且根据第一图像传感器和第二图像传感器的位置关系判定目标对象不在第二图像传感器的视角范围内时,则可以取消对第二图像传感器的保护操作。
比如,如果第二图像传感器位于一些可移动的设备上,设备一直处于移动状态,因而,可以认为设备运动一段时间后,则目标对象可能不在第二图像传感器的视角范围内,这时可以取消保护操作。当然,如果第二图像传感器安装于一些可以转动的设备上,比如云台或者无人机等设备,这时可以通过检测第二图像传感器的转动角度是否达到预设阈值,如果转动角度达到预设阈值,也可以认为目标对象很大概率不在第二图像传感器的视角范围内,这时也可以取消保护操作。当然,也可以是继续检测目标对象是否在第二图像传感器的视角范围内,如果不在,也可以取消保护操作。
其中,在某些实施例中,取消保护操作可以是重新开启第二图像传感器的快门,使其继续工作,或者是控制第二图像传感器转动,使其继续工作。
在某些实施例中,第二图像传感器可以安装于无人机上,无人机包括陀螺仪,在检测到目标对象位于第二图像传感器的视角范围内时,可以关闭快门,以对第二图像传感器进行保护,当然,也可以通过控制无人机转动来带动第二图像传感器转动,以使目标对象位于第二图像传感器的视角之外,当无人机的陀螺仪检测到无人机的转动角度达到预设角度后,则可以重新开启快门,以便第二图像传感器可以继续工作。
为了进一步解释本申请的图像传感器保护方法,以下结合一个具体实施例加以解释。
如图4所示,为本申请的图像传感器保护方法的一个应用场景示意图,无人机40安装有一个红外传感器41、一个可见光传感器42,分别用于采集红外图像和可见光图像,其中,可见光传感器和红外传感器的位置关系是已知的。例如,可见光传感器和红外传感器朝向同一方向并排设置,这种情况下可见光传感器和红外传感器的视角重叠区域较大。当然,在其他实现方式中,可见光传感器和红外传感器可以朝向不同方向,且可见光传感器和红外传感器的视角范围存在重叠区域。无人机在室外执行侦查和巡检任务时,不可避免会拍摄到太阳,由于红外传感器41在太阳直射下,容易产生灼烧效应,对红外传感器造成不可逆的损伤,因而需要对红外传感器进行保护,避免被太阳损伤。其中,红外传感器41可以处于工作状态,也可以处于非工作状态。判定是否对红外传感器进行保护操作的具体过程如下:
首先,获取可见光传感器采集的可见光图像,然后检测可见光图像中是否包含太阳,其中,可以采用深度学习算法对可见光图像进行太阳的检测,具体的,可以预先用大量包含太阳的图像对预设的神经网络模型进行训练,得到图像分类模型,然后采用该图像分类模型对可见光图像进行太阳检测。由于可见光传感器和红外传感器的位置关系和视角是已知的,可以先根据可见光传感器和红外传感器的位置关系和视角大小确定两个传感器的重叠视角,如果两个传感器视角完全没有重叠,则当在可见光图像检测到太阳时,则可以判定红外传感器的视角中不存在太阳。当然,如果可见光传感器和红外传感器的视角几乎完全重叠,则当在可见光图像检测到太阳时,则可以判定红外传感器的视角中也存在太阳。
当然,如果在可见光图像中检测到太阳,并且根据两个传感器的视角关系无法判定太阳是否在红外传感器的视角范围内,则可以进一步确定太阳在可见光图像的位置信息,该位置信息可以是太阳在可见光图像坐标系下对应的坐标,然后根据红外传感器的内参数、可见光传感器的内参数以及两者的位置关系确定可见光图像坐标系和红外图像坐标系的映射关系,并且通过该映射关系将太阳在可见光图像坐标系下对应的坐标映射到红外图像坐标系下,然后判定映射后的坐标是否在红外图像上,如果在,则判定红外图像上也存在太阳,即太阳在红外传感器的视角范围内。
在确定太阳在红外传感器视角范围内后,可以执行对红外传感器的保护操作,以避免红外传感器被太阳灼伤。比如,可以关闭红外传感器的快门,或者控制无人机转动,以使太阳移出红外传感器的视角范围内。当然,也可以设定取消对红外传感器的保护的触发机制,比如,快门关闭时长达到预设值后,可以重新开启快门或者控制红外传感器转动,以便重新工作,或者通过无人机的陀螺仪检测到无人机旋转角度达到预设值后,可以重新开启快门。当然,也可以在执行保护操作后,继续对可见光图像进行太阳检测,然后通过可见光传感器和红外传感器的位置关系来判定红外传感器视角范围内是否存在太阳,当判定红外传感器的视角范围内不存在太阳,则取消保护操作。
通过上述方式,可以在红外传感器处于非工作状态时,也可以检测红外传感器的视角中是否存在太阳,从而决定是否执行保护操作,可以尽量避免红外传感器被太阳灼伤。此外,通过在分辨率更高的可见光图像上进行太阳的检测,也可以提高检测结果的准确度。
相应的,本申请还提供了一种图像传感器保护装置,如图5所示,所述装置50包括:处理器51、存储器52以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
在某些实施例中,所述处理器用于根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像传感器与所述第二图像传感器的重叠视角;
基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内。
在某些实施例中,所述处理器用于基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
若所述重叠视角大于预设角度,则判定所述目标对象在所述第二图像传感器的视角范围内。
在某些实施例中,所述处理器用于根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
确定所述目标对象在所述第一图像中的位置信息;
根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像与所述第二图像传感器采集的第二图像之间的映射关系;
根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中;
如果在,则判定所述目标对象在所述第二图像传感器的视角范围内。
在某些实施例中,所述映射关系为所述第一图像的坐标系和所述第二图像的坐标系之间的映射关系,所述位置信息为所述目标对象在所述第一图像的坐标系下的坐标;
所述处理器用于根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中时,包括:
将所述坐标通过所述映射关系映射到所述第二图像的坐标系下,并判定映射后的坐标是否在所述第二图像中。
在某些实施例中,所述处理器用于根据判定结果确定是否执行对所述第二图像传感器的保护操作时,包括:
当判定所述目标对象在所述第二图像传感器的视角范围内,则执行以下任一保护操作:
关闭所述第二图像传感器的快门;或
控制所述第二图像传感器转动,以使所述目标对象在所述第二图像传感器的视角范围之外。
在某些实施例中,在执行所述保护操作之后,所述处理器还用于:
当指定条件触发后,取消对所述红外传感器的保护操作。
在某些实施例中,所述处理器用于取消对所述红外传感器的保护操作时,包括:
开启所述第二图像传感器的快门,或
控制所述第二图像传感器转动,以使所述第二图像传感器重新工作。
在某些实施例中,所述指定条件包括:
对所述第一图像进行所述目标对象的检测,并根据所述第一图像传感器和第二图像传感器的位置关系判定所述目标对象不在所述第二图像传感器的视角范围内;或
快门关闭时长大于预设时长;或
所述第二图像传感器的旋转角度达到预设阈值。
在某些实施例中,所述第一图像传感器和所述第二图像传感器安装于无人机,所述无人机包含陀螺仪,所述第二图像传感器的旋转角度通过所述陀螺仪检测得到。
在某些实施例中,所述第一图像传感器为可见光传感器,所述第二图像传感器为红外传感器。
在某些实施例中,所述目标对象包括太阳。
在某些实施例中,所述第二图像传感器处于非工作状态。
其中,所述图像保护装置在对图像传感器进行保护时的具体实施细节可参考上述图像传感器保护方法中各实施例的描述,在此不再赘述。
此外,本申请还提供了一种图像采集装置,如图6所示,所述图像采集装置60包括第一图像传感器61、第二图像传感器62和图像传感器保护装置63,所述图像传感器保护装置包括:处理器631、存储器632以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
进一步地,本申请还提供了一种无人机,包括图像采集装置,其中图像采集装置的结构示意图可参考图6,所述图像采集装置包括第一图像传感器、第二图像传感器和图像传感器保护装置,所述图像传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
获取所述第一图像传感器采集的第一图像;
当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
根据判定结果确定是否执行对所述第二图像传感器的保护操作。
其中,所述图像保护装置在对图像传感器进行保护时的具体实施细节可参考上述图像传感器保护方法中各实施例的描述,在此不再赘述。
可选的,所述图像采集装置可以是相机,或者所述图像采集装置为无人机,该无人机上搭载上述第一图像传感器和第二图像传感器。可选的,无人机上设置有云台,该第一图像传感器和第二图像传感器可以设置在云台上,云台可以控制图像传感器朝特定方向运动。
此外,红外传感器在对一些高能量对象进行拍摄时,易造成传感器的损伤,比如,针对太阳、激光等一类对象,如果长时间出现在红外传感器的拍摄视角中,会对红外传感器产生灼烧效应,导致红外传感器出现短暂或永久性损伤。因而,在红外传感器的使用过程中,有必要判定这些对象是否在红外传感器的视角范围内,从而决定是否对红外传感器采取一定的保护措施,避免被这一类对象损伤。
红外传感器通常都是通过氧化钒或者多晶硅等材料制作而成,无论是上述哪种材料制备的红外传感器,当其长时间对着一些高能量对象进行拍摄时,比如太阳、激光等,都会产生灼烧效应,易造成传感器的损伤。比如,在室外拍摄时,如果红外传感 器长时间对着太阳拍摄,容易被太阳的高温灼伤而导致不可逆的器件损坏。因此,有必要采取一些保护操作对红外传感器进行保护,避免被这一类高能量的物体损伤。
通常,在对红外传感器采取保护操作之前,可以先判断太阳等会对红外传感器产生损伤的目标对象是否出现在红外传感器的视角范围内,如果会,则采取相应的保护操作。而判定这一类目标对象是否出现在红外传感器的视角范围内通常是通过对红外传感器采集的红外图像进行目标对象的检测,如果红外图像中出现上述目标对象,则采取保护措施。但是由于红外图像的分辨率往往较差,仅通过红外图像进行目标对象的检测,准确度往往不高,易造成误判的现象,比如,当太阳等目标对象不在红外传感器的视角范围内,由于误判也采取保护操作,严重影响用户的体验。因而,有必要提供一种红外传感器的保护方法,可以更加准确地判断出太阳等易对红外传感器造成损伤的目标对象是否在红外传感器的视角范围内,以便决定是否执行相应的保护操作。
基于此,本申请提供了一种红外传感器的保护方法,所述方法可以结合除红外传感器以外的其他传感器的信息来辅助判定对红外传感器造成损伤的目标对象是否在红外传感器的视角范围内,以提高判定结果的准确性,具体的,所述方法如图7所示,包括以下步骤:
S702、对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
S704、根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
S706、根据判定结果确定是否执行对所述红外传感器的保护操作。
本申请的红外传感器保护方法可以由红外传感保护装置执行,该红外传感器保护装置和红外传感器可以集成在一个设备上,比如,可以是集成了红外传感器和红外传感器保护装置的红外相机、无人机等设备,当然,该红外传感器保护装置和红外传感器也可以分别位于两个不同的设备,比如红外传感器安装于无人机上,而红外传感器保护装置可以集成在无人机的控制终端上,无人机采集的红外图像和其他传感器的统计信息发送给控制终端,控制终端判定目标对象是否在红外传感器的视角范围内,并做出是否执行保护操作的决策后,再发送相应的指令给无人机。
本申请的目标对象包括任一对红外传感器造成损伤的对象,比如,在某些实施例中,该目标对象可以是太阳,当然,也可以是其他的对象,本申请不作限制。
在判定目标对象是否在红外传感器的视角范围内时,可以先获取红外传感器采集的红外图像,然后对红外图像进行目标对象的检测,得到红外图像包含目标对象或者不包含目标对象的检测结果。由于红外传感器采集的红外图像通常分辨率比较低,如果仅仅是对红外图像进行目标对象的检测,根据检测结果来确定是否执行对红外传感器的保护操作,结果会不太准确。假设目标对象为太阳,通常可以通过在红外图像进行圆形目标(在某些场景,也可以是弧形目标)检测来判定红外图像中是否存在太阳,并且太阳温度往往较高,因而可以结合温度一起判定,如果判定红外图像中存在太阳,则执行相应的保护操作。但是,很多情况下,可能红外图像中存在其他温度较高的圆形目标,这样便会误判成太阳,并且执行保护操作,严重影响用户体验。
因此,本申请可以结合除红外传感器以外的至少一个其他的传感器的统计信息, 来辅助判定红外图像中是否存在太阳,提高判定结果的准确性。其他的传感器可以是其他的图像传感器,比如可见光传感器、紫外传感器等,也可以是其他用于检测物体运动状态的传感器,比如运动状态传感器、GPS传感器、陀螺仪、惯性传感器等,只要是根据该传感器得到的统计信息可以用于辅助判定目标对象是否在红外传感器的视角范围内即可,本申请不作限制。
其中,其他传感器和红外传感器可以集成在一个设备上,也可以位于不同的设备,举个例子,假设其他的传感器为可见光传感器,那么红外传感器和可见光传感器可以安装于同一个设备,比如,该设备可以是双光相机、安装有红外传感器和可见光传感器的无人机等。如图8所示,为本申请提供的红外传感器保护方法适用的一个设备,该设备同时包含红外传感器80和可见光传感器81,通过可见光传感器81的统计信息,比如环境亮度信息、当前的天气信息等来辅助判定在目标对象是否在红外传感器80的视角范围内。当然,红外传感器和可见光传感器也可以位于两个独立的设备,只要两个设备在同一个环境下即可,两个设备可以有一定的视角重叠,也可以没有视角重叠。当然,如果其他传感器是用于检测运动状态的传感器,则要求其他传感器可以检测到红外传感器的运动状态,比如,可以检测红外传感器当前的高度信息,旋转角度信息等,结合这些统计信息来辅助判定目标对象是否在红外传感器的视角范围内。
通过结合其他的传感器的统计信息来辅助判定对红外传感器造成损伤的目标对象是否位于红外传感器的视角范围内,可以提高判定结果的准确性,既可以有效地对红外传感器进行保护,又不会因为误判给用户造成不便。
在判定目标对象是否位于红外传感器的视角范围内时,判定速度越快,则执行保护操作的响应速度也越快,则可以尽量缩短红外传感器拍摄目标对象的时间,减小对红外传感器的损害。因此,在某些实施例中,为了降低目标对象检测过程中的运算复杂度,提高运算速度,在对红外图像进行目标对象的检测之前,可以先对红外图像进行下采样处理,减少红外图像的数据量,下采样的倍数需保证可以从下采样后的红外图像中识别出目标对象,在这个前提下,下采样后的数据量越小,越有利于提高检测速度。
在某些实施例中,对红外传感器造成损伤的目标对象可以是形状满足预设形状且温度满足预设温度阈值的对象。比如,目标对象为太阳,其形状一般为较规则的圆形,且其温度一般高于其他事物,往往为红外图像中温度最高的对象。
在对形状和温度都满足一定条件的目标对象进行检测时,为了提高检测速率和准确度,可以先根据目标对象的温度在红外图像中进行初步筛选,然后再根据目标对象的形状进一步筛选,以检测出红外图像中的目标对象,当然,也可以是先根据目标对象的形状进行初步筛选,然后再根据目标对象的温度进一步筛选,以准确检测出红外图像中的目标对象。所以,在某些实施例中,在对红外图像进行目标对象的检测时,可以先从红外图像中确定第一区域,其中,第一区域为形状和目标对象形状一致的图像区域,然后再根据所述目标对象的温度从初步筛选的第一区域中检测目标对象,比如,目标对象的温度高于一定阈值,则可以判定第一区域中是否存在高于该阈值的区域,如果存在,则可以判定为目标对象对应的图像区域。
在某些实施例中,目标对象可以是温度较高的对象,比如太阳,因而整个目标对象在红外图像对应的温度不仅高于一定阈值,并且与其他对象的温度差值也会高于一 定阈值。因而,在根据目标对象的形状确定第一区域后,可以判定第一区域中的各像素点的灰度值是否高于第一预设阈值,并且第一区域的边缘像素点与该第一区域以外的邻近像素点的灰度值差值是否高于第二预设阈值,如果符合这两个条件,即可以将该第一区域判定为目标对象对应的图像区域。
在某些实施例中,也可以根据目标对象的温度从红外图像中确定第二区域,比如,目标对象的温度高于一定阈值,则第二区域可以是温度高于该阈值的图像区域,然后再根据目标对象的形状从第二区域中确定出目标对象,比如,检测第二区域中是否存在和目标对象形状一致的区域,如果存在,则判定该区域为目标对象对应的图像区域。当然,在某些实施例中,如果目标对象为红外图像中温度最高的对象,在确定红外图像区域时,可以先确定红外图像中灰度值最大的像素点,然后以该像素点为中心确定一个指定区域,作为第二区域。比如,可以以该灰度值最大的像素点为中心,确定一个M×N的区域,作为第二区域,然后再从第二区域中检测与目标对象形状一致的区域。
以下结合一个例子对目标对象的检测加以说明,假设目标对象为太阳,太阳通常呈现为比较规则的圆形,并且其温度通常高于其他的事物,则可以采用以下两种方式对红外图像进行太阳的检测:
方式1:如图9所示,可以先对红外图像进行圆形目标检测,检测出红外图像中的圆形区域91。然后可以进一步判定这些圆形区域中的像素点(如图9中的P1、P2)的灰度值是否高于第一预设阈值,并且这些圆形区域31的边缘像素点(如图9中的P2)与其在圆形区域91外的邻近像素点(如图9中的S1)的灰度值差值是否高于第二预设阈值,如果符合上述条件,则可以判定该圆形区域91即为太阳对应的图像区域,即可以判定红外图像中存在太阳。如果,不存在圆形区域或者圆形区域的像素点的灰度值都不符合上述条件,则可以判定红外图像中不存在太阳。
方式2:如图10所示,可以先确定红外图像中灰度值最大的像素点(如图10中的P0),然后以该像素点P0为中心确定一个M×N的区域101,然后再对这个M×N的区域101进行圆形目标检测,如果检测到圆形区域,即判定红外图像中存在太阳,如果未检测到圆形区域,则判定红外图像中不存在太阳
其中,圆形检测可以采用hough圆检测算法、重心法或最小二乘法等通用算法。通过双重筛选,即可以提高目标对象的检测效率,也可以提高检测结果的准确度。
在对红外图像进行目标对象检测,得到检测结果后,可以结合其他传感器的统计信息来综合判定目标对象是否在红外传感器的视角范围内。在某些实施例中,在结合其他传感器的统计信息来判定目标对象是否存在红外传感器的视角范围内时,可以根据对红外图像进行目标对象检测得到的检测结果以及传感器统计信息得到一个用于表征目标对象在红外传感器的视角范围内的目标置信度,然后根据这个目标置信度来判定目标对象是在红外传感器的视角范围内。比如,可以根据红外图像的检测结果确定置信度A1,然后根据传感器统计信息确定置信度A2,这两个置信度可以表征目标对象在红外传感器的视角范围内的概率,然后结合这两个置信度来综合确定最终的目标置信度,比如,目标置信度可以是A1的A2的乘积,当然,也可以通过其他算法计算目标置信度,然后根据目标置信度来判定目标对象是否在红外传感器的视角范围内,比如,当目标置信度大于一定阈值,比如95%,则判定目标对象在红外传感器的视角 范围内。
在某些实施例中,其他传感器的统计信息可以是以下一种或者多种:基于可见光传感器得到的可见光统计信息、陀螺仪检测的运动状态信息、运动传感器检测的运动状态信息、惯性传感器检测得到的运动状态信息和GPS传感器检测的高度信息。
在某些实施例中,可见光统计信息可以是自动曝光统计信息和/或自动白平衡统计信息。其中,可见光传感器可以设置成自动曝光(AE:Automatic Exposure)模式或者自动白平衡(AWB:Automatic White Balance)模式,可以获取可见光传感器在这两种模式下的自动曝光统计信息以及自动白平衡统计信息。其中,在某些实施例中,所述自动曝光统计信息可以是用于确定可见光传感器当前所处环境的亮度的统计信息,比如,自动曝光统计信息可以是根据当前设定的曝光参数(光圈参数、曝光时间和感光度)以及相机的标定数据计算得到的当前场景的环境亮度信息,也可以是画面的过曝信息、直方图统计信息等AE统计信息。
通常,在AE模式中,可以通过公式(1)来确定当前环境的平均亮度B:
B=K·A²/(T·Sx)   公式(1)
进而,可以根据公式(2)计算得到当前环境的亮度值Bv,
Bv=log2(B/(N·K))   公式(2)
其中:A为光圈值,T为曝光时间;B为环境平均亮度;Sx为感光度;K为反射光测光矫正常量,Bv为当前环境亮度值,N为常量,近似为0.3。
由于在有太阳的天气和阴天,环境亮度差别比较大,有太阳的时候环境亮度往往较大,因而,可以根据计算得到的环境亮度值来判定当前场景是否存在太阳。比如,环境亮度值大于预设阈值时,则认为当前场景存在太阳,由于红外传感器和可见光传感器处于相同的环境,如果当前场景中存在太阳,则红外图像中检测到的圆形目标为太阳的概率就进一步增大了,其检测结果也得到了进一步的验证。当然,如果根据环境亮度判定当前场景不存在太阳,则红外图像检测的太阳可能并非是真正的太阳,可能存在误判,其检测结果的可信度就会降低。当然,在某些实施例中,也可以不根据曝光参数来计算环境的亮度,比如可以根据画面的过曝信息和灰度直方图统计信息来粗略判定当前的环境亮度。比如,如果画面一直处于过曝状态,则说明环境亮度较高,则存在太阳的概率较大。
在某些实施例中,自动白平衡统计信息可以用于确定可见光传感器当前所处场景的类型,所述场景的类型包括白天、黑夜、室内、室外中的一种或多种。在可见光传感器设置为AWB模式时,可以AWB系统可以对当前场景进行分类(如风景、室内、蓝天、夜晚等),因而可以根据AWB划分的场景类型来确定当前场景存在太阳的概率,比如,如果确定当前场景为黑夜或室内,则当前场景存在太阳的概率很低,如果确定当前场景为室外或者白天,则当前场景存在太阳的概率较高,由于红外传感器和可见光传感器处于相同的环境,因而可以根据可见光传感器是否处于有太阳的场景来对红外图像的检测结果进一步验证。
在某些实施例,如果红外传感器和GPS传感器、陀螺仪、运动传感器等传感器安装于同一个设备,则可以用GPS传感器、陀螺仪、运动传感器等传感器来判定红外传感器的运动状态,来对红外图像的检测结果进行验证。比如,当红外传感器和GPS传感器、陀螺仪、运动传感器等都安装于无人机,则可以根据GPS检测的高度信息来确定此时无人机位于室外还是室内,比如,如果通过GPS检测到无人机的飞行高度一直低于预设阈值,比如3米,则很有可能此刻无人机正处于室内,因而,当前场景存在太阳的概率也比较低。如果检测到无人机的飞行高度一直高于预设阈值,比如3米,则此刻无人机很大可能处于室外,当前场景存在太阳的概率也比较高。
如果已经判定当前红外传感器的视角中没有太阳,而根据陀螺仪检测到当前无人机旋转角度很小,或者保持不变,或者根据运动状态传感器检测到当前无人机的运动方向、当前位置、姿态等都未发生变化,则此时红外传感器的视角中出现太阳的概率也较低。同样的,如果已经判定当前红外传感器的视角中出现太阳,在陀螺仪和运动传感器等检测到无人机处于飞行状态时,则红外传感器的视角中出现太阳的概率也较高。
在确定目标对象是否在红外传感器的视角范围内时,可以基于各传感器的统计信息分别得到一个表征目标对象在红外传感器的视角范围内的置信度,然后综合各置信度来确定最终的目标置信度,根据目标置信度来确定最终的判定结果。
在某些实施例中,如果红外传感器与可见光传感器处于相同的环境、且可以用GPS传感器来确定红外传感器的高度信息,比如,红外传感器、可见光传感器以及GPS传感器同时安装于无人机的场景,则可以先根据红外图像中目标对象的检测结果确定用于表征目标对象在红外传感器的视角范围内的第一置信度,然后根据从可见光传感器得到的自动曝光统计信息,比如当前环境亮度确定用于表征目标对象在红外传感器的视角范围内的第二置信度,再根据从可见光传感器得到的自动白平衡统计信息,比如当前场景为黑夜还是白天、室内还是室外确定用于表征目标对象在红外传感器的视角范围内的第三置信度,最后根据GPS采集的高度信息确定用于表征目标对象在红外传感器的视角范围内的第四置信度,然后根据第一置信度、第二置信度、第三置信度和第四置信度得到目标置信度,比如,第一置信度、第二置信度、第三置信度和第四置信度分别为A1、A2、A3、A4,则目标置信度A可以是A1、A2、A3、A4的乘积,当然也可以是A1、A2、A3、A4的加权均值等,具体计算方法可根据实际情况设置,确定目标置信度后,再根据目标置信度来判定目标对象是否在红外传感器的视角范围内。
在某些实施例中,为了更加准确地判定目标对象是否在红外传感器的视角范围内,还可以结合可见光传感器采集的可见光图像来进一步辅助判定。比如,在红外传感器和可见光传感器的位置关系已知的情况下,可以对可见光传感器采集的可见光图像也进行目标对象的检测,如果检测到目标对象,则说明目标对象在可见光传感器的视角范围内,则可以根据可见光传感器和红外传感器的位置关系判定目标对象是否在所述红外传感器的视角范围内,并得到第一判定结果。在得到第一判定结果后,可以结合根据对红外图像进行目标对象检测的检测结果、可见光传感器的统计信息以及该第一判定结果来确定目标置信度,再根据目标置信度得到最终的判定结果。
在某些实施例中,在根据红外传感器和可见光图像传感器的位置关系判定目标对象是否在红外图像传感器的视角范围内时,可以简单的根据可见光传感器和红外图像传感器之间的视角关系来判定目标对象是否在红外传感器的视角范围内。比如,可以根据可见光传感器与红外图像传感器的位置关系确定可见光传感器与红外传感器的重叠视角,然后再根据两个图像传感器之间的重叠视角来判定目标对象是否在红外图像传感器的视角范围内。比如,每个图像传感器都有一个拍摄视角,该视角范围内的景物才可以落入其采集的图像中,因而可以根据可见光传感器与红外传感器的视角的大小以及两个传感器之间的位置关系确定两个传感器的重叠视角,然后根据重叠视角来判定在可见光传感器视角范围内的景物是否也会在红外传感器的视角范围内。比如,如果红外传感器和可见光传感器的视角完全不重叠,即重叠视角为0,那么在可见光传感器视角范围内的景物必然不会落入到红外传感器的视角范围内,也就是说,如果两个传感器视角没有重叠,如果在可见光图像检测到了目标对象,则说明目标对象不在红外传感器的视角范围内。当然,如果可见光传感器的视角完全落入红外传感器的视角范围,则在可见光图像检测到了目标对象,则说明目标对象也在红外传感器的视角范围内。
当然,在某些实施例中,如果可见光传感器和红外传感器的重叠视角大于预设角度,则可以认为目标对象在可见光传感器的视角范围内,则也在红外传感器的视角范围内。比如说,可见光传感器和红外传感器的重叠视角达到了的可见光传感器视角的95%以上,则认为目标对象在可见光传感器的视角范围内,那么也有很大概率在红外传感器的视角范围内。
当然,通过两个传感器的视角关系可以做一个简单的判定,在某些实施例中,如果根据两个传感器的视角关系没法准确地确定目标对象是否在红外传感器的视角范围内时,则可以进一步确定目标对象在可见光图像的位置信息,然后根据可见光传感器和红外传感器的位置关系确定可见光图像和红外图像的映射关系,然后根据该位置信息和映射关系来确定目标对象是否在红外图像中,如果在,则判定目标对象在红外传感器的视角范围内。其中,映射关系可以是可见光图像和红外图像的像素点之间的映射关系,也可以是两种图像的坐标系之间映射关系。由于两个传感器之间的位置关系是已知的,并且两个传感器的内参数也可以标定得到,因此,可以根据两个传感器的位置关系和两个传感器的内参数来确定可见光图像和红外图像的映射关系。
其中,在某些实施例,该映射关可以是可见光图像的坐标系和红外图像的坐标系之间的映射关系,假设可见光图像的坐标系和红外图像的坐标系之间的映射关系用矩阵H表示,P1表示可见光图像的坐标系下的像素坐标,P2表示红外图像的坐标系下的像素坐标,则H可以通过公式(1)确定,P1和P2之间的关系如公式(2):
H=KR1R2   公式(1)
P2=HP1    公式(2)
其中,K为表示可见光传感器和红外传感器的位置关系的矩阵,R1表示可见光传感器的内参矩阵,R2表示红外传感器的内参矩阵。
该位置信息可以是目标对象在可见光图像的坐标系下的坐标,可以将目标对象在可见光图像坐标系下的坐标通过该映射关系H映射到红外图像的坐标系下,然后判定该坐标是否落入红外图像中,如果是,则说明红外图像中也包含目标对象。
当然,在某些实施例中,为了更准确地判定红外图像上是否存在目标对象,还可以根据可见光图像上检测到的目标对象的位置与红外图像上检测到的目标对象的位置是否匹配来判定,通过这种方式,可以得到更加准确的判定结果。具体的,可以对红外图像进行目标对象的检测,检测到目标对象后,确定目标对象在红外图像中的第二位置信息,然后对可见光图像进行目标对象的检测,若检测到目标对象,则确定目标对象在可见光图像中的第一位置信息,然后根据可见光图像和红外图像的映射关系将第一位置信息和第二位置信息映射到同一个坐标系,比如,映射到红外图像的坐标系,或者映射到可见光图像的坐标系,然后判定映射到同一个坐标系后的两个位置信息是否一致,得到第二判定结果,如果映射到同一个坐标系后的两个位置信息一致,则说明红外图像中存在目标对象的概率很大。然后可以根据对红外图像进行目标对象检测的检测结果、可见光传感器的统计信息和该第二判定结果来综合确定目标置信度,再根据目标置信度得到最终的判定结果。通过这种方式得到的判定结果的准确度较高,出现误判的概率很低。
在结合红外传感器采集的红外图像以及其他的传感器的统计信息得到目标对象是否在红外传感器的视角范围的最终的判定结果后,可以根据最终的判定结果来决定是否执行对红外传感器的保护操作。如果确定目标对象不在红外传感器的视角范围内,则可以不采取任何措施,继续保持红外传感器正常工作。如果判定目标对象在红外传感器的视角范围内,则可以关闭红外传感器的快门,避免太阳光等直射红外传感器,或者控制红外传感器转动,让目标对象移出红外传感器的视角。如果红外传感器安装在云台上,则可以通过控制云台转动来带动红外传感器转动,如果红外传感器安装于无人机等可移动设备上,则可以控制可移动设备转动以带动红外传感器转动。当然,可以根据目标对象当前在红外图像的位置和红外传感器的视角来确定转动方向,以及预估应该转动多大角度,才能使目标对象位于红外传感器的视角范围之外,当然,也可以每次转动一个预设角度,然后再进行一次目标对象是否位于红外传感器的视角范围内的检测操作,直至检测到目标对象不在红外传感器的视角范围内则停止转动。
当然,在关闭红外传感器的快门后,不能一直保持红外传感器快门处于关闭状态,这样红外传感器无法工作,所以,还需设置红外传感器快门重新开启的触发机制,以保证红外传感器重新工作。所以,在某些实施例中,可以预先设置一些指定条件,当这些指定条件被触发后,则可以重新开启红外传感器的快门。其中,这些指定条件可以是当快门关闭时长达到预设的时长或者当检测到红外传感器转动的角度达到预设角度。比如,如果红外传感器位于一些可移动的设备上,设备一直处于移动状态,因而,可以认为设备运动一段时间后,则目标对象可能不在红外传感器的视角范围内,这时可以开启快门。当然,如果红外传感器安装于一些可以转动的设备上,比如云台或者无人机等设备,这时可以通过检测红外传感器的转动角度是否达到预设阈值,如果转动角度达到预设阈值,也可以认为目标对象很大概率不在红外传感器的视角范围内,这时也可以重新开启快门。
在某些实施例中,红外传感器可以安装于无人机上,无人机包括陀螺仪,在检测到目标对象位于红外传感器的视角范围内时,可以关闭快门,以对红外传感器进行保护,当然,也可以通过控制无人机转动来带动红外传感器转动,以使目标对象位于红外传感器的视角之外,当无人机的陀螺仪检测到无人机的转动角度达到预设角度后, 则可以重新开启快门,以便红外传感器可以继续工作。
为了进一步解释本申请提供的红外传感器保护方法,以下结合一个具体的实施例加以解释。
如图11所示,为本申请的红外传感器保护方法的一个应用场景示意图,无人机110安装有一个红外传感器111、一个可见光传感器112,分别用于采集红外图像和可见光图像,同时,无人机上还包括陀螺仪(位于无人机内部,图11中未示出),用于检测无人机的运动姿态,以及GPS传感器(位于无人机内部,图11中未示出),用于对无人机进行定位。无人机在室外执行侦查和巡检任务时,不可避免会拍摄到太阳,由于红外传感器在太阳直射下,容易产生灼烧效应,对红外传感器造成不可逆的损伤,因而需要对红外传感器进行保护,避免被太阳损伤。执行保护操作的过程如图12所示,具体如下:
首先,可以获取红外传感器111采集的红外图像,检测红外图像中是否存在太阳,具体的,为了降低运算复杂度,可以先对红外图像进行下采样处理,得到下采样后的图像,由于太阳温度往往高于其他物体,所以,可以从下采样后的图像中确定灰度值最大的像素点,以该像素点为中心确定一个M×N区域,然后再对这个M×N区域进行圆形检测或者弧形检测,如果存在圆形区域或者弧形区域,则判定红外图像中存在太阳。当然,也可以先从下采样的图像中检测出圆形区域,然后判定各圆形区域的像素点的灰度值是否大于第一预设阈值,圆形区域的边缘像素点与圆形区域外的邻近像素点的灰度差值是否大于第二预设阈值,如果存在这样的圆形区域,则判定红外图像中存在太阳。在判定红外图像中存在太阳后,可以确定用于表征太阳在红外传感器视角范围的置信度Sconf1。
另外,由于可见光传感器112可以设置成自动曝光模式和自动白平衡模式,在自动曝光模式中,可以根据设定的曝光参数(敏感度、曝光时长和光圈参数)以及可见光传感器的标定数据计算当前环境的亮度,因而可以从可见光传感器获取自动曝光统计信息,从而确定当前环境的亮度值,由于有太阳的时候,环境亮度往往高于阴天时的环境亮度,因此,可以判定当前环境亮度值是否大于预设阈值,如果大于,则认为当前环境存在太阳,并且可以确定用于表征太阳在红外传感器视角范围的第二置信度Sconf2。当然,也可以从可见光传感器获取画面过曝信息或者直方图统计信息,来确定当前环境的亮度,得到确定当前场景存在太阳的Sconf2。
在自动白平衡模式,可以对当前场景进行分类(如风景、室内、蓝天、夜晚等)。由于太阳只可能出现在室外场景或者白天的场景,因此,可以从可见光传感器获取自动白平衡统计信息,统计信息中对当前场景分类为室外或者白天时,可以确定当前场景存在太阳的置信度Sconf3。
此外,还可以获取无人机的GPS传感器确定的无人机的高度信息,当无人机飞行高度高于一定距离(如3米),则可认为当前无人机处于室外,可能存在太阳,如果无人机的飞行高度一直低于一定距离(如3米),则可以认为无人机处于室内,则不可能存在太阳,因而可以根据GPS采集的高度信息得到确定当前场景存在太阳的置信度Sconf4。
在得到Sconf1、Sconf2、Sconf3、Sconf4后,可以结合多个置信度得到最终的置信度Sconf,并且Sconf当大于一定阈值时,则认为太阳在红外传感器的视角范围内, 则可以执行相应的保护措施,避免红外传感器被太阳灼伤。具体的,可以控制红外传感器关闭快门,或者控制无人机旋转一定的角度,以便太阳移出红外传感器的视角。在关闭快门后,可以统计快门关闭时长,当快门关闭时长达到预设时长,则可以重新开启快门,或者当陀螺仪检测到无人机的旋转角度达到一定的角度后,则可以重新开启快门,以保证红外传感器继续工作。
相应的,本申请还提供一种红外传感器保护装置,如图13所示,所述红外传感器保护装置130包括:处理器131、存储器132,所述存储器132用于存储计算机程序,所述处理器131执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
在某些实施例中,所述目标对象为形状满足预设形状且温度满足预设温度阈值的对象。
在某些实施例中,所述处理器用于对红外传感器采集的红外图像进行目标对象的检测时,包括:
从所述红外图像中确定第一区域,所述第一区域为所述预设形状的图像区域;
基于所述目标对象的温度从所述第一区域检测所述目标对象;或
根据所述目标对象的温度从所述红外图像中确定第二区域,基于所述预设形状从所述第二区域检测所述目标对象。
在某些实施例中,所述处理器用于基于所述目标对象的温度从所述第一区域检测所述目标对象时,包括:
将像素点的灰度值高于第一预设阈值,且边缘像素点与所述第一区域之外的邻近像素点的灰度值差值大于第二预设阈值的所述第一区域确定为所述目标对象。
在某些实施例中,所述第二区域为以所述红外图像中灰度值最大的像素点为中心确定的指定区域。
在某些实施例中,对红外传感器采集的红外图像进行目标对象的检测之前,还包括:
对所述红外图像进行下采样处理。
在某些实施例中,所述传感器统计信息包括以下一种或多种:基于可见光传感器得到的可见光统计信息、陀螺仪检测的运动状态信息、运动传感器检测的运动状态信息、惯性传感器检测得到的运动状态信息、GPS传感器检测的高度信息。
在某些实施例中,所述可见光统计信息包括:自动曝光统计信息和/或自动白平衡统计信息。
在某些实施例中,所述自动曝光统计信息用于确定所述可见光传感器当前所处环境的亮度。
在某些实施例中,所述自动白平衡统计信息用于确定所述可见光传感器当前所处场景的类型,所述场景的类型包括白天、黑夜、室内、室外中的一种或多种。
在某些实施例中,所述处理器用于根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内时,包括:
根据所述检测结果和所述传感器统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度;
根据所述目标置信度判定所述目标对象是否在所述红外传感器的视角范围内。
在某些实施例中,所述统计信息包括:基于可见光传感器得到的自动曝光统计信息、自动白平衡统计信息和GPS传感器检测的高度信息,根据所述检测结果和所述传感器统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度,包括:
根据所述检测结果确定用于表征所述目标对象在所述红外传感器的视角范围内的第一置信度;
根据所述自动曝光统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的第二置信度;
根据所述自动白平衡统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的第三置信度;
根据所述高度信息确定用于表征所述目标对象在所述红外传感器的视角范围内的第四置信度;
根据所述第一置信度、所述第二置信度、所述第三置信度和所述第四置信度得到所述目标置信度。
在某些实施例中,所述处理器还用于:
对所述可见光传感器采集的可见光图像进行所述目标对象的检测;
若检测到所述目标对象,则基于所述可见传感器和所述红外传感器的位置关系判定所述目标对象是否在所述红外传感器的视角范围内,并得到第一判定结果;
根据所述检测结果、所述传感器统计信息和所述第一判定结果确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度。
在某些实施例中,所述处理器还用于:
对所述可见光传感器采集的可见光图像进行所述目标对象的检测;
若检测到所述目标对象,则确定所述目标对象在所述可见光图像中的第一位置信息;
确定所述目标对象在所述红外图像中的第二位置信息;
根据所述第一位置信息、所述第二位置信息以及所述可见光图像和红外图像之间的映射关系判定所述目标对象是否在所述红外传感器的视角范围内,得到第二判定结果;
根据所述检测结果、所述传感器统计信息和所述第二判定结果确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度。
在某些实施例中,所述处理器用于根据判定结果确定是否执行对所述红外传感器的保护操作时,包括:
当判定所述目标对象在所述红外传感器的视角范围内,则执行以下任一保护操作:
关闭所述红外传感器的快门;或
控制所述红外传感器转动,以使所述目标对象在所述红外传感器的视角范围之外。
在某些实施例中,在关闭所述红外传感器的快门之后,所述处理器还用于:
当指定条件被触发后,重新开启所述红外传感器的快门。
在某些实施例中,所述指定条件包括:
快门关闭时长大于预设时长;或
所述红外传感器的旋转角度达到预设阈值。
在某些实施例中,所述红外传感器安装于无人机,所述无人机包括陀螺仪,所述红外传感器的旋转角度通过所述陀螺仪检测得到。
在某些实施例中,所述目标对象包括太阳。
其中,红外传感器保护装置在对红外传感器执行保护时的具体实施细节可参考上述红外传感器保护方法中各实施例中的描述,在此不再赘述。
此外,本申请还提供一种图像采集装置,如图14所示,所述图像采集装置140包括红外传感器141、除所述红外传感器之外的至少一个其他的传感器142和红外传感器保护装置143,所述红外传感器保护装置143包括:处理器1431、存储器1432以及存储在所述存储器上的计算机程序,所述处理器1431执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
其中,红外传感器保护装置在对红外传感器执行保护时的具体实施细节可参考上述红外传感器保护方法中各实施例中的描述,在此不再赘述。
进一步地,本申请还提供一种无人机,无人机包括图像采集装置,图像采集装置的示意图可参考图14,所述图像采集装置包括红外传感器、除所述红外传感器之外的至少一个其他的传感器和红外传感器保护装置,所述红外传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
根据判定结果确定是否执行对所述红外传感器的保护操作。
其中,红外传感器保护装置在对红外传感器执行保护时的具体实施细节可参考上述红外传感器保护方法中各实施例中的描述,在此不再赘述。相应地,本说明书实施例还提供一种计算机存储介质,所述存储介质中存储有程序,所述程序被处理器执行时实现上述任一实施例中的图像传感器保护方法或红外传感器保护方法。
本说明书实施例可采用在一个或多个其中包含有程序代码的存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机可用存 储介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括但不限于:相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本申请实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (69)

  1. 一种图像传感器保护方法,其特征在于,所述方法包括:
    获取第一图像传感器采集的第一图像;
    当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,所述目标对象为对所述第二图像传感器造成损伤的对象;
    根据判定结果确定是否执行对所述第二图像传感器的保护操作。
  2. 根据权利要求1所述的方法,其特征在于,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,包括:
    根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像传感器与所述第二图像传感器的重叠视角;
    基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内。
  3. 根据权利要求2所述的方法,其特征在于,基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内,包括:
    若所述重叠视角大于预设角度,则判定所述目标对象在所述第二图像传感器的视角范围内。
  4. 根据权利要求1所述的方法,其特征在于,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内,包括:
    确定所述目标对象在所述第一图像中的位置信息;
    根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像与所述第二图像传感器采集的第二图像之间的映射关系;
    根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中;
    如果在,则判定所述目标对象在所述第二图像传感器的视角范围内。
  5. 根据权利要求4所述的方法,其特征在于,所述映射关系为所述第一图像的坐标系和所述第二图像的坐标系之间的映射关系,所述位置信息为所述目标对象在所述第一图像的坐标系下的坐标;
    根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中,包括:
    将所述坐标通过所述映射关系映射到所述第二图像的坐标系下,并判定映射后的 坐标是否在所述第二图像中。
  6. 根据权利要求1所述的方法,其特征在于,根据判定结果确定是否执行对所述第二图像传感器的保护操作,包括:
    当判定所述目标对象在所述第二图像传感器的视角范围内,则执行以下任一保护操作:
    关闭所述第二图像传感器的快门;或
    控制所述第二图像传感器转动,以使所述目标对象在所述第二图像传感器的视角范围之外。
  7. 根据权利要求6所述的方法,其特征在于,在执行所述保护操作之后,还包括:
    当指定条件触发后,取消对所述红外传感器的保护操作。
  8. 根据权利要求7所述的方法,其特征在于,取消对所述红外传感器的保护操作,包括:
    开启所述第二图像传感器的快门,或
    控制所述第二图像传感器转动,以使所述第二图像传感器重新工作。
  9. 根据权利要求7所述的方法,其特征在于,所述指定条件包括:
    对所述第一图像进行所述目标对象的检测,并根据所述第一图像传感器和第二图像传感器的位置关系判定所述目标对象不在所述第二图像传感器的视角范围内;或
    快门关闭时长大于预设时长;或
    所述第二图像传感器的旋转角度达到预设阈值。
  10. 根据权利要求9所述的方法,其特征在于,所述第一图像传感器和所述第二图像传感器安装于无人机,所述无人机包含陀螺仪,所述第二图像传感器的旋转角度通过所述陀螺仪检测得到。
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述第一图像传感器为可见光传感器,所述第二图像传感器为红外传感器。
  12. 根据权利要求1所述的方法,其特征在于,所述目标对象包括太阳。
  13. 根据权利要求1所述的方法,其特征在于,所述第二图像传感器处于非工作状态。
  14. 一种图像传感器保护装置,其特征在于,所述装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
    获取所述第一图像传感器采集的第一图像;
    当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
    根据判定结果确定是否执行对所述第二图像传感器的保护操作。
  15. 根据权利要求14所述的图像传感器保护装置,其特征在于,所述处理器用于根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
    根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像传感器与所述第二图像传感器的重叠视角;
    基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内。
  16. 根据权利要求15所述的图像传感器保护装置,其特征在于,所述处理器用于基于所述重叠视角判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
    若所述重叠视角大于预设角度,则判定所述目标对象在所述第二图像传感器的视角范围内。
  17. 根据权利要求14所述的图像传感器保护装置,其特征在于,所述处理器用于根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内时,包括:
    确定所述目标对象在所述第一图像中的位置信息;
    根据所述第一图像传感器与所述第二图像传感器的位置关系确定所述第一图像与所述第二图像传感器采集的第二图像之间的映射关系;
    根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中;
    如果在,则判定所述目标对象在所述第二图像传感器的视角范围内。
  18. 根据权利要求17所述的图像传感器保护装置,其特征在于,所述映射关系为所述第一图像的坐标系和所述第二图像的坐标系之间的映射关系,所述位置信息为所述目标对象在所述第一图像的坐标系下的坐标;
    所述处理器用于根据所述位置信息和所述映射关系确定所述目标对象是否在所述第二图像中时,包括:
    将所述坐标通过所述映射关系映射到所述第二图像的坐标系下,并判定映射后的坐标是否在所述第二图像中。
  19. 根据权利要求14所述的图像传感器保护装置,其特征在于,所述处理器用于根据判定结果确定是否执行对所述第二图像传感器的保护操作时,包括:
    当判定所述目标对象在所述第二图像传感器的视角范围内,则执行以下任一保护操作:
    关闭所述第二图像传感器的快门;或
    控制所述第二图像传感器转动,以使所述目标对象在所述第二图像传感器的视角范围之外。
  20. 根据权利要求19所述的图像传感器保护装置,其特征在于,在执行所述保护操作之后,所述处理器还用于:
    当指定条件触发后,取消对所述红外传感器的保护操作。
  21. 根据权利要求20所述的图像传感器保护装置,其特征在于,所述处理器用于取消对所述红外传感器的保护操作时,包括:
    开启所述第二图像传感器的快门,或
    控制所述第二图像传感器转动,以使所述第二图像传感器重新工作。
  22. 根据权利要求20所述的图像传感器保护装置,其特征在于,所述指定条件包括:
    对所述第一图像进行所述目标对象的检测,并根据所述第一图像传感器和第二图像传感器的位置关系判定所述目标对象不在所述第二图像传感器的视角范围内;或
    快门关闭时长大于预设时长;或
    所述第二图像传感器的旋转角度达到预设阈值。
  23. 根据权利要求22所述的图像传感器保护装置,其特征在于,所述第一图像传感器和所述第二图像传感器安装于无人机,所述无人机包含陀螺仪,所述第二图像传感器的旋转角度通过所述陀螺仪检测得到。
  24. 根据权利要求14-23任一项所述的图像传感器保护装置,其特征在于,所述第一图像传感器为可见光传感器,所述第二图像传感器为红外传感器。
  25. 根据权利要求14所述的图像传感器保护装置,其特征在于,所述目标对象包括太阳。
  26. 根据权利要求14所述的图像传感器保护装置,其特征在于,所述第二图像传感器处于非工作状态。
  27. 一种图像采集装置,其特征在于,包括第一图像传感器、第二图像传感器和图像传感器保护装置,所述图像传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
    获取所述第一图像传感器采集的第一图像;
    当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
    根据判定结果确定是否执行对所述第二图像传感器的保护操作。
  28. 一种无人机,其特征在于,包括图像采集装置,所述图像采集装置包括第一图像传感器、第二图像传感器和图像传感器保护装置,所述图像传感器保护装置包括:处理器、存储器以及存储在所述存储器上的计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
    获取所述第一图像传感器采集的第一图像;
    当检测到所述第一图像包含目标对象时,根据所述第一图像传感器与所述第二图像传感器的位置关系判定所述目标对象是否在所述第二图像传感器的视角范围内;
    根据判定结果确定是否执行对所述第二图像传感器的保护操作。
  29. 一种红外传感器保护方法,其特征在于,所述方法包括:
    对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
    根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
    根据判定结果确定是否执行对所述红外传感器的保护操作。
  30. 根据权利要求29所述的方法,其特征在于,所述目标对象为形状满足预设形状且温度满足预设温度阈值的对象。
  31. 根据权利要求30所述的方法,其特征在于,对红外传感器采集的红外图像进行目标对象的检测,包括:
    从所述红外图像中确定第一区域,所述第一区域为所述预设形状的图像区域;
    基于所述目标对象的温度从所述第一区域检测所述目标对象;或
    根据所述目标对象的温度从所述红外图像中确定第二区域,基于所述预设形状从所述第二区域检测所述目标对象。
  32. 根据权利要求31所述的方法,其特征在于,基于所述目标对象的温度从所述第一区域检测所述目标对象,包括:
    将像素点的灰度值高于第一预设阈值,且边缘像素点与所述第一区域之外的邻近像素点的灰度值差值大于第二预设阈值的所述第一区域确定为所述目标对象。
  33. 根据权利要求31所述的方法,其特征在于,所述第二区域为以所述红外图像 中灰度值最大的像素点为中心确定的指定区域。
  34. 根据权利要求29所述的方法,其特征在于,对红外传感器采集的红外图像进行目标对象的检测之前,还包括:
    对所述红外图像进行下采样处理。
  35. 根据权利要求29所述的方法,其特征在于,所述传感器统计信息包括以下一种或多种:基于可见光传感器得到的可见光统计信息、陀螺仪检测的运动状态信息、运动传感器检测的运动状态信息、惯性传感器检测得到的运动状态信息、GPS传感器检测的高度信息。
  36. 根据权利要求35所述的方法,其特征在于,所述可见光统计信息包括:自动曝光统计信息和/或自动白平衡统计信息。
  37. 根据权利要求36所述的方法,其特征在于,所述自动曝光统计信息用于确定所述可见光传感器当前所处环境的亮度。
  38. 根据权利要求36所述的方法,其特征在于,所述自动白平衡统计信息用于确定所述可见光传感器当前所处场景的类型,所述场景的类型包括白天、黑夜、室内、室外中的一种或多种。
  39. 根据权利要求29所述的方法,其特征在于,根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,包括:
    根据所述检测结果和所述传感器统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度;
    根据所述目标置信度判定所述目标对象是否在所述红外传感器的视角范围内。
  40. 根据权利要求39所述的方法,其特征在于,所述统计信息包括:基于可见光传感器得到的自动曝光统计信息、自动白平衡统计信息和GPS传感器检测的高度信息,根据所述检测结果和所述传感器统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度,包括:
    根据所述检测结果确定用于表征所述目标对象在所述红外传感器的视角范围内的第一置信度;
    根据所述自动曝光统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的第二置信度;
    根据所述自动白平衡统计信息确定用于表征所述目标对象在所述红外传感器的视角范围内的第三置信度;
    根据所述高度信息确定用于表征所述目标对象在所述红外传感器的视角范围内的 第四置信度;
    根据所述第一置信度、所述第二置信度、所述第三置信度和所述第四置信度得到所述目标置信度。
  41. 根据权利要求39所述的方法,其特征在于,所述方法还包括:
    对所述可见光传感器采集的可见光图像进行所述目标对象的检测;
    若检测到所述目标对象,则基于所述可见传感器和所述红外传感器的位置关系判定所述目标对象是否在所述红外传感器的视角范围内,并得到第一判定结果;
    根据所述检测结果、所述传感器统计信息和所述第一判定结果确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度。
  42. 根据权利要求39所述的方法,其特征在于,所述方法还包括:
    对所述可见光传感器采集的可见光图像进行所述目标对象的检测;
    若检测到所述目标对象,则确定所述目标对象在所述可见光图像中的第一位置信息;
    确定所述目标对象在所述红外图像中的第二位置信息;
    根据所述第一位置信息、所述第二位置信息以及所述可见光图像和红外图像之间的映射关系判定所述目标对象是否在所述红外传感器的视角范围内,得到第二判定结果;
    根据所述检测结果、所述传感器统计信息和所述第二判定结果确定用于表征所述目标对象在所述红外传感器的视角范围内的目标置信度。
  43. 根据权利要求29所述的方法,其特征在于,根据判定结果确定是否执行对所述红外传感器的保护操作,包括:
    当判定所述目标对象在所述红外传感器的视角范围内,则执行以下任一保护操作:
    关闭所述红外传感器的快门;或
    控制所述红外传感器转动,以使所述目标对象在所述红外传感器的视角范围之外。
  44. 根据权利要求43所述的方法,其特征在于,在关闭所述红外传感器的快门之后,还包括:
    当指定条件被触发后,重新开启所述红外传感器的快门。
  45. 根据权利要求44所述的方法,其特征在于,所述指定条件包括:
    快门关闭时长大于预设时长;或
    所述红外传感器的旋转角度达到预设阈值。
  46. 根据权利要求45所述的方法,其特征在于,所述红外传感器安装于无人机, 所述无人机包括陀螺仪,所述红外传感器的旋转角度通过所述陀螺仪检测得到。
  47. 根据权利要求29所述的方法,其特征在于,所述目标对象包括太阳。
  48. 一种红外传感器保护装置,其特征在于,所述红外传感器保护装置包括:处理器、存储器,所述存储器用于存储计算机程序,所述处理器执行所述计算程序时,实现以下步骤:
    对红外传感器采集的红外图像进行目标对象的检测,所述目标对象为对所述红外传感器造成损坏的对象;
    根据检测结果以及传感器统计信息判定所述目标对象是否在所述红外传感器的视角范围内,所述传感器统计信息基于除所述红外传感器以外的至少一个传感器采集得到的;
    根据判定结果确定是否执行对所述红外传感器的保护操作。
  49. 根据权利要求48所述的红外传感器保护装置,其特征在于,所述目标对象为形状满足预设形状且温度满足预设温度阈值的对象。
  50. 根据权利要求49所述的红外传感器保护装置,其特征在于,所述处理器用于对红外传感器采集的红外图像进行目标对象的检测时,包括:
    从所述红外图像中确定第一区域,所述第一区域为所述预设形状的图像区域;
    基于所述目标对象的温度从所述第一区域检测所述目标对象;或
    根据所述目标对象的温度从所述红外图像中确定第二区域,基于所述预设形状从所述第二区域检测所述目标对象。
  51. 根据权利要求50所述的红外传感器保护装置,其特征在于,所述处理器用于基于所述目标对象的温度从所述第一区域检测所述目标对象时,包括:
    将像素点的灰度值高于第一预设阈值,且边缘像素点与所述第一区域之外的邻近像素点的灰度值差值大于第二预设阈值的所述第一区域确定为所述目标对象。
  52. 根据权利要求50所述的红外传感器保护装置,其特征在于,所述第二区域为以所述红外图像中灰度值最大的像素点为中心确定的指定区域。
  53. 根据权利要求48所述的红外传感器保护装置,其特征在于,所述处理器用于对红外传感器采集的红外图像进行目标对象的检测之前,还用于:
    对所述红外图像进行下采样处理。
  54. The infrared sensor protection device according to claim 48, wherein the sensor statistical information includes one or more of the following: visible light statistical information obtained based on a visible light sensor, motion state information detected by a gyroscope, motion state information detected by a motion sensor, motion state information detected by an inertial sensor, and altitude information detected by a GPS sensor.
  55. The infrared sensor protection device according to claim 54, wherein the visible light statistical information includes automatic exposure statistical information and/or automatic white balance statistical information.
  56. The infrared sensor protection device according to claim 55, wherein the automatic exposure statistical information is used to determine the brightness of the environment in which the visible light sensor is currently located.
  57. The infrared sensor protection device according to claim 55, wherein the automatic white balance statistical information is used to determine the type of the scene in which the visible light sensor is currently located, the type of the scene including one or more of daytime, night, indoor, and outdoor.
  58. The infrared sensor protection device according to claim 48, wherein when the processor determines whether the target object is within the viewing angle range of the infrared sensor according to the detection result and the sensor statistical information, the steps include:
    determining, according to the detection result and the sensor statistical information, a target confidence for characterizing that the target object is within the viewing angle range of the infrared sensor;
    determining whether the target object is within the viewing angle range of the infrared sensor according to the target confidence.
  59. The infrared sensor protection device according to claim 58, wherein the sensor statistical information includes automatic exposure statistical information obtained based on a visible light sensor, automatic white balance statistical information, and altitude information detected by a GPS sensor, and determining, according to the detection result and the sensor statistical information, the target confidence for characterizing that the target object is within the viewing angle range of the infrared sensor includes:
    determining, according to the detection result, a first confidence for characterizing that the target object is within the viewing angle range of the infrared sensor;
    determining, according to the automatic exposure statistical information, a second confidence for characterizing that the target object is within the viewing angle range of the infrared sensor;
    determining, according to the automatic white balance statistical information, a third confidence for characterizing that the target object is within the viewing angle range of the infrared sensor;
    determining, according to the altitude information, a fourth confidence for characterizing that the target object is within the viewing angle range of the infrared sensor;
    obtaining the target confidence according to the first confidence, the second confidence, the third confidence, and the fourth confidence.
  60. The infrared sensor protection device according to claim 58, wherein the processor is further configured to:
    perform detection of the target object on a visible light image collected by the visible light sensor;
    if the target object is detected, determine whether the target object is within the viewing angle range of the infrared sensor based on the positional relationship between the visible light sensor and the infrared sensor, to obtain a first determination result;
    determine, according to the detection result, the sensor statistical information, and the first determination result, the target confidence for characterizing that the target object is within the viewing angle range of the infrared sensor.
  61. The infrared sensor protection device according to claim 58, wherein the processor is further configured to:
    perform detection of the target object on a visible light image collected by the visible light sensor;
    if the target object is detected, determine first position information of the target object in the visible light image;
    determine second position information of the target object in the infrared image;
    determine whether the target object is within the viewing angle range of the infrared sensor according to the first position information, the second position information, and a mapping relationship between the visible light image and the infrared image, to obtain a second determination result;
    determine, according to the detection result, the sensor statistical information, and the second determination result, the target confidence for characterizing that the target object is within the viewing angle range of the infrared sensor.
  62. The infrared sensor protection device according to claim 48, wherein when the processor determines whether to perform a protection operation on the infrared sensor according to the determination result, the steps include:
    when it is determined that the target object is within the viewing angle range of the infrared sensor, performing either of the following protection operations:
    closing a shutter of the infrared sensor; or
    controlling the infrared sensor to rotate so that the target object is outside the viewing angle range of the infrared sensor.
  63. The infrared sensor protection device according to claim 62, wherein after closing the shutter of the infrared sensor, the processor is further configured to:
    reopen the shutter of the infrared sensor after a specified condition is triggered.
  64. The infrared sensor protection device according to claim 63, wherein the specified condition includes:
    the duration for which the shutter has been closed being greater than a preset duration; or
    the rotation angle of the infrared sensor reaching a preset threshold.
  65. The infrared sensor protection device according to claim 64, wherein the infrared sensor is mounted on an unmanned aerial vehicle, the unmanned aerial vehicle includes a gyroscope, and the rotation angle of the infrared sensor is detected by the gyroscope.
  66. The infrared sensor protection device according to claim 48, wherein the target object includes the sun.
  67. An image acquisition device, including an infrared sensor, at least one other sensor in addition to the infrared sensor, and an infrared sensor protection device, the infrared sensor protection device including a processor, a memory, and a computer program stored on the memory, and when the processor executes the computer program, the following steps are implemented:
    performing detection of a target object on an infrared image collected by the infrared sensor, the target object being an object that causes damage to the infrared sensor;
    determining whether the target object is within the viewing angle range of the infrared sensor according to a detection result and sensor statistical information, the sensor statistical information being obtained based on the at least one sensor other than the infrared sensor;
    determining whether to perform a protection operation on the infrared sensor according to a determination result.
  68. An unmanned aerial vehicle, including an image processing device, the image processing device including an infrared sensor, at least one other sensor in addition to the infrared sensor, and an infrared sensor protection device, the infrared sensor protection device including a processor, a memory, and a computer program stored on the memory, and when the processor executes the computer program, the following steps are implemented:
    performing detection of a target object on an infrared image collected by the infrared sensor, the target object being an object that causes damage to the infrared sensor;
    determining whether the target object is within the viewing angle range of the infrared sensor according to a detection result and sensor statistical information, the sensor statistical information being obtained based on the at least one sensor other than the infrared sensor;
    determining whether to perform a protection operation on the infrared sensor according to a determination result.
  69. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the infrared sensor protection method according to any one of claims 1 to 13 or 29 to 47.
PCT/CN2020/082182 2020-03-30 2020-03-30 Image sensor protection method, device, image acquisition device and unmanned aerial vehicle WO2021195879A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080004881.8A CN112689989A (zh) 2020-03-30 2020-03-30 Image sensor protection method, device, image acquisition device and unmanned aerial vehicle
PCT/CN2020/082182 WO2021195879A1 (zh) 2020-03-30 2020-03-30 Image sensor protection method, device, image acquisition device and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/082182 WO2021195879A1 (zh) 2020-03-30 2020-03-30 Image sensor protection method, device, image acquisition device and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2021195879A1 (zh)

Family

ID=75457707

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082182 WO2021195879A1 (zh) 2020-03-30 2020-03-30 Image sensor protection method, device, image acquisition device and unmanned aerial vehicle

Country Status (2)

Country Link
CN (1) CN112689989A (zh)
WO (1) WO2021195879A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747055A (zh) * 2021-07-23 2021-12-03 南方电网深圳数字电网研究院有限公司 Detector burn prevention method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101428590A (zh) * 2008-08-13 2009-05-13 程滋颐 Automobile camera

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103512557A (zh) * 2012-06-29 2014-01-15 联想(北京)有限公司 Method for determining relative position between electronic devices, and electronic device
CN103942523A (zh) * 2013-01-18 2014-07-23 华为终端有限公司 Sunlight scene recognition method and device
US20150226968A1 (en) * 2013-05-14 2015-08-13 Google Inc. Reducing light damage in shutterless imaging devices according to future use
CN108347560A (zh) * 2018-01-17 2018-07-31 浙江大华技术股份有限公司 Camera sun-burn prevention method, camera and readable storage medium
CN110274694A (zh) * 2018-03-16 2019-09-24 杭州海康威视数字技术股份有限公司 Infrared thermal imaging camera with temperature measurement function
CN108802742A (zh) * 2018-04-28 2018-11-13 北京集光通达科技股份有限公司 Abnormal object monitoring method, device and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167889A (zh) * 2021-11-29 2022-03-11 内蒙古易飞航空科技有限公司 Intelligent inspection flight platform based on image AI and big data applications
CN114167889B (zh) * 2021-11-29 2023-03-07 内蒙古易飞航空科技有限公司 Intelligent inspection flight platform based on image AI and big data applications
CN114112067A (zh) * 2021-12-08 2022-03-01 苏州睿新微系统技术有限公司 Infrared detector protection method and device

Also Published As

Publication number Publication date
CN112689989A (zh) 2021-04-20

Similar Documents

Publication Publication Date Title
WO2021195879A1 (zh) Image sensor protection method, device, image acquisition device and unmanned aerial vehicle
US10084960B2 (en) Panoramic view imaging system with drone integration
CN108347560A (zh) Camera sun-burn prevention method, camera and readable storage medium
JP6350549B2 (ja) Video analysis system
AU2023282280B2 (en) System and method of capturing and generating panoramic three-dimensional images
WO2019041276A1 (zh) Image processing method, unmanned aerial vehicle and system
CN103886107B (zh) Robot localization and map building system based on ceiling image information
WO2021179225A1 (zh) Image acquisition method, control device and movable platform
WO2022000300A1 (zh) Image processing method, image acquisition device, unmanned aerial vehicle, unmanned aerial vehicle system and storage medium
WO2022226695A1 (zh) Data processing method, apparatus and system for fire scenes, and unmanned aerial vehicle
US9714833B2 (en) Method of determining the location of a point of interest and the system thereof
CN105227842A (zh) Shooting range calibration apparatus and method for aerial photography equipment
WO2022027596A1 (zh) Control method and apparatus for a movable platform, and computer-readable storage medium
JP2020149642A (ja) Object tracking device and object tracking method
WO2022077296A1 (zh) Three-dimensional reconstruction method, gimbal payload, movable platform and computer-readable storage medium
CN111447413A (zh) High-temperature monitoring method and device for preventing sun burn, and storage device
CN205051791U (zh) Shooting range calibration apparatus for aerial photography equipment and unmanned aerial vehicle thereof
CN108732178A (zh) Atmospheric visibility detection method and apparatus
EP4354853A1 (en) Thermal-image-monitoring system using plurality of cameras
CN112243106A (zh) Target monitoring method, apparatus, device and storage medium
US20160156825A1 (en) Outdoor exposure control of still image capture
WO2022160294A1 (zh) Exposure control method and apparatus, and computer-readable storage medium
Kim et al. A low-cost stereo-fisheye camera sensor for daylighting and glare control
RU2589463C1 (ru) Device for determining the total cloud cover score from direct digital wide-angle images of the visible hemisphere of the sky
JP2018004255A (ja) Reflected light pollution checking device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20928774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20928774

Country of ref document: EP

Kind code of ref document: A1