CN114430462A - Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium - Google Patents

Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium

Info

Publication number
CN114430462A
Authority
CN
China
Prior art keywords
image
unmanned aerial
aerial vehicle
shooting
camera
Prior art date
Legal status
Granted
Application number
CN202210357456.8A
Other languages
Chinese (zh)
Other versions
CN114430462B (en)
Inventor
高小伟
谭启昀
高松鹤
赵慧童
Current Assignee
Guo Dawei
Original Assignee
Beijing Yuhang Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yuhang Intelligent Technology Co ltd filed Critical Beijing Yuhang Intelligent Technology Co ltd
Priority to CN202210357456.8A priority Critical patent/CN114430462B/en
Publication of CN114430462A publication Critical patent/CN114430462A/en
Application granted granted Critical
Publication of CN114430462B publication Critical patent/CN114430462B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides an unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium. The method comprises the following steps: collecting point cloud data of the external environment corresponding to the shooting point of the unmanned aerial vehicle, acquiring a shot image, judging the shooting environment of the shot image by using the point cloud data, and calculating a backlight detection coefficient of the shot image; when the shooting environment is judged to be a backlight environment, performing target detection on the shot image to obtain a target area image containing a predetermined component, and performing semantic segmentation on the target area image to obtain a foreground image; selecting a photometric area from the foreground image by using a sliding frame selection mode, calculating camera adjustment parameters according to the photometric area and a photometric function, adjusting the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and establishing a mapping relation between the backlight detection coefficient and the camera adjustment parameters. The complexity of parameter calculation is reduced, the accuracy of the camera adjustment parameters is improved, and the image shooting effect of the unmanned aerial vehicle is improved.

Description

Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium.
Background
Visual inspection of the external appearance of power transmission lines is an important way of monitoring line operating conditions. Traditional line inspection relies mainly on manual patrols. With the continued adoption of unmanned aerial vehicles, the transmission line inspection mode is gradually transitioning from manual inspection to human-machine collaborative inspection, and further to unmanned-aerial-vehicle-led autonomous inspection. At present, during autonomous flight the unmanned aerial vehicle strictly follows a predetermined route, and the existing unmanned aerial vehicle flight control system cannot automatically adjust the camera shooting parameters, so the pictures taken by the unmanned aerial vehicle cannot meet the requirements of defect identification work.
In the prior art, in order to solve the problem of picture overexposure or underexposure caused by backlight shooting under certain lighting conditions, a histogram-based automatic exposure algorithm, an automatic exposure algorithm based on image information entropy, a weighted average algorithm and the like are generally adopted. However, these existing schemes for adjusting camera shooting parameters have high computational complexity and low sensitivity, so the image shooting effect of the camera is poor and the requirement for automatic adjustment of picture overexposure or underexposure during unmanned aerial vehicle inspection cannot be met.
Disclosure of Invention
In view of this, the embodiment of the present disclosure provides an unmanned aerial vehicle autonomous photographing parameter adjusting method, an apparatus, a device, and a storage medium, so as to solve the problem in the prior art that an automatic exposure algorithm is high in computational complexity and low in sensitivity, which results in a poor image capturing effect of a camera.
In a first aspect of the embodiments of the present disclosure, a method for adjusting an autonomous photographing parameter of an unmanned aerial vehicle is provided, including: acquiring point cloud data of an unmanned aerial vehicle corresponding to an external environment at a shooting point, acquiring a shooting image of the unmanned aerial vehicle at the shooting point, judging the shooting environment of the shooting image by using the point cloud data, and calculating a backlight detection coefficient of the shooting image; when the shooting environment is judged to be a backlight environment, performing target detection on a preset component in the shot image to obtain a target area image containing the preset component, and performing semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image; selecting a photometric area from the foreground image by using a preset sliding frame selection mode, calculating camera adjustment parameters according to the photometric area and a photometric function, adjusting the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and establishing a mapping relation between a backlight detection coefficient and the camera adjustment parameters.
In a second aspect of the embodiments of the present disclosure, an unmanned aerial vehicle autonomous photographing parameter adjusting device is provided, including: the acquisition module is configured to acquire point cloud data of the external environment corresponding to the shooting point of the unmanned aerial vehicle, acquire a shot image of the unmanned aerial vehicle at the shooting point, judge the shooting environment of the shot image by using the point cloud data, and calculate a backlight detection coefficient of the shot image; the detection module is configured to perform target detection on a predetermined component in the shot image to obtain a target area image containing the predetermined component when the shooting environment is judged to be a backlight environment, and perform semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image; the adjusting module is configured to select a photometric area from the foreground image by using a preset sliding frame selection mode, calculate camera adjustment parameters according to the photometric area and the photometric function, adjust the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and establish a mapping relation between the backlight detection coefficient and the camera adjustment parameters.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method when executing the program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
The embodiment of the present disclosure adopts at least one technical scheme that can achieve the following beneficial effects:
the method comprises the steps of collecting point cloud data of the external environment corresponding to the shooting point of the unmanned aerial vehicle, acquiring a shot image of the unmanned aerial vehicle at the shooting point, judging the shooting environment of the shot image by using the point cloud data, and calculating a backlight detection coefficient of the shot image; when the shooting environment is judged to be a backlight environment, performing target detection on a predetermined component in the shot image to obtain a target area image containing the predetermined component, and performing semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image; selecting a photometric area from the foreground image by using a preset sliding frame selection mode, calculating camera adjustment parameters according to the photometric area and a photometric function, adjusting the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and establishing a mapping relation between the backlight detection coefficient and the camera adjustment parameters. In this way, the camera shooting parameters of the unmanned aerial vehicle can be adjusted automatically during inspection; the computational complexity of the algorithm is low and its sensitivity is high, so the effect of the images shot by the camera is improved.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed for the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive efforts.
Fig. 1 is a schematic flow chart of an unmanned aerial vehicle autonomous photographing parameter adjusting method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a backlight detection provided by an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an autonomous photographing parameter adjusting device for an unmanned aerial vehicle according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
Unmanned aerial vehicle inspection has become the main mode of daily inspection in the electric power industry, and with the development of high-precision positioning technology, unmanned aerial vehicle inspection is transitioning from relying entirely on a flight operator to automatic flight of the unmanned aerial vehicle. In order to reduce the participation of the operator, reduce the difficulty of flight and improve inspection efficiency, the automatic flight of the unmanned aerial vehicle mainly depends on a preset waypoint route, and the unmanned aerial vehicle flies autonomously according to the waypoints. However, the shortcomings of autonomous flight are currently obvious: once a route is issued to the unmanned aerial vehicle, it flies strictly according to that route and cannot make judgments based on the actual conditions on site, in particular the shooting problems caused by illumination conditions. When an operator manually controls the flight of the unmanned aerial vehicle, the camera parameters can be adjusted in time according to the illumination conditions and high-quality pictures can be taken; however, the automatic flight process lacks this manual judgment link, and the unmanned aerial vehicle flight control system does not have the capability of automatic adjustment, so the pictures taken cannot meet the requirement of the subsequent defect identification work.
Aiming at the problem that the flight control system of the unmanned aerial vehicle cannot automatically adjust the shooting parameters of the camera, conventional automatic exposure adjustment technologies have traditionally been adopted, such as a histogram-based automatic exposure algorithm, an automatic exposure algorithm based on image information entropy, a weighted mean algorithm, an automatic exposure and automatic white balance algorithm for unmanned aerial vehicle cameras, an average brightness algorithm and the like. However, the histogram-based automatic exposure algorithm, the automatic exposure algorithm based on image information entropy and the weighted mean algorithm have high computational complexity, while the exposure algorithm for unmanned aerial vehicle flight in actual scenes requires high sensitivity; the automatic exposure and automatic white balance algorithm of the unmanned aerial vehicle camera improves rationality to some extent, but its correction effect for color cast caused by a dominant hue is not ideal; in addition, although the average brightness algorithm responds sensitively, it damages some information of the original image. Therefore, the existing automatic exposure algorithms cannot meet the requirement for automatic adjustment of picture overexposure or underexposure during unmanned aerial vehicle inspection.
In view of the problems in the prior art, the embodiment of the present disclosure provides an unmanned aerial vehicle autonomous photographing parameter adjusting method, which adjusts the shooting parameters of the pan-tilt camera used for photographing images in a backlight environment during the autonomous inspection of a power transmission line by an unmanned aerial vehicle, so as to achieve adaptive adjustment of the camera parameters. The method judges whether the photographing scene is a backlight environment, compares the backlight detection coefficient of the current frame with that of the previous frame image, and judges the similarity between the images. For an image that needs light metering, target recognition is performed on a predetermined component in the image, semantic segmentation is performed on the recognized component area, and the foreground and background of the image are separated by semantic segmentation, so that a suitable photometric area is further selected from the foreground image. Finally, parameter calculation is performed on the photometric area through the photometric function to obtain camera adjustment parameters, and the shooting parameters of the current camera are adjusted by using the camera adjustment parameters. In this way, the camera shooting parameters of the unmanned aerial vehicle are adjusted automatically during inspection; the calculation process of the algorithm is simple, its sensitivity is high, and the information entropy of the corrected picture remains intact, so the effect of the images shot by the camera is improved.
Fig. 1 is a schematic flow chart of an unmanned aerial vehicle autonomous photographing parameter adjusting method provided in an embodiment of the present disclosure. The unmanned aerial vehicle autonomous photographing parameter adjusting method of fig. 1 may be executed by an unmanned aerial vehicle. As shown in fig. 1, the method for adjusting the autonomous photographing parameter of the unmanned aerial vehicle may specifically include:
s101, collecting point cloud data of an unmanned aerial vehicle corresponding to an external environment at a shooting point, acquiring a shooting image of the unmanned aerial vehicle at the shooting point, judging the shooting environment of the shooting image by using the point cloud data, and calculating a backlight detection coefficient of the shooting image;
s102, when the shooting environment is judged to be a backlight environment, carrying out target detection on a preset component in the shot image to obtain a target area image containing the preset component, and carrying out semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image;
s103, selecting a light metering area from the foreground image by using a preset slide frame selection mode, and calculating camera adjustment parameters according to the light metering area and a light metering function so as to adjust the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters and establish a mapping relation between a backlight detection coefficient and the camera adjustment parameters.
Specifically, the overall technical scheme of the embodiment of the present disclosure proceeds as follows: backlight judgment is first performed on the shooting scene of the camera, and the backlight detection coefficient of the current frame is compared with that of the previous frame; when the current image is an initial image, or its backlight detection coefficient differs from that of the previous frame, the image is taken as an image that needs exposure adjustment, a photometric area is selected in the image (the most suitable photometric area is chosen), and finally the camera adjustment parameters are calculated for the photometric area through the photometric function to obtain the final camera adjustment parameters.
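For illustration only, the following Python sketch outlines this overall flow under stated assumptions: the helper callables (judge_backlight, detect_component, segment_foreground, select_metering_area, metering_function) and the 0.05 coefficient-similarity tolerance are hypothetical stand-ins for the steps detailed in the embodiments below, not an implementation prescribed by the disclosure.

```python
from typing import Callable, Dict, Optional, Tuple


def adjust_at_waypoint(
    point_cloud,
    image,
    shoot_time,
    judge_backlight: Callable[..., Tuple[bool, float]],  # hypothetical: returns (is_backlit, coefficient)
    detect_component: Callable,                           # hypothetical: image -> target area image
    segment_foreground: Callable,                         # hypothetical: target area image -> foreground mask
    select_metering_area: Callable,                       # hypothetical: (roi, mask) -> metering area
    metering_function: Callable,                          # hypothetical: metering area -> adjustment params
    param_cache: Dict[float, dict],                       # backlight coefficient -> cached adjustment params
    tolerance: float = 0.05,                              # assumed similarity tolerance for coefficients
) -> Optional[dict]:
    """Return camera adjustment parameters for one shooting point (None = keep current settings)."""
    is_backlit, coeff = judge_backlight(point_cloud, shoot_time)   # step S101
    if not is_backlit:
        return None                                                # non-backlight scene: no adjustment needed

    # Reuse cached parameters when the coefficient is close to a previously recorded one.
    for cached_coeff, cached_params in param_cache.items():
        if abs(coeff - cached_coeff) <= tolerance:
            return cached_params

    roi = detect_component(image)                                  # step S102: target detection
    mask = segment_foreground(roi)                                 # step S102: semantic segmentation
    metering_area = select_metering_area(roi, mask)                # step S103: sliding-frame selection
    params = metering_function(metering_area)                      # step S103: photometric function
    param_cache[coeff] = params                                    # record coefficient -> parameters mapping
    return params
```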
Further, the predetermined component of the embodiment of the present disclosure may be considered as a key component focused by a camera of the unmanned aerial vehicle when the camera is shooting at a shooting point, for example, in a power inspection process of the unmanned aerial vehicle, a main object shot by the camera installed on the unmanned aerial vehicle is generally a transmission tower, and therefore the transmission tower may be used as the key component (i.e., the predetermined component) of the embodiment of the present disclosure. By performing object recognition on the key component in the captured image and taking the area of the key component in the captured image as the initial area in photometry, the photometry area is further selected based on the initial area in photometry.
According to the technical scheme provided by the embodiment of the disclosure, point cloud data of the external environment corresponding to the shooting point of the unmanned aerial vehicle is collected, a shot image of the unmanned aerial vehicle at the shooting point is obtained, the shooting environment of the shot image is judged by using the point cloud data, and the backlight detection coefficient of the shot image is calculated; when the shooting environment is judged to be a backlight environment, target detection is performed on a predetermined component in the shot image to obtain a target area image containing the predetermined component, and semantic segmentation is performed on the target area image to obtain a foreground image corresponding to the target area image; a photometric area is selected from the foreground image by using a preset sliding frame selection mode, camera adjustment parameters are calculated according to the photometric area and a photometric function, the camera parameters of the unmanned aerial vehicle are adjusted by using the camera adjustment parameters, and a mapping relation between the backlight detection coefficient and the camera adjustment parameters is established. In this way, the camera shooting parameters of the unmanned aerial vehicle can be adjusted automatically during inspection; the computational complexity of the algorithm is low and its sensitivity is high, so the effect of the images shot by the camera is improved.
In some embodiments, collecting point cloud data of the external environment corresponding to the shooting point of the unmanned aerial vehicle and acquiring a shot image of the unmanned aerial vehicle at the shooting point includes: during the autonomous inspection of the unmanned aerial vehicle, when the unmanned aerial vehicle moves to the shooting point along the route, collecting the point cloud data by using a laser radar installed on the unmanned aerial vehicle, and collecting the shot image corresponding to the shooting point by using a camera installed on the unmanned aerial vehicle, wherein the point cloud data contains the position of the unmanned aerial vehicle and the position of the tower.
Specifically, during the process of autonomous electric power inspection by the unmanned aerial vehicle according to a preset air route, when the unmanned aerial vehicle flies to a certain preset shooting point location for image acquisition, firstly, a laser radar on the unmanned aerial vehicle is used for acquiring point cloud data of the surrounding environment of the shooting point location, and the point cloud data comprises but is not limited to the position of the unmanned aerial vehicle and the position of a tower. After point cloud data are obtained, a camera installed on the unmanned aerial vehicle is used for collecting shooting images corresponding to shooting points, and further the backlight environment of the current shooting points is judged based on the position of the unmanned aerial vehicle, the position of a tower and the shooting images.
In some embodiments, the determining the shooting environment of the shot image by using the point cloud data comprises: the method comprises the steps of determining the position of the sun when a shot image is obtained according to shooting time of the shot image and longitude and latitude corresponding to shooting points, determining a first vector corresponding to a pole tower and an unmanned aerial vehicle and a second vector corresponding to the pole tower and the sun by taking the position of the pole tower as a central point, calculating an included angle between the first vector and the second vector, judging that a shooting environment is a backlight environment when the included angle is an obtuse angle, and judging that the shooting environment is a non-backlight environment when the included angle is an acute angle.
Specifically, when the shooting environment is subjected to backlight detection, judgment is carried out in a mode of combining a conventional histogram and a business scene, the histogram is a mode of conventionally judging the exposure degree of a photo, and in order to improve the judgment accuracy, other factors in the business scene are added into the backlight detection as regular terms. In practical applications, the business scenario factors include, but are not limited to, the following: unmanned aerial vehicle yaw angle, cloud platform pitch angle, shooting time, shooting place.
Further, whether the unmanned aerial vehicle is shooting in a backlight environment can be judged from the sun position, the tower position, the shooting time, the shooting location and the yaw angle of the unmanned aerial vehicle. In practical application, the vector between the sun position and the tower position and the vector between the unmanned aerial vehicle position and the tower position are mainly used; the angle between these two vectors is calculated, and whether the environment is a backlight environment is judged according to the angle. The principle of determining the backlight environment will be described in detail below with reference to the accompanying drawings and embodiments. Fig. 2 is a schematic diagram of the principle of backlight detection provided by the embodiments of the present disclosure. As shown in fig. 2, the method for determining a backlight environment of an unmanned aerial vehicle may specifically include:
"circle" in fig. 2 represents the position of the sun, "asterisk" represents the position of unmanned aerial vehicle, and the central point is the position of the tower, uses the perpendicular north east west as the tangent plane, uses the position coordinate of the tower as the central point, sets the vector of tower and unmanned aerial vehicle as
Figure 938075DEST_PATH_IMAGE001
Setting the vector of the tower and the sun as
Figure 199292DEST_PATH_IMAGE002
Then, the current included angle calculation formula is:
Figure 868170DEST_PATH_IMAGE003
and determining the included angle between the first vector and the second vector by using the included angle calculation formula, judging that the shooting environment is a backlight environment when the included angle is an obtuse angle, judging that the shooting environment is a non-backlight environment when the included angle is an acute angle, and further obtaining a backlight detection coefficient according to the angle value. In practical application, when the included angle is included
Figure 861534DEST_PATH_IMAGE004
Setting the backlight detection coefficient to 0.65, when the included angle is
Figure 927448DEST_PATH_IMAGE005
Setting the backlight detection coefficient to 0.8 when the included angle is
Figure 664460DEST_PATH_IMAGE006
The backlight detection coefficient was set to 0.95.
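A minimal sketch of this angle-based backlight judgment follows, assuming all positions are expressed in a common local coordinate frame and that the sun position has already been derived from the shooting time and latitude/longitude (for example with a solar-position routine, not shown here). The boundaries chosen for the three obtuse-angle sub-ranges (120° and 150°) are assumptions, since the source only gives the three coefficient values.

```python
import numpy as np


def backlight_coefficient(tower_pos, drone_pos, sun_pos):
    """Angle-based backlight check sketched from the principle of Fig. 2.

    All positions are 3-D coordinates in the same local frame (e.g. east-north-up
    centred on the tower, an assumption). Returns (is_backlit, coefficient); the
    coefficient for the non-backlight case is unused and set to 0.0.
    """
    v1 = np.asarray(drone_pos, dtype=float) - np.asarray(tower_pos, dtype=float)  # tower -> drone
    v2 = np.asarray(sun_pos, dtype=float) - np.asarray(tower_pos, dtype=float)    # tower -> sun
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    if theta <= 90.0:        # acute angle: shooting with the light, no backlight
        return False, 0.0
    if theta < 120.0:        # assumed first obtuse sub-range
        return True, 0.65
    if theta < 150.0:        # assumed second obtuse sub-range
        return True, 0.8
    return True, 0.95        # assumed third obtuse sub-range
```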
In some embodiments, after the photographing environment is determined to be a backlight environment, the method further comprises: judging whether the shot image is an initial image or not, and when the shot image is the initial image and the shooting environment of the shot image is a backlight environment, performing a selection operation on a photometric area of the shot image; and when the shot image is a non-initial image, comparing the backlight detection coefficient of the shot image of the current frame with the backlight detection coefficient of the shot image of the previous frame, and judging, according to the comparison result, whether to adopt the camera adjustment parameters of the shot image of the previous frame to adjust the camera parameters of the unmanned aerial vehicle or to perform the selection operation on the light metering area of the shot image.
Specifically, when performing backlight detection judgment, in order to save unnecessary computing resources, in the embodiment of the present disclosure, the captured image of the current frame is compared with the backlight detection coefficient of the captured image of the previous frame, when the backlight detection coefficient of the current frame is close to the backlight detection coefficient of the previous frame, it indicates that the similarity between the current frame image and the previous frame image is relatively high, and at this time, the camera adjustment parameter corresponding to the backlight detection coefficient of the previous frame image is directly obtained from the memory, so that it is not necessary to repeatedly calculate the backlight detection coefficient of the current frame.
Further, the embodiment of the present disclosure needs to determine the following for the captured image of the current frame before selecting the photometric area: if the current frame is an initial image and belongs to backlight environment shooting, directly executing next photometric area selection operation; if the current frame is a non-initial image, comparing the backlight detection coefficient of the current frame image with the backlight detection coefficient of the previous frame image in the memory, if the backlight detection results are similar, directly using the camera photographing parameter of the previous frame image to photograph, otherwise, carrying out the next step of light metering area selection.
In some embodiments, performing object detection on a predetermined component in the captured image to obtain an object region image including the predetermined component includes: the method comprises the steps of detecting a preset component in a shot image by using a preset target recognition algorithm, determining a target area corresponding to the preset component in the shot image according to a detection result, and cutting the target area from the shot image to obtain a target area image, wherein a transmission tower in the shot image is used as the preset component.
Specifically, the selection of the light metering area directly influences the setting of the camera shooting parameters. When selecting the light metering area, the embodiment of the present disclosure adopts real-time target recognition combined with semantic segmentation. Compared with the conventional approach of directly metering at a point in the middle of the frame, the embodiment combines region selection, key-position selection and foreground/background separation: target detection is used to find the key component corresponding to the shooting point, and the key component region is selected as a coarse light metering area (namely the target area); semantic segmentation is then performed on the key component region to separate the foreground from the background. This removes the influence of the background that shows through the hollow parts of the transmission tower; in particular, the central area of a hollow part may be sky or ground, which cannot truly reflect the lighting environment of the key component and would therefore seriously affect the accuracy of photometry.
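A hedged sketch of this detection-plus-segmentation step is given below. The detector and segmenter are passed in as callables because the disclosure does not name specific models, and the assumed return formats (a bounding box and a binary foreground mask) are illustrative only.

```python
import numpy as np


def extract_foreground(image: np.ndarray, detector, segmenter):
    """Crop the detected component region and keep only its foreground pixels.

    `detector(image)` is assumed to return a bounding box (x, y, w, h) for the
    predetermined component (e.g. the transmission tower); `segmenter(roi)` is
    assumed to return a binary mask of the same height/width as the crop, with 1
    for component pixels and 0 for background (including sky or ground visible
    through the hollow parts of the tower).
    """
    x, y, w, h = detector(image)
    roi = image[y:y + h, x:x + w]                           # target area image
    mask = segmenter(roi).astype(bool)                      # foreground / background separation
    mask_exp = mask[..., None] if roi.ndim == 3 else mask   # broadcast over colour channels if present
    foreground = np.where(mask_exp, roi, 0)                 # zero out background pixels
    return roi, mask, foreground
```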
In some embodiments, selecting a photometric area from a foreground image by using a preset slide frame selection method includes: and generating a sliding frame with a preset range size in the foreground image, moving the sliding frame in an area corresponding to a preset part in the foreground image, calculating the intersection ratio between a background area and a foreground area in the current sliding frame after moving each time when the sliding frame is moved, and taking the area where the corresponding sliding frame is located when the numerical value of the intersection ratio is maximum as a photometric area.
Specifically, the embodiment of the present disclosure provides a sliding frame type automatic light metering region selection method based on a spot light metering method, in which after a key component is separated from a captured picture by semantic segmentation, a sliding frame mode is used to select a light metering region in a region of the key component. In practical application, the principle of selecting the photometric area from the foreground image by using a sliding frame selection mode is that an area with the highest foreground image ratio is found through a sliding frame to serve as a final photometric area.
Further, when metering within a range centred on a designated point, a sliding frame with a preset range size is generated in the foreground image, that is, a sliding frame with a certain aspect ratio is selected. The sliding frame moves within the component region already recognized in the foreground image, and an IoU value is calculated each time the sliding frame moves, which determines the ratio between the background area and the foreground area in the current window. The IoU (Intersection over Union) value here reflects how much of the sliding frame is covered by foreground, and the area covered by the sliding frame when the IoU value is largest is taken as the final photometric area. According to the embodiment of the disclosure, an accurate photometric result can be obtained in this way, it is ensured that the specified object is correctly exposed, and the method is suitable for shooting scenes with complex illumination.
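The following sketch illustrates one way to realize the sliding-frame selection on a binary foreground mask, interpreting the selection criterion as the highest share of foreground pixels inside the window. The window size and stride values are assumptions, as the text only specifies a sliding frame of preset size and aspect ratio.

```python
import numpy as np


def select_metering_area(foreground_mask: np.ndarray, win_h: int = 64, win_w: int = 64, stride: int = 16):
    """Slide a fixed-size window over the segmented component region and return the
    window position (x, y, w, h) with the highest foreground ratio.

    `foreground_mask` is a binary array: 1 = foreground (tower) pixel, 0 = background.
    """
    h, w = foreground_mask.shape
    best_ratio, best_box = -1.0, None
    for y in range(0, max(h - win_h, 0) + 1, stride):
        for x in range(0, max(w - win_w, 0) + 1, stride):
            window = foreground_mask[y:y + win_h, x:x + win_w]
            ratio = float(window.mean())          # share of foreground pixels inside the window
            if ratio > best_ratio:
                best_ratio, best_box = ratio, (x, y, win_w, win_h)
    return best_box, best_ratio
```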
In some embodiments, calculating the camera adjustment parameters according to the photometric area and the photometric function so as to adjust the camera parameters of the drone by using the camera adjustment parameters includes: converting an image corresponding to the light metering region into a gray scale map, counting the brightness value in the gray scale map, generating a histogram of the image corresponding to the light metering region according to a statistical result, and judging an exposure result of a shot image according to the histogram; according to the exposure result, the exposure mode corresponding to the camera installed on the unmanned aerial vehicle is adjusted, parameter calculation is carried out on the light metering area by utilizing the light metering function according to the exposure setting of the adjusted exposure mode, camera adjusting parameters are obtained, the camera parameters of the unmanned aerial vehicle are adjusted by utilizing the camera adjusting parameters, and the camera after the camera parameters are adjusted is utilized for photographing.
Specifically, the embodiment of the present disclosure performs parameter calculation on the light metering area through the light metering function to obtain the camera adjustment parameters, that is, the light metering function is used to calculate the camera adjustment parameters for the light metering area. In practical application, the light metering function is calculated by using a histogram: the image corresponding to the light metering area is first converted into a grayscale map, and the brightness values in the grayscale map are counted over the range 0-255. If the counted values are concentrated in the low-value region (0-50), the shot image is judged to be underexposed; if they are concentrated in the high-value region (200-255), the shot image is judged to be overexposed; and if the histogram values are evenly distributed, the illumination of the shot image is moderate.
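A short sketch of this histogram-based exposure judgment is shown below using OpenCV. The 50% concentration threshold is an assumed interpretation of "concentrated in", which the text does not quantify.

```python
import cv2
import numpy as np


def classify_exposure(metering_region_bgr: np.ndarray, low_share: float = 0.5, high_share: float = 0.5) -> str:
    """Judge under/over-exposure from the brightness histogram of the metering area."""
    gray = cv2.cvtColor(metering_region_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    total = hist.sum()
    if hist[:51].sum() / total >= low_share:     # brightness concentrated in 0-50
        return "underexposed"
    if hist[200:].sum() / total >= high_share:   # brightness concentrated in 200-255
        return "overexposed"
    return "moderate"
```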
Further, after the exposure result of the captured image is determined, the exposure mode of the camera mounted on the unmanned aerial vehicle is adjusted according to the exposure result; for example, the exposure mode may be set to MANUAL using a setExposureMode function, and the allowed value range of each exposure setting is queried from the camera. Finally, according to the exposure settings of the adjusted exposure mode, parameter calculation is performed on the photometric area by using the photometric function to obtain the camera adjustment parameters.
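As a hedged example of what such a photometric function might look like, the sketch below derives an exposure-compensation step from the mean brightness of the metering area. This mean-brightness-to-EV mapping is a generic substitute rather than the specific function of the disclosure; the mid-grey target value, the EV clamp and the returned parameter names are assumptions, not an actual camera SDK interface.

```python
import numpy as np


def metering_function(metering_region_gray: np.ndarray, target_mean: float = 118.0, max_ev_step: float = 2.0) -> dict:
    """Derive a simple exposure-compensation step from the metering area brightness."""
    mean = max(float(np.mean(metering_region_gray)), 1.0)   # avoid log of zero
    ev_delta = np.log2(target_mean / mean)                  # positive -> brighten, negative -> darken
    ev_delta = float(np.clip(ev_delta, -max_ev_step, max_ev_step))
    return {"exposure_mode": "MANUAL", "ev_compensation": round(ev_delta, 2)}
```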
Further, camera adjustment parameters output by the photometric function are stored in a memory, a backlight detection coefficient of a current shooting scene is recorded, a mapping relation between the backlight detection coefficient and the camera adjustment parameters is established in the memory, when the backlight detection coefficient of the shooting scene corresponding to the next shooting point is close to a certain backlight detection coefficient in the current memory, the camera adjustment parameters corresponding to the backlight detection coefficient in the memory are directly obtained, the current camera adjustment parameters are directly used for adjusting the camera, and therefore the life cycle of the camera adjustment parameters is prolonged.
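A simple sketch of this coefficient-to-parameter mapping is given below; the similarity tolerance of 0.05 is an assumption, since the text only requires the coefficients to be "close".

```python
class ParamCache:
    """Mapping between backlight detection coefficients and camera adjustment
    parameters, so near-identical shooting scenes reuse earlier results."""

    def __init__(self, tolerance: float = 0.05):
        self.tolerance = tolerance
        self._entries = []            # list of (coefficient, params) pairs

    def lookup(self, coeff: float):
        """Return cached parameters for a similar coefficient, or None if absent."""
        for cached_coeff, params in self._entries:
            if abs(coeff - cached_coeff) <= self.tolerance:
                return params
        return None

    def store(self, coeff: float, params: dict) -> None:
        """Record the coefficient -> adjustment-parameter mapping for later shots."""
        self._entries.append((coeff, params))
```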
According to the technical scheme provided by the embodiment of the disclosure, the backlight detection judgment saves unnecessary computing resources for exposure adjustment during the flight of the unmanned aerial vehicle, and the judgment combines a conventional histogram with the business scene, which improves the accuracy of the backlight detection judgment. Key components are focused on by a target recognition method, hollow parts are handled by a semantic segmentation technique, and the photometric area is finally selected automatically in a sliding-frame manner based on spot photometry. In this way the influence of the background on photometry is removed and the foreground and background are separated, so an accurate photometric area is obtained, correct exposure of the specified object is ensured, and the information entropy of the exposure-corrected picture remains intact even for shooting scenes with complex illumination. Finally, the camera adjustment parameters of the photometric area are calculated by the photometric function and stored in a memory; when the backlight detection coefficient of the shooting point where the unmanned aerial vehicle is located is close to a backlight detection coefficient already in the memory, the corresponding camera adjustment parameters are obtained directly and used to adjust the current camera parameters, thereby extending the life cycle of the camera adjustment parameters. The camera parameter adjusting mode of the embodiment of the disclosure reduces the calculation steps and improves the sensitivity of the unmanned aerial vehicle in the inspection process.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic structural diagram of an autonomous photographing parameter adjusting device for an unmanned aerial vehicle according to an embodiment of the present disclosure. As shown in fig. 3, the unmanned aerial vehicle autonomous photographing parameter adjusting device includes:
the acquisition module 301 is configured to acquire point cloud data of an external environment corresponding to a shooting point of the unmanned aerial vehicle, acquire a shooting image of the unmanned aerial vehicle at the shooting point, judge a shooting environment of the shooting image by using the point cloud data, and calculate a backlight detection coefficient of the shooting image;
the detection module 302 is configured to, when the shooting environment is determined to be a backlight environment, perform target detection on a predetermined component in the shot image to obtain a target area image containing the predetermined component, and perform semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image;
the adjusting module 303 is configured to select a photometric area from the foreground image by using a preset slide frame selection manner, calculate a camera adjustment parameter according to the photometric area and the photometric function, adjust a camera parameter of the unmanned aerial vehicle by using the camera adjustment parameter, and establish a mapping relationship between the backlight detection coefficient and the camera adjustment parameter.
In some embodiments, in the autonomous inspection process of the unmanned aerial vehicle, when the unmanned aerial vehicle moves to a shooting point location along a route, the acquisition module 301 of fig. 3 acquires point cloud data by using a laser radar installed on the unmanned aerial vehicle, and acquires a shooting image corresponding to the shooting point location by using a camera installed on the unmanned aerial vehicle, where the point cloud data includes a position of the unmanned aerial vehicle and a position of a tower.
In some embodiments, the acquisition module 301 in fig. 3 determines a position of the sun when the captured image is obtained according to the capturing time of the captured image and the longitude and latitude corresponding to the capturing point, determines a first vector corresponding to the tower and the unmanned aerial vehicle and a second vector corresponding to the tower and the sun with the position of the tower as a central point, calculates an included angle between the first vector and the second vector, determines that the capturing environment is a backlight environment when the included angle is an obtuse angle, and determines that the capturing environment is a non-backlight environment when the included angle is an acute angle.
In some embodiments, the detection module 302 of fig. 3 determines whether the captured image is an initial image after determining that the capturing environment is a backlight environment, and performs a selecting operation on a photometric area of the captured image when the captured image is the initial image and the capturing environment of the captured image is the backlight environment; and when the shot image is a non-initial image, comparing the backlight detection coefficient of the shot image of the current frame with the backlight detection coefficient of the shot image of the previous frame, and judging whether to adopt the camera adjustment parameter of the shot image of the previous frame to adjust the camera parameter of the unmanned aerial vehicle or judge to execute selection operation on the light metering area of the shot image according to the comparison result.
In some embodiments, the detection module 302 in fig. 3 detects a predetermined component in the captured image by using a preset target recognition algorithm, determines a target area corresponding to the predetermined component in the captured image according to a detection result, and crops the target area from the captured image to obtain a target area image, where a power transmission tower in the captured image is used as the predetermined component.
In some embodiments, the adjusting module 303 in fig. 3 generates a sliding frame with a preset range size in the foreground image, moves the sliding frame in the region corresponding to the predetermined component in the foreground image, calculates a merging ratio between the background region and the foreground region in the current sliding frame after moving each time the sliding frame is moved, and takes the region where the corresponding sliding frame is located when the value of the merging ratio is maximum as the photometric region.
In some embodiments, the adjusting module 303 in fig. 3 converts the image corresponding to the light metering region into a gray scale map, counts the brightness values in the gray scale map, generates a histogram of the image corresponding to the light metering region according to the statistical result, and determines the exposure result of the captured image according to the histogram; according to the exposure result, the exposure mode corresponding to the camera installed on the unmanned aerial vehicle is adjusted, parameter calculation is carried out on the light metering area by utilizing the light metering function according to the exposure setting of the adjusted exposure mode, camera adjusting parameters are obtained, the camera parameters of the unmanned aerial vehicle are adjusted by utilizing the camera adjusting parameters, and the camera after the camera parameters are adjusted is utilized for photographing.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 4 is a schematic diagram of an electronic device 4 provided by the embodiment of the present disclosure. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and operable on the processor 401. The steps in the various method embodiments described above are implemented when the processor 401 executes the computer program 403. Alternatively, the processor 401 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 403.
The electronic device 4 may be a desktop computer, a notebook, a palm computer, a cloud server, or other electronic devices. The electronic device 4 may include, but is not limited to, a processor 401 and a memory 402. Those skilled in the art will appreciate that fig. 4 is merely an example of electronic device 4 and does not constitute a limitation of electronic device 4 and may include more or fewer components than shown, or different components.
The Processor 401 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc.
The storage 402 may be an internal storage unit of the electronic device 4, for example, a hard disk or a memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device 4. The memory 402 may also include both internal storage units of the electronic device 4 and external storage devices. The memory 402 is used for storing computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the above embodiments may be realized by the present disclosure, and the computer program may be stored in a computer readable storage medium to instruct related hardware; when the computer program is executed by a processor, the steps of the above method embodiments may be realized. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media may not include electrical carrier signals or telecommunications signals in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (10)

1. An unmanned aerial vehicle autonomous photographing parameter adjusting method is characterized by comprising the following steps:
acquiring point cloud data of an unmanned aerial vehicle corresponding to an external environment at a shooting point, acquiring a shooting image of the unmanned aerial vehicle at the shooting point, judging the shooting environment of the shooting image by using the point cloud data, and calculating a backlight detection coefficient of the shooting image;
when the shooting environment is judged to be a backlight environment, performing target detection on a preset component in the shot image to obtain a target area image containing the preset component, and performing semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image;
selecting a photometric area from the foreground image by using a preset slide frame selection mode, and calculating camera adjustment parameters according to the photometric area and a photometric function so as to adjust the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters and establish a mapping relation between the backlight detection coefficients and the camera adjustment parameters.
2. The method of claim 1, wherein the acquiring point cloud data of the unmanned aerial vehicle corresponding to the external environment at the shooting location and acquiring the shot image of the unmanned aerial vehicle at the shooting location comprises:
in the autonomous inspection process of the unmanned aerial vehicle, when the unmanned aerial vehicle moves to the shooting point location along a route, the point cloud data is collected by using a laser radar installed on the unmanned aerial vehicle, and the shooting image corresponding to the shooting point location is collected by using a camera installed on the unmanned aerial vehicle, wherein the point cloud data contains the position of the unmanned aerial vehicle and the position of a tower.
3. The method of claim 2, wherein the determining the shooting environment of the shot image using the point cloud data comprises:
determining the position of the sun when the image is obtained according to the shooting time of the shot image and the longitude and latitude corresponding to the shooting point position, determining a first vector corresponding to the pole tower and the unmanned aerial vehicle and a second vector corresponding to the pole tower and the sun by taking the position of the pole tower as a central point, calculating an included angle between the first vector and the second vector, judging that the shooting environment is a backlight environment when the included angle is an obtuse angle, and judging that the shooting environment is a non-backlight environment when the included angle is an acute angle.
4. The method according to claim 1, wherein after the photographing environment is determined to be a backlight environment, the method further comprises:
judging whether the shot image is an initial image or not, and when the shot image is the initial image and the shooting environment of the shot image is a backlight environment, executing selection operation on a photometric area of the shot image;
and when the shot image is a non-initial image, comparing the backlight detection coefficient of the shot image of the current frame with the backlight detection coefficient of the shot image of the previous frame, and judging whether to adjust the camera parameters of the unmanned aerial vehicle by adopting the camera adjustment parameters of the shot image of the previous frame or judge to perform selection operation on the light metering area of the shot image according to the comparison result.
5. The method of claim 1, wherein the performing target detection on the preset component in the shot image to obtain the target area image containing the preset component comprises:
detecting the preset component in the shot image by using a preset target recognition algorithm, determining a target area corresponding to the preset component in the shot image according to the detection result, and cutting the target area out of the shot image to obtain the target area image, wherein a transmission tower in the shot image is used as the preset component.
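A small sketch of the cropping step in claim 5. The `detector` argument stands in for the patent's "preset target recognition algorithm" (for example a trained transmission-tower detector); its output format of (x, y, w, h, score) boxes is an assumption for illustration.

```python
import numpy as np

def crop_target_area(image: np.ndarray, detector):
    """Run the (assumed) tower detector and cut the highest-confidence box
    out of the shot image to form the target area image."""
    boxes = detector(image)                          # assumed: list of (x, y, w, h, score)
    if not boxes:
        return None
    x, y, w, h, _ = max(boxes, key=lambda b: b[4])   # keep the most confident detection
    img_h, img_w = image.shape[:2]
    x0, y0 = max(0, int(x)), max(0, int(y))
    x1, y1 = min(img_w, int(x + w)), min(img_h, int(y + h))
    return image[y0:y1, x0:x1].copy()                # target area image containing the preset component
```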
6. The method of claim 1, wherein the selecting a photometric area from the foreground image by using the preset sliding-frame selection manner comprises:
generating a sliding frame of a preset size in the foreground image, moving the sliding frame within the area corresponding to the preset component in the foreground image, calculating, after each movement, the intersection ratio between the background area and the foreground area within the sliding frame, and taking the area covered by the sliding frame at which the intersection ratio reaches its maximum value as the photometric area.
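A numpy sketch of the sliding-frame selection in claim 6, operating on the binary foreground mask produced by the semantic segmentation (1 = preset component, 0 = background). Reading the "intersection ratio between the background area and the foreground area" as the ratio of foreground to background pixels inside the frame, and the 64-pixel window with a 16-pixel stride, are interpretations rather than values from the disclosure.

```python
import numpy as np

def select_metering_area(foreground_mask: np.ndarray, win: int = 64, stride: int = 16):
    """Slide a fixed-size frame over the mask and keep the position where the
    assumed foreground/background ratio is largest."""
    best_ratio, best_box = -1.0, None
    h, w = foreground_mask.shape[:2]
    for y in range(0, max(1, h - win + 1), stride):
        for x in range(0, max(1, w - win + 1), stride):
            patch = foreground_mask[y:y + win, x:x + win]
            fg = int(patch.sum())                   # foreground pixels inside the frame
            bg = patch.size - fg                    # background pixels inside the frame
            ratio = fg / (bg + 1e-6)                # avoid division by zero
            if ratio > best_ratio:
                best_ratio, best_box = ratio, (x, y, win, win)
    return best_box                                 # (x, y, w, h) of the photometric area
```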
7. The method of claim 1, wherein the calculating camera adjustment parameters according to the photometric area and the photometric function to adjust the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters comprises:
converting the image corresponding to the photometric area into a grayscale image, counting the brightness values in the grayscale image, generating a histogram of the image corresponding to the photometric area according to the statistical result, and judging the exposure result of the shot image according to the histogram;
adjusting an exposure mode of the camera installed on the unmanned aerial vehicle according to the exposure result, performing parameter calculation on the photometric area by using the photometric function according to the exposure setting of the adjusted exposure mode to obtain the camera adjustment parameters, adjusting the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and taking pictures with the camera whose parameters have been adjusted.
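A short OpenCV sketch of the histogram step in claim 7. Comparing the mean brightness of the photometric area against a mid-grey target and converting the ratio into EV stops is a simplification standing in for the patent's (undisclosed) photometric function; the target value and thresholds are assumptions.

```python
import cv2
import numpy as np

def exposure_adjustment(metering_patch_bgr: np.ndarray, target_mean: float = 118.0):
    """Histogram the photometric area, judge the exposure result, and derive
    an exposure-compensation step in EV stops (positive = brighten)."""
    gray = cv2.cvtColor(metering_patch_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    mean = float(np.dot(hist, np.arange(256)) / hist.sum())
    if mean < target_mean * 0.8:
        verdict = "under-exposed"
    elif mean > target_mean * 1.2:
        verdict = "over-exposed"
    else:
        verdict = "well-exposed"
    ev_step = float(np.log2(target_mean / max(mean, 1.0)))   # assumed mapping to EV compensation
    return verdict, ev_step
```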
8. An unmanned aerial vehicle autonomous photographing parameter adjusting device, characterized by comprising:
an acquisition module, configured to acquire point cloud data of the external environment corresponding to an unmanned aerial vehicle at a shooting point, acquire a shot image of the unmanned aerial vehicle at the shooting point, judge the shooting environment of the shot image by using the point cloud data, and calculate a backlight detection coefficient of the shot image;
a detection module, configured to, when the shooting environment is judged to be a backlight environment, perform target detection on a preset component in the shot image to obtain a target area image containing the preset component, and perform semantic segmentation on the target area image to obtain a foreground image corresponding to the target area image;
an adjusting module, configured to select a photometric area from the foreground image by using a preset sliding-frame selection manner, calculate camera adjustment parameters according to the photometric area and a photometric function, adjust the camera parameters of the unmanned aerial vehicle by using the camera adjustment parameters, and establish a mapping relation between the backlight detection coefficient and the camera adjustment parameters.
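Structurally, the claim-8 device is the three modules of the method wired in sequence. The sketch below shows only that wiring; the callable fields are placeholders, since the patent does not disclose the modules' implementations.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AutonomousShootingAdjuster:
    acquire: Callable   # acquisition module: waypoint -> (environment, coefficient, shot image)
    detect: Callable    # detection module: shot image -> foreground image of the preset component
    adjust: Callable    # adjusting module: foreground image -> camera adjustment parameters

    def run(self, waypoint):
        environment, coefficient, image = self.acquire(waypoint)
        if environment != "backlight":
            return None
        foreground = self.detect(image)
        params = self.adjust(foreground)
        return coefficient, params   # pair used to build the coefficient -> parameters mapping
```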
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202210357456.8A 2022-04-07 2022-04-07 Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium Active CN114430462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210357456.8A CN114430462B (en) 2022-04-07 2022-04-07 Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114430462A (en) 2022-05-03
CN114430462B CN114430462B (en) 2022-07-05

Family

ID=81314410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210357456.8A Active CN114430462B (en) 2022-04-07 2022-04-07 Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114430462B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006422A (en) * 2009-09-01 2011-04-06 华晶科技股份有限公司 Backlight shooting method
WO2015131462A1 (en) * 2014-03-07 2015-09-11 国家电网公司 Centralized monitoring system and monitoring method for unmanned aerial vehicle to patrol power transmission line
CN103927520A (en) * 2014-04-14 2014-07-16 中国华戎控股有限公司 Method for detecting human face under backlighting environment
CN105426825A (en) * 2015-11-09 2016-03-23 国网山东省电力公司烟台供电公司 Aerial image identification based power grid geographical wiring diagram drawing method
CN111770284A (en) * 2020-07-10 2020-10-13 广东电网有限责任公司 Backlight compensation shooting method and related device for transmission tower
CN112164015A (en) * 2020-11-30 2021-01-01 中国电力科学研究院有限公司 Monocular vision autonomous inspection image acquisition method and device and power inspection unmanned aerial vehicle
CN113034613A (en) * 2021-03-25 2021-06-25 中国银联股份有限公司 External parameter calibration method of camera and related device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474007A (en) * 2022-08-12 2022-12-13 北京城市网邻信息技术有限公司 Shooting method, shooting device, terminal equipment and storage medium
CN115578662A (en) * 2022-11-23 2023-01-06 国网智能科技股份有限公司 Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment

Also Published As

Publication number Publication date
CN114430462B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN114430462B (en) Unmanned aerial vehicle autonomous photographing parameter adjusting method, device, equipment and storage medium
CN110099222B (en) Exposure adjusting method and device for shooting equipment, storage medium and equipment
CN110166692B (en) Method and device for improving automatic focusing accuracy and speed of camera
CN107948538B (en) Imaging method, imaging device, mobile terminal and storage medium
CN106454145A (en) Automatic exposure method with scene self-adaptivity
CN102694981B (en) Automatic exposure method based on adaptive threshold segmentation and histogram equalization
CN105635565A (en) Shooting method and equipment
WO2021007690A1 (en) Exposure control method, apparatus and movable platform
WO2022089386A1 (en) Laser pattern extraction method and apparatus, and laser measurement device and system
CN105847708B (en) Line-scan digital camera automatic exposure method of adjustment based on image histogram analysis and system
CN107404647A (en) Camera lens condition detection method and device
CN113391644B (en) Unmanned aerial vehicle shooting distance semi-automatic optimization method based on image information entropy
CN105578062A (en) Light metering mode selection method and image acquisition device utilizing same
CN107995396B (en) Two camera modules and terminal
WO2021097848A1 (en) Image processing method, image collection apparatus, movable platform and storage medium
CN114885105B (en) Image acquisition and adjustment method for photovoltaic power station inspection unmanned aerial vehicle
CN111770284B (en) Backlight compensation shooting method and related device for transmission tower
WO2021168707A1 (en) Focusing method, apparatus and device
CN115578662A (en) Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment
CN116185065A (en) Unmanned aerial vehicle inspection method and device and nonvolatile storage medium
CN114666512B (en) Method and system for adjusting rapid automatic exposure
CN111355896B (en) Method for acquiring automatic exposure parameters of all-day camera
WO2021223113A1 (en) Metering method, camera, electronic device, and computer-readable storage medium
CN112330689A (en) Photovoltaic camera exposure parameter adjusting method and device based on artificial intelligence
CN109889734A (en) A kind of exposure compensating method of adjustment for the shooting of more camera lenses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240203

Address after: 100080 Building 1, Qinghe Anningli South District, Haidian District, Beijing 108

Patentee after: Guo Dawei

Country or region after: China

Address before: 100089 723, floor 7, building 1, yard 138, malianwa North Road, Haidian District, Beijing

Patentee before: BEIJING YUHANG INTELLIGENT TECHNOLOGY Co.,Ltd.

Country or region before: China