CN114500829B - Photographing processing method and device, unmanned equipment and storage medium - Google Patents

Photographing processing method and device, unmanned equipment and storage medium

Info

Publication number
CN114500829B
CN114500829B (application CN202111616155.4A)
Authority
CN
China
Prior art keywords
picture
camera
exposure time
operating system
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111616155.4A
Other languages
Chinese (zh)
Other versions
CN114500829A (en)
Inventor
邱钟发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202111616155.4A priority Critical patent/CN114500829B/en
Publication of CN114500829A publication Critical patent/CN114500829A/en
Application granted granted Critical
Publication of CN114500829B publication Critical patent/CN114500829B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a photographing processing method and apparatus, an unmanned device, and a storage medium. The technical scheme provided by the embodiments of the application comprises the following steps: controlling the camera to shoot a first picture according to the exposure time of the camera, and acquiring attitude data of the camera during the exposure time; determining, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition; and, when the shooting state satisfies the preset shake condition, marking the first picture so that marked pictures are removed when the pictures are resolved. This technical means solves the prior-art problem that picture-resolving efficiency is low because a large amount of time must be spent removing blurred pictures, and thereby improves picture-resolving efficiency.

Description

Photographing processing method and device, unmanned equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of unmanned equipment, in particular to a photographing processing method and device, unmanned equipment and a storage medium.
Background
Mapping is one of the important application fields of unmanned equipment. When performing a mapping task, the unmanned device controls a camera to shoot pictures of the mapping area and resolves those pictures to construct a map model of the mapping area.
Some of the pictures taken by the unmanned device may be blurred. Blurred pictures do not contain valid information about the mapping area and cannot be used to construct its map model. Therefore, before the pictures of the mapping area are resolved, the blurred pictures must be removed according to picture quality. However, the unmanned device shoots a large number of pictures on each mapping flight, so a large amount of time is spent eliminating blurred pictures before each map model is built, which affects picture-resolving efficiency.
Disclosure of Invention
The embodiments of the application provide a photographing processing method and apparatus, an unmanned device, and a storage medium, which solve the technical problem of low picture-resolving efficiency caused by the large amount of time spent removing blurred pictures in the prior art, and thereby improve picture-resolving efficiency.
In a first aspect, an embodiment of the present application provides a photographing processing method, including:
controlling a camera to shoot a first picture according to the exposure time of the camera, and acquiring attitude data of the camera during the exposure time;
determining, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition; and
when it is determined that the shooting state satisfies the preset shake condition, marking the first picture, so that marked pictures are removed when the pictures are resolved.
In a second aspect, an embodiment of the present application provides a photographing processing apparatus, including:
a data acquisition module configured to control the camera to shoot a first picture according to the exposure time of the camera and to acquire attitude data of the camera during the exposure time;
a state determining module configured to determine, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition; and
a picture marking module configured to mark the first picture when it is determined that the shooting state satisfies the preset shake condition, so that marked pictures are rejected when the pictures are resolved.
In a third aspect, an embodiment of the present application provides an unmanned apparatus, including:
One or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the photographing processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer executable instructions which, when executed by a computer processor, are used to perform the photographic processing method of the first aspect.
The application provides a photographing processing method that determines the exposure time at which the camera takes a picture, so that the attitude data of the camera while taking the picture can be determined from that exposure time. The shake state of the camera while shooting the picture is then determined from the attitude data, in order to decide whether the picture is blurred. When the picture is determined to be blurred, it is marked so that it can be directly removed during subsequent resolving. By this technical means, the image quality of a picture can be determined from the shake state of the camera immediately after the picture is shot, and blurred pictures are marked accordingly so that they are directly removed when the pictures are later resolved. A large amount of time therefore no longer needs to be spent evaluating the image quality of all pictures before resolving in order to screen out blurred ones, which reduces picture-quality evaluation time and improves picture-processing efficiency.
Drawings
FIG. 1 is a flowchart of a photographing processing method according to an embodiment of the present application;
fig. 2 is a block diagram of a photographing control system according to an embodiment of the present application;
FIG. 3 is a flowchart of taking a photograph and acquiring attitude data provided by an embodiment of the present application;
FIG. 4 is a flowchart of controlling a camera to take a photograph according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a photographing time axis according to an embodiment of the present application;
Fig. 6 is a flowchart of acquiring attitude data corresponding to a first picture according to an embodiment of the present application;
Fig. 7 is a flowchart for determining that a photographing state of a camera satisfies a shake condition according to an embodiment of the present application;
FIG. 8 is a flowchart of camera anomaly investigation provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a photographing processing device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an unmanned device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments of the present application is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
Fig. 1 shows a flowchart of a photographing processing method provided by an embodiment of the present application, where the photographing processing method provided by the embodiment of the present application may be implemented by an unmanned device, and the unmanned device may be implemented by software and/or hardware, and may be formed by two or more physical entities or may be formed by one physical entity. The unmanned equipment refers to equipment such as an unmanned plane which can automatically move based on a preset route.
In an embodiment, the unmanned device controls the camera to take pictures of the mapping area according to a pre-configured mapping task. The mapping task is configured with a plurality of shooting points and the shooting angle corresponding to each point; when the unmanned device navigates to a preset shooting point, the camera is controlled to turn to the corresponding shooting angle and shoot a picture of the mapping area. After the unmanned device completes the mapping task and acquires the mapping-area pictures, the device itself, the cloud, or a management terminal can resolve those pictures to construct a map model of the mapping area. However, during mapping the unmanned device may be disturbed by the external environment, such as strong gusts of wind, so that the camera shakes while shooting. In addition, when the camera is turned to the corresponding shooting angle, large rotational inertia or an unsuitable damping device may likewise leave the camera in a shaking state while shooting. Because a picture taken while the camera is shaking severely is blurred, the mapping-area pictures acquired by the unmanned device may include blurred pictures. A blurred picture does not contain valid information about the mapping area and cannot be used to construct its map model. Therefore, in the prior art, before the mapping-area pictures are resolved to construct a map model, the blurred pictures are removed according to picture quality. However, the unmanned device acquires a large number of mapping-area pictures each time it finishes a mapping task, so a great amount of time must be spent removing blurred pictures, which reduces picture-resolving efficiency and map-modeling efficiency.
In contrast, the photographing processing method provided by this embodiment rapidly screens out the blurred pictures among the mapping-area pictures, improving picture-resolving efficiency and map-modeling efficiency.
In an embodiment, the unmanned device is configured with a photographing processing system through which it executes the photographing processing method. Fig. 2 is a block diagram of a photographing control system according to an embodiment of the present application. As shown in fig. 2, the photographing processing system includes a pan-tilt, a camera, a pan-tilt control module, a camera control module, a positioning module and an attitude detection module. The pan-tilt is connected with the camera and the pan-tilt control module, and the camera control module is connected with the pan-tilt control module, the camera and the attitude detection module. The pan-tilt control module controls the pan-tilt to turn, and the pan-tilt drives the camera to turn with it; the attitude detection module collects attitude data of the camera and transmits it to the camera control module; the camera control module controls the camera to take pictures and determines the attitude information corresponding to each picture. The attitude detection module may be an IMU (inertial measurement unit).
The following describes an example in which the photographing processing system is a main body for executing the photographing processing method. Referring to fig. 1, the photographing processing method specifically includes:
S110, controlling the camera to shoot a first picture according to the exposure time of the camera, and acquiring attitude data of the camera during the exposure time.
The first picture is a picture that the camera is controlled to shoot while the unmanned device executes a mapping task. The exposure time refers to the time node at which exposure occurs when the camera takes a picture. When the unmanned device navigates to a shooting point, the pan-tilt control module controls the camera to turn to the corresponding shooting angle, and after the camera has turned to that angle the pan-tilt control module sends confirmation information to the camera control module. Through the confirmation information, the camera control module confirms that its camera has turned to the shooting direction of the current shooting point, at which moment it can control the camera to shoot a picture at the corresponding shooting point. The confirmation information here refers to ack data.
Further, after receiving the confirmation information, the camera control module controls the camera to perform a full exposure at the exposure time node to take the picture. Meanwhile, the camera control module receives attitude data from the attitude detection module and compares the exposure time corresponding to the picture with the acquisition timestamps of the attitude data, from which it can determine the attitude data at the moment the camera shot the picture. It should be noted that the process of taking a picture is equivalent to the exposure process, so the attitude data of the camera during the exposure time can be used as the attitude data of the camera while taking the picture.
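The timestamp comparison described above can be sketched as a simple window filter. This is a minimal illustration in Python; the function name, the tuple layout of the samples, and the millisecond time base are all invented for this sketch and are not prescribed by the patent:

```python
def attitude_during_exposure(imu_samples, start_exposure, end_exposure):
    # Keep only the IMU samples whose acquisition timestamp falls inside
    # the exposure window [start_exposure, end_exposure].
    return [sample for (timestamp, sample) in imu_samples
            if start_exposure <= timestamp <= end_exposure]

# Timestamps in milliseconds; sample payloads abbreviated for illustration.
samples = [(0, "a"), (10, "b"), (20, "c"), (30, "d")]
print(attitude_during_exposure(samples, 10, 20))  # → ['b', 'c']
```

In the real system this filtering is avoided at the Linux side by gating the IMU stream at the source (see S11031-S11033 below); the sketch only shows the correspondence between timestamps and the exposure window.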
In one embodiment, the exposure time includes a start exposure time and an end exposure time: the start exposure time is the time node at which the camera begins to expose, and the end exposure time is the time node at which the camera shutter closes. The camera can be controlled to take the picture according to the start and end exposure times, and attitude data at the corresponding moments can be acquired. In this regard, fig. 3 is a flowchart of taking a photograph and acquiring attitude data provided in an embodiment of the present application. As shown in fig. 3, the steps of taking a photograph according to the exposure time and acquiring attitude data of the camera during the exposure time specifically include S1101-S1103:
S1101, determining a start exposure time and an end exposure time of the camera according to the exposure duration of the camera and the frame rate of the photosensitive element.
The difference between the end exposure time and the start exposure time equals the exposure duration of the camera, and the exposure duration is determined by the ambient brightness around the camera. The frame rate of the photosensitive element is the refresh frequency of the exposure frames it collects; the photosensitive element periodically triggers a field interrupt according to this frame rate to notify the camera control module that the current exposure frame is about to be refreshed. For example, if the photosensitive element outputs one frame every 40 ms (a frame rate of 25 fps), it triggers a field interrupt every 40 ms; after each field interrupt it re-exposes to collect a new exposure frame and remains in the exposure state until the next field interrupt. It should be noted that when the photosensitive element is switched from line exposure to global exposure, it does not enter the exposure state automatically; the camera control module must start the exposure. At subsequent field interrupts the photosensitive element can then enter the exposure state automatically and re-expose to collect a new exposure frame.
In one embodiment, the exposure duration of the camera may be determined by an automatic-exposure algorithm based on the ambient brightness around the camera. Illustratively, after the camera control module receives the confirmation information, it calculates the current exposure duration through the automatic-exposure algorithm; when it receives a field interrupt triggered by the photosensitive element, it switches the photosensitive element from line exposure to global exposure and controls it to start exposing in order to acquire the current global exposure frame. After the exposure duration has elapsed, the shutter is closed, the global exposure frame currently collected by the photosensitive element is acquired, and the corresponding raw pixel data are processed to obtain the first picture. In this embodiment, since the trigger times of the field interrupts are fixed, the time of the next field interrupt can be set as the end exposure time of the global exposure frame currently being collected, and the start exposure time of that frame is then calculated from the next field interrupt and the exposure duration. After the photosensitive element is switched from line exposure to global exposure, it is controlled to start exposing at the start exposure time; when the exposure reaches the end exposure time, the next field interrupt triggers the shutter to close, and the global exposure frame currently collected by the photosensitive element is obtained.
S1102, controlling the photosensitive element to collect raw pixel data according to the start exposure time and the end exposure time, and converting the raw pixel data into the first picture.
In an embodiment, fig. 4 is a flowchart of controlling a camera to take a photo according to an embodiment of the present application.
As shown in fig. 4, the steps of controlling the camera to take a photograph specifically include S11021-S11023:
S11021, when the first operating system receives a first field interrupt from the photosensitive element, controlling the photosensitive element through the first operating system to switch from line exposure to global exposure and starting a timer.
S11022, when the timer expires, controlling the photosensitive element through the first operating system to start exposing; the timer duration is the difference between the field-interrupt period and the exposure duration.
S11023, when the first operating system receives a second field interrupt from the photosensitive element, controlling the shutter of the camera to close and acquiring the raw pixel data collected by the photosensitive element.
The first operating system is a real-time operating system configured in the camera control module. The camera control module is further configured with a Linux operating system; the Linux operating system and the real-time operating system are configured with a shared memory and communicate through it. The pan-tilt control module is connected to the Linux operating system through a serial port and performs data communication with it over that serial port. For example, after the pan-tilt control module determines that the camera has turned to the shooting angle, it sends the confirmation information to the Linux operating system through the serial port. After receiving the confirmation information, the Linux operating system sends a photographing instruction to the real-time operating system through the shared memory. After receiving the photographing instruction, the real-time operating system switches from preview mode to photographing mode, and when it receives the first field interrupt triggered by the photosensitive element it switches the photosensitive element from line exposure to global exposure. The first field interrupt refers to the first field interrupt triggered by the photosensitive element after the real-time operating system has switched to photographing mode; the second field interrupt is the field interrupt immediately following the first.
In this embodiment, fig. 5 is a schematic diagram of a photographing time axis provided in an embodiment of the present application. As shown in fig. 5, t1 is the trigger time node of the first field interrupt and t3 is the trigger time node of the second field interrupt; the interval T3 between the two is the field-interrupt period, i.e., the reciprocal of the frame rate of the photosensitive element. From the above, t3 is also the end exposure time corresponding to the first picture, and with the exposure duration T2 calculated by the automatic-exposure algorithm, t2 = t3 - T2 can be determined as the start exposure time corresponding to the first picture. In this embodiment, the difference between the field-interrupt period T3 and the exposure duration T2 is set as the timer duration T1, so the time node t2 at which the timer expires is exactly the start exposure time of the photosensitive element, which ensures that the actual exposure duration matches the duration calculated by the automatic-exposure algorithm. When the real-time operating system receives the first field interrupt triggered by the photosensitive element, it starts the timer. After the timer duration T1 has elapsed, the timer expires, and at time node t2 the photosensitive element is controlled to start exposing. After the exposure duration T2 has passed, the trigger time node t3 of the second field interrupt is reached, at which the photosensitive element would re-expose to collect a new global exposure frame; the shutter is therefore closed at t3, and the raw pixel data of the global exposure frame currently collected by the photosensitive element are acquired.
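Under the timing scheme above, the timer duration and the exposure window follow directly from the field-interrupt period and the exposure duration. A minimal sketch, with hypothetical names and times in milliseconds:

```python
def photo_timing(t1, field_period, exposure_duration):
    # T1 = T3 - T2: the timer runs from the first field interrupt at t1
    timer_duration = field_period - exposure_duration
    start_exposure = t1 + timer_duration   # t2: timer expiry, exposure starts
    end_exposure = t1 + field_period       # t3: second field interrupt, shutter closes
    return timer_duration, start_exposure, end_exposure

# Field interrupts every 40 ms; auto-exposure computes a 12 ms exposure:
print(photo_timing(0, 40, 12))  # → (28, 28, 40)
```

Delaying the exposure start by the timer rather than starting it at t1 guarantees that the exposure ends exactly at the second field interrupt, so the actual exposure duration equals the auto-exposure result.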
Further, after receiving the second field interrupt, the real-time operating system switches the photosensitive element from global exposure back to line exposure and switches from photographing mode back to preview mode. It then converts the raw pixel data into the first picture and transmits the first picture, together with its start exposure time and end exposure time, to the Linux operating system through the shared memory.
S1103, acquiring the attitude data of the camera in the corresponding period according to the start exposure time and the end exposure time.
The attitude detection module configured on the pan-tilt is an IMU (inertial measurement unit), which collects attitude data of the camera in real time; the attitude data include the three-axis acceleration and three-axis angular-acceleration data of the camera. The IMU is connected to the Linux operating system of the camera control module through a serial port and sends the collected attitude data to the Linux operating system over that serial port. The Linux operating system determines the attitude data at the time the camera shot the first picture according to the acquisition timestamps of the attitude data and the start and end exposure times of the first picture.
Because the sampling frequency of the IMU is generally above 2 kHz, if all the attitude data collected in real time were sent to the Linux operating system, picking out the attitude data from the moment the camera shot the first picture among such a large volume of samples would impose a heavy computational load on the Linux operating system. In this regard, the present embodiment proposes that the IMU send only the attitude data corresponding to the first picture to the Linux operating system, thereby reducing its computational load. Fig. 6 is a flowchart of acquiring the attitude data corresponding to the first picture according to an embodiment of the present application. As shown in fig. 6, the step of acquiring the attitude data at the time the camera shoots the first picture specifically includes S11031-S11033:
S11031, when the timer expires, sending a first level signal to the attitude detection module through the first operating system, so that the attitude detection module sends the attitude data it is currently collecting to the second operating system.
S11032, when the first operating system receives the second field interrupt from the photosensitive element, sending a second level signal to the attitude detection module through the first operating system, so that the attitude detection module stops sending attitude data.
S11033, receiving, through the second operating system, the attitude data sent by the attitude detection module, and determining the corresponding first picture according to the acquisition timestamps of the attitude data.
The second operating system is the Linux operating system. In an embodiment, the first operating system of the camera control module is connected to the attitude detection module through a communication pin; the first level signal is the high-level signal the attitude detection module receives when the first operating system pulls the communication pin up, and the second level signal is the low-level signal it receives when the first operating system pulls the pin down. For example, referring to fig. 5, at the timer-expiry time node t2, i.e., the start exposure time t2 of the first picture, the first operating system pulls the communication pin up to send a high-level signal to the attitude detection module; upon receiving the high-level signal, the attitude detection module knows that the camera is shooting the first picture and therefore sends the attitude data it is currently collecting to the Linux operating system through the serial port. Further, at the trigger time node t3 of the second field interrupt, i.e., the end exposure time t3 of the first picture, the first operating system pulls the communication pin down to send a low-level signal to the attitude detection module; upon receiving the low-level signal, the attitude detection module knows that the camera has finished shooting the first picture and therefore stops sending attitude data. During the period from t2 to t3, the Linux operating system receives only the attitude data from the camera's shooting of the first picture, and after it obtains the first picture uploaded by the real-time operating system from the shared memory, it can determine the corresponding first picture according to the acquisition timestamps of the attitude data.
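The level-signal gating above can be illustrated with a small state sketch. The class and method names are purely illustrative; the real mechanism is a GPIO pin and a serial link, which this sketch abstracts into method calls:

```python
class AttitudeGate:
    """IMU-side gating: forward samples only while the pin is high."""
    def __init__(self):
        self.streaming = False
        self.sent = []            # samples forwarded to the Linux side

    def on_level(self, high):
        # high-level signal at t2 starts streaming, low-level at t3 stops it
        self.streaming = high

    def on_sample(self, timestamp, sample):
        if self.streaming:
            self.sent.append((timestamp, sample))

gate = AttitudeGate()
gate.on_sample(1, "a")     # before t2: dropped
gate.on_level(True)        # timer expires at t2: pin pulled high
gate.on_sample(2, "b")     # during exposure: forwarded
gate.on_level(False)       # second field interrupt at t3: pin pulled low
gate.on_sample(3, "c")     # after t3: dropped
print(gate.sent)  # → [(2, 'b')]
```

The design choice here is to filter at the source: the Linux side never sees the out-of-window samples, so no high-rate timestamp search is needed at 2 kHz+.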
In the above embodiment, having the real-time operating system send the high- and low-level signals to the attitude detection module improves the response speed with which the attitude detection module starts and stops sending attitude data. This reduces the deviation between the acquisition timestamps of the attitude data sent to the Linux operating system and the exposure time of the first picture, ensuring the accuracy of the acquired attitude data for the moment the camera shoots the first picture. In addition, having the real-time operating system respond rapidly to the photographing instruction improves the photographing-processing efficiency of the camera.
S120, determining, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition.
The shake condition is a threshold condition that the camera's attitude data satisfy when the camera shakes. When the shooting state of the camera while shooting the first picture satisfies the shake condition, the camera shook while shooting the first picture, and the first picture can be determined to be a blurred picture. When the shooting state of the camera while shooting the first picture does not satisfy the shake condition, it can be determined that the camera did not shake while shooting the first picture and that the first picture is not a blurred picture.
In an embodiment, fig. 7 is a flowchart of determining whether the shooting state of the camera satisfies the shake condition according to an embodiment of the present application. As shown in fig. 7, determining whether the shooting state of the camera when shooting the first picture satisfies the shake condition specifically includes S1201-S1202:
S1201, determining the shake amplitude of the camera when shooting the first picture according to the attitude data at consecutive moments, and comparing the shake amplitude with a preset shake threshold.
S1202, when the shake amplitude is greater than or equal to the preset shake threshold, determining that the shooting state satisfies the preset shake condition.
The preset shake threshold is the minimum shake amplitude at which the camera is considered to be shaking. When the shake amplitude of the camera when shooting the first picture is greater than or equal to the shake threshold, the camera shook while shooting the first picture; when the shake amplitude is smaller than the shake threshold, the camera did not shake while shooting the first picture. Illustratively, the attitude of the camera at each collection moment during the shooting of the first picture is calculated from the tri-axis angular acceleration data and tri-axis acceleration data corresponding to the first picture, the shake amplitude is calculated from the attitude change between consecutive collection moments, and the shake amplitude is compared with the shake threshold to determine whether the camera shook while shooting the first picture.
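The amplitude test of S1201-S1202 can be sketched as follows. This is illustrative only; the attitude representation (roll, pitch, yaw tuples) and the threshold value are assumptions, not values from the patent:

```python
import math
from typing import Sequence, Tuple

Attitude = Tuple[float, float, float]  # e.g. (roll, pitch, yaw) angles

def shake_amplitude(attitudes: Sequence[Attitude]) -> float:
    """Largest attitude change between consecutive collection moments
    during the exposure of the first picture."""
    best = 0.0
    for prev, cur in zip(attitudes, attitudes[1:]):
        delta = math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, cur)))
        best = max(best, delta)
    return best

def satisfies_shake_condition(attitudes: Sequence[Attitude],
                              shake_threshold: float) -> bool:
    """S1202: the shooting state meets the shake condition when the
    amplitude reaches the preset shake threshold."""
    return shake_amplitude(attitudes) >= shake_threshold
```

A picture whose attitude sequence passes this test is then marked as blurred in S130.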
S130, marking the first picture when it is determined that the shooting state satisfies the preset shake condition, so that the marked picture is rejected when the pictures are solved.
For example, when it is determined that the shooting state of the camera when shooting the first picture satisfies the shake condition, the first picture can be determined to be a blurred picture, so the first picture whose shooting state satisfies the shake condition is marked. When the unmanned device completes a mapping task, it solves the first pictures collected during mapping to construct a map model of the mapping region; alternatively, the unmanned device sends the collected first pictures to the cloud or to a management terminal, which solves them to construct the map model. Before solving, the marked blurred pictures can be rejected according to their marks, so that no large amount of time needs to be spent screening blurred pictures out of the first pictures, improving picture-solving efficiency and map-modeling efficiency.
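Rejecting marked pictures before solving amounts to a simple filter. This is a sketch; the dictionary-based picture record and the `marked` key are hypothetical:

```python
def pictures_for_solving(pictures):
    """Drop pictures flagged as blurred, so the solver never has to
    evaluate their image quality; unflagged pictures pass through."""
    return [p for p in pictures if not p.get("marked", False)]
```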
In an embodiment, the shooting point and shooting angle corresponding to each marked picture are determined so that the picture can be re-shot at that shooting point and shooting angle. For example, since the shooting points and shooting angles of a mapping task are determined according to the course overlap and side overlap of the mapping area, when the first picture corresponding to a shooting point is blurred and cannot be used to construct the map model, the accuracy of the map model of the mapping area is reduced; and when the first pictures corresponding to several shooting points are blurred, the remaining first pictures may be insufficient to construct a complete map model of the mapping area. Therefore, after the unmanned device finishes the mapping task, a flight route can be planned from the shooting points of the marked pictures, a supplementary shooting task can be generated from the flight route and the shooting angles of those shooting points, and the battery power required for the unmanned device to complete the supplementary shooting task can be calculated. When the remaining battery power of the unmanned device is greater than or equal to the power required by the supplementary shooting task, the unmanned device re-shoots the pictures at the corresponding shooting points according to the supplementary shooting task, supplementing the pictures needed to construct the map of the mapping area. When the remaining battery power is smaller than the power required by the supplementary shooting task, the unmanned device sends the shooting points and shooting angles of the marked pictures to the management terminal, and the management terminal assigns another unmanned device to collect the pictures at those shooting points.
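The battery check that decides between re-shooting and handing over to the management terminal can be sketched as follows. The per-point energy cost is a deliberately simplistic model, and the function name and return values are illustrative assumptions:

```python
def plan_supplementary_shooting(marked_points, remaining_power, power_per_point):
    """Return ('reshoot', points) when the drone has enough remaining power
    to fly the supplementary task itself, otherwise ('handover', points)
    so the management terminal can assign another drone."""
    required = power_per_point * len(marked_points)  # crude per-point estimate
    if remaining_power >= required:
        return ("reshoot", list(marked_points))
    return ("handover", list(marked_points))
```

In a real planner the required power would come from the planned flight route rather than a flat per-point cost.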
In an embodiment, when the proportion of blurred pictures among all the first pictures taken by the camera is too high, the camera may be shaking frequently because of structural wear of the camera or other hardware causes. Therefore, when this proportion is too high, an anomaly investigation can be performed on the camera to determine why it shakes frequently. In this embodiment, fig. 8 is a flowchart of camera anomaly investigation provided in an embodiment of the present application. As shown in fig. 8, the camera anomaly investigation specifically includes S1401-S1402:
S1401, counting the number of marked pictures and the number of first pictures, and determining the proportion of marked pictures.
S1402, when the determined proportion exceeds a preset proportion threshold, sending the marked pictures and the corresponding attitude data to the management terminal so that the management terminal determines the shooting anomaly information of the camera according to the attitude data.
The proportion threshold is the minimum proportion of blurred pictures among the first pictures at which the camera is considered to have shaken frequently while the unmanned device executed the mapping task. When the proportion of blurred pictures among the first pictures is greater than or equal to the proportion threshold, the camera shook frequently during the mapping task; when the proportion is smaller than the proportion threshold, the camera did not shake frequently during the mapping task. Illustratively, after the unmanned device completes the mapping task, the number of marked pictures and the number of first pictures are counted, and the proportion of marked pictures among the first pictures is calculated. The proportion is compared with the preset proportion threshold, and when it is greater than or equal to the threshold, it is determined that the camera shook frequently while the unmanned device executed the mapping task.
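The S1401-S1402 check reduces to a ratio comparison, sketched below. Names are illustrative; the zero-picture guard is an assumption added for robustness:

```python
def camera_needs_inspection(num_marked: int, num_first: int,
                            ratio_threshold: float) -> bool:
    """True when the proportion of marked (blurred) pictures among all
    first pictures reaches the threshold, i.e. the camera is judged to
    have shaken frequently during the mapping task."""
    if num_first == 0:
        return False  # no pictures taken, nothing to judge
    return num_marked / num_first >= ratio_threshold
```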
Further, the shooting anomaly information refers to the cause of the frequent camera shake. The unmanned device sends the blurred pictures and the corresponding attitude data to the management terminal, and the management terminal performs an anomaly investigation on the camera according to the attitude data to determine why the camera shakes frequently. In this embodiment, the management terminal may preliminarily determine, from the blurred pictures and the corresponding attitude data, whether the cause of the abnormal shake is wind interference or the gimbal steering control algorithm. If the management terminal cannot determine the cause directly from the attitude data, a worker can inspect the structure of the gimbal camera of the unmanned device and judge whether the abnormal shake is caused by wear or damage of the gimbal camera structure. After the cause of the abnormal camera shake is determined, the structure or algorithm of the camera can be improved accordingly, improving the shooting stability of the camera.
In summary, according to the photographing processing method provided by the embodiments of the application, the exposure period during which the camera takes a picture is determined, and the attitude data of the camera while taking the picture is determined according to that exposure period. The shake state of the camera while taking the picture is determined from the attitude data, so as to determine whether the picture is blurred. When a picture is determined to be blurred, it is marked, so that marked pictures can be rejected directly when the pictures are later solved. By these technical means, the image quality of a picture can be determined from the shake state of the camera right after the picture is taken, and blurred pictures are marked so that they are rejected directly during subsequent solving; there is no need to spend a great deal of time evaluating the image quality of all pictures before solving in order to screen out blurred ones, which reduces picture-quality evaluation time and improves picture-processing efficiency. Supplementary shooting of the corresponding pictures according to the shooting points and shooting angles of the blurred pictures ensures the accuracy and completeness of the map model of the mapping area. Whether the camera shakes abnormally is determined from the proportion of blurred pictures among the first pictures, and when abnormal shake is detected its cause is investigated, so that the structure or algorithm of the camera can be improved accordingly and the shooting stability of the camera improved.
On the basis of the above embodiments, fig. 9 is a schematic structural diagram of a photographing processing device according to an embodiment of the present application. Referring to fig. 9, the photographing processing apparatus provided in this embodiment specifically includes: a data acquisition module 21, a state determination module 22 and a picture marking module 23.
The data acquisition module is configured to control the camera to shoot a first picture according to the exposure time of the camera and to acquire the attitude data of the camera within the exposure time;
the state determination module is configured to determine, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition;
and the picture marking module is configured to mark the first picture when it is determined that the shooting state satisfies the preset shake condition, so that the marked picture is rejected when the pictures are solved.
On the basis of the above embodiment, the data acquisition module includes: an exposure time determining unit configured to determine the start exposure time and the end exposure time of the camera according to the exposure time of the camera and the frame rate of the photosensitive element; a first picture acquisition unit configured to control the photosensitive element to collect raw pixel data according to the start exposure time and the end exposure time and to convert the raw pixel data into the first picture; and an attitude data acquisition unit configured to acquire the attitude data of the camera in the corresponding period according to the start exposure time and the end exposure time.
On the basis of the above embodiment, the first picture acquisition unit includes: a timing subunit configured to control, through the first operating system, the photosensitive element to switch from row exposure to global exposure and to start a timer when the first operating system receives a first field interrupt of the photosensitive element; an exposure subunit configured to control, through the first operating system, the photosensitive element to start exposing when the timer expires, wherein the timing duration of the timer is the difference between the field interrupt period and the exposure duration; and a raw pixel acquisition subunit configured to control the shutter of the camera to close and to acquire the raw pixel data collected by the photosensitive element when the first operating system receives a second field interrupt of the photosensitive element.
On the basis of the above embodiment, the attitude data acquisition unit includes: a pin pull-up subunit configured to send, through the first operating system, a first level signal to the attitude detection module when the timer expires, so that the attitude detection module sends the currently collected attitude data to the second operating system; a pin pull-down subunit configured to send, through the first operating system, a second level signal to the attitude detection module when the first operating system receives the second field interrupt of the photosensitive element, so that the attitude detection module stops sending attitude data; and a timestamp matching subunit configured to receive, through the second operating system, the attitude data sent by the attitude detection module and to determine the corresponding first picture according to the collection timestamps of the attitude data.
On the basis of the above embodiment, the state determination module includes: a comparison unit configured to determine the shake amplitude of the camera when shooting the first picture according to the attitude data at consecutive moments and to compare the shake amplitude with a preset shake threshold; and a determining unit configured to determine that the shooting state satisfies the preset shake condition when the shake amplitude is greater than or equal to the preset shake threshold.
On the basis of the above embodiment, the photographing processing apparatus further includes: a supplementary shooting module configured to determine the shooting points and shooting angles corresponding to the marked pictures, so that the pictures can be re-shot according to those shooting points and shooting angles.
On the basis of the above embodiment, the photographing processing apparatus further includes: a proportion statistics module configured to count the number of marked pictures and the number of first pictures and to determine the proportion of marked pictures; and an anomaly investigation module configured to send the marked pictures and the corresponding attitude data to the management terminal when the determined proportion exceeds a preset proportion threshold, so that the management terminal determines the shooting anomaly information of the camera according to the attitude data.
In summary, the photographing processing device provided by the embodiments of the application determines the exposure period during which the camera takes a picture, and determines the attitude data of the camera while taking the picture according to that exposure period. The shake state of the camera while taking the picture is determined from the attitude data, so as to determine whether the picture is blurred. When a picture is determined to be blurred, it is marked, so that marked pictures can be rejected directly when the pictures are later solved. By these technical means, the image quality of a picture can be determined from the shake state of the camera right after the picture is taken, and blurred pictures are marked so that they are rejected directly during subsequent solving; there is no need to spend a great deal of time evaluating the image quality of all pictures before solving, which reduces picture-quality evaluation time and improves picture-processing efficiency. Supplementary shooting according to the shooting points and shooting angles of the blurred pictures ensures the accuracy and completeness of the map model of the mapping area. Whether the camera shakes abnormally is determined from the proportion of blurred pictures among the first pictures, and when abnormal shake is detected its cause is investigated, so that the structure or algorithm of the camera can be improved accordingly and the shooting stability of the camera improved.
The photographing processing device provided by the embodiment of the application can be used for executing the photographing processing method provided by the embodiment, and has corresponding functions and beneficial effects.
Fig. 10 is a schematic structural diagram of an unmanned device according to an embodiment of the present application, and referring to fig. 10, the unmanned device includes: a processor 31, a memory 32, a communication device 33, an input device 34 and an output device 35. The number of processors 31 in the drone may be one or more and the number of memories 32 in the drone may be one or more. The processor 31, memory 32, communication means 33, input means 34 and output means 35 of the unmanned device may be connected by bus or other means.
The memory 32 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and modules, such as program instructions/modules (e.g., the data acquisition module 21, the status determination module 22, and the picture marking module 23 in the photographing processing apparatus) corresponding to the photographing processing method according to any embodiment of the present application. The memory 32 may mainly include a storage program area and a storage data area, wherein the storage program area may store a second operating system, at least one application program required for functions; the storage data area may store data created according to the use of the device, etc. In addition, memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication means 33 are for data transmission.
The processor 31 executes various functional applications of the apparatus and data processing, that is, implements the above-described photographing processing method, by running software programs, instructions, and modules stored in the memory 32.
The input means 34 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output means 35 may comprise a display device such as a display screen.
The unmanned equipment provided by the embodiment can be used for executing the photographing processing method provided by the embodiment, and has corresponding functions and beneficial effects.
The embodiment of the application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a photographing processing method, the method comprising: controlling the camera to shoot a first picture according to the exposure time of the camera, and acquiring the attitude data of the camera within the exposure time; determining, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition; and when it is determined that the shooting state satisfies the preset shake condition, marking the first picture so that marked pictures are rejected when the pictures are solved.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random-access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., hard disk), or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the above photographing processing method, and may also perform the related operations in the photographing processing method provided in any embodiment of the present application.
The photographing processing device, the storage medium and the unmanned equipment provided in the above embodiments may execute the photographing processing method provided in any embodiment of the present application, and technical details not described in detail in the above embodiments may refer to the photographing processing method provided in any embodiment of the present application.
The foregoing description is only of the preferred embodiments of the application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous modifications, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, while the application has been described in connection with the above embodiments, the application is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit of the application, the scope of which is set forth in the following claims.

Claims (9)

1. A photographing processing method, characterized by comprising:
Controlling a camera to shoot a first picture according to an exposure time of the camera, and acquiring attitude data of the camera within the exposure time, which comprises: determining, through a first operating system, the moment at which the next field interrupt of the photosensitive element is triggered as the end exposure time, and determining the start exposure time according to the exposure time and the end exposure time; controlling, through the first operating system, the photosensitive element to collect raw pixel data according to the start exposure time and the end exposure time, and converting the raw pixel data into the first picture; and sending, through the first operating system, the first picture and the corresponding start exposure time and end exposure time to a second operating system through a shared memory, so that the second operating system acquires the attitude data corresponding to the first picture according to the start exposure time and the end exposure time;
determining, according to the attitude data, whether a shooting state of the camera when shooting the first picture satisfies a preset shake condition;
and when it is determined that the shooting state satisfies the preset shake condition, marking the first picture, so that marked pictures are rejected when the pictures are solved.
2. The photographing processing method according to claim 1, wherein the controlling the photosensitive element to collect raw pixel data according to the start exposure time and the end exposure time comprises:
when the first operating system receives a first field interrupt of the photosensitive element, controlling, through the first operating system, the photosensitive element to switch from row exposure to global exposure and starting a timer;
when the timer expires, controlling, through the first operating system, the photosensitive element to start exposing, wherein the timing duration of the timer is the difference between the field interrupt period and the exposure duration;
and when the first operating system receives a second field interrupt of the photosensitive element, controlling the shutter of the camera to close, and acquiring the raw pixel data collected by the photosensitive element.
3. The photographing processing method according to claim 2, wherein the acquiring the attitude data corresponding to the first picture according to the start exposure time and the end exposure time comprises:
when the timer expires, sending a first level signal to an attitude detection module through the first operating system, so that the attitude detection module sends the currently collected attitude data to the second operating system;
when the first operating system receives the second field interrupt of the photosensitive element, sending a second level signal to the attitude detection module through the first operating system, so that the attitude detection module stops sending the attitude data;
and receiving, through the second operating system, the attitude data sent by the attitude detection module, and determining the corresponding first picture according to the collection timestamps of the attitude data.
4. The photographing processing method according to claim 1, wherein the determining, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition comprises:
determining the shake amplitude of the camera when shooting the first picture according to the attitude data at consecutive moments, and comparing the shake amplitude with a preset shake threshold;
and when the shake amplitude is greater than or equal to the preset shake threshold, determining that the shooting state satisfies the preset shake condition.
5. The photographing processing method according to claim 1, further comprising, after the marking the first picture:
determining the shooting point and shooting angle corresponding to the marked picture, and re-shooting the picture according to the shooting point and the shooting angle.
6. The photographing processing method according to claim 1, characterized in that the method further comprises:
counting the number of marked pictures and the number of first pictures, and determining the proportion of marked pictures;
and when it is determined that the proportion exceeds a preset proportion threshold, sending the marked pictures and the corresponding attitude data to a management terminal, so that the management terminal determines the shooting anomaly information of the camera according to the attitude data.
7. A photographing processing apparatus, characterized by comprising:
the data acquisition module, configured to control the camera to shoot a first picture according to the exposure time of the camera and acquire the attitude data of the camera within the exposure time; wherein the data acquisition module is configured to: determine, through a first operating system, the moment at which the next field interrupt of the photosensitive element is triggered as the end exposure time, and determine the start exposure time according to the exposure time and the end exposure time; control, through the first operating system, the photosensitive element to collect raw pixel data according to the start exposure time and the end exposure time, and convert the raw pixel data into the first picture; and send, through the first operating system, the first picture and the corresponding start exposure time and end exposure time to a second operating system through a shared memory, so that the second operating system acquires the attitude data of the camera within the exposure period of the first picture;
the state determination module, configured to determine, according to the attitude data, whether the shooting state of the camera when shooting the first picture satisfies a preset shake condition;
and the picture marking module, configured to mark the first picture when it is determined that the shooting state satisfies the preset shake condition, so that the marked picture is rejected when the pictures are solved.
8. An unmanned device, comprising: one or more processors; storage means storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the photographic processing method of any one of claims 1-6.
9. A storage medium containing computer executable instructions which, when executed by a computer processor, are for performing the photographic processing method of any one of claims 1-6.
CN202111616155.4A 2021-12-27 2021-12-27 Photographing processing method and device, unmanned equipment and storage medium Active CN114500829B (en)

Publications (2)

Publication Number Publication Date
CN114500829A CN114500829A (en) 2022-05-13
CN114500829B true CN114500829B (en) 2024-04-30

Family

ID=81495661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111616155.4A Active CN114500829B (en) 2021-12-27 2021-12-27 Photographing processing method and device, unmanned equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114500829B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509285A (en) * 2011-09-28 2012-06-20 宇龙计算机通信科技(深圳)有限公司 Processing method and system for shot fuzzy picture and shooting equipment
CN106210544A (en) * 2016-08-18 2016-12-07 惠州Tcl移动通信有限公司 A kind of mobile terminal and shooting the stabilization processing method of video, system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3939889B2 (en) * 2000-02-01 2007-07-04 ペンタックス株式会社 Image blur prevention digital camera
JP6087671B2 (en) * 2013-03-12 2017-03-01 キヤノン株式会社 Imaging apparatus and control method thereof

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509285A (en) * 2011-09-28 2012-06-20 宇龙计算机通信科技(深圳)有限公司 Processing method and system for shot fuzzy picture and shooting equipment
CN106210544A (en) * 2016-08-18 2016-12-07 惠州Tcl移动通信有限公司 A kind of mobile terminal and shooting the stabilization processing method of video, system

Also Published As

Publication number Publication date
CN114500829A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN110493524B (en) Photometric adjustment method, device and equipment and storage medium
CN102498712B (en) Control device, image-capturing system, and control method
CN106708070B (en) Aerial photography control method and device
JP7091613B2 (en) Imaging equipment, camera-mounted drones, and mode control methods, as well as programs
WO2022011623A1 (en) Photographing control method and device, unmanned aerial vehicle, and computer-readable storage medium
CN110610610B (en) Vehicle access management method and device and storage medium
CN105704393A (en) Image-capturing apparatus and image-capturing direction control method
CN104603688B (en) Interchangeable lenses and camera body
WO2018163571A1 (en) Information-processing device, information-processing method, and information-processing program
CN112204944B Shooting detection method, device, gimbal, system and storage medium
CN116248836A (en) Video transmission method, device and medium for remote driving
CN114500829B (en) Photographing processing method and device, unmanned equipment and storage medium
CN110099206B (en) Robot-based photographing method, robot and computer-readable storage medium
CN109246355A Method and apparatus for generating a panoramic picture using a robot, and robot
CN111381607B (en) Method and device for calibrating direction of shooting equipment
CN113994657B Trajectory time-lapse shooting method and device, gimbal camera, unmanned aerial vehicle and handheld gimbal
CN109873958B (en) Camera shutter control method, device and system
CN113261273B Parameter self-adaptation method, handheld gimbal, system and computer-readable storage medium
CN114500768A (en) Camera synchronous shooting method and device, unmanned equipment and storage medium
CN114449165B (en) Photographing control method and device, unmanned equipment and storage medium
JP6312519B2 (en) Imaging device, control method thereof, and program
CN113676673A (en) Image acquisition method, image acquisition system and unmanned equipment
CN110900607B (en) Robot control method and device
CN114449164B (en) Photographing method and device, unmanned equipment and storage medium
JP6998921B2 (en) Adapters, image pickup devices, support mechanisms and moving objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant