CN114650353B - Shooting method and system for evidence-holding image - Google Patents

Shooting method and system for evidence-holding image

Info

Publication number
CN114650353B
CN114650353B (application CN202210248207.5A)
Authority
CN
China
Prior art keywords
graph
camera
shooting
image
evidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210248207.5A
Other languages
Chinese (zh)
Other versions
CN114650353A (en)
Inventor
He Yusheng
Feng Chen
Wang Jun
Zuo Zhijie
Yang Jiangchuan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Jinao Information Technology Co., Ltd.
Original Assignee
Hangzhou Jinao Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Jinao Information Technology Co., Ltd.
Priority to CN202210248207.5A
Publication of CN114650353A
Application granted
Publication of CN114650353B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a shooting method and system for evidence images. The camera shooting range is converted into a first graph mapped onto a GIS map, the overlapping area of the first graph and the pattern spot to be proved is calculated, and the spatial position relationship between the camera shooting range and the pattern spot is judged from that overlap, thereby determining whether a shooting action should be executed. This increases the validity of evidence images and greatly improves the efficiency of evidence-image shooting.

Description

Shooting method and system for evidence-holding image
Technical Field
The application relates to the technical field of natural resource investigation and evidence verification, and in particular to a shooting method and system for evidence images.
Background
Countries and industries are carrying out investigation, monitoring, and evidence-collection work on a wide variety of resources, assets, and spatial features with increasing frequency. In field investigation of natural resources, field staff usually use a mobile smart device to shoot an evidence image on the spot, with a pattern spot as the target, and then upload the evidence image; the data are then audited, and the audit result is finally reported to the competent department. The traditional approach uses a photographing APP that shoots through the camera function built into the smart photographing device.
However, the conventional way of shooting evidence images has a serious problem: it cannot accurately determine whether the image shot by the mobile device or drone camera effectively covers the target pattern spot area to be investigated and proved. This produces a large number of images that fall outside the target pattern spot area, so the validity of the evidence images is low, many retakes are needed, and working efficiency is poor.
Disclosure of Invention
Based on this, it is necessary to provide a method for shooting evidence images, addressing the problem that conventional shooting methods cannot accurately determine whether the image shot by a mobile device or drone camera effectively covers the target pattern spot area to be investigated and proved.
The application provides a shooting method of an evidence image, which comprises the following steps:
establishing a three-dimensional space coordinate system;
acquiring camera parameters and shooting azimuth parameters, and calculating coordinates of all vertexes of a first graph according to the camera parameters and the shooting azimuth parameters; the first graph is a graph mapped in a three-dimensional space coordinate system by a camera shooting range;
the coordinates of all vertexes of the first graph are imported into a GIS map of the mobile terminal, so that the first graph is displayed in the GIS map; the outer frame of the first graph is displayed as a solid line frame of a first color;
the coordinates of all vertexes of the pattern spot to be proved are imported into the GIS map of the mobile terminal, so that the pattern spot to be proved is displayed in the GIS map; the outer frame of the pattern spot to be proved is displayed as a solid frame of a second color;
calculating the overlapping area of the first graph and the pattern spot to be proved, and judging whether the overlapping area of the first graph and the pattern spot to be proved is smaller than a first preset percentage of the area of the first graph;
if the overlapping area of the first graph and the pattern spot to be proved is larger than or equal to the first preset percentage of the area of the first graph, controlling the camera to enter a shooting state;
and controlling the camera to execute a shooting action, and taking the shot image as the evidence image.
The application also provides a shooting system for evidence images, comprising:
a camera;
a processing terminal, communicatively connected to the camera and configured to execute the shooting method of the evidence image described above;
the mobile terminal is in communication connection with the processing terminal;
and the server is in communication connection with the processing terminal.
In the shooting method and system of the application, the camera shooting range is converted into a first graph mapped onto a GIS map, the overlapping area of the first graph and the pattern spot to be proved is calculated, and the spatial position relationship between the camera shooting range and the pattern spot is judged from that overlap, thereby determining whether the shooting action should be executed. This increases the validity of evidence images and greatly improves the efficiency of evidence-image shooting.
Drawings
Fig. 1 is a flowchart of a method for capturing an evidence image according to an embodiment of the present application.
Fig. 2 is a diagram illustrating a positional relationship between a first graphic and a camera placement point in a method for capturing an evidence image according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the user-side GIS map when the overlapping area of the first graph and the pattern spot to be proved is greater than or equal to the first preset percentage of the area of the first graph.
Fig. 4 is a schematic diagram of the user-side GIS map when the overlapping area of the first graph and the pattern spot to be proved is smaller than the first preset percentage of the area of the first graph.
Fig. 5 is a schematic diagram of the user-side GIS map when the overlapping area of the first graph and the second graph is greater than or equal to the second preset percentage of the area of the first graph.
Fig. 6 is a schematic diagram showing a tracking point displayed in a camera preview window.
Fig. 7 is a schematic diagram of a display situation of tracking points in a GIS map.
Fig. 8 is a schematic structural diagram of a shooting system for evidence images according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The application provides a method for shooting evidence images. It should be noted that this method applies to field investigation of any kind of natural resource, including but not limited to mountains, rivers, forests, animal habitats, and points of interest.
In addition, the application does not limit the execution subject of the shooting method. Optionally, the execution subject may be a shooting system for evidence images; specifically, it may be the processing terminal in that shooting system.
As shown in fig. 1, in an embodiment of the present application, the method for capturing the evidence image includes the following steps S100 to S620:
S100, establishing a three-dimensional space coordinate system.
Specifically, the purpose of the application is to display the camera shooting range (that is, the area covered by the camera preview window) on the display interface of the mobile terminal in linkage with the camera. Therefore, a three-dimensional space coordinate system is first established as the reference.
S200, acquiring camera parameters and shooting azimuth parameters, and calculating coordinates of all vertexes of the first graph according to the camera parameters and the shooting azimuth parameters; the first graph is a graph mapped by a camera shooting range in a three-dimensional space coordinate system.
Specifically, the camera parameters are hardware parameters of the camera itself, and may include one or more of aperture size, lens focal length, and camera blind-zone data.
The shooting azimuth parameter is a parameter related to the position and the angle of the camera. The shooting azimuth parameters may include one or more of camera placement point coordinates, azimuth angle, shooting spatial position, and pitch angle.
Please refer to fig. 2. Let the camera placement point be point O. After the coordinates of point O are determined, a ray OE is drawn with O as its starting point. The direction of ray OE is determined by the azimuth angle, since the azimuth angle determines the shooting direction of the camera. Taking due north as the reference direction, if the azimuth angle is 0 degrees then OE points due north, as shown in fig. 2. Two line segments are perpendicular to ray OE: a near segment CD and a far segment AB, located at the closest and farthest distances the camera can capture. The positions of the near segment CD and the far segment AB are related to the camera blind-zone data.
From these, the length of FE, the distance between the near segment CD and the far segment AB, can be derived.
For example, if CD lies at 10 meters from O and AB at 80 meters, then FE is 70 meters. Note that in actual shooting, the actual length of FE is affected by the pitch angle, which tilts the camera lens up or down. The applicant verified through a limited number of experiments that when the pitch angle is 0 degrees the actual length of FE is 70 meters, and when the pitch angle is 45 degrees it becomes 10 meters. Thus: actual length of FE = length of FE at a pitch angle of 0 degrees - (4/3) x pitch angle (in degrees).
Further, the coordinates of points A and B can be calculated from the actual length of FE. From the camera blind-zone data and the coordinates of point O, the length of segment OF can be obtained, and finally the coordinates of points C and D can be calculated.
At this point the coordinates of points A, B, C and D are all known, and the closed figure they enclose is the first graph, that is, the graph mapped by the camera shooting range in the three-dimensional space coordinate system.
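As an illustration of S200, the vertex computation might look like the sketch below. This is a minimal sketch in Python, assuming a flat local x/y plane in meters, an azimuth measured clockwise from due north, and a pitch angle in degrees; the parameter names and the two segment widths are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of S200 (assumptions: flat local x/y plane in metres, azimuth
# clockwise from due north, pitch in degrees; parameter names are illustrative).
import math

def first_graph_vertices(o, azimuth_deg, pitch_deg,
                         near_dist, far_dist_at_zero_pitch,
                         near_width, far_width):
    """Coordinates of points A, B, C and D, the vertices of the first graph."""
    # FE shrinks with pitch, per the description:
    # actual FE = FE at 0 deg pitch - (4/3) * pitch angle
    fe = (far_dist_at_zero_pitch - near_dist) - (4.0 / 3.0) * pitch_deg

    # Unit vector along ray OE (azimuth 0 deg points due north, i.e. +y).
    ux = math.sin(math.radians(azimuth_deg))
    uy = math.cos(math.radians(azimuth_deg))
    # Perpendicular unit vector, to the right of the ray.
    px, py = uy, -ux

    fx, fy = o[0] + near_dist * ux, o[1] + near_dist * uy  # F, centre of segment CD
    ex, ey = fx + fe * ux, fy + fe * uy                    # E, centre of segment AB

    c = (fx - near_width / 2 * px, fy - near_width / 2 * py)
    d = (fx + near_width / 2 * px, fy + near_width / 2 * py)
    a = (ex - far_width / 2 * px, ey - far_width / 2 * py)
    b = (ex + far_width / 2 * px, ey + far_width / 2 * py)
    return a, b, c, d
```

With CD at 10 meters and AB at 80 meters, FE comes out as 70 meters at zero pitch, matching the example above; at a 45 degree pitch it shrinks to 70 - (4/3) * 45 = 10 meters.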
S300, importing the coordinates of all vertexes of the first graph into the GIS map of the mobile terminal so as to display the first graph in the GIS map. The outer frame of the first graph is displayed as a solid frame of the first color.
Specifically, the mobile terminal displays the first graph through a GIS (Geographic Information System) map. The coordinates of points A, B, C and D obtained in S200 are imported into the GIS map of the mobile terminal to display the first graph there. To distinguish the first graph from the graphs introduced later, this step displays its outer frame as a solid frame of the first color. The first color may be white.
S400, importing the coordinates of all vertexes of the pattern spot to be proved into the GIS map of the mobile terminal, so that the pattern spot is displayed in the GIS map. The outer frame of the pattern spot to be proved is displayed as a solid frame of the second color.
Specifically, the coordinates of all vertexes of the pattern spot to be proved are known in advance, and importing them into the GIS map of the mobile terminal displays the pattern spot there. To distinguish it from the first graph, its outer frame is displayed as a solid frame of the second color. The second color may be red.
S500, calculating the overlapping area of the first graph and the pattern spot to be proved, and judging whether the overlapping area of the first graph and the pattern spot to be proved is smaller than a first preset percentage of the area of the first graph.
Specifically, with the coordinates of all vertexes of the pattern spot to be proved and of the first graph known, the overlapping area of the first graph and the pattern spot to be proved can be calculated, and it can be judged whether that overlap is smaller than the first preset percentage of the area of the first graph. The first preset percentage may be 10%. A sketch of this test follows.
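For illustration only, the overlap test could be written as below; the shapely library is an assumption on my part, since the patent names no geometry package.

```python
# Hedged sketch of S500/S610 (shapely assumed; the patent names no library).
from shapely.geometry import Polygon

def may_shoot(first_graph_pts, spot_pts, first_preset_pct=0.10):
    """True when the overlap of the first graph and the pattern spot to be
    proved reaches the first preset percentage of the first graph's area."""
    first_graph = Polygon(first_graph_pts)
    spot = Polygon(spot_pts)
    overlap_area = first_graph.intersection(spot).area
    return overlap_area >= first_preset_pct * first_graph.area
```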
S610, if the overlapping area of the first graph and the pattern spot to be proved is larger than or equal to the first preset percentage of the area of the first graph, controlling the camera to enter the photographable state.
Specifically, as shown in fig. 3, if the overlapping area of the first graph and the pattern spot to be proved is greater than or equal to the first preset percentage of the area of the first graph, it is determined that the shooting condition is met, and the camera is controlled to enter the photographable state.
S620, controlling the camera to execute the shooting action, and taking the shot image as the evidence image.
Specifically, the shooting action may be performed by controlling the shutter key of the camera to be pressed. Alternatively, the shooting button on the user-side GIS map may be set to a "clickable" state, so that the user can press it, trigger the camera's shutter-key action, and execute the shooting. At this point, the shooting of the evidence image is completed.
In this embodiment, the camera shooting range is converted into the first graph and mapped onto the GIS map, the overlapping area of the first graph and the pattern spot to be proved is calculated, and the spatial position relationship between the camera shooting range and the pattern spot is judged from that overlap. This determines whether the shooting action should be executed, which increases the validity of evidence images and greatly improves shooting efficiency.
In an embodiment of the present application, after S500, the shooting method further includes the following S710 to S720:
S710, if the overlapping area of the first graph and the pattern spot to be proved is smaller than the first preset percentage of the area of the first graph, controlling the camera to enter a non-photographable state.
Specifically, if the overlapping area of the first graph and the pattern spot to be proved is smaller than the first preset percentage of the area of the first graph, it is determined that the shooting condition is not met, and the camera is controlled to enter the non-photographable state. This can be achieved by putting the camera data interface into a non-callable state. Optionally, when the camera data interface is non-callable, the shutter key of the camera is disabled; that is, when the shutter key receives a shooting instruction, it cannot change from the unpressed state to the pressed state.
S720, displaying a first movement prompt identifier on the GIS map. The first movement prompt identifier is used to prompt moving the first graph closer to the pattern spot to be proved.
Specifically, as shown in fig. 4, the first movement prompt identifier may be a graphic arrow pointing to the pattern spot to be proved. It alerts the user that the camera shooting range needs to be adjusted. Its algorithm may be: obtain the physical center point of the first graph and the physical center point of the pattern spot to be proved, connect the two center points with a straight line, then offset the line in parallel and add an arrowhead pointing to the pattern spot. A sketch of this construction appears below.
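A minimal sketch of that arrow construction, assuming the "physical center point" means the polygon centroid (shapely assumed as before):

```python
# Sketch of the first movement prompt (assumption: "physical centre point"
# means the polygon centroid).
from shapely.geometry import Polygon

def first_movement_hint(first_graph_pts, spot_pts):
    """Endpoints of an arrow pointing from the first graph towards the
    pattern spot to be proved; offset it in parallel before drawing."""
    start = Polygon(first_graph_pts).centroid
    end = Polygon(spot_pts).centroid
    return (start.x, start.y), (end.x, end.y)
```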
In this embodiment, when the overlapping area of the first graph and the pattern spot to be proved is smaller than the first preset percentage of the area of the first graph, the camera is controlled to enter the non-photographable state, which prevents the user from shooting meaningless pictures; meanwhile, displaying the first movement prompt identifier on the GIS map prompts the user to adjust the camera shooting range.
In an embodiment of the present application, S710 includes the following step:
S711, controlling the virtual shooting button displayed in the GIS map to enter a non-clickable state.
Specifically and optionally, when the camera enters the non-photographable state, the shooting button on the user-side GIS map is set to a "non-clickable" state. Entering the non-clickable state blocks the key-trigger function; that is, the action of pressing the camera's shutter key can no longer be triggered.
In this embodiment, when the overlapping area of the first graph and the pattern spot to be proved is smaller than the first preset percentage of the area of the first graph, the virtual shooting button displayed in the GIS map is controlled to enter the non-clickable state, so that the camera actually cannot continue shooting, and the user-side GIS map gives the user a visible cue that shooting is not possible.
In an embodiment of the present application, after S710, the shooting method further includes the following step:
S730, controlling the outer frame of the first graph displayed in the GIS map to change from the solid frame of the first color to a dashed frame of a third color.
Specifically, the purpose of this step is to reinforce the cue that the camera has entered the non-photographable state. As shown in fig. 4, the outer frame of the first graph is displayed as a dashed frame.
In an embodiment of the present application, after S620, the following S631 to S632 are further included:
S631, after the shooting of the evidence image is completed, acquiring the camera parameters, shooting azimuth parameters and shooting time node of the evidence image.
S632, storing the evidence image, the shooting time node, the camera parameters of the evidence image and the shooting azimuth parameters of the evidence image correspondingly in the server.
Specifically, after the shooting of an evidence image is finished, this step stores the camera parameters, the shooting azimuth parameters, the shooting time node and the evidence image itself in a mapping relation, which facilitates subsequent comparison and processing.
Optionally, the evidence image, the shooting time node, the camera parameter of the evidence image and the shooting azimuth parameter of the evidence image may be stored in a storage medium of a mobile device in communication with the camera, and then uploaded to the server.
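As an illustration of the mapping stored in S632, a record might look like the sketch below; every field name is hypothetical, since the patent only specifies which quantities are stored together.

```python
# Illustrative record for S632; all field names are hypothetical.
import json
import time

record = {
    "time_node": int(time.time()),      # shooting time node, later used as the index (S810)
    "image_file": "evidence_0001.jpg",  # hypothetical file name
    "camera_params": {"lens_focal_mm": 26, "aperture": 1.8, "blind_zone_m": 10},
    "azimuth_params": {"placement_point": [120.19, 30.27],
                       "azimuth_deg": 0.0, "pitch_deg": 0.0},
}
payload = json.dumps(record)  # kept on the mobile device, then uploaded to the server
```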
In an embodiment of the present application, the method further includes the following S810 to S851:
S810, acquiring, from the server, the camera parameters and shooting azimuth parameters of the evidence image of the previous shooting time node.
Specifically, since the shooting time node of an evidence image is stored after its shooting is completed, the camera parameters and shooting azimuth parameters of the evidence image at the previous shooting time node can be obtained from the server, using the shooting time node as the index.
S820, calculating the coordinates of all vertexes of the second graph according to the camera parameters and shooting azimuth parameters of the evidence image of the previous shooting time node. The second graph is the graph mapped in the three-dimensional space coordinate system by the camera shooting range of the evidence image of the previous shooting time node.
Specifically, the working principle of this step is consistent with S200, and will not be described herein.
S830, the coordinates of all vertexes of the second graph are imported into a GIS map of the mobile terminal, so that the second graph is displayed in the GIS map; the outer frame of the second graph is displayed as a solid line frame of a fourth color.
Specifically, the working principle of this step is consistent with S300 and is not repeated here. To distinguish the second graph from the first graph, its outer frame is displayed as a solid frame of a fourth color. Optionally, the fourth color may be blue.
S840, calculating the overlapping area of the first graph and the second graph, and judging whether the overlapping area of the first graph and the second graph is smaller than a second preset percentage of the area of the first graph.
Specifically, the working principle of this step is consistent with S500. Here, however, the comparison is between the graph mapped in the three-dimensional space coordinate system by the camera shooting range at the current shooting time node (the first graph) and the graph mapped by the camera shooting range at the previous shooting time node (the second graph), in order to prevent repeated shooting and improve the shooting efficiency of evidence images.
S851, if the overlapping area of the first graph and the second graph is smaller than the second preset percentage of the area of the first graph, confirming that the camera is not shooting repeatedly, controlling the camera to enter the photographable state, and executing S620.
Specifically, the second preset percentage may be set to 80%. If the overlapping area of the first graph and the second graph is smaller than the second preset percentage of the area of the first graph, it is confirmed that the camera is not shooting repeatedly; the camera can then be controlled to enter the photographable state and shoot a new evidence image. A sketch of this test follows.
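The repeat-shot test mirrors the earlier overlap test; a sketch under the same shapely assumption:

```python
# Sketch of S840/S851 (shapely assumed as before).
from shapely.geometry import Polygon

def is_repeat_shot(first_graph_pts, second_graph_pts, second_preset_pct=0.80):
    """True when the first and second graphs overlap by at least the second
    preset percentage of the first graph's area, i.e. a repeated shot."""
    first_graph = Polygon(first_graph_pts)
    second_graph = Polygon(second_graph_pts)
    overlap_area = first_graph.intersection(second_graph).area
    return overlap_area >= second_preset_pct * first_graph.area
```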
Specifically, S500 to S620 may be performed before S810 to S851, or S810 to S851 may be performed first and S500 to S620 afterwards. However, S810 to S851 must be performed after S400; that is, the coordinates of all vertexes of the first graph must be calculated and the first graph displayed on the GIS map before the coordinates of all vertexes of the second graph are calculated and the overlapping area of the first and second graphs is compared.
In this embodiment, the camera parameters and shooting azimuth parameters of the evidence image at the previous shooting time node are obtained from the server, the coordinates of all vertexes of the second graph are calculated, the overlapping area of the first graph and the second graph is computed, and it is judged whether that overlap is smaller than the second preset percentage of the area of the first graph. This makes it possible to tell whether the camera is shooting repeatedly, guiding the user to shoot evidence images more reasonably and efficiently.
In an embodiment of the present application, after S840, the following S852 to S853 are further included:
S852, if the overlapping area of the first graph and the second graph is larger than or equal to the second preset percentage of the area of the first graph, confirming that the camera is shooting repeatedly, and executing S710.
Specifically, if the overlapping area of the first graph and the second graph is greater than or equal to the second preset percentage of the area of the first graph, repeated shooting is confirmed; S710 is executed and the camera is controlled to enter the non-photographable state. The camera shooting range then needs to be adjusted to avoid shooting the same evidence image repeatedly.
S853, displaying a second movement prompt identifier on the GIS map, wherein the second movement prompt identifier is used for prompting that the second graph is far away from the first graph and close to the pattern spot to be proved.
Specifically, as shown in fig. 5, the second movement prompt identifier may be a curved arrow that leads away from the first graph and towards the pattern spot to be proved. It alerts the user that the camera shooting range needs to be adjusted. In fig. 5, the regular pentagon with a solid border is the pattern spot to be proved. The triangle with a solid border is the first graph; it is displayed with a solid border because the overlapping area of the first graph and the pattern spot is greater than or equal to the first preset percentage of the area of the first graph. The triangle with a dashed border is the second graph; since the overlapping area of the first graph and the second graph is greater than or equal to the second preset percentage of the area of the first graph, repeated shooting is confirmed and the second graph is displayed with a dashed border.
In this embodiment, when the overlapping area of the first graph and the second graph is greater than or equal to the second preset percentage of the area of the first graph, the camera is controlled to enter the non-photographable state, preventing the user from shooting meaningless duplicate pictures; meanwhile, displaying the second movement prompt identifier on the GIS map prompts the user to adjust the camera shooting range.
In an embodiment of the present application, after S620, the following S641 to S643 are further included:
S641, selecting a tracking point in the first graph.
S642, calculating the coordinates of the tracking point in the GIS map according to the camera parameters.
S643, mapping the coordinates of the tracking point in the GIS map into the camera shooting range, and displaying the point in the camera shooting preview interface.
Specifically, the purpose of this embodiment is to achieve a point-to-point mapping between the camera shooting range and the user-side GIS map. The end goal: when any point on the GIS map is clicked as the tracking point, if that point lies within the camera shooting range it is displayed in the camera preview window, helping the user judge the validity of the tracking point.
The specific implementation mode is as follows:
As shown in fig. 6: 1) define point T as the key point of interest. With the coordinates of the four points A, B, C and D known (this inherits the algorithm for calculating them in the foregoing embodiment and is not repeated here), take the top-left corner A as the reference point. 2) When the tracking point T is clicked inside the camera shooting range (the trapezoid in fig. 6), the length of line segment AT follows from the distance between the two points; combining it with the angle value of angle BAT, trigonometric functions and vector algebra give the length of line segment ET. The distance ET is not measured directly because the vector algebra works well. Similarly, the length of line segment FD follows from the length of AD and the angle value of angle BAD, so the longitudinal ratio of point T in the trapezoid is YRate = ET/FD.
3) From the longitudinal ratio YRate, the coordinates of points H and I can be determined, and the transverse ratio of the click point T in the trapezoid is XRate = HT/HI (again with the top-left corner as the reference point).
4) From the longitudinal ratio YRate and the transverse ratio XRate, the position of the click point T can easily be located in the user-side GIS map (the rectangle in fig. 7).
In this way, any point on the user-side GIS map can be clicked; if the point lies within the camera shooting range, it is displayed in the camera preview window, assisting the user in judging the validity of the tracking point.
Conversely, if a point is clicked in the camera preview window (such a point is necessarily within the camera shooting range), it can be displayed on the user-side GIS map, again assisting the user in judging the validity of the tracking point.
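Steps 1) to 4) can be sketched as follows; the cross-product formulation of the perpendicular distances and the symmetric-trapezoid assumption are my reading of the description, not code from the patent.

```python
# Sketch of the point mapping in steps 1)-4); cross-product distances and a
# symmetric trapezoid are assumptions, not taken from the patent.
# Convention here: A top-left, B top-right, C bottom-right, D bottom-left (fig. 6).
import math

def rates_in_trapezoid(a, b, c, d, t):
    """Normalized (XRate, YRate) of click point t in trapezoid ABCD,
    with the top-left corner A as the reference point."""
    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1])
    def cross(p, q):
        return p[0] * q[1] - p[1] * q[0]

    ab, at, ad = sub(b, a), sub(t, a), sub(d, a)
    # ET: perpendicular distance of T from edge AB (= |AT| * sin(angle BAT)).
    et = abs(cross(ab, at)) / math.hypot(*ab)
    # FD: perpendicular distance of D from edge AB (= |AD| * sin(angle BAD)).
    fd = abs(cross(ab, ad)) / math.hypot(*ab)
    y_rate = et / fd

    # H on edge AD and I on edge BC at the same longitudinal ratio.
    h = (a[0] + y_rate * (d[0] - a[0]), a[1] + y_rate * (d[1] - a[1]))
    i = (b[0] + y_rate * (c[0] - b[0]), b[1] + y_rate * (c[1] - b[1]))
    x_rate = math.hypot(t[0] - h[0], t[1] - h[1]) / math.hypot(i[0] - h[0], i[1] - h[1])
    return x_rate, y_rate

def to_gis_rect(x_rate, y_rate, left, top, width, height):
    """Place the tracking point inside the user-side GIS rectangle (fig. 7)."""
    return left + x_rate * width, top + y_rate * height
```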
In an embodiment of the present application, when the camera enters the photographable state, the outer frame of the first graphic is controlled to be changed from the solid line frame of the first color to the dotted line frame of the fifth color.
Specifically, when, as in S851, the overlapping area of the first graph and the second graph is smaller than the second preset percentage of the area of the first graph (so the camera is not shooting repeatedly), and at the same time the overlapping area of the first graph and the pattern spot to be proved is greater than or equal to the first preset percentage of the area of the first graph, that is, when the two shooting conditions are satisfied simultaneously, the camera is controlled to enter the photographable state. Before S620 is performed, the method further includes controlling the outer frame of the first graph to change from the solid frame of the first color to a dashed frame of a fifth color. The fifth color may be yellow.
Fig. 5, already described above, illustrates this situation, although the fifth color itself cannot be represented in the figure.
After the outer frame of the first graph is changed from the solid frame of the first color to the dashed frame of the fifth color, S620 is performed and the evidence image is shot.
In this embodiment, changing the outer frame of the first graph from the solid frame of the first color to the dashed frame of the fifth color shows the user that the camera shooting range satisfies both shooting conditions at once: it sufficiently overlaps the pattern spot to be proved, and it is not a repeat shot.
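Putting the pieces together, the gate that switches the camera into the photographable state could be sketched as below, reusing may_shoot and is_repeat_shot from the earlier sketches:

```python
# Combined gate for the photographable state (reuses the earlier sketches).
def camera_may_enter_shooting_state(first_pts, spot_pts, second_pts):
    """Both conditions at once: enough overlap with the pattern spot to be
    proved (S610), and not a repeat of the previous shot (S851)."""
    return may_shoot(first_pts, spot_pts) and not is_repeat_shot(first_pts, second_pts)
```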
The application also provides a shooting system for evidence images.
As shown in fig. 8, in an embodiment of the present application, the shooting system for evidence images includes a camera 100, a processing terminal 200, a mobile terminal 300, and a server 400. The processing terminal 200 is communicatively connected to the camera 100 and is configured to execute the shooting method of the evidence image described above. The mobile terminal 300 is communicatively connected to the processing terminal 200. The server 400 is communicatively connected to the processing terminal 200.
The technical features of the above embodiments may be combined arbitrarily, and the method steps are not limited to the described order. For brevity, not all possible combinations of the technical features are described; however, any combination that contains no contradiction should be considered within the scope of this specification.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of capturing an evidence image, the method comprising:
establishing a three-dimensional space coordinate system;
acquiring camera parameters and shooting azimuth parameters, and calculating coordinates of all vertexes of a first graph according to the camera parameters and the shooting azimuth parameters; the first graph is a graph mapped in a three-dimensional space coordinate system by a camera shooting range;
the coordinates of all vertexes of the first graph are imported into a GIS map of the mobile terminal, so that the first graph is displayed in the GIS map; the outer frame of the first graph is displayed as a solid line frame of a first color;
the coordinates of all vertexes of the to-be-proved pattern spots are imported into a GIS map of the mobile terminal, so that the to-be-proved pattern spots are displayed in the GIS map; the outer frame of the pattern spot to be demonstrated is displayed as a solid frame of a second color;
calculating the overlapping area of the first graph and the pattern spot to be proved, and judging whether the overlapping area of the first graph and the pattern spot to be proved is smaller than a first preset percentage of the area of the first graph;
if the overlapping area of the first graph and the to-be-verified graph spot is larger than or equal to a first preset percentage of the area of the first graph, controlling the camera to enter a shooting state;
controlling the camera to execute a shooting action, and taking the shot image as the evidence image;
wherein the camera parameters include one or more of aperture size, lens focal length, or camera blind zone data; the shooting azimuth parameters comprise one or more of camera placement point coordinates, azimuth angles, shooting space positions and pitch angles.
2. The method for capturing an evidence image as claimed in claim 1, wherein after judging whether the overlapping area of the first graph and the pattern spot to be proved is smaller than a first preset percentage of the area of the first graph, the method further comprises:
if the overlapping area of the first graph and the to-be-authenticated graph spot is smaller than a first preset percentage of the area of the first graph, controlling the camera to enter a non-shooting state;
displaying a first movement prompt identifier on the GIS map; the first movement prompt identifier is used for prompting that the first graph should move closer to the pattern spot to be proved.
3. The method of capturing an evidence image of claim 2, wherein controlling the camera to enter the non-photographable state includes:
and controlling the virtual shooting button displayed in the GIS map to enter a non-clickable state.
4. The method for capturing an evidence image according to claim 3, wherein after controlling the camera to enter the non-photographable state, the method further comprises:
and controlling the outer frame of the first graph displayed in the GIS map to be changed from a solid line frame of the first color to a dotted line frame of the third color.
5. The method for capturing an evidence image according to claim 4, wherein after controlling the camera to perform a capturing operation and taking the captured image as the evidence image, the method further comprises:
after the shooting of the evidence image is finished, acquiring the camera parameters, shooting azimuth parameters and shooting time node of the evidence image;
and correspondingly storing the evidence image, the shooting time node, the camera parameters of the evidence image and the shooting azimuth parameters of the evidence image in a server.
6. The method for capturing an evidence image of claim 5, further comprising:
acquiring camera parameters and shooting azimuth parameters of the evidence image of the previous shooting time node from a server;
calculating coordinates of all vertexes of the second graph according to the camera parameters and shooting azimuth parameters of the evidence image of the previous shooting time node; the second graph is a graph which is mapped in a three-dimensional space coordinate system by a camera shooting range of the evidence image of the previous shooting time node;
the coordinates of all vertexes of the second graph are imported into a GIS map of the mobile terminal, so that the second graph is displayed in the GIS map; the outer frame of the second graph is displayed as a solid line frame of a fourth color;
calculating the overlapping area of the first graph and the second graph, and judging whether the overlapping area of the first graph and the second graph is smaller than a second preset percentage of the area of the first graph;
and if the overlapping area of the first graph and the second graph is smaller than the second preset percentage of the area of the first graph, confirming that the camera is not shooting repeatedly, controlling the camera to enter the shooting state, executing the step of controlling the camera to perform the shooting action, and taking the shot image as the evidence image.
7. The method of capturing an image of an evidence as claimed in claim 6, wherein after determining whether the overlapping area of the first graphic and the second graphic is smaller than a second predetermined percentage of the area of the first graphic, the method further comprises:
if the overlapping area of the first graph and the second graph is larger than or equal to a second preset percentage of the area of the first graph, the repeated shooting of the camera is confirmed, and the camera is controlled to enter a non-shooting state;
and displaying a second movement prompt identifier on the GIS map, wherein the second movement prompt identifier is used for prompting that the second graph is far away from the first graph and close to the pattern spot to be proved.
8. The method according to claim 7, wherein after controlling the camera to perform a photographing operation and taking the photographed image as the evidence image, the method further comprises:
selecting a tracking point in the first graph;
calculating coordinates of tracking points in the GIS map according to the camera parameters;
and mapping the coordinates of the tracking points in the GIS map to the shooting range of the camera, and displaying the coordinates in a shooting preview interface of the camera.
9. The method according to claim 8, wherein the outer frame of the first graphic is controlled to be changed from a solid frame of the first color to a broken frame of the fifth color when the camera is brought into the photographable state.
10. A shooting system for proving an image, comprising:
a camera;
a processing terminal, in communication with the camera, for executing the method for capturing an evidence image according to any one of claims 1-9;
the mobile terminal is in communication connection with the processing terminal;
and the server is in communication connection with the processing terminal.
CN202210248207.5A 2022-03-14 2022-03-14 Shooting method and system for evidence-holding image Active CN114650353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210248207.5A CN114650353B (en) 2022-03-14 2022-03-14 Shooting method and system for evidence-holding image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210248207.5A CN114650353B (en) 2022-03-14 2022-03-14 Shooting method and system for evidence-holding image

Publications (2)

Publication Number Publication Date
CN114650353A CN114650353A (en) 2022-06-21
CN114650353B (en) 2024-03-19

Family

ID=81993603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210248207.5A Active CN114650353B (en) 2022-03-14 2022-03-14 Shooting method and system for evidence-holding image

Country Status (1)

Country Link
CN (1) CN114650353B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593385B (en) * 2023-11-28 2024-04-19 广州赋安数字科技有限公司 Method for generating camera calibration data in auxiliary mode through image spots

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060018847A (en) * 2005-11-10 2006-03-02 미쓰비시덴키 가부시키가이샤 Picked-up image display method
CN111046121A (en) * 2019-12-05 2020-04-21 亿利生态大数据有限公司 Environment monitoring method, device and system
CN111275396A (en) * 2020-01-19 2020-06-12 东南大学 Novel method for collecting and changing pattern spot photos based on unmanned aerial vehicle
CN111479057A (en) * 2020-04-13 2020-07-31 杭州今奥信息科技股份有限公司 Intelligent pattern spot evidence-demonstrating method based on unmanned aerial vehicle
CN113485425A (en) * 2021-07-22 2021-10-08 北京中天博地科技有限公司 Method for automatic planning and flying of photographing path of unmanned aerial vehicle for national survey and certification

Also Published As

Publication number Publication date
CN114650353A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN110969097B (en) Method, equipment and storage device for controlling linkage tracking of monitoring target
CN109510948B (en) Exposure adjusting method, exposure adjusting device, computer equipment and storage medium
CN110070564B (en) Feature point matching method, device, equipment and storage medium
US11024052B2 (en) Stereo camera and height acquisition method thereof and height acquisition system
CN111461994A (en) Method for obtaining coordinate transformation matrix and positioning target in monitoring picture
CN110443853B (en) Calibration method and device based on binocular camera, terminal equipment and storage medium
WO2020215283A1 (en) Facial recognition method, processing chip and electronic device
CN114650353B (en) Shooting method and system for evidence-holding image
WO2021136386A1 (en) Data processing method, terminal, and server
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
CN108124102B (en) Image processing method, image processing apparatus, and computer-readable storage medium
US11523056B2 (en) Panoramic photographing method and device, camera and mobile terminal
CN111046725A (en) Spatial positioning method based on face recognition and point cloud fusion of surveillance video
WO2017094456A1 (en) Object inspection apparatus and inspection method
CN112991456A (en) Shooting positioning method and device, computer equipment and storage medium
EP4296947A1 (en) Calibration information determination method and apparatus, and electronic device
US20170004372A1 (en) Display control methods and apparatuses
CN112839165B (en) Method and device for realizing face tracking camera shooting, computer equipment and storage medium
JP2012093825A (en) Field management support device, field management support program, and field management support method
CN111432074A (en) Method for assisting mobile phone user in acquiring picture information
CN111279352B (en) Three-dimensional information acquisition system through pitching exercise and camera parameter calculation method
CN115514887A (en) Control method and device for video acquisition, computer equipment and storage medium
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111652338B (en) Method and device for identifying and positioning based on two-dimensional code
CN112640420B (en) Control method, device, equipment and system of electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: He Yusheng; Feng Chen; Wang Jun; Zuo Zhijie; Yang Jiangchuan
Inventor before: He Yusheng; Yang Jiangchuan; Zuo Zhijie; Feng Chen
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: The filming method and system of evidential images

Granted publication date: 20240319

Pledgee: Zhejiang Hangzhou Yuhang Rural Commercial Bank Co.,Ltd. Science and Technology City Branch

Pledgor: Hangzhou Jinao Information Technology Co.,Ltd.

Registration number: Y2024980017583