CN112418086B - Rule frame correction method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN112418086B (application CN202011318414.0A)
- Authority
- CN
- China
- Prior art keywords
- center point
- image
- position information
- determining
- reference object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/586 — Scenes; scene-specific elements; context exterior to a vehicle; recognition of parking space
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06T7/50 — Image analysis; depth or shape recovery
- G06T7/62 — Image analysis; geometric attributes of area, perimeter, diameter or volume
- G06T7/66 — Image analysis; geometric attributes of image moments or centre of gravity
- G06T7/80 — Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/97 — Image analysis; determining parameters from multiple pictures
Abstract
The invention discloses a rule frame correction method, a device, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image; acquiring an initial rule frame in a pre-stored second image, and a second yaw angle and position information of a second center point of the target reference object in the second image; determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference. This provides a technical solution for correcting the rule frame.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a rule frame correction method, a rule frame correction device, an electronic device, and a storage medium.
Background
In the field of video surveillance, intelligent video analysis has many applications. Besides target detection, target behavior is also analyzed, for example: line-crossing detection, regional intrusion detection, and detection of whether a vehicle is parked in a parking space. All of these behavior analyses involve drawing rule frames in the camera.
The position of the rule frame in the camera becomes inaccurate when the camera is offset, for example under the influence of windy weather or of road geology. For example, when detecting whether a vehicle is parked in a parking space, the initial rule frame coincides with the parking space frame in the monitored scene; but once the camera is offset, the rule frame no longer coincides with the parking space frame, and detecting whether a vehicle is parked in the space based on this rule frame is obviously inaccurate. That is, the position of the rule frame in the camera is inaccurate, and a technical solution for correcting the rule frame is needed.
Disclosure of Invention
The embodiment of the invention provides a rule frame correction method, a rule frame correction device, electronic equipment and a storage medium, which are used for providing a technical scheme for correcting a rule frame.
The embodiment of the invention provides a rule frame correction method, which comprises the following steps:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
Acquiring an initial rule frame in a pre-stored second image, and a second yaw angle and position information of a second center point of the target reference object in the second image;
Determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
Further, before the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference value, the method further includes:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value comprises:
Determining the distance from each first vertex of the initial rule frame to the second center point according to the position information of the second center point;
For each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
Further, before the depth change ratio is determined according to the first area information and the second area information, the method further includes:
Determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
The determining a depth change ratio from the first area information and the second area information includes:
And determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
Further, the determining the depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value includes:
Inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula to determine the depth change ratio;
wherein w'h' is the first area information, wh is the second area information, α'-α is the horizontal angle difference value, β'-β is the vertical angle difference value, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and q_s is the depth change ratio.
Further, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point includes:
Inputting the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point into a preset second formula A'(x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)) to determine the position information of the corrected second vertex corresponding to the first vertex;
wherein ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, δ is the yaw angle difference value, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal angle width compensation parameter, d is a preset vertical angle width compensation parameter, t_α is a preset horizontal angle scaling parameter, t_β is a preset vertical angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Further, before the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference value, the method further includes:
And if any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
Further, determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of the first center point, and the first area information of the target reference object in the first image includes:
Inputting the first image into a pre-trained reference object recognition model, and determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of a first center point and the first area information of a target reference object in the first image based on the reference object recognition model.
Further, the determining the target reference in the first image includes:
Inputting the first image into a pre-trained reference object recognition model, determining a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image based on the reference object recognition model, determining the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and selecting the reference object with the highest priority as the target reference object; wherein the smaller the third horizontal angle, the third vertical angle and the third yaw angle are, the closer the position information of the third center point is to the center point of the first image, and the larger the third area information is, the higher the priority of the reference object.
Further, the training process of the reference object recognition model comprises the following steps:
Aiming at each third image in the training set, inputting the third image and a labeling image corresponding to the third image into a reference object recognition model, and training the reference object recognition model; the marked image is marked with the position information and fourth area information of a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle and a fourth center point of each reference object in the third image.
Further, before the determining the corrected target rule frame corresponding to the initial rule frame, the method further includes:
Judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first center point and the second center point, and the difference value between the first area information and the second area information exceeds a preset alarm threshold; if so, outputting alarm prompt information, and if not, proceeding to the subsequent step of determining the corrected target rule frame corresponding to the initial rule frame;
After the corrected target rule frame corresponding to the initial rule frame is determined, the method further comprises:
Judging whether the corrected target rule frame has negative coordinate pixel points or not, and if so, outputting alarm prompt information.
In another aspect, an embodiment of the present invention provides a rule frame correction apparatus, including:
the first determining module is used for acquiring a first image of the monitoring area and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
The acquisition module is used for acquiring an initial rule frame in a pre-stored second image, and a second yaw angle and position information of a second center point of the target reference object in the second image;
A second determining module for determining a yaw angle difference based on the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
Further, the first determining module is further configured to determine first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The second determining module is specifically configured to determine, according to the position information of the second center point, a distance from each first vertex of the initial rule frame to the second center point; for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point; and taking the rule frame formed by each second vertex as a corrected target rule frame.
Further, the apparatus further comprises:
A third determining module, configured to determine a first horizontal angle and a first vertical angle of a target reference object in the first image, obtain a second horizontal angle and a second vertical angle of the target reference object in the second image, and determine a horizontal angle difference value and a vertical angle difference value;
The first determining module is specifically configured to determine a depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value.
Further, the first determining module is specifically configured to input the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula to determine the depth change ratio; wherein w'h' is the first area information, wh is the second area information, α'-α is the horizontal angle difference value, β'-β is the vertical angle difference value, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and q_s is the depth change ratio.
Further, the second determining module is specifically configured to input the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point into a preset second formula A'(x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)) to determine the position information of the corrected second vertex corresponding to the first vertex;
wherein ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, δ is the yaw angle difference value, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal angle width compensation parameter, d is a preset vertical angle width compensation parameter, t_α is a preset horizontal angle scaling parameter, t_β is a preset vertical angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Further, the apparatus further comprises:
And the first judging module is used for determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value if any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point and the depth change ratio is larger than a preset threshold value.
Further, the first determining module is specifically configured to input the first image into a pre-trained reference object recognition model, and determine, based on the reference object recognition model, a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first center point, and first area information of a target reference object in the first image.
Further, the apparatus further comprises: a fourth determining module, configured to input the first image into a pre-trained reference object recognition model, determine, based on the reference object recognition model, a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point, and third area information of each reference object in the first image, determine, according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point, and the third area information of each reference object, determine a priority of each reference object, and select a reference object with a highest priority as a target reference object; wherein the smaller the third horizontal angle, the third vertical angle, and the third yaw angle, the closer the position information of the third center point is to the center point of the first image, and the higher the priority of the reference object with the larger third area information is.
Further, the apparatus further comprises: the training module is used for inputting the third image and the labeling image corresponding to the third image into a reference object recognition model for training the reference object recognition model aiming at each third image in the training set; the marked image is marked with the position information and fourth area information of a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle and a fourth center point of each reference object in the third image.
Further, the apparatus further comprises:
the second judging module is used for judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first center point and the second center point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if not, triggering the second determining module, and if so, outputting alarm prompt information;
The second judging module is further used for judging whether the corrected target rule frame has negative coordinate pixel points or not, and if so, outputting alarm prompt information.
In yet another aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
A memory for storing a computer program;
A processor for implementing any of the method steps described above when executing a program stored on a memory.
In yet another aspect, embodiments of the present invention provide a computer-readable storage medium having a computer program stored therein, which when executed by a processor, implements the method steps of any of the above.
The embodiment of the invention provides a rule frame correction method, a device, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image; acquiring an initial rule frame in a pre-stored second image, and a second yaw angle and position information of a second center point of the target reference object in the second image; determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference.
The technical scheme has the following advantages or beneficial effects:
In the embodiment of the invention, the movement condition of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the pre-stored initial rule frame in the second image and the second yaw angle and the position information of the second center point of the target reference object in the second image, and the initial rule frame is corrected based on the movement condition to obtain the target rule frame, so that the technical scheme for correcting the rule frame is provided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a rule frame correction process according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating installation of a monitoring camera according to an embodiment of the present invention;
Fig. 3 is another installation schematic diagram of a monitoring camera according to an embodiment of the present invention;
fig. 4 is a schematic view of a scenario provided in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a coordinate system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of identifying fixed reference object angle information according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a normal monitoring screen according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a monitor screen after being deflected by an external influence according to an embodiment of the present invention;
FIG. 9 is a schematic diagram showing a comparison of monitor images before and after being deflected by external influence according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a point coordinate model extracted from a contrast monitor screen according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another rule frame correction process according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a rule frame correction device according to an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the attached drawings, wherein it is apparent that the embodiments described are only some, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic diagram of a rule frame correction process according to an embodiment of the present invention, where the process includes the following steps:
S101: a first image of a monitored area is acquired and position information of a first yaw angle and a first center point of a target reference in the first image is determined.
The rule frame correction method provided by the embodiment of the invention is applied to electronic equipment, and the electronic equipment can be a camera arranged in a monitored scene, or can be equipment such as a PC (personal computer), a tablet personal computer and the like. If the electronic device is a camera arranged in a monitored scene, a process of determining a first yaw angle of a target reference object in a first image and position information of a first center point is performed after the camera acquires the first image of the monitored area. If the electronic device is a PC, a tablet computer or the like, after the camera arranged in the monitoring scene collects the first image of the monitoring area, the first image of the monitoring area may be sent to the electronic device, and then the electronic device performs a process of determining the first yaw angle of the target reference object in the first image and the position information of the first center point. The target reference object in the embodiment of the invention is a fixed object in a scene.
In the embodiment of the invention, the image of the monitoring area acquired by the electronic equipment is called a first image, and after the electronic equipment acquires the first image, a target reference object in the first image can be determined through a target recognition algorithm, so that the first yaw angle of the target reference object and the position information of a first center point are determined.
In order to make the first yaw angle and the first center point of the target reference object in the first image more accurate, in the embodiment of the invention, a pre-trained reference object recognition model can be stored in the electronic device, and when the reference object recognition model is trained, each sample image in the training set has a corresponding labeling image, and the labeling image is labeled with the target reference object in the sample image and the yaw angle and the center point of the target reference object. After the reference object recognition model is trained, the first image is input into the reference object recognition model, and the reference object recognition model can output the first yaw angle and the position information of the first center point of the target reference object in the first image.
S102: and acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image.
The electronic device stores a second image in advance, where the second image is an image acquired when the camera is determined not to be offset. Each image comprises an image layer and a graphic layer; the image layer displays the image content, and the graphic layer displays the rule frame. In the embodiment of the invention, the rule frame in the second image is called the initial rule frame. The initial rule frame completely coincides with the region to be detected in the image layer of the second image. For example, if the region to be detected is a parking space, the initial rule frame of the graphic layer in the second image completely coincides with the parking space frame of the image layer. The electronic device may obtain the initial rule frame in the pre-saved second image, and may acquire the second yaw angle and the position information of the second center point of the target reference object in the second image. To acquire these, the second image may be input into the pre-trained reference object recognition model, which outputs the second yaw angle and the position information of the second center point of the target reference object in the second image.
S103: determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
According to the embodiment of the invention, the offset distances of the target reference object in the horizontal and vertical directions under the image coordinate system can be determined from the position information of the first center point and the position information of the second center point. These offset distances can be taken as the offset distances of the initial rule frame in the horizontal and vertical directions, and the initial rule frame is moved accordingly. If the camera is offset, the yaw angle of the target reference object generally changes as well; therefore, in the embodiment of the invention, the yaw angle difference is determined, and the moved initial rule frame is rotated by the yaw angle difference to obtain the final target rule frame.
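As an illustration, a minimal Python sketch of this shift-then-rotate step is given below. The vertex list layout, the coordinate convention and the rotation sign are assumptions made for the sketch, not details taken from the patent.

```python
import math

def shift_and_rotate_rule_frame(vertices, c1, c2, delta):
    """A minimal sketch: move the initial rule frame by the reference
    object's center-point offset, then rotate it about the new center
    by the yaw angle difference delta (radians). vertices is a list of
    (x, y) tuples; c1/c2 are the first/second center points."""
    dx, dy = c1[0] - c2[0], c1[1] - c2[1]  # offset of the target reference object
    corrected = []
    for x, y in vertices:
        tx, ty = x + dx, y + dy            # translate with the reference object
        # rotate about the new center c1 by delta (assumed counter-clockwise)
        rx = c1[0] + (tx - c1[0]) * math.cos(delta) - (ty - c1[1]) * math.sin(delta)
        ry = c1[1] + (tx - c1[0]) * math.sin(delta) + (ty - c1[1]) * math.cos(delta)
        corrected.append((rx, ry))
    return corrected
```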
In the embodiment of the invention, the movement condition of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the pre-stored initial rule frame in the second image and the second yaw angle and the position information of the second center point of the target reference object in the second image, and the initial rule frame is corrected based on the movement condition to obtain the target rule frame, so that the technical scheme for correcting the rule frame is provided.
In order to make the determination of the corrected target rule frame more accurate, in this embodiment of the present invention, before the determination of the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point, and the yaw angle difference value, the method further includes:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value comprises:
Determining the distance from each first vertex of the initial rule frame to the second center point according to the position information of the second center point;
For each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
The depth information of the rule frame may also change due to a change in the position or angle of the camera. To make the determined target rule frame more accurate, in the embodiment of the invention, after the electronic device acquires the first image, it identifies the target reference object in the first image and determines the first area information of the target reference object. For the pre-stored second image, the electronic device also stores the second area information of the target reference object in the second image. The electronic device acquires this second area information and determines the depth change ratio from the first area information and the second area information. Specifically, the ratio of the first area information to the second area information can be calculated, and the square root of this ratio taken to obtain the depth change ratio.
In order to make the depth change ratio determined more accurate and thus the determined target rule frame more accurate, in an embodiment of the present invention, before determining the depth change ratio according to the first area information and the second area information, the method further includes:
Determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
The determining a depth change ratio from the first area information and the second area information includes:
And determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
In the embodiment of the invention, after the electronic device acquires the first image, the first horizontal angle and the first vertical angle of the target reference object in the first image can be determined. For the pre-stored second image, the electronic device also stores the second horizontal angle and the second vertical angle of the target reference object in the second image. The electronic device acquires the second horizontal angle and the second vertical angle, and determines the horizontal angle difference value and the vertical angle difference value from the first horizontal angle and the first vertical angle of the target reference object in the first image and the second horizontal angle and the second vertical angle of the target reference object in the second image.
In order to make the determination of the first horizontal angle and the first vertical angle of the target reference object in the first image more accurate, in the embodiment of the present invention, a pre-trained reference object recognition model may be stored in the electronic device, where each sample image in the training set has a corresponding labeling image in which the target reference object in the sample image, and the yaw angle, the position information of the center point, the horizontal angle, and the vertical angle of the target reference object are labeled during training of the reference object recognition model. After the reference object recognition model is trained, the first image is input into the reference object recognition model, and the reference object recognition model can output a first yaw angle, position information of a first center point, a first horizontal angle and a first vertical angle of a target reference object in the first image.
In addition, when the electronic device stores the second horizontal angle and the second vertical angle of the target reference object in the second image in advance, the second image may be input into the reference object recognition model, and the reference object recognition model may output the second yaw angle, the position information of the second center point, the second horizontal angle and the second vertical angle of the target reference object in the second image.
The electronic equipment determines the depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value. Specifically, the determining the depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value includes:
Inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula to determine the depth change ratio;
wherein w'h' is the first area information, wh is the second area information, α'-α is the horizontal angle difference value, β'-β is the vertical angle difference value, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and q_s is the depth change ratio.
Here a is a preset horizontal angle depth compensation parameter; it is an empirical value, generally in the range of 0 to 0.5. Likewise, b is a preset vertical angle depth compensation parameter and an empirical value, typically in the range of 0 to 0.5.
In the embodiment of the invention, after the electronic device determines the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value, it substitutes this information into the preset first formula to obtain the depth change ratio. Because angle compensation is taken into account when determining the depth change ratio, the determined depth change ratio is more accurate.
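The first formula itself appears only as an image in the original publication, so the sketch below assumes a plausible form consistent with the surrounding text: the square root of the area ratio as the base depth change, plus angle-compensation terms weighted by a and b. The exact combination used in the patent may differ.

```python
import math

def depth_change_ratio(area1, area2, d_alpha, d_beta, a=0.25, b=0.25):
    """Assumed form of the first formula: area1 = w'h' (first image),
    area2 = wh (second image); d_alpha = alpha' - alpha and
    d_beta = beta' - beta are the horizontal/vertical angle differences;
    a and b are depth compensation parameters (empirically 0 to 0.5)."""
    base = math.sqrt(area1 / area2)          # square root of the area ratio
    return base + a * d_alpha + b * d_beta   # assumed additive angle compensation
```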
In order to make the determination of the corrected target rule frame more accurate, in the embodiment of the present invention, the determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point includes:
Inputting the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point into a preset second formula A'(x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)) to determine the position information of the corrected second vertex corresponding to the first vertex;
wherein ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, δ is the yaw angle difference value, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal angle width compensation parameter, d is a preset vertical angle width compensation parameter, t_α is a preset horizontal angle scaling parameter, t_β is a preset vertical angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Here c (horizontal angle width compensation), d (vertical angle width compensation), t_α (horizontal angle scaling) and t_β (vertical angle scaling) are preset parameters, each taking a value in the range of 0 to 1.
In the embodiment of the present invention, after determining the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point, the electronic device inputs this information into the preset second formula to determine the position information of the corrected second vertex corresponding to each first vertex. The rule frame formed by the second vertices is taken as the corrected target rule frame.
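For reference, here is a direct transcription of the second formula into Python; the default parameter values are placeholders rather than values given in the patent.

```python
import math

def corrected_vertex(rho, theta, q_s, delta, x1, y1,
                     c=0.0, d=0.0, t_alpha=1.0, t_beta=1.0):
    """Second formula: returns A', the corrected second vertex for one
    first vertex. rho/theta are the distance and angle of the first
    vertex relative to the second center point; delta is the yaw angle
    difference; (x1, y1) is the first center point."""
    x = x1 + rho * q_s * (c + t_alpha) * math.cos(theta + delta)
    y = y1 - rho * q_s * (d + t_beta) * math.sin(theta + delta)
    return (x, y)
```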
In order to reduce the resource consumption of the correction rule frame, in the embodiment of the invention, whether the correction of the rule frame is needed or not can be judged first, if yes, the correction is carried out, otherwise, the correction is not carried out. Specifically, in the embodiment of the present invention, before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point, and the yaw angle difference value, the method further includes:
And if any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
In the embodiment of the invention, after the electronic device determines the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point, and the depth change ratio, it judges whether any one of them is larger than its preset threshold. If so, the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference value. The preset thresholds used for comparison differ between quantities; for example, the threshold corresponding to the yaw angle difference value differs from the threshold corresponding to the depth change ratio. The thresholds corresponding to the yaw angle difference value, the horizontal angle difference value and the vertical angle difference value may be the same or different.
Thus, in the embodiment of the invention, the corrected target rule frame corresponding to the initial rule frame is determined only when any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point, and the depth change ratio is larger than its preset threshold; otherwise, no correction is performed. Judging first whether the rule frame needs to be corrected, and skipping correction when it does not, reduces the resource consumption of correcting the rule frame.
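A minimal sketch of this trigger check follows; the quantity names and the treatment of the depth change ratio (deviation from 1, i.e. no change) are illustrative assumptions.

```python
def needs_correction(d_yaw, d_h, d_v, center_dist, q_s, thresholds):
    """Return True when any monitored quantity exceeds its own preset
    threshold. thresholds maps each quantity name to its threshold."""
    checks = {
        'yaw': abs(d_yaw),
        'horizontal': abs(d_h),
        'vertical': abs(d_v),
        'center_distance': center_dist,
        'depth_ratio': abs(q_s - 1.0),  # assumed: compare deviation from 1
    }
    return any(value > thresholds[name] for name, value in checks.items())
```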
In an embodiment of the present invention, determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of the first center point, and the first area information of the target reference object in the first image includes:
Inputting the first image into a pre-trained reference object recognition model, and determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of a first center point and the first area information of a target reference object in the first image based on the reference object recognition model.
In order to ensure that the calibration of the rule frame can be performed according to the target reference object and that the accurate calibration can be performed based on the target reference object, in the embodiment of the present invention, the determining the target reference object in the first image includes:
Inputting the first image into a pre-trained reference object recognition model, determining a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image based on the reference object recognition model, determining the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and selecting the reference object with the highest priority as the target reference object; wherein the smaller the third horizontal angle, the third vertical angle and the third yaw angle are, the closer the position information of the third center point is to the center point of the first image, and the larger the third area information is, the higher the priority of the reference object.
Based on the reference object recognition model, the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object in the first image may be determined, and the target reference object is then selected based on this information. The selection strategy is: the smaller the third horizontal angle, the third vertical angle and the third yaw angle are, the closer the third center point is to the center point of the first image, and the larger the third area information is, the higher the priority of the reference object; the reference object with the highest priority is selected as the target reference object.
The scheme for determining the target reference object provided by the embodiment of the invention ensures that the determined target reference object is more accurate, and further ensures that the correction of the rule frame based on the target reference object is more accurate.
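One way to realize such a priority rule is sketched below; the scoring weights and normalization are illustrative only and are not specified in the patent.

```python
def select_target_reference(references, image_center):
    """references: list of dicts with keys 'h_angle', 'v_angle', 'yaw',
    'center' (an (x, y) tuple) and 'area', as output by the reference
    object recognition model. Smaller angles, a center nearer the image
    center and a larger area all raise the priority score."""
    def score(ref):
        dx = ref['center'][0] - image_center[0]
        dy = ref['center'][1] - image_center[1]
        center_dist = (dx * dx + dy * dy) ** 0.5
        angle_sum = abs(ref['h_angle']) + abs(ref['v_angle']) + abs(ref['yaw'])
        return ref['area'] - angle_sum - center_dist  # illustrative weighting
    return max(references, key=score)
```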
The training process of the reference object recognition model comprises the following steps:
Aiming at each third image in the training set, inputting the third image and a labeling image corresponding to the third image into a reference object recognition model, and training the reference object recognition model; the marked image is marked with the position information and fourth area information of a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle and a fourth center point of each reference object in the third image.
The electronic device stores a training set; the images in the training set are third images, and each third image has a corresponding annotation image. The annotation image is labeled with the fourth horizontal angle, the fourth vertical angle, the fourth yaw angle, the position information of the fourth center point and the fourth area information of each reference object in the corresponding third image. Each pair of third image and annotation image is input into the reference object recognition model to complete the training of the reference object recognition model.
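A minimal sketch of this training loop is shown below, assuming a generic supervised model with a hypothetical fit_step update method; neither the model architecture nor the loss function is specified in the text.

```python
def train_reference_model(model, training_set, epochs=10):
    """training_set: iterable of (third_image, labels) pairs, where
    labels carry each reference object's fourth horizontal/vertical/yaw
    angles, center point position and area from the annotation image.
    model.fit_step is a hypothetical single-batch update."""
    for _ in range(epochs):
        for third_image, labels in training_set:
            model.fit_step(third_image, labels)  # hypothetical update call
    return model
```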
If the position or angle of the camera is greatly changed, the electronic device may not be able to complete the correction of the rule frame, and at this time, only the position or angle of the camera can be manually corrected, and then the rule frame is corrected. Therefore, in an embodiment of the present invention, before the determining the corrected target rule frame corresponding to the initial rule frame, the method further includes:
Judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first center point and the second center point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if so, outputting alarm prompt information, and if not, carrying out the step of subsequently determining a corrected target rule frame corresponding to the initial rule frame;
After the corrected target rule frame corresponding to the initial rule frame is determined, the method further comprises:
Judging whether the corrected target rule frame has negative coordinate pixel points or not, and if so, outputting alarm prompt information.
In the embodiment of the invention, if any one of the horizontal angle difference, the vertical angle difference, the yaw angle difference, the distance between the first center point and the second center point, and the difference between the first area information and the second area information exceeds the preset alarm threshold, the position or angle of the camera has changed greatly and exceeds the automatic correction capability of the electronic device, so alarm prompt information is output to prompt relevant personnel to correct the position or angle of the camera. Alternatively, after the corrected target rule frame corresponding to the initial rule frame is determined, if it is judged that the corrected target rule frame contains pixel points with negative coordinates, the correction by the electronic device is erroneous, and alarm prompt information is likewise output to prompt relevant personnel to correct the position or angle of the camera.
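A minimal sketch of these two alarm checks, assuming the difference values have already been computed; the threshold keys and function names are hypothetical.

```python
def exceeds_alarm_threshold(d_alpha, d_beta, d_delta, center_dist, d_area,
                            thresholds):
    """Return True if any change exceeds its preset alarm threshold, i.e. the
    camera has moved beyond the range of automatic correction."""
    changes = {"alpha": abs(d_alpha), "beta": abs(d_beta), "delta": abs(d_delta),
               "shift": center_dist, "area": abs(d_area)}
    return any(changes[key] > thresholds[key] for key in changes)

def has_negative_vertex(rule_frame):
    """Post-correction sanity check: a vertex with a negative coordinate means
    the correction is distorted and an alarm should be output."""
    return any(x < 0 or y < 0 for x, y in rule_frame)
```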
The rule frame correction process provided by the embodiment of the invention is described in detail below with reference to the accompanying drawings.
Application scenarios of the embodiment of the invention include the following. In southern coastal areas, strong wind or typhoon weather often causes erected monitoring cameras to move and deviate from the preset monitoring area to different degrees. Some sites are affected by geological factors, such as construction work, muck trucks rolling the road surface, or long-term rainfall, which cause a loose foundation to settle or vibrate, so that the vertical rod or cantilever rod shifts or shakes and the monitoring area of the camera fixed on it changes. At other sites the conditions are not ideal: no standard, firm cantilever rod is available, and the camera can only be fixed on a nonstandard rod body that is highly elastic or slender and easily affected by the external environment. In addition, the camera may be disturbed intentionally or unintentionally, for example by impact or by the weight of a tethered line. These abnormal environments have two effects on the camera monitoring picture: jitter and shift.
Fig. 2 is a schematic view of a typical monitoring camera; as shown in fig. 2, the monitoring camera is mounted on a cantilever rod through a gimbal anchor. Standard construction requirements are generally as follows: the height from the ground to the center line of the cantilever rod is 6.0 m; all components are made of A3 steel; the rod is integrally formed; T42 welding rods are used; all steel components receive hot-dip galvanizing anti-corrosion treatment, with fastener surfaces coated at 350 g/m² and the rest at 600 g/m²; C20 concrete is used for the foundation, whose specification is not lower than 1500 x 2200 (mm), the specific specification being determined by the situation of the crossing; all welds must be full and firm, with no false welds, and have a neat appearance; the center line of the arc-shaped rod is kept horizontal with respect to the transverse direction of the road; each cantilever arm is provided with a 2.5 m hot-dip galvanized grounding rod connected to the rod by a 16 mm² bare copper wire, with a grounding resistance below 10 Ω. A monitoring camera may, however, also be installed on a street lamp, a telegraph pole, the outer wall of a building and the like, which do not meet the above standard construction requirements, as shown in fig. 3.
Fig. 4 is a schematic view of a scenario provided by an embodiment of the present invention. The embodiment of the invention provides an automatic correction method based on a fixed reference object, so that a rule area that has been offset within a certain range is corrected back, and an event is reported as required to remind the upper layer (user) to take corresponding measures. The scheme proceeds as follows.
The following takes a region rule frame as an example, with a parking space as the exemplary scene; a rule line is simpler than a rule frame, and the scene is not limited to parking spaces. In addition, the rule frame is shown as a quadrilateral but may in practice be any polygon.
Description of relevant parameters:
Coordinate system: both a rectangular coordinate system and a polar coordinate system are used. The rectangular coordinate system in the image processing field differs slightly from that in the mathematical field: in image processing the y-axis points vertically downward, as shown in fig. 5.
Fixed reference: a reference object recognition model is trained in advance through AI to recognize certain fixed references, such as a fixed garbage can (whose base is fixed and immovable) or a box body (an advertising light box); if no such object exists on site, another fixed identification accessory can be installed. For a fixed reference the model can recognize the angle information (horizontal angle α, vertical angle β and yaw angle δ, as shown in fig. 6) as well as the center point position O and the area S (w×h). The center point position and the yaw angle reflect changes of the plane position (of the monitoring picture); the area, horizontal angle and vertical angle reflect changes of the depth position. Depth changes affect picture scaling.
tα: horizontal angle scaling parameter, approximately |cos(α)|; it can be tuned in practice together with the lens parameters; range 0 to 1;
tβ: vertical angle scaling parameter, approximately |cos(β)|; it can be tuned in practice together with the lens parameters; range 0 to 1;
a: horizontal angle depth-compensation tuning parameter, range 0 to 0.5;
b: vertical angle depth-compensation tuning parameter, range 0 to 0.5;
c: horizontal angle width-compensation tuning parameter, range 0 to 0.5;
d: vertical angle width-compensation tuning parameter, range 0 to 0.5.
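These tunable parameters can be grouped into one calibration record, as in the sketch below; the default values are merely illustrative mid-range choices, and all names are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class CorrectionParams:
    """Scaling and compensation parameters of the correction formulas."""
    a: float = 0.25  # horizontal angle depth compensation, 0..0.5
    b: float = 0.25  # vertical angle depth compensation, 0..0.5
    c: float = 0.25  # horizontal angle width compensation, 0..0.5
    d: float = 0.25  # vertical angle width compensation, 0..0.5

    @staticmethod
    def t_alpha(alpha_rad):
        """Horizontal scaling parameter, approximated as |cos(alpha)|, 0..1."""
        return abs(math.cos(alpha_rad))

    @staticmethod
    def t_beta(beta_rad):
        """Vertical scaling parameter, approximated as |cos(beta)|, 0..1."""
        return abs(math.cos(beta_rad))
```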
Fig. 7 is a schematic view of a normal monitoring picture; fig. 7 includes a rule frame with a parking space line inside it, and the fixed reference is a garbage can or a parking sign. Fig. 8 is a schematic view of the monitoring picture after it has been shifted by external influences, together with the rule frame corrected according to the fixed reference. Fig. 9 is a schematic comparison of the monitoring pictures before and after the shift caused by the external influence. Fig. 10 is a schematic diagram of the point coordinate model extracted from the compared monitoring pictures. The rule frame correction scheme provided by the embodiment of the present invention is described below with reference to fig. 10.
∠xOA = θ. Because A″ is the offset position of the reference before rotation, OA is parallel to O′A″, so ∠x′O′A″ = ∠xOA = θ. A′ is the rotated position; the reference rotation angle is ∠x′O′x″ = ∠y′O′y″ = δ (the rotation angle difference δ = δ1 − δ0 between the angle δ0 before and δ1 after the change), and ∠A″O′A′ = δ. Because the quadrilateral ABCD is an application drawn on the graphics layer, it does not change with changes of the monitoring picture in the video layer. The original reference coordinates are O(x0, y0), the changed reference coordinates are O′(x1, y1), the original rule point is A(xa, ya), and the corrected rule point is A′(xa1, ya1). In the absence of a change in depth, ρ = |OA| = sqrt((xa − x0)² + (ya − y0)²); the polar coordinate of A with respect to O is A(ρ, θ), the polar coordinate of A′ with respect to O′ is A′(ρ, θ+δ), and the mathematical coordinates of A′ with respect to O′ are A′(ρ*cos(θ+δ), ρ*sin(θ+δ)). As shown in fig. 6, consider the depth effect: the horizontal angle affects the x-axis scaling, the vertical angle affects the y-axis scaling, and the z-axis corresponds directly to the depth of field; many technical means such as laser ranging, infrared ranging and scaling already exist for measuring the z-axis depth of field. The depth change can be estimated approximately from the area, or reduction compensation can be performed relative to the initial angles before the change based on the horizontal and vertical angles (the original reference is not necessarily facing the monitoring lens, so initial angles are also available). The area approximation estimates the depth change ratio qs = sqrt(S′/S); with angle compensation relative to the initial angles, the depth change ratio can be taken as qs = sqrt(S′/S)*(a+|cos(α′−α)|)*(b+|cos(β′−β)|). The mathematical coordinates of A′ with respect to O′ after considering the depth effect are A′(ρ*qs*(c+tα)*cos(θ+δ), ρ*qs*(d+tβ)*sin(θ+δ)). Considering that the mathematical rectangular coordinate system and the image rectangular coordinate system differ in the y-axis direction, the final image coordinates of A′ relative to the picture origin O0 are A′(x1+ρ*qs*(c+tα)*cos(θ+δ), y1−ρ*qs*(d+tβ)*sin(θ+δ)). The other point coordinates are estimated in the same way and are not repeated here.
In summary: corrected point coordinates A′(x″, y″); original reference coordinates O(x0, y0); changed reference coordinates O′(x1, y1); original rule point A(xa, ya); 0 ≤ a ≤ 0.5, 0 ≤ b ≤ 0.5, 0 ≤ c ≤ 0.5, 0 ≤ d ≤ 0.5; tα = |cos(α)|, tβ = |cos(β)|; x″ = x1 + ρ*qs*(c+tα)*cos(θ+δ), y″ = y1 − ρ*qs*(d+tβ)*sin(θ+δ).
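A minimal sketch of this per-vertex mapping, assuming the angles are given in radians, that the image coordinate system has the y-axis pointing down, and that the current angles are used for the scaling terms; the function name and signature are assumptions.

```python
import math

def correct_vertex(ax, ay, x0, y0, x1, y1, delta, q_s, c, d, alpha, beta):
    """Map an original rule point A(ax, ay) to its corrected position A''.

    (x0, y0): original reference center O; (x1, y1): moved center O';
    delta: yaw angle difference delta1 - delta0; q_s: depth change ratio;
    c, d: width compensation parameters; alpha, beta: horizontal/vertical angle."""
    rho = math.hypot(ax - x0, ay - y0)     # rho = |OA|
    theta = math.atan2(y0 - ay, ax - x0)   # flip y to convert image to math coords
    t_alpha = abs(math.cos(alpha))         # horizontal scaling parameter
    t_beta = abs(math.cos(beta))           # vertical scaling parameter
    x_new = x1 + rho * q_s * (c + t_alpha) * math.cos(theta + delta)
    y_new = y1 - rho * q_s * (d + t_beta) * math.sin(theta + delta)
    return x_new, y_new
```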
The method comprises the following processing flows:
Setting offset alarm thresholds: reference displacement ΔS, angle changes Δα, Δβ, Δδ, area change Δq.
While the camera monitoring picture is normal, regional rule lines and frames are drawn on the monitoring picture, and the rules are issued.
Recording the coordinate positions of all points in the rules; when the rules are issued for algorithm analysis, the current video frame gives the original reference object center point coordinates O(x0, y0), width and height (w, h) and angle values (α, β, δ0).
The algorithm analyzes the references in each frame (frames can be extracted as required) to obtain the real-time center point coordinates O'(x1, y1), width and height (w', h') and angle values (α', β', δ1).
Multiple-reference preference: if a plurality of reference objects are detected, they are ranked according to the principles that the angles are most upright (each angle closest to 0), the width and height are largest, and the center point is closest to the picture center. The first reference in the ranking is used as the reference value.
Reference failure judgment: if there are multiple references, the parameter values of the references in the preferred sequence are compared (the parameter value may be, for example, the reference area). A reference whose parameter value changes beyond a certain threshold (e.g. fluctuates by more than 3%) is considered damaged; it is moved to the end of the preferred queue and its reference value is updated.
Alarm judgment: the displacement, angle changes and area change of the reference object are calculated. When any value exceeds the preset alarm threshold, the change is too large to correct automatically (the reference object may be damaged), so an alarm is raised directly and staff are dispatched to repair on site. Otherwise, the corrected rule point coordinates are calculated according to the derived formula; if any corrected rule point coordinate is negative, the result is considered distorted and an alarm is raised; otherwise the rule frame is redrawn, a jitter/shift alarm is reported, and the upper layer decides whether manual correction is needed according to actual requirements.
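Combining the sketches above into one per-frame iteration (select_target_reference, exceeds_alarm_threshold, has_negative_vertex and correct_vertex are the hypothetical helpers introduced earlier; the `baseline` record holding the values saved when the rules were issued is likewise an assumption):

```python
import math

def process_frame(frame_refs, baseline, rule_frame, thresholds, params):
    """One iteration of the flow: pick the target reference, check the alarm
    thresholds, then either correct the rule frame or report an alarm."""
    if not frame_refs:
        return rule_frame, "no reference detected"

    ref = select_target_reference(frame_refs, baseline.img_w, baseline.img_h)

    d_alpha = ref.alpha - baseline.alpha        # horizontal angle change
    d_beta = ref.beta - baseline.beta           # vertical angle change
    d_delta = ref.delta - baseline.delta        # yaw angle change
    shift = math.hypot(ref.cx - baseline.cx, ref.cy - baseline.cy)
    q_s = math.sqrt(ref.area / baseline.area)   # area-based depth change ratio

    if exceeds_alarm_threshold(d_alpha, d_beta, d_delta, shift, q_s - 1.0,
                               thresholds):
        return rule_frame, "alarm: beyond automatic correction, dispatch repair"

    corrected = [correct_vertex(ax, ay, baseline.cx, baseline.cy,
                                ref.cx, ref.cy, math.radians(d_delta), q_s,
                                params.c, params.d,
                                math.radians(ref.alpha), math.radians(ref.beta))
                 for ax, ay in rule_frame]
    if has_negative_vertex(corrected):
        return rule_frame, "alarm: distorted correction"
    return corrected, "corrected, report jitter/shift event"
```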
FIG. 11 is a schematic diagram of the rule frame correction process provided by an embodiment of the present invention. In this process: an offset correction threshold and an alarm threshold are set; rule area lines and frames are drawn according to service requirements; the service configuration is issued; the coordinates of each point of the original rule area lines and frames are recorded; a frame of image is taken, and the center point coordinates, width, height, angles and so on of the target reference object are determined; multiple-reference preference and reference failure judgment are performed to determine the target reference object; when the change parameters of the target reference object exceed the offset correction threshold but not the alarm threshold, the rule frame is corrected according to the target reference object; and when the alarm threshold is exceeded, or a pixel point with a negative coordinate exists after the rule frame correction is completed, alarm prompt information is output.
Fig. 12 is a schematic structural diagram of a rule frame correction device according to an embodiment of the present invention, where the device includes:
A first determining module 121, configured to acquire a first image of a monitored area, and determine a first yaw angle of a target reference object in the first image and position information of a first center point;
An obtaining module 122, configured to obtain an initial regular frame in a second image that is stored in advance, and position information of a second yaw angle and a second center point of the target reference object in the second image;
A second determining module 123 for determining a yaw angle difference based on the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
The first determining module 121 is further configured to determine first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The second determining module 123 is specifically configured to determine, according to the location information of the second center point, a distance between each first vertex of the initial rule frame and the second center point; for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point; and taking the rule frame formed by each second vertex as a corrected target rule frame.
The apparatus further comprises:
A third determining module 124, configured to determine a first horizontal angle and a first vertical angle of the target reference object in the first image, obtain a second horizontal angle and a second vertical angle of the target reference object in the second image, and determine a horizontal angle difference value and a vertical angle difference value;
The first determining module 121 is specifically configured to determine a depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value.
The first determining module 121 is specifically configured to input the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula qs = sqrt((w'*h')/(w*h))*(a+|cos(α'−α)|)*(b+|cos(β'−β)|) to determine the depth change ratio; wherein w'*h' is the first area information, w*h is the second area information, α'−α is the horizontal angle difference value, β'−β is the vertical angle difference value, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and qs is the depth change ratio.
The second determining module 123 is specifically configured to input a distance between the first vertex and the second center point, an included angle formed by the first vertex, the second center point, and a horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value, and position information of the first center point into a preset second formula A'(x1+ρ*qs*(c+tα)*cos(θ+δ),y1-ρ*qs*(d+tβ)*sin(θ+δ)), to determine position information of a corrected second vertex corresponding to the first vertex;
Wherein ρ is the distance between the first vertex and the second center point, q s is the depth variation ratio, θ is the angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x 1 is the horizontal coordinate of the first center point, y 1 is the vertical coordinate of the first center point, c is the preset horizontal angle width compensation parameter, d is the preset vertical angle width compensation parameter, t α is the preset horizontal angle scaling parameter, t β is the preset vertical angle scaling parameter, and a' is the corrected position information of the second vertex corresponding to the first vertex.
The apparatus further comprises:
And a first judging module 125, configured to determine, if any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first center point and the position information of the second center point, and the depth change ratio is greater than a preset threshold, a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point, and the yaw angle difference.
The first determining module 121 is specifically configured to input the first image into a pre-trained reference object recognition model, and determine, based on the reference object recognition model, a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first center point, and first area information of a target reference object in the first image.
The apparatus further comprises: a fourth determining module 126, configured to input the first image into a pre-trained reference object recognition model, determine, based on the reference object recognition model, a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point, and third area information of each reference object in the first image, determine a priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and select the reference object with the highest priority as the target reference object; wherein a reference object has a higher priority when its third horizontal angle, third vertical angle and third yaw angle are smaller, when the position information of its third center point is closer to the center point of the first image, and when its third area information is larger.
The apparatus further comprises: a training module 127, configured to input, for each third image in the training set, the third image and an annotation image corresponding to the third image into the reference object recognition model, and train the reference object recognition model; the annotation image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth center point and fourth area information of each reference object in the third image.
The apparatus further comprises:
A second judging module 128, configured to judge whether any one of the horizontal angle difference, the vertical angle difference, the yaw angle difference, the distance between the first center point and the second center point, and the difference between the first area information and the second area information exceeds a preset alarm threshold, if not, trigger the second determining module 123, and if yes, output an alarm prompt message;
The second judging module 128 is further configured to judge whether a corrected target rule frame has a pixel point with a negative coordinate, and if so, output an alarm prompt message.
The embodiment of the invention also provides an electronic device, as shown in fig. 13, including: a processor 301, a communication interface 302, a memory 303 and a communication bus 304, wherein the processor 301, the communication interface 302 and the memory 303 communicate with each other through the communication bus 304;
The memory 303 has stored therein a computer program which, when executed by the processor 301, causes the processor 301 to perform the steps of:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
Acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image;
Determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
Based on the same inventive concept, the embodiment of the invention also provides an electronic device, and because the principle of solving the problem of the electronic device is similar to that of the rule frame correction method, the implementation of the electronic device can refer to the implementation of the method, and the repetition is omitted.
The electronic device provided by the embodiment of the invention can be a desktop computer, a portable computer, a smart phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a network side device, and the like.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figures, but this does not mean that there is only one bus or one type of bus.
The communication interface 302 is used for communication between the electronic device and other devices described above.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
When the processor executes the program stored in the memory, the following is implemented: acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image; acquiring an initial rule frame in a pre-stored second image, and a second yaw angle and position information of a second center point of the target reference object in the second image; determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value. In the embodiment of the invention, the movement of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the pre-stored initial rule frame in the second image, and the second yaw angle and the position information of the second center point of the target reference object in the second image, and the initial rule frame is corrected based on this movement to obtain the target rule frame, thereby providing a technical scheme for correcting the rule frame.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program executable by an electronic device; when the program runs on the electronic device, it causes the electronic device to execute the following steps:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
Acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image;
Determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
Based on the same inventive concept, the embodiment of the present invention further provides a computer readable storage medium, and since the principle of solving the problem when the processor executes the computer program stored on the computer readable storage medium is similar to the rule frame correction, the implementation of the processor executing the computer program stored on the computer readable storage medium can refer to the implementation of the method, and the repetition is omitted.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memories such as floppy disks, hard disks, magnetic tapes and magneto-optical disks (MO), optical memories such as CD, DVD, BD and HVD, and semiconductor memories such as ROM, EPROM, EEPROM, nonvolatile memory (NAND Flash) and solid state disks (SSD).
A computer program is stored in a computer readable storage medium provided in an embodiment of the present invention, and when the computer program is executed by a processor, the computer program realizes acquiring a first image of a monitoring area, and determining first yaw angle and position information of a first center point of a target reference object in the first image; acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image; determining a yaw angle difference from the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value. In the embodiment of the invention, the movement condition of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the pre-stored initial rule frame in the second image and the second yaw angle and the position information of the second center point of the target reference object in the second image, and the initial rule frame is corrected based on the movement condition to obtain the target rule frame, so that the technical scheme for correcting the rule frame is provided.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (12)
1. A method of rule frame correction, the method comprising:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
Acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image;
Determining a yaw angle difference from the first yaw angle and the second yaw angle; determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value;
Wherein before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value, the method further includes:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value comprises:
Determining the distance from each first vertex of the initial regular frame to the second center point according to the position information of the second center point;
For each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
2. The method of claim 1, wherein prior to determining a depth change ratio from the first area information and the second area information, the method further comprises:
Determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
The determining a depth change ratio from the first area information and the second area information includes:
And determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
3. The method of claim 2, wherein said determining a depth change ratio based on said first area information, said second area information, said horizontal angle difference and vertical angle difference comprises:
Inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula qs = sqrt((w'*h')/(w*h))*(a+|cos(α'−α)|)*(b+|cos(β'−β)|) to determine a depth change ratio;
Wherein w'*h' is the first area information, w*h is the second area information, α'−α is the horizontal angle difference value, β'−β is the vertical angle difference value, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and qs is the depth change ratio.
4. The method of claim 1, wherein determining the position information of the corrected second vertex corresponding to the first vertex based on the distance between the first vertex and the second center point, the included angle formed by the first vertex and the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference value, and the position information of the first center point comprises:
Inputting the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point into a preset second formula A'(x1+ρ*qs*(c+tα)*cos(θ+δ),y1-ρ*qs*(d+tβ)*sin(θ+δ)), to determine the position information of the corrected second vertex corresponding to the first vertex;
Wherein ρ is the distance between the first vertex and the second center point, q s is the depth variation ratio, θ is the angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x 1 is the horizontal coordinate of the first center point, y 1 is the vertical coordinate of the first center point, c is the preset horizontal angle width compensation parameter, d is the preset vertical angle width compensation parameter, t α is the preset horizontal angle scaling parameter, t β is the preset vertical angle scaling parameter, and a' is the corrected position information of the second vertex corresponding to the first vertex.
5. The method of claim 2, wherein before the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference value, the method further comprises:
And if any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first center point and the position information of the second center point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value.
6. The method of claim 2, wherein determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of the first center point, and the first area information of the target reference in the first image comprises:
Inputting the first image into a pre-trained reference object recognition model, and determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of a first center point and the first area information of a target reference object in the first image based on the reference object recognition model.
7. The method of claim 6, wherein determining the target reference in the first image comprises:
Inputting the first image into a pre-trained reference object recognition model, determining a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image based on the reference object recognition model, determining the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and selecting the reference object with the highest priority as a target reference object; wherein a reference object has a higher priority when its third horizontal angle, third vertical angle and third yaw angle are smaller, when the position information of its third center point is closer to the center point of the first image, and when its third area information is larger.
8. The method of claim 7, wherein the training process of the reference identification model comprises:
For each third image in a training set, inputting the third image and an annotation image corresponding to the third image into the reference object recognition model, and training the reference object recognition model; the annotation image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth center point and fourth area information of each reference object in the third image.
9. The method of claim 2, wherein before determining the corrected target rule frame corresponding to the initial rule frame, the method further comprises:
Judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first center point and the second center point, and the difference value between the first area information and the second area information exceeds a preset alarm threshold value; if so, outputting alarm prompt information; if not, proceeding to the subsequent step of determining a corrected target rule frame corresponding to the initial rule frame;
After the corrected target rule frame corresponding to the initial rule frame is determined, the method further comprises:
Judging whether the corrected target rule frame has negative coordinate pixel points or not, and if so, outputting alarm prompt information.
10. A rule frame correction apparatus, the apparatus comprising:
the first determining module is used for acquiring a first image of the monitoring area and determining a first yaw angle and position information of a first center point of a target reference object in the first image;
The acquisition module is used for acquiring an initial regular frame in a pre-stored second image and position information of a second yaw angle and a second center point of the target reference object in the second image;
A second determining module for determining a yaw angle difference based on the first yaw angle and the second yaw angle; determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference value;
The first determining module is further configured to determine first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
The second determining module is specifically configured to determine, according to the position information of the second center point, a distance from each first vertex of the initial rule frame to the second center point; for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal direction coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first center point; and taking the rule frame formed by each second vertex as a corrected target rule frame.
11. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
A memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-9 when executing a program stored on a memory.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011318414.0A CN112418086B (en) | 2020-11-23 | 2020-11-23 | Rule frame correction method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112418086A CN112418086A (en) | 2021-02-26 |
CN112418086B true CN112418086B (en) | 2024-08-02 |
Family
ID=74778382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011318414.0A Active CN112418086B (en) | 2020-11-23 | 2020-11-23 | Rule frame correction method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112418086B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762272B (en) * | 2021-09-10 | 2024-06-14 | 北京精英路通科技有限公司 | Road information determining method and device and electronic equipment |
CN116906277B (en) * | 2023-06-20 | 2024-07-30 | 北京图知天下科技有限责任公司 | Fan yaw variation determining method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102263900A (en) * | 2010-05-26 | 2011-11-30 | 佳能株式会社 | Image processing apparatus and image processing method |
CN107113376B (en) * | 2015-07-31 | 2019-07-19 | 深圳市大疆创新科技有限公司 | A kind of image processing method, device and video camera |
CN110569838A (en) * | 2019-04-25 | 2019-12-13 | 内蒙古工业大学 | Autonomous landing method of quad-rotor unmanned aerial vehicle based on visual positioning |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4948552B2 (en) * | 2009-02-10 | 2012-06-06 | 日本電信電話株式会社 | Camera calibration apparatus, camera calibration method, camera calibration program, and recording medium recording the program |
EP3534333A1 (en) * | 2018-02-28 | 2019-09-04 | Aptiv Technologies Limited | Method for calibrating the position and orientation of a camera relative to a calibration pattern |
KR101999793B1 (en) * | 2018-10-29 | 2019-07-12 | (주)케이웍스 | System for Measuring Pothole of Road and Method Thereof |
CN111256663B (en) * | 2018-12-03 | 2021-10-19 | 北京世纪朝阳科技发展有限公司 | Centering calibration method and device |
CN111583119B (en) * | 2020-05-19 | 2021-07-09 | 北京数字绿土科技有限公司 | Orthoimage splicing method and equipment and computer readable medium |
2020-11-23: CN202011318414.0A patent application filed; granted as CN112418086B (status: Active).
Also Published As
Publication number | Publication date |
---|---|
CN112418086A (en) | 2021-02-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||