CN112418086A - Rule box correction method and device, electronic equipment and storage medium

Info

Publication number
CN112418086A
CN112418086A
Authority
CN
China
Prior art keywords
image
position information
determining
reference object
angle
Prior art date
Legal status
Pending
Application number
CN202011318414.0A
Other languages
Chinese (zh)
Inventor
袁剑英 (Yuan Jianying)
苏昭行 (Su Zhaoxing)
吴允 (Wu Yun)
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202011318414.0A
Publication of CN112418086A
Legal status: Pending

Classifications

    • G06V 20/586: Recognition of moving objects or obstacles (e.g. vehicles or pedestrians) and traffic objects, of parking space
    • G06F 18/214: Pattern recognition; generating training patterns, bootstrap methods, e.g. bagging or boosting
    • G06T 7/50: Image analysis; depth or shape recovery
    • G06T 7/62: Image analysis; geometric attributes of area, perimeter, diameter or volume
    • G06T 7/66: Image analysis; geometric attributes of image moments or centre of gravity
    • G06T 7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/97: Image analysis; determining parameters from multiple pictures

Abstract

The invention discloses a rule frame correction method and device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a first image of a monitored area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image; acquiring an initial rule frame in a pre-saved second image, and a second yaw angle and position information of a second central point of the target reference object in the second image; determining a yaw angle difference according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference. A technical scheme for correcting the rule frame is thereby provided.

Description

Rule box correction method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for correcting a rule box, an electronic device, and a storage medium.
Background
In the field of video surveillance, there are many applications of intelligent video analysis. In addition to target detection, there is also analysis of target behavior, such as line-crossing detection, area-intrusion detection, and detection of whether a vehicle is parked in a parking space. These behavior analyses all involve drawing rule boxes in the camera.
If the camera shifts, for example under strong wind or because of road geology, the position of the rule frame in the camera becomes inaccurate. For example, when detecting whether a vehicle is parked in a parking space, the initial rule frame coincides with the parking-space frame in the monitoring scene; after the camera shifts, the rule frame no longer coincides with the parking-space frame, and detecting whether the vehicle is parked in the space based on that rule frame is obviously inaccurate. In other words, the position of the rule frame in the camera is then wrong, and a technical solution for correcting rule frames is needed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for correcting a rule frame, electronic equipment and a storage medium, and provides a technical scheme for correcting the rule frame.
The embodiment of the invention provides a method for correcting a rule box, which comprises the following steps:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
acquiring an initial rule frame in a pre-saved second image, and a second yaw angle and position information of a second central point of the target reference object in the second image;
determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
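Read as pseudocode, the four claimed steps chain together as in the sketch below. Python is used for illustration throughout this document; the recognize helper and its return shape are assumptions rather than part of the claims, and the depth and width compensation added by the later formulas is omitted here.

```python
import math

def correct_rule_box(first_image, second_image, initial_box, recognize):
    """Minimal sketch of the claimed flow: translation plus rotation only.
    `recognize` is an assumed helper returning (yaw_angle_degrees, (x, y))
    for the target reference object in an image."""
    yaw1, (x1, y1) = recognize(first_image)    # step 1: live first image
    yaw2, (x2, y2) = recognize(second_image)   # step 2: pre-saved second image
    delta = math.radians(yaw1 - yaw2)          # step 3: yaw angle difference
    corrected = []
    for (vx, vy) in initial_box:               # step 4: move and rotate vertices
        rho = math.hypot(vx - x2, vy - y2)
        theta = math.atan2(-(vy - y2), vx - x2)  # image Y axis points down
        corrected.append((x1 + rho * math.cos(theta + delta),
                          y1 - rho * math.sin(theta + delta)))
    return corrected
```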
Further, before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point, and the yaw angle difference, the method further includes:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value comprises:
determining the distance from each first vertex of the initial rule box to the second central point according to the position information of the second central point;
for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second central point, an included angle formed by the first vertex, the second central point and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first central point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
Further, before determining the depth change ratio according to the first area information and the second area information, the method further includes:
determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
the determining a depth change ratio according to the first area information and the second area information includes:
and determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
Further, the determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value includes:
inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula
q_s = sqrt((w'*h')/(w*h)) * (1 + a*(α' - α) + b*(β' - β))    (first formula; it appears only as an image in the original and is reconstructed here from the variable definitions below)
Determining a depth change ratio;
where w'*h' is the first area information, w*h is the second area information, α' - α is the horizontal angle difference, β' - β is the vertical angle difference, a is a preset horizontal-angle depth compensation parameter, b is a preset vertical-angle depth compensation parameter, and q_s is the depth change ratio.
Further, the determining, according to the distance between the first vertex and the second center point, an included angle formed by the first vertex, the second center point, and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value, and the position information of the first center point, the corrected position information of the second vertex corresponding to the first vertex includes:
inputting the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point into a preset second formula A' = (x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)), and determining the position information of the corrected second vertex corresponding to the first vertex;
where ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal-angle width compensation parameter, d is a preset vertical-angle width compensation parameter, t_α is a preset horizontal-angle scaling parameter, t_β is a preset vertical-angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Further, before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point, and the yaw angle difference, the method further includes:
and if any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first central point and the position information of the second central point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference.
Further, determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of the first center point, and the first area information of the target reference object in the first image comprises:
inputting the first image into a pre-trained reference object recognition model, and determining a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first central point and first area information of a target reference object in the first image based on the reference object recognition model.
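For the illustrative sketches in this document, the per-object output of the recognition model can be pictured as a single record; the field names below are assumptions, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class ReferencePose:
    """Assumed container for what the reference object recognition model
    outputs per object: angles, center point position, and area."""
    alpha: float   # horizontal angle
    beta: float    # vertical angle
    delta: float   # yaw angle
    cx: float      # center point, horizontal image coordinate
    cy: float      # center point, vertical image coordinate (Y axis down)
    area: float    # w * h of the object's bounding box
```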
Further, determining the target reference object in the first image comprises:
inputting the first image into a pre-trained reference object recognition model; determining, based on the reference object recognition model, a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third central point and third area information of each reference object in the first image; determining the priority of each reference object according to these quantities; and selecting the reference object with the highest priority as the target reference object. The smaller each of the third horizontal angle, the third vertical angle and the third yaw angle, the closer the third central point is to the center of the first image, and the larger the third area information, the higher the priority of the reference object.
Further, the training process of the reference object recognition model comprises:
inputting the third image and a labeled image corresponding to the third image into a reference object recognition model aiming at each third image in a training set, and training the reference object recognition model; and the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the third image.
Further, before determining the corrected target rule box corresponding to the initial rule box, the method further includes:
judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first central point and the second central point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if so, outputting alarm prompt information, and if not, performing the subsequent step of determining a corrected target rule frame corresponding to the initial rule frame;
after determining the corrected target rule box corresponding to the initial rule box, the method further includes:
and judging whether the corrected target rule frame contains any pixel point with a negative coordinate, and if so, outputting alarm prompt information.
In another aspect, an embodiment of the present invention provides a rule box correction apparatus, where the apparatus includes:
the first determining module is used for acquiring a first image of a monitored area and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
the acquisition module is used for acquiring an initial rule frame in a pre-saved second image, and a second yaw angle and position information of a second central point of the target reference object in the second image;
the second determining module is used for determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
Further, the first determining module is further configured to determine first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
the second determining module is specifically configured to determine, according to the location information of the second central point, a distance from each first vertex of the initial rule box to the second central point; for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second central point, an included angle formed by the first vertex, the second central point and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first central point; and taking the rule frame formed by each second vertex as a corrected target rule frame.
Further, the apparatus further comprises:
a third determining module, configured to determine a first horizontal angle and a first vertical angle of a target reference object in the first image, obtain a second horizontal angle and a second vertical angle of the target reference object in the second image, and determine a horizontal angle difference and a vertical angle difference;
the first determining module is specifically configured to determine a depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value.
Further, the first determining module is specifically configured to input the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into the preset first formula
q_s = sqrt((w'*h')/(w*h)) * (1 + a*(α' - α) + b*(β' - β))    (reconstructed, as above)
and determine the depth change ratio; where w'*h' is the first area information, w*h is the second area information, α' - α is the horizontal angle difference, β' - β is the vertical angle difference, a is a preset horizontal-angle depth compensation parameter, b is a preset vertical-angle depth compensation parameter, and q_s is the depth change ratio.
Further, the second determining module is specifically configured to input the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point into the preset second formula A' = (x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)), and determine the position information of the corrected second vertex corresponding to the first vertex;
where ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal-angle width compensation parameter, d is a preset vertical-angle width compensation parameter, t_α is a preset horizontal-angle scaling parameter, t_β is a preset vertical-angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Further, the apparatus further comprises:
and the first judgment module is used for determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value if any one of the yaw angle difference value, the horizontal angle difference value, the vertical angle difference value, the distance between the position information of the first central point and the position information of the second central point and the depth change ratio is greater than a preset threshold value.
Further, the first determining module is specifically configured to input the first image into a pre-trained reference object recognition model, and determine, based on the reference object recognition model, a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first center point, and first area information of a target reference object in the first image.
Further, the apparatus further comprises: a fourth determining module, configured to input the first image into a pre-trained reference object recognition model, determine a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image based on the reference object recognition model, determine the priority of each reference object according to these quantities, and select the reference object with the highest priority as the target reference object; the smaller each of the third horizontal angle, the third vertical angle and the third yaw angle, the closer the third center point is to the center of the first image, and the larger the third area information, the higher the priority of the reference object.
Further, the apparatus further comprises: the training module is used for inputting the third image and the labeled image corresponding to the third image into a reference object recognition model aiming at each third image in a training set and training the reference object recognition model; and the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the third image.
Further, the apparatus further comprises:
the second judging module is used for judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first central point and the second central point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if not, the second determining module is triggered, and if so, alarm prompt information is output;
the second judgment module is also used for judging whether the pixel points of the negative coordinates exist in the corrected target rule frame or not, and if so, outputting alarm prompt information.
In another aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing any of the above method steps when executing a program stored in the memory.
In yet another aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of any one of the above.
The embodiment of the invention provides a rule frame correction method and device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring a first image of a monitored area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image; acquiring an initial rule frame in a pre-saved second image, and a second yaw angle and position information of a second central point of the target reference object in the second image; determining a yaw angle difference according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference.
The technical scheme has the following advantages or beneficial effects:
in the embodiment of the present invention, the movement condition of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the initial rule frame in the second image, and the second yaw angle and the position information of the second center point of the target reference object in the second image, which are stored in advance, and the target rule frame is obtained by correcting the initial rule frame based on the movement condition, so that a technical solution for correcting the rule frame is provided.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of a rule block calibration process according to an embodiment of the present invention;
FIG. 2 is a schematic view of a monitoring camera according to an embodiment of the present invention;
FIG. 3 is another schematic view of a monitoring camera according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a scenario provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a coordinate system provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating angle information for identifying a fixed reference object according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a normal monitoring screen according to an embodiment of the present invention;
fig. 8 is a schematic view of a monitoring screen shifted by external influences according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating comparison of monitoring images before and after being shifted by external influences according to an embodiment of the present invention;
fig. 10 is a schematic diagram of a point location coordinate model extracted according to a comparison monitoring picture according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another exemplary rule block calibration process according to the present invention;
FIG. 12 is a schematic structural diagram of a rule box calibration apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the attached drawings, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of a rule block correction process provided in an embodiment of the present invention, where the process includes the following steps:
s101: the method comprises the steps of obtaining a first image of a monitored area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image.
The method for correcting the rule frame provided by the embodiment of the invention is applied to electronic equipment, and the electronic equipment can be a camera arranged in a monitoring scene, and can also be equipment such as a PC (personal computer), a tablet personal computer and the like. If the electronic equipment is a camera arranged in a monitoring scene, after the camera acquires a first image of a monitoring area, the process of determining the first yaw angle and the position information of the first central point of the target reference object in the first image is carried out. If the electronic device is a device such as a PC or a tablet computer, after the camera arranged in the monitoring scene acquires the first image of the monitoring area, the first image of the monitoring area may be sent to the electronic device, and then the electronic device performs a process of determining the first yaw angle of the target reference object and the position information of the first center point in the first image. The target reference object in the embodiment of the invention is an object which is fixed in a scene.
In the embodiment of the invention, the image of the monitoring area acquired by the electronic equipment is called a first image, and after the electronic equipment acquires the first image, the target reference object in the first image can be determined through a target recognition algorithm, so that the first yaw angle and the position information of the first central point of the target reference object are determined.
In order to make the determination of the first yaw angle and the position information of the first center point of the target reference object in the first image more accurate, in the embodiment of the present invention, a reference object recognition model which is trained in advance may be stored in the electronic device, during the training of the reference object recognition model, a corresponding annotation image exists in each sample image in the training set, and the annotation image is annotated with the target reference object in the sample image and the position information of the yaw angle and the center point of the target reference object. After the training of the reference object recognition model is completed, the first image is input into the reference object recognition model, and the reference object recognition model can output the first yaw angle and the position information of the first central point of the target reference object in the first image.
S102: and acquiring a pre-saved initial rule frame in a second image and position information of a second yaw angle and a second central point of the target reference object in the second image.
The electronic device prestores a second image, which is an image acquired when the camera was known not to have shifted. Each image comprises an image layer and a graphic layer: the image layer displays the image content, and the graphic layer displays the rule frame. The rule box in the second image is referred to as the initial rule box in the embodiment of the present invention. The initial rule frame completely coincides with the region to be detected in the image layer of the second image. For example, if the area to be detected is a parking space, the initial rule frame in the graphic layer of the second image completely coincides with the parking-space frame in the image layer. The electronic device can retrieve the pre-saved initial rule box in the second image, together with the second yaw angle and the position information of the second center point of the target reference object in the second image. To obtain the second yaw angle and the second-center-point position information, the second image may be input into the pre-trained reference object recognition model, which outputs them.
S103: determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
In the embodiment of the present invention, the offset distances of the target reference object in the horizontal and vertical directions of the image coordinate system can be determined from the position information of the first center point and the second center point. These offsets can be taken as the horizontal and vertical offsets of the initial rule frame, and the initial rule frame is moved accordingly. Since a camera offset generally also changes the yaw angle of the target reference object, the moved frame is additionally rotated by the yaw angle difference.
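The translation part of this step, taken alone, reduces to shifting each vertex by the center-point offset; a minimal sketch, assuming a rule box is a list of (x, y) vertex tuples:

```python
def translate_rule_box(initial_box, center_old, center_new):
    """Shift every vertex of the initial rule box by the target
    reference's center-point offset; the yaw-angle rotation described
    above is applied on top of this in the full correction formula."""
    dx = center_new[0] - center_old[0]
    dy = center_new[1] - center_old[1]
    return [(x + dx, y + dy) for (x, y) in initial_box]
```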
In the embodiment of the present invention, the movement condition of the target reference object in the image can be determined according to the first yaw angle and the position information of the first center point of the target reference object in the first image, the initial rule frame in the second image, and the second yaw angle and the position information of the second center point of the target reference object in the second image, which are stored in advance, and the target rule frame is obtained by correcting the initial rule frame based on the movement condition, so that a technical solution for correcting the rule frame is provided.
In order to make determining the corrected target rule frame more accurate, in an embodiment of the present invention, before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point, and the yaw angle difference, the method further includes:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value comprises:
determining the distance from each first vertex of the initial rule box to the second central point according to the position information of the second central point;
for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second central point, an included angle formed by the first vertex, the second central point and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first central point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
The depth information of the rule box may change after the position or angle of the camera changes. In order to make the determined target rule box more accurate, in the embodiment of the present invention, after the electronic device acquires the first image, the target reference object in the first image is identified, and first area information of the target reference object in the first image is determined. In addition, when the electronic device is directed to a second image stored in advance, second area information of the target reference object in the second image is also stored. The electronic equipment acquires second area information of the target reference object in the second image, and determines the depth change ratio according to the first area information and the second area information. The ratio of the first area information to the second area information may be calculated, and the depth change ratio may be obtained by performing an evolution operation on the ratio.
In order to make the depth variation ratio and the determined target rule box more accurate, in an embodiment of the present invention, before determining the depth variation ratio according to the first area information and the second area information, the method further includes:
determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
the determining a depth change ratio according to the first area information and the second area information includes:
and determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
In the embodiment of the present invention, after the electronic device acquires the first image, a first horizontal angle and a first vertical angle of the target reference object in the first image may also be determined. In addition, when the electronic device is directed to a second image stored in advance, a second horizontal angle and a second vertical angle of the target reference object in the second image are also stored. The electronic device acquires a second horizontal angle and a second vertical angle of the target reference object in the second image, and determines a horizontal angle difference value and a vertical angle difference value according to the first horizontal angle and the first vertical angle of the target reference object in the first image and the second horizontal angle and the second vertical angle of the target reference object in the second image.
In order to make the determination of the first horizontal angle and the first vertical angle of the target reference object in the first image more accurate, in the embodiment of the present invention, a pre-trained reference object recognition model may be stored in the electronic device, when the reference object recognition model is trained, a corresponding annotation image exists in each sample image in the training set, and the annotation image is annotated with the target reference object in the sample image, and the yaw angle, the position information of the central point, the horizontal angle, and the vertical angle of the target reference object. After the training of the reference object recognition model is completed, the first image is input into the reference object recognition model, and the reference object recognition model can output the first yaw angle, the position information of the first central point, the first horizontal angle and the first vertical angle of the target reference object in the first image.
In addition, when the electronic device stores the second horizontal angle and the second vertical angle of the target reference object in the second image in advance, the second image may be input to the reference object recognition model, and the reference object recognition model may output the second yaw angle, the position information of the second center point, the second horizontal angle, and the second vertical angle of the target reference object in the second image.
The electronic device determines a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value. Specifically, the determining the depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value includes:
inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula
q_s = sqrt((w'*h')/(w*h)) * (1 + a*(α' - α) + b*(β' - β))    (first formula, reconstructed as above)
Determining a depth change ratio;
where w'*h' is the first area information, w*h is the second area information, α' - α is the horizontal angle difference, β' - β is the vertical angle difference, a is a preset horizontal-angle depth compensation parameter, b is a preset vertical-angle depth compensation parameter, and q_s is the depth change ratio.
Where a is a preset horizontal angle depth compensation parameter, and a is an empirical value, generally a value in the range of 0 to 0.5. b is a predetermined vertical angle depth compensation parameter, and b is an empirical value, typically a value in the range of 0 to 0.5.
In the embodiment of the invention, after the electronic equipment determines the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value, the information is substituted into a preset first formula
q_s = sqrt((w'*h')/(w*h)) * (1 + a*(α' - α) + b*(β' - β))
The depth variation ratio is obtained. In the embodiment of the invention, when the depth change ratio is determined, the angle compensation is considered, so that the determined depth change ratio is more accurate.
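As a sketch only: since the patent shows the first formula as an image, the exact way the compensation terms a*(α' - α) and b*(β' - β) enter it is an assumption here; the square root of the area ratio follows the prose above.

```python
import math

def depth_change_ratio(area1, area2, d_alpha, d_beta, a=0.25, b=0.25):
    """q_s sketch: square root of the first-to-second area ratio with
    horizontal/vertical angle compensation. The combination of the
    compensation terms is assumed; a and b are empirical values in
    [0, 0.5] per the text (0.25 is an arbitrary midpoint)."""
    base = math.sqrt(area1 / area2)        # evolution of w'h' / (w h)
    return base * (1.0 + a * abs(d_alpha) + b * abs(d_beta))
```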
In order to determine the corrected target rule frame more accurately, in an embodiment of the present invention, the determining, according to the distance between the first vertex and the second center point, an included angle formed by the first vertex, the second center point, and a horizontal coordinate axis, the depth change ratio, the yaw angle difference, and the position information of the first center point, the position information of the corrected second vertex corresponding to the first vertex includes:
the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point are input into the preset second formula A' = (x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)), and the position information of the corrected second vertex corresponding to the first vertex is determined;
where ρ is the distance between the first vertex and the second center point, q_s is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal-angle width compensation parameter, d is a preset vertical-angle width compensation parameter, t_α is a preset horizontal-angle scaling parameter, t_β is a preset vertical-angle scaling parameter, and A' is the position information of the corrected second vertex corresponding to the first vertex.
Here c is a preset horizontal-angle width compensation parameter, generally in the range 0 to 0.5; d is a preset vertical-angle width compensation parameter, generally in the range 0 to 0.5; t_α is a preset horizontal-angle scaling parameter, generally in the range 0 to 1; and t_β is a preset vertical-angle scaling parameter, generally in the range 0 to 1.
In the embodiment of the present invention, after the electronic device determines the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point, it inputs this information into the preset second formula A' = (x1 + ρ*q_s*(c + t_α)*cos(θ + δ), y1 - ρ*q_s*(d + t_β)*sin(θ + δ)) and determines the position information of the corrected second vertex corresponding to the first vertex. The rule frame formed by the second vertices is taken as the corrected target rule frame.
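The second formula transcribes directly into code. A sketch assuming angles in radians and image coordinates with the Y axis pointing down (see Fig. 5); the parameter defaults are arbitrary values inside the stated ranges.

```python
import math

def correct_vertex(vertex, center2, center1, q_s, delta,
                   c=0.25, d=0.25, t_alpha=0.8, t_beta=0.8):
    """A' = (x1 + rho*q_s*(c+t_alpha)*cos(theta+delta),
             y1 - rho*q_s*(d+t_beta)*sin(theta+delta))."""
    rho = math.hypot(vertex[0] - center2[0], vertex[1] - center2[1])
    # theta: angle of the first vertex about the second center point,
    # measured from the horizontal axis (Y negated for the Y-down frame).
    theta = math.atan2(-(vertex[1] - center2[1]), vertex[0] - center2[0])
    x1, y1 = center1
    return (x1 + rho * q_s * (c + t_alpha) * math.cos(theta + delta),
            y1 - rho * q_s * (d + t_beta) * math.sin(theta + delta))

def corrected_target_rule_box(initial_box, center2, center1, q_s, delta):
    # The rule frame formed by the corrected second vertices is the
    # corrected target rule frame.
    return [correct_vertex(v, center2, center1, q_s, delta)
            for v in initial_box]
```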
In order to reduce resource consumption of the correction rule frame, in the embodiment of the present invention, it may be determined whether the rule frame needs to be corrected first, and if the determination result is yes, the correction is performed, otherwise, the correction is not performed. Specifically, in this embodiment of the present invention, before determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point, and the yaw angle difference, the method further includes:
and if any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first central point and the position information of the second central point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference.
In the embodiment of the present invention, after determining the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first center point and that of the second center point, and the depth change ratio, the electronic device determines whether any one of them is greater than its preset threshold, and if so, determines the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference. It should be noted that each of these quantities is compared against its own preset threshold. For example, the preset threshold for the yaw angle difference differs from the preset threshold for the depth change ratio. The thresholds for the yaw angle difference, the horizontal angle difference and the vertical angle difference may be the same or different.
In the embodiment of the present invention, when any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first center point and that of the second center point, and the depth change ratio is determined to be greater than its preset threshold, the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference. Otherwise, no rule frame correction is performed. That is, the rule frame is corrected only when correction is judged to be necessary, which reduces the resource consumption of rule frame correction.
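A minimal sketch of this gate, assuming the five quantities and their per-quantity thresholds are passed as dictionaries with illustrative key names.

```python
def needs_correction(yaw_diff, h_diff, v_diff, center_dist, q_s, thresholds):
    """Return True if any measured change exceeds its own preset
    threshold; each quantity is compared against a distinct threshold."""
    measured = {
        "yaw_diff": abs(yaw_diff),
        "horizontal_diff": abs(h_diff),
        "vertical_diff": abs(v_diff),
        "center_distance": center_dist,
        "depth_ratio": q_s,
    }
    return any(measured[k] > thresholds[k] for k in measured)
```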
In an embodiment of the present invention, determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information of the first center point, and the first area information of the target reference object in the first image includes:
inputting the first image into a pre-trained reference object recognition model, and determining a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first central point and first area information of a target reference object in the first image based on the reference object recognition model.
In order to ensure that the correction of the rule frame can be performed according to the target reference object, and that accurate correction can be performed based on the target reference object, in an embodiment of the present invention, the determining the target reference object in the first image includes:
inputting the first image into a pre-trained reference object recognition model, determining a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third central point and third area information of each reference object in the first image based on the reference object recognition model, determining the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third central point and the third area information of each reference object, and selecting the reference object with the highest priority as a target reference object; the smaller each angle of the third horizontal angle, the third vertical angle and the third yaw angle is, the closer the position information of the third central point is to the first image central point, and the higher the third area information is, the higher the priority of the reference is.
Based on the reference object recognition model, a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image can be determined, and the target reference object is then selected according to this information. The selection strategy is: the smaller each of the third horizontal angle, the third vertical angle and the third yaw angle, the closer the third center point is to the center of the first image, and the larger the third area information, the higher the priority of the reference object; the reference object with the highest priority is selected as the target reference object.
The scheme for determining the target reference object provided by the embodiment of the invention enables the determined target reference object to be more accurate, and further enables the correction of the rule frame based on the target reference object to be more accurate.
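The text fixes only which properties raise priority; a weighted-sum score is one assumed realization, sketched below using the ReferencePose record from earlier.

```python
import math

def pick_target_reference(references, image_center,
                          weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Choose the highest-priority reference object: smaller horizontal,
    vertical and yaw angles, a center nearer the image center, and a
    larger area all raise priority. Weight values are illustrative."""
    wa, wb, wd, wc, ws = weights

    def score(r):  # r: ReferencePose-like record
        center_dist = math.hypot(r.cx - image_center[0],
                                 r.cy - image_center[1])
        return (ws * r.area
                - wa * abs(r.alpha) - wb * abs(r.beta) - wd * abs(r.delta)
                - wc * center_dist)

    return max(references, key=score)
```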
The training process of the reference object recognition model comprises the following steps:
inputting the third image and a labeled image corresponding to the third image into a reference object recognition model aiming at each third image in a training set, and training the reference object recognition model; and the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the third image.
The electronic equipment stores a training set, images in the training set are third images, and each third image has a corresponding labeled image. And the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the corresponding third image. And inputting each group of the third images and the marked images into the reference object recognition model to finish the training of the reference object recognition model.
If the position or the angle of the camera changes greatly, the electronic equipment may not be able to complete the correction of the rule frame, and at this time, the position or the angle of the camera can only be corrected manually, and then the rule frame is corrected. Therefore, in this embodiment of the present invention, before determining the corrected target rule box corresponding to the initial rule box, the method further includes:
judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first central point and the second central point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if so, outputting alarm prompt information, and if not, performing the subsequent step of determining a corrected target rule frame corresponding to the initial rule frame;
after determining the corrected target rule box corresponding to the initial rule box, the method further includes:
and judging whether the corrected target rule frame contains any pixel point with a negative coordinate, and if so, outputting alarm prompt information.
In the embodiment of the present invention, if any one of the horizontal angle difference, the vertical angle difference, the yaw angle difference, the distance between the first center point and the second center point, and the difference between the first area information and the second area information exceeds its preset alarm threshold, the position or angle of the camera has changed so much that it exceeds the automatic correction capability of the electronic device, so alarm prompt information is output to prompt the relevant personnel to correct the position or angle of the camera. Likewise, if, after the corrected target rule frame corresponding to the initial rule frame is determined, the corrected target rule frame is found to contain a pixel point with a negative coordinate, the correction by the electronic device is also incorrect, and alarm prompt information is output to prompt the relevant personnel to correct the position or angle of the camera.
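A sketch of both checks with illustrative names; the five monitored differences and their alarm thresholds are assumed to arrive as dictionaries keyed alike.

```python
def beyond_auto_correction(diffs, alarm_thresholds):
    """True when some difference exceeds its alarm threshold, i.e. the
    camera has moved beyond the automatic correction capability and the
    alarm prompt should be output instead of correcting."""
    return any(abs(diffs[k]) > alarm_thresholds[k] for k in diffs)

def correction_failed(target_box):
    # Post-check: any corrected vertex with a negative coordinate means
    # the automatic correction was wrong and an alarm should be output.
    return any(x < 0 or y < 0 for (x, y) in target_box)
```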
The following describes in detail a rule block correction process provided by an embodiment of the present invention with reference to the accompanying drawings.
The application scenarios of the embodiment of the invention include the following. Southern coastal areas are often affected by strong wind or typhoon weather, and erected monitoring cameras sometimes deviate from the preset monitoring area to different degrees because their position moves. Some places are affected by geology: at construction sites, under ground compaction by muck trucks, or after long-term rainfall, loosened ground subsides or vibrates, the vertical rod or cantilever rod is displaced or shakes, and the monitoring area of a camera fixed on the rod changes. Some site conditions are not ideal: there is no standard, firm cantilever rod, and the camera can only be fixed on a non-standard rod body that is too elastic or too thin and easily affected by the external environment. There may also be intentional or unintentional human damage, such as impacts or the weight of tethered cords. These abnormal environments affect the camera's monitoring picture in two ways: jitter and shift.
Fig. 2 is a schematic view of the installation of a conventional monitoring camera; as shown in Fig. 2, the camera is held on the cross bar of a cantilever pole by a gimbal. The standard construction requirements are as follows: the height from the ground to the center line of the cantilever pole is 6.0 m; all components are made of A3 steel; the pole must be formed integrally; the welding electrode used is T42; all steel members receive hot-dip galvanizing anti-corrosion treatment, with fastener surfaces at 350 g/m² and other surfaces at 600 g/m²; the concrete is C20, with a foundation specification not lower than 1500 × 2200 (mm), the concrete specification being determined by the crossing situation; all welds must be fully welded and firm, without false (cold) welds, and with a neat surface; the center line of the arc-shaped pole should be kept level relative to the road; each cantilever arm is provided with a 2.5 m hot-dip galvanized grounding rod, connected to the pole by a 16 mm² bare copper wire, with a grounding resistance below 10 Ω. Monitoring cameras that do not meet the standard construction requirements may instead be installed on street lamps, telegraph poles, building outer walls and the like, as shown in Fig. 3.
Fig. 4 is a scene schematic diagram provided in an embodiment of the present invention. It provides an automatic correction method based on a fixed reference, so that a rule area that has deviated within a certain range is corrected, and an event is reported as needed to remind the upper layer (user) to take countermeasures. The scheme proceeds as follows.
In the following, an area rule box is taken as an example and the scene is a parking space (a rule line is simpler than a rule box, and the scene is not limited to parking spaces). In addition, the rule frame is drawn here as a quadrangle, but in practice it can be any polygon.
Description of relevant parameters:
Coordinate system: divided into a rectangular coordinate system and a polar coordinate system. In the image processing field the rectangular coordinate system differs slightly from the mathematical one: its Y axis points vertically downward, as shown in Fig. 5.
Fixed reference: a reference object recognition model is trained in advance through AI to recognize certain fixed references, such as a fixed garbage can (whose base is fixed and cannot be moved) or a box body (an advertising light box); if no fixed reference exists on site, a fixed marker accessory can be installed instead. The angle information of the fixed reference can be recognized: horizontal angle α, vertical angle β and yaw angle δ, as shown in Fig. 6. In addition, the center point position O and the area S (w × h) can be recognized. The center point position and the yaw angle reflect changes of the plane position of the monitoring picture; the area, horizontal angle and vertical angle reflect changes of the depth position. Depth changes affect picture scaling.
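For concreteness, the quantities recognized for a fixed reference can be grouped as follows (an illustrative Python sketch; the structure and field names are assumptions, not part of the patent):

from dataclasses import dataclass

@dataclass
class ReferenceObservation:
    cx: float      # center point O, x coordinate (image coordinates, y-axis downward)
    cy: float      # center point O, y coordinate
    w: float       # apparent width of the reference
    h: float       # apparent height of the reference
    alpha: float   # horizontal angle (radians)
    beta: float    # vertical angle (radians)
    delta: float   # yaw angle (radians)

    @property
    def area(self) -> float:
        # S = w * h; together with alpha and beta it reflects depth changes.
        return self.w * self.h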
tα: horizontal angle scaling parameter, approximately |cos(α)|, range 0–1; in practice it can be trained together with the lens parameters (see the sketch following this list);
tβ: vertical angle scaling parameter, approximately |cos(β)|, range 0–1; likewise trainable with the lens parameters;
a: horizontal angle depth-change compensation training parameter, range 0–0.5;
b: vertical angle depth-change compensation training parameter, range 0–0.5;
c: horizontal angle width-change compensation training parameter, range 0–0.5;
d: vertical angle width-change compensation training parameter, range 0–0.5.
Fig. 7 is a schematic view of a normal monitoring picture; it includes a rule frame, with a parking space line inside the rule frame. The fixed reference is a garbage can or a parking sign. Fig. 8 is a schematic view of the monitoring picture after being shifted by an external influence, with the rule frame already corrected according to the fixed reference. Fig. 9 is a schematic comparison of the monitoring pictures before and after the external influence. Fig. 10 is a schematic diagram of the point coordinate model extracted from the compared monitoring pictures. The rule box correction scheme provided by the embodiment of the present invention is described below with reference to Fig. 10.
∠xOA = θ. Since A″ is the shifted position before the reference rotates, OA ∥ O′A″, so ∠x′O′A″ = ∠xOA = θ. A′ is the rotated position, and the reference rotation angle = ∠x′Ox = ∠y′Oy = δ (δ0 before the change, δ1 after the change; the yaw angle difference is δ = δ1 − δ0), with ∠A″O′A′ = δ. Since the quadrilateral ABCD is drawn by the application on the graphics layer, it does not change when the monitoring picture in the video layer changes. Original reference coordinates O(x0, y0); changed reference coordinates O′(x1, y1); original rule point A(xa, ya); corrected rule point A′(xa1, ya1). In the absence of a change in depth,
A′(x1 + ρ·cos(θ + δ), y1 − ρ·sin(θ + δ)), where ρ = |OA| = sqrt((xa − x0)² + (ya − y0)²).
The polar coordinates of A with respect to O are A(ρ, θ); the polar coordinates of A′ with respect to O′ are A′(ρ, θ + δ); and the mathematical rectangular coordinates of A′ with respect to O′ are A′(ρ·cos(θ + δ), ρ·sin(θ + δ)). As shown in Fig. 6, the depth effect is considered as follows: the horizontal angle affects the scaling in the x-axis direction, the vertical angle affects the scaling in the y-axis direction, and the z-axis corresponds directly to the depth of field; many technical means exist for measuring the z-axis depth of field, such as laser ranging, infrared ranging and scale calibration. The initial angles before the change can be restored and compensated from the horizontal and vertical angles (the original reference is not necessarily facing the monitoring lens directly, so the initial angles must also be obtained). The area gives an approximate estimate of the depth change ratio, qs = sqrt(S′/S). The angle-compensated depth change ratio is
[first formula: the angle-compensated depth change ratio qs, expressed in terms of the area ratio w′h′/(wh), the horizontal angle difference α′ − α, the vertical angle difference β′ − β, and the preset compensation parameters a and b]
After considering the depth influence, the mathematical coordinates of A′ relative to O′ are A′(ρ·qs·(c + tα)·cos(θ + δ), ρ·qs·(d + tβ)·sin(θ + δ)). Taking into account that the image rectangular coordinate system is inverted in the y-axis direction relative to the mathematical one, the final coordinates of A′ relative to the picture origin O0 in the image coordinate system are A′(x1 + ρ·qs·(c + tα)·cos(θ + δ), y1 − ρ·qs·(d + tβ)·sin(θ + δ)). The coordinates of the other points are calculated in the same way and are not repeated.
Where the corrected rule point A′ has coordinates (x″, y″); the original reference coordinates are O(x0, y0) and the changed reference coordinates are O′(x1, y1); the original rule point is A(xa, ya); 0 ≤ a ≤ 0.5, 0 ≤ b ≤ 0.5, 0 ≤ c ≤ 0.5, 0 ≤ d ≤ 0.5; tα = |cos(α)|, tβ = |cos(β)|; ρ = sqrt((xa − x0)² + (ya − y0)²), θ = ∠xOA, and qs is the angle-compensated depth change ratio above; so that
x″ = x1 + ρ·qs·(c + tα)·cos(θ + δ), y″ = y1 − ρ·qs·(d + tβ)·sin(θ + δ).
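Putting the derivation together, the per-point correction can be sketched in Python (the formula is exactly the image-coordinate expression above; the helper name and argument order are illustrative assumptions):

import math

def correct_point(xa, ya, x0, y0, x1, y1, delta, qs, c, d, t_a, t_b):
    # Polar coordinates of the original rule point A relative to the original
    # reference center O. The image y-axis points down, so flip the sign of
    # the y component to work in the mathematical (y-up) frame.
    dx, dy = xa - x0, y0 - ya
    rho = math.hypot(dx, dy)        # rho = |OA|
    theta = math.atan2(dy, dx)      # theta = angle xOA

    # Rotate by the yaw difference delta, scale by the depth change ratio qs
    # and the width-compensated factors (c + t_a), (d + t_b), translate to the
    # new reference center O'(x1, y1), and flip y back to image coordinates.
    x2 = x1 + rho * qs * (c + t_a) * math.cos(theta + delta)
    y2 = y1 - rho * qs * (d + t_b) * math.sin(theta + delta)
    return x2, y2

As a sanity check, with no depth change (qs = 1) and the scaling factors equal to 1 (c + tα = d + tβ = 1), the result reduces to the pure shift-and-rotate case A′(x1 + ρ·cos(θ + δ), y1 − ρ·sin(θ + δ)) derived above.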
The processing flow of the method is as follows:
setting an offset alarm threshold: the reference displacement Δ S, the angular changes Δ α, Δ β, Δ δ, and the area change Δ q.
The camera monitors the picture normally; region rule lines and a region rule frame are drawn on the monitoring picture, and the rule is issued.
Recording the coordinate positions of all points in the rule; when the rule is issued for algorithm analysis, the center point coordinates O(x0, y0), width and height (w, h) and angles (α, β, δ0) of the original reference object are obtained from the current video frame.
The algorithm analyzes the reference object of each frame (or of sampled frames, as required) to obtain the real-time center point coordinates O′(x1, y1), width and height (w′, h′) and angles (α′, β′, δ1).
Multiple-reference preference: if a plurality of references are detected, they are ranked according to the principles that the angles are the most upright (each angle closest to 0), the width and height are the largest, and the center point is closest to the picture center. The first preferred reference is taken to provide the reference values.
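An illustrative sketch of this ordering (reusing the ReferenceObservation fields from the earlier sketch; treating the three principles as a lexicographic priority is an assumption, since the text only names them):

def rank_references(refs, frame_w, frame_h):
    # Prefer: angles closest to 0, then the largest width/height, then the
    # center point closest to the picture center. Smaller tuple = higher
    # priority under Python's lexicographic tuple comparison.
    def score(r):
        angle_term = abs(r.alpha) + abs(r.beta) + abs(r.delta)
        size_term = -(r.w * r.h)                       # larger area is better
        center_term = ((r.cx - frame_w / 2) ** 2 +
                       (r.cy - frame_h / 2) ** 2) ** 0.5
        return (angle_term, size_term, center_term)
    return sorted(refs, key=score)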
Reference failure judgment: if there are multiple references, the parameter values of the references in the reference sequence are compared (the parameter value may be, for example, the area of the reference); a reference whose parameter changes beyond a certain threshold (for example, a parameter value fluctuating by more than 3%) is deemed damaged, is placed at the end of the preferred queue, and has its reference value updated.
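A sketch of that check, using the area as the compared parameter and the 3% figure from the example above (both are stated in the text as examples, not fixed choices):

def is_failed(ref, baseline_area, tolerance=0.03):
    # A reference whose area fluctuates by more than the tolerance relative
    # to its recorded baseline is deemed damaged or moved; it is demoted to
    # the end of the preferred queue and its baseline value is refreshed.
    return abs(ref.area - baseline_area) > tolerance * baseline_area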
Alarm judgment: calculate the displacement, angle change and area change of the reference; when any value exceeds the preset alarm threshold, the change is too large to correct (the reference may be damaged), so an alarm is raised directly and personnel are dispatched for on-site repair. Otherwise, the corrected rule point coordinates are calculated according to the derivation above; if any corrected rule point coordinate is negative, distortion is assumed and an alarm is raised; otherwise the rule is redrawn, a jitter/shift alarm is reported, and the upper layer decides according to actual needs whether to correct manually.
Fig. 11 is a schematic diagram of the rule box correction process provided by an embodiment of the present invention: set the offset correction threshold and the alarm threshold; draw the rule area lines and frames according to business requirements; issue the service configuration; record the coordinates of each point of the original rule area lines and frames; take a frame of image and determine the center point coordinates, width, height, angles and the like of the target reference; perform multiple-reference preference and reference failure judgment to determine the target reference; when the variation parameters of the target reference exceed the offset correction threshold but not the alarm threshold, correct the rule frame according to the target reference; and when the alarm threshold is exceeded, or a pixel point with a negative coordinate exists after the rule frame correction is completed, output alarm prompt information.
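Tying these steps together, the per-frame flow of Fig. 11 might look as follows (a sketch reusing the illustrative helpers defined earlier; gating the correction on the center-point displacement alone is a simplification, since any variation parameter may trigger correction):

import math

def process_frame(refs, base, rule_box, p, frame_w, frame_h,
                  corr_threshold, alarm_thresholds):
    # 1. Multiple-reference preference: the best reference comes first.
    target = rank_references(refs, frame_w, frame_h)[0]

    # 2. Variation of the target reference relative to the recorded baseline.
    d_alpha = target.alpha - base.alpha
    d_beta = target.beta - base.beta
    d_delta = target.delta - base.delta
    center_dist = math.hypot(target.cx - base.cx, target.cy - base.cy)
    d_area = target.area - base.area

    # 3. Beyond the alarm threshold: correction is not attempted.
    if exceeds_alarm_threshold(d_alpha, d_beta, d_delta, center_dist,
                               d_area, *alarm_thresholds):
        return None, "alarm: dispatch personnel for on-site repair"

    # 4. Below the offset-correction threshold: leave the rule frame alone.
    if center_dist <= corr_threshold:
        return rule_box, "no correction needed"

    # 5. Correct every vertex, then sanity-check for negative coordinates.
    qs = math.sqrt(target.area / base.area)    # area-based depth change ratio
    corrected = [correct_point(x, y, base.cx, base.cy, target.cx, target.cy,
                               d_delta, qs, p.c, p.d,
                               t_alpha(target.alpha), t_beta(target.beta))
                 for (x, y) in rule_box]
    if has_negative_coordinate(corrected):
        return None, "alarm: distortion after correction"
    return corrected, "corrected; jitter/shift alarm reported"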
Fig. 12 is a schematic structural diagram of a rule box correction apparatus according to an embodiment of the present invention, where the apparatus includes:
the first determining module 121 is configured to acquire a first image of a monitored area, and determine a first yaw angle and position information of a first central point of a target reference object in the first image;
an obtaining module 122, configured to obtain an initial rule frame in a second image saved in advance, and a second yaw angle and position information of a second center point of the target reference object in the second image;
a second determining module 123, configured to determine a yaw angle difference according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
The first determining module 121 is further configured to determine first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
the second determining module 123 is specifically configured to determine, according to the position information of the second central point, a distance from each first vertex of the initial rule block to the second central point; for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second central point, an included angle formed by the first vertex, the second central point and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first central point; and taking the rule frame formed by each second vertex as a corrected target rule frame.
The device further comprises:
a third determining module 124, configured to determine a first horizontal angle and a first vertical angle of the target reference object in the first image, obtain a second horizontal angle and a second vertical angle of the target reference object in the second image, and determine a horizontal angle difference and a vertical angle difference;
the first determining module 121 is specifically configured to determine a depth change ratio according to the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value.
The first determining module 121 is specifically configured to input the first area information, the second area information, the horizontal angle difference value, and the vertical angle difference value into a preset first formula
[first formula: the angle-compensated depth change ratio qs, expressed in terms of the area ratio w′h′/(wh), the horizontal angle difference α′ − α, the vertical angle difference β′ − β, and the preset compensation parameters a and b]
to determine the depth change ratio; wherein w′·h′ is the first area information, w·h is the second area information, α′ − α is the horizontal angle difference, β′ − β is the vertical angle difference, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and qs is the depth change ratio.
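The exact first formula appears in the original only as an image. The area-only approximation qs = sqrt((w′·h′)/(w·h)) from the derivation is shown below, together with one plausible angle-compensated variant; the compensated form is an assumption that mirrors the (c + tα) pattern of the second formula, not the patent's exact expression:

import math

def depth_ratio_area_only(w1, h1, w0, h0):
    # qs ~ sqrt(S'/S): the area-based approximation stated in the text.
    # (w1, h1) is the first (current) size, (w0, h0) the second (saved) size.
    return math.sqrt((w1 * h1) / (w0 * h0))

def depth_ratio_angle_compensated(w1, h1, w0, h0, d_alpha, d_beta, a, b):
    # HYPOTHETICAL form: divide out the foreshortening of width and height
    # caused by the angle change before taking the area ratio. The patent's
    # actual first formula may differ.
    comp = (a + abs(math.cos(d_alpha))) * (b + abs(math.cos(d_beta)))
    return math.sqrt((w1 * h1) / ((w0 * h0) * comp))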
The second determining module 123 is specifically configured to input the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point into a preset second formula A′(x1 + ρ·qs·(c + tα)·cos(θ + δ), y1 − ρ·qs·(d + tβ)·sin(θ + δ)) to determine the position information of the corrected second vertex corresponding to the first vertex;
where ρ is the distance between the first vertex and the second center point, qs is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal angle width compensation parameter, d is a preset vertical angle width compensation parameter, tα is a preset horizontal angle scaling parameter, tβ is a preset vertical angle scaling parameter, and A′ is the position information of the corrected second vertex corresponding to the first vertex.
The device further comprises:
a first judging module 125, configured to determine, if any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first center point and the position information of the second center point, and the depth change ratio is greater than a preset threshold, the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference.
The first determining module 121 is specifically configured to input the first image into a pre-trained reference object recognition model, and determine a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first central point, and first area information of a target reference object in the first image based on the reference object recognition model.
The device further comprises: a fourth determining module 126, configured to input the first image into a pre-trained reference object recognition model, determine, based on the reference object recognition model, a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image, determine the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and select the reference object with the highest priority as the target reference object; wherein the smaller each of the third horizontal angle, the third vertical angle and the third yaw angle, the closer the position information of the third center point to the center point of the first image, and the larger the third area information, the higher the priority of the reference object.
The device further comprises: the training module 127 is configured to, for each third image in the training set, input the third image and the labeled image corresponding to the third image into a reference object recognition model, and train the reference object recognition model; and the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the third image.
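A hedged sketch of the training loop described for the training module 127 (the framework, model output layout and regression loss are assumptions; any detector or regressor producing these five kinds of outputs per reference would fit):

import torch
from torch import nn

def train_reference_model(model: nn.Module, loader, epochs=10, lr=1e-4):
    # Each batch pairs third images with their annotation targets: per
    # reference a fourth horizontal angle, fourth vertical angle, fourth yaw
    # angle, fourth center point position and fourth area information.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()   # simple regression over the annotated values
    for _ in range(epochs):
        for images, targets in loader:
            preds = model(images)            # assumed shape: (batch, 7) for
            loss = loss_fn(preds, targets)   # alpha, beta, delta, cx, cy, w, h
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model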
The device further comprises:
a second judging module 128, configured to judge whether any one of the horizontal angle difference, the vertical angle difference, the yaw angle difference, the distance between the first center point and the second center point, and the difference between the first area information and the second area information exceeds a preset alarm threshold; if not, trigger the second determining module 123, and if so, output alarm prompt information;
the second judging module 128 is further configured to judge whether a pixel point with a negative coordinate exists in the corrected target rule frame, and if so, output an alarm prompt message.
An embodiment of the present invention further provides an electronic device, as shown in fig. 13, including: the system comprises a processor 301, a communication interface 302, a memory 303 and a communication bus 304, wherein the processor 301, the communication interface 302 and the memory 303 complete mutual communication through the communication bus 304;
the memory 303 has stored therein a computer program which, when executed by the processor 301, causes the processor 301 to perform the steps of:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
acquiring an initial rule frame in a second image which is saved in advance, and a second yaw angle and position information of a second center point of the target reference object in the second image;
determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
Based on the same inventive concept, the embodiment of the present invention further provides an electronic device, and as the principle of the electronic device for solving the problem is similar to the rule frame correction method, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not repeated.
The electronic device provided by the embodiment of the invention can be a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a network side device and the like.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 302 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
When the processor executes the program stored in the memory in the embodiment of the present invention, a first image of the monitored area is acquired, and the first yaw angle and the position information of the first center point of the target reference object in the first image are determined; the initial rule frame in the pre-saved second image, and the second yaw angle and the position information of the second center point of the target reference object in the second image, are acquired; the yaw angle difference is determined according to the first yaw angle and the second yaw angle; and the corrected target rule frame corresponding to the initial rule frame is determined according to the position information of the first center point, the position information of the second center point and the yaw angle difference. In the embodiment of the present invention, the movement of the target reference object in the image can be determined from the first yaw angle and the position information of the first center point in the first image, together with the pre-saved initial rule frame and the second yaw angle and position information of the second center point in the second image, and the target rule frame is obtained by correcting the initial rule frame based on this movement, thereby providing a technical solution for correcting the rule frame.
An embodiment of the present invention further provides a computer-readable storage medium in which a computer program executable by an electronic device is stored; when the program runs on the electronic device, the electronic device is caused to execute the following steps:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
acquiring an initial rule frame in a second image which is saved in advance, and a second yaw angle and position information of a second center point of the target reference object in the second image;
determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
Based on the same inventive concept, embodiments of the present invention further provide a computer-readable storage medium, and since a principle of solving a problem when a processor executes a computer program stored in the computer-readable storage medium is similar to rule block correction, implementation of the computer program stored in the computer-readable storage medium by the processor may refer to implementation of the method, and repeated details are omitted.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
A computer program is stored in the computer-readable storage medium provided in the embodiment of the present invention; when executed by a processor, the program acquires a first image of the monitored area and determines the first yaw angle and the position information of the first center point of the target reference object in the first image; acquires the initial rule frame in the pre-saved second image, and the second yaw angle and the position information of the second center point of the target reference object in the second image; determines the yaw angle difference according to the first yaw angle and the second yaw angle; and determines the corrected target rule frame corresponding to the initial rule frame according to the position information of the first center point, the position information of the second center point and the yaw angle difference. In this way, the movement of the target reference object in the image can be determined, and the target rule frame is obtained by correcting the initial rule frame based on this movement, thereby providing a technical solution for correcting the rule frame.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (13)

1. A method of rule box correction, the method comprising:
acquiring a first image of a monitoring area, and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
acquiring an initial rule frame in a second image which is saved in advance, and a second yaw angle and position information of a second center point of the target reference object in the second image;
determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
2. The method of claim 1, wherein before determining the corrected target rule block corresponding to the initial rule block according to the position information of the first center point, the position information of the second center point, and the difference in yaw angle, the method further comprises:
determining first area information of a target reference object in the first image; acquiring second area information of the target reference object in the second image; determining a depth change ratio according to the first area information and the second area information;
determining the corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value comprises:
determining the distance from each first vertex of the initial rule box to the second central point according to the position information of the second central point;
for each first vertex, determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second central point, an included angle formed by the first vertex, the second central point and a horizontal coordinate axis, the depth change ratio, the yaw angle difference value and the position information of the first central point;
and taking the rule frame formed by each second vertex as a corrected target rule frame.
3. The method of claim 2, wherein prior to determining a depth change ratio based on the first area information and the second area information, the method further comprises:
determining a first horizontal angle and a first vertical angle of a target reference object in the first image, acquiring a second horizontal angle and a second vertical angle of the target reference object in the second image, and determining a horizontal angle difference value and a vertical angle difference value;
the determining a depth change ratio according to the first area information and the second area information includes:
and determining a depth change ratio according to the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value.
4. The method of claim 3, wherein said determining a depth change ratio based on said first area information, said second area information, said horizontal angle difference value, and a vertical angle difference value comprises:
inputting the first area information, the second area information, the horizontal angle difference value and the vertical angle difference value into a preset first formula
[first formula: the angle-compensated depth change ratio qs, expressed in terms of the area ratio w′h′/(wh), the horizontal angle difference α′ − α, the vertical angle difference β′ − β, and the preset compensation parameters a and b]
to determine the depth change ratio;
wherein w′·h′ is the first area information, w·h is the second area information, α′ − α is the horizontal angle difference, β′ − β is the vertical angle difference, a is a preset horizontal angle depth compensation parameter, b is a preset vertical angle depth compensation parameter, and qs is the depth change ratio.
5. The method of claim 2, wherein the determining the position information of the corrected second vertex corresponding to the first vertex according to the distance between the first vertex and the second center point, the included angle formed by the first vertex and the second center point and the horizontal coordinate axis, the depth variation ratio, the difference of the yaw angles, and the position information of the first center point comprises:
inputting the distance between the first vertex and the second center point, the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, the depth change ratio, the yaw angle difference and the position information of the first center point into a preset second formula A′(x1 + ρ·qs·(c + tα)·cos(θ + δ), y1 − ρ·qs·(d + tβ)·sin(θ + δ)) to determine the position information of the corrected second vertex corresponding to the first vertex;
wherein ρ is the distance between the first vertex and the second center point, qs is the depth change ratio, θ is the included angle formed by the first vertex, the second center point and the horizontal coordinate axis, δ is the yaw angle difference, x1 is the horizontal coordinate of the first center point, y1 is the vertical coordinate of the first center point, c is a preset horizontal angle width compensation parameter, d is a preset vertical angle width compensation parameter, tα is a preset horizontal angle scaling parameter, tβ is a preset vertical angle scaling parameter, and A′ is the position information of the corrected second vertex corresponding to the first vertex.
6. The method of claim 3, wherein before determining the corrected target rule block corresponding to the initial rule block according to the position information of the first center point, the position information of the second center point, and the difference in yaw angle, the method further comprises:
and if any one of the yaw angle difference, the horizontal angle difference, the vertical angle difference, the distance between the position information of the first central point and the position information of the second central point and the depth change ratio is larger than a preset threshold value, determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference.
7. The method of claim 3, wherein determining the first horizontal angle, the first vertical angle, the first yaw angle, the position information for the first center point, and the first area information for the target reference in the first image comprises:
inputting the first image into a pre-trained reference object recognition model, and determining a first horizontal angle, a first vertical angle, a first yaw angle, position information of a first central point and first area information of a target reference object in the first image based on the reference object recognition model.
8. The method of claim 7, wherein determining the target reference in the first image comprises:
inputting the first image into a pre-trained reference object recognition model, determining a third horizontal angle, a third vertical angle, a third yaw angle, position information of a third center point and third area information of each reference object in the first image based on the reference object recognition model, determining the priority of each reference object according to the third horizontal angle, the third vertical angle, the third yaw angle, the position information of the third center point and the third area information of each reference object, and selecting the reference object with the highest priority as the target reference object; wherein the smaller each of the third horizontal angle, the third vertical angle and the third yaw angle, the closer the position information of the third center point to the center point of the first image, and the larger the third area information, the higher the priority of the reference object.
9. The method of claim 8, wherein the training process of the reference recognition model comprises:
inputting the third image and a labeled image corresponding to the third image into a reference object recognition model aiming at each third image in a training set, and training the reference object recognition model; and the labeling image is labeled with a fourth horizontal angle, a fourth vertical angle, a fourth yaw angle, position information of a fourth central point and fourth area information of each reference object in the third image.
10. The method of claim 3, wherein prior to determining the corrected target rule box to which the initial rule box corresponds, the method further comprises:
judging whether any one of the horizontal angle difference value, the vertical angle difference value, the yaw angle difference value, the distance between the first central point and the second central point and the difference value between the first area information and the second area information exceeds a preset alarm threshold value, if so, outputting alarm prompt information, and if not, performing the subsequent step of determining a corrected target rule frame corresponding to the initial rule frame;
after determining the corrected target rule box corresponding to the initial rule box, the method further includes:
and judging whether the corrected target rule frame has a pixel point of the negative coordinate or not, and if so, outputting alarm prompt information.
11. A rulebox correction apparatus, characterized in that the apparatus comprises:
the first determining module is used for acquiring a first image of a monitored area and determining a first yaw angle and position information of a first central point of a target reference object in the first image;
the acquisition module is used for acquiring an initial rule frame in a second image which is saved in advance, and a second yaw angle and position information of a second center point of the target reference object in the second image;
the second determining module is used for determining a yaw angle difference value according to the first yaw angle and the second yaw angle; and determining a corrected target rule frame corresponding to the initial rule frame according to the position information of the first central point, the position information of the second central point and the yaw angle difference value.
12. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 10 when executing a program stored in the memory.
13. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-10.
CN202011318414.0A 2020-11-23 2020-11-23 Rule box correction method and device, electronic equipment and storage medium Pending CN112418086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011318414.0A CN112418086A (en) 2020-11-23 2020-11-23 Rule box correction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112418086A true CN112418086A (en) 2021-02-26

Family

ID=74778382

Country Status (1)

Country Link
CN (1) CN112418086A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762272A (en) * 2021-09-10 2021-12-07 北京精英路通科技有限公司 Road information determination method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010186265A (en) * 2009-02-10 2010-08-26 Nippon Telegr & Teleph Corp <Ntt> Camera calibration device, camera calibration method, camera calibration program, and recording medium with the program recorded therein
CN102263900A (en) * 2010-05-26 2011-11-30 佳能株式会社 Image processing apparatus and image processing method
CN107113376B (en) * 2015-07-31 2019-07-19 深圳市大疆创新科技有限公司 A kind of image processing method, device and video camera
CN110211186A (en) * 2018-02-28 2019-09-06 Aptiv技术有限公司 For calibrating camera relative to the position of calibrating pattern and the method for orientation
CN110569838A (en) * 2019-04-25 2019-12-13 内蒙古工业大学 Autonomous landing method of quad-rotor unmanned aerial vehicle based on visual positioning
CN111256663A (en) * 2018-12-03 2020-06-09 北京世纪朝阳科技发展有限公司 Centering calibration method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination