CN114078124A - Camera detection method, detection device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114078124A
Authority
CN
China
Prior art keywords: image, camera, coordinate, current position, coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111418822.8A
Other languages
Chinese (zh)
Inventor
廖华
陆世豪
丁坤
班杰雄
袁卫义
邓朝翥
潘鹏
李更达
梁阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanning Monitoring Center of Extra High Voltage Power Transmission Co
Original Assignee
Nanning Monitoring Center of Extra High Voltage Power Transmission Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanning Monitoring Center of Extra High Voltage Power Transmission Co
Priority to CN202111418822.8A
Publication of CN114078124A
Legal status: Pending

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0004 Industrial image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 5/10 Image enhancement or restoration using non-spatial domain filtering (under G06T 5/00 Image enhancement or restoration)
    • G06T 7/13 Edge detection (under G06T 7/10 Segmentation; edge detection)
    • G06T 2207/20061 Hough transform (under G06T 2207/20 Special algorithmic details; G06T 2207/20048 Transform domain processing)
    • G06T 2207/30168 Image quality inspection (under G06T 2207/30 Subject of image; context of image processing)


Abstract

The application relates to a camera detection method, a detection device, computer equipment and a storage medium. The method comprises the following steps: acquiring a preset point image of a camera and a current position image of the camera; judging whether the quality of the current position image is qualified; if it is qualified, performing edge detection on the preset point image to obtain a first image and on the current position image to obtain a second image; performing Hough line transformation on the first image to obtain a third image and on the second image to obtain a fourth image; determining the two end point coordinates of the longest line in the third image and in the fourth image; and finally judging whether the camera has deviated based on the two end point coordinates of the longest line in the third image and the two end point coordinates of the longest line in the fourth image. The detection method first checks the quality of the image acquired by the camera and, if that quality is qualified, then checks the camera's acquisition angle, thereby detecting both the quality of the acquired image and whether the camera's position has deviated.

Description

Camera detection method, detection device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of power system device detection, and in particular, to a camera detection method, a camera detection device, a computer device, and a storage medium.
Background
With the intelligent development of transformer substations, more and more image processing and image recognition technologies based on images acquired by cameras in the substation are used in substation operations. The quality of the images acquired by a camera therefore directly affects subsequent image processing or image recognition. In addition, the acquisition angle of the camera also affects image processing in the downstream power detection system.
In the conventional technology, detection of a camera in a power detection system is limited to checking whether the camera can capture an image at all; neither the quality of the acquired image nor a deviation in the camera's acquisition angle can be detected.
Disclosure of Invention
In view of the above, it is necessary to provide a camera detection method, a detection apparatus, a computer device and a computer-readable storage medium capable of detecting both the quality of the acquired image and whether the camera's acquisition angle has deviated.
In a first aspect, the present application provides a camera detection method. The method comprises the following steps:
acquiring a preset point image of a camera and a current position image of the camera;
judging whether the quality of the current position image is qualified or not;
if the current position image is qualified, performing edge detection on the preset point image to obtain a first image, and performing edge detection on the current position image to obtain a second image;
carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image;
determining two endpoint coordinates of the longest line in the third image;
determining two endpoint coordinates of the longest line in the fourth image;
and judging whether the camera deviates or not based on the coordinates of the two end points of the longest line in the third image and the coordinates of the two end points of the longest line in the fourth image.
In one embodiment, the two end point coordinates of the longest line in the third image are a first coordinate and a second coordinate, the two end point coordinates of the longest line in the fourth image are a third coordinate and a fourth coordinate, the first coordinate position point and the third coordinate position point are located on the same side of the perpendicular bisector of the longest line in the third image, and the step of determining whether the camera has a deviation based on the two end point coordinates of the longest line in the third image and the two end point coordinates of the longest line in the fourth image includes:
calculating a first coordinate offset of the first coordinate and the third coordinate and a second coordinate offset of the second coordinate and the fourth coordinate;
if the first coordinate offset and the second coordinate offset are both (0,0), judging that the camera does not offset;
and if either the first coordinate offset or the second coordinate offset is not (0,0), judging that the camera has shifted.
In one embodiment, the step of determining the coordinates of the two end points of the longest line in the third image comprises:
finding the longest line in the third image based on the pythagorean theorem;
defining a first coordinate system;
determining the coordinates of two end points of the longest line in the third image under the first coordinate system.
In one embodiment, the step of determining coordinates of two end points of the longest line in the fourth image includes:
finding the longest line in the fourth image based on the Pythagorean theorem;
defining a first coordinate system;
determining the coordinates of two end points of the longest line in the fourth image under the first coordinate system.
In one embodiment, before acquiring the image of the camera preset point and the image of the current position of the camera, the method further comprises the following steps:
and acquiring the basic information of each camera based on an onvif protocol.
In one embodiment, the step of determining whether the image quality at the current position is qualified includes:
judging whether the definition of the current position image is qualified or not;
judging whether the brightness of the current position image is qualified or not;
and if the definition and the brightness are qualified, judging that the image quality of the current position is qualified.
In one embodiment, the method further comprises the following step:
if the image quality at the current position is not qualified, sending out an alarm signal.
In a second aspect, the present application further provides a camera detection device. The above-mentioned device includes:
the image acquisition module is used for acquiring a preset point image of the camera and a current position image of the camera;
the image quality judging module is used for judging whether the image quality at the current position is qualified or not;
the edge detection module is used for performing edge detection on the preset point image to obtain a first image and on the current position image to obtain a second image if the current position image is qualified;
the Hough line transformation module is used for carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image;
the first coordinate determination module is used for determining two endpoint coordinates of the longest line in the third image;
the second coordinate determination module is used for determining the coordinates of two end points of the longest line in the fourth image;
and the offset judging module is used for judging whether the camera is offset or not based on the two endpoint coordinates of the longest line in the third image and the two endpoint coordinates of the longest line in the fourth image.
In a third aspect, the present application also provides a computer device. The computer equipment comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the camera detection method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the above-described camera detection method.
According to the camera detection method, the camera detection device, the computer equipment and the storage medium, whether the quality of the image at the current position is qualified or not is judged by acquiring the preset point image of the camera and the current position image of the camera, if the quality of the image at the current position is qualified, the preset point image is subjected to edge detection to obtain a first image, and the current position image is subjected to edge detection to obtain a second image; carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image; and finally, judging whether the camera deviates or not based on the two end point coordinates of the longest line in the third image and the two end point coordinates of the longest line in the fourth image. According to the camera detection method, the image quality acquired by the camera is detected firstly, and the acquisition angle of the camera is detected after the image quality is detected to be qualified, so that the detection of the image quality acquired by the camera and the detection of whether the position of the camera deviates or not are realized.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for detecting a camera in one embodiment;
fig. 2 is a schematic flow chart illustrating a step of determining whether the camera has shifted based on coordinates of two end points of the longest line in the third image and coordinates of two end points of the longest line in the fourth image in one embodiment;
FIG. 3 is a flowchart illustrating the steps of determining the coordinates of the two endpoints of the longest line in the third image in one embodiment;
FIG. 4 is a flowchart illustrating the steps of determining the coordinates of the two endpoints of the longest line in the fourth image in one embodiment;
FIG. 5 is a schematic flow chart of a camera inspection method in another embodiment;
FIG. 6 is a block diagram showing the structure of a camera detection device according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a camera detection method, including the steps of:
and S200, acquiring a preset point image of the camera and a current position image of the camera.
Each camera is provided with a preset point position, and the preset point position is the position where the camera is to be located. The preset point image is an image recorded when the camera is at a preset point position.
And S300, judging whether the image quality at the current position is qualified.
And S500, if the image is qualified, performing edge detection on the preset point image to obtain a first image, and performing edge detection on the current position image to obtain a second image.
A Canny function in OpenCV (Open Source Computer Vision Library) is used to perform edge detection on the preset point image to obtain the first image, and on the current position image to obtain the second image. OpenCV is a cross-platform computer vision library that can be used to develop real-time image processing, computer vision and pattern recognition programs; the Canny function is the OpenCV function that implements edge detection.
After edge detection is performed on the preset point image and the current position image, the method further comprises the following step:
converting the preset point image and the current position image into grayscale images.
Specifically, the cvtColor function in OpenCV is used to convert the edge-detected preset point image into a grayscale image to obtain the first image, and to convert the edge-detected current position image into a grayscale image to obtain the second image.
S600, carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image.
The hough line transform is a transform for detecting a straight line. The third image comprises straight lines in the first image detected by Hough line transformation, and the fourth image comprises straight lines in the second image detected by Hough line transformation.
Performing Hough line transformation on the first image to obtain the third image comprises applying the HoughLines function in OpenCV to the first image, and performing Hough line transformation on the second image to obtain the fourth image comprises applying the HoughLines function in OpenCV to the second image.
S700, determining the coordinates of two end points of the longest line in the third image.
And S800, determining two end point coordinates of the longest line in the fourth image.
When the camera position is only slightly offset from the preset point, the longest line in the third image and the longest line in the fourth image can be considered to be the same object.
And S900, judging whether the camera deviates or not based on the coordinates of the two end points of the longest line in the third image and the coordinates of the two end points of the longest line in the fourth image.
The camera detection method comprises the steps of judging whether the quality of a current position image is qualified or not by acquiring a preset point image of a camera and the current position image of the camera, carrying out edge detection on the preset point image to obtain a first image if the quality of the current position image is qualified, and carrying out edge detection on the current position image to obtain a second image; carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image; and finally, judging whether the camera deviates or not based on the two end point coordinates of the longest line in the third image and the two end point coordinates of the longest line in the fourth image. According to the camera detection method, the image quality acquired by the camera is detected firstly, and the acquisition angle of the camera is detected after the image quality is detected to be qualified, so that the detection of the image quality acquired by the camera and the detection of whether the position of the camera deviates or not are realized.
As shown in fig. 2, in an embodiment, the step S900 includes:
s910, calculating a first coordinate offset of the first coordinate and the third coordinate and a second coordinate offset of the second coordinate and the fourth coordinate.
The first coordinate point and the third coordinate point lie on the same side of the perpendicular bisectors of the longest lines in the third image and the fourth image respectively, and the second coordinate point and the fourth coordinate point likewise lie on the same side of those perpendicular bisectors; each endpoint of the longest line in the third image is thereby paired with its counterpart in the fourth image.
The third coordinate is subtracted from the first coordinate to obtain the first coordinate offset, and the fourth coordinate is subtracted from the second coordinate to obtain the second coordinate offset; both offsets are themselves in coordinate form.
For example, the first coordinate is (X1, Y1) and the third coordinate is (X2, Y2), then the first offset may be represented as (X1-X2, Y1-Y2). The second coordinate is (X3, Y3) and the fourth coordinate is (X4, Y4), then the second offset may be represented as (X3-X4, Y3-Y4).
S920, if the first coordinate offset and the second coordinate offset are both (0,0), judging that the camera does not offset; and if any one of the first coordinate offset and the second coordinate offset is not (0,0), judging that the camera is offset.
For example, when the first offset is (X1-X2, Y1-Y2) and the second offset is (X3-X4, Y3-Y4), the camera is judged not to have shifted only when X1-X2 = 0, Y1-Y2 = 0, X3-X4 = 0 and Y3-Y4 = 0.
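The offset comparison of steps S910-S920 can be sketched in a few lines of plain Python; the function name and tuple layout are illustrative only:

```python
def judge_camera_offset(first, third, second, fourth):
    """Sketch of steps S910-S920. `first` and `second` are the endpoint
    coordinates of the longest line in the third (preset point) image;
    `third` and `fourth` are the matched endpoints of the longest line
    in the fourth (current position) image. Each is an (x, y) tuple.

    Returns (first_offset, second_offset, has_shifted).
    """
    first_offset = (first[0] - third[0], first[1] - third[1])       # (X1-X2, Y1-Y2)
    second_offset = (second[0] - fourth[0], second[1] - fourth[1])  # (X3-X4, Y3-Y4)
    # The camera is judged not to have shifted only when both offsets are (0, 0).
    has_shifted = not (first_offset == (0, 0) and second_offset == (0, 0))
    return first_offset, second_offset, has_shifted
```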
As shown in fig. 3, in one embodiment, the step S700 includes:
and S710, finding the longest line in the third image based on the Pythagorean theorem.
S720, a first coordinate system is defined.
And S730, determining two end point coordinates of the longest line in the third image under the first coordinate system.
And after a coordinate system is constructed on the third image, the lengths of all lines can be compared based on the pythagorean theorem to find out the longest line.
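The longest-line search of steps S710-S730 amounts to comparing segment lengths given by the Pythagorean theorem; a minimal sketch (the segment data layout is assumed) is:

```python
import math

def longest_line(segments):
    """Sketch of steps S710-S730: pick the longest segment by comparing
    squared lengths (the Pythagorean theorem). Each segment is
    ((x1, y1), (x2, y2)) in the shared first coordinate system.
    Returns the longest segment and its length.
    """
    def sq_len(seg):
        (x1, y1), (x2, y2) = seg
        return (x2 - x1) ** 2 + (y2 - y1) ** 2  # squared length; no sqrt needed to compare
    best = max(segments, key=sq_len)
    return best, math.sqrt(sq_len(best))
```

Comparing squared lengths avoids the square root until the final result, which does not change which segment is longest.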
As shown in fig. 4, in one embodiment, the step S800 includes:
s810, finding the longest line in the fourth image based on the Pythagorean theorem.
S820, a first coordinate system is defined.
And S830, determining two end point coordinates of the longest line in the fourth image under the first coordinate system.
And after a coordinate system is constructed on the fourth image, the lengths of all lines can be compared based on the pythagorean theorem to find out the longest line. Whether two lines are overlapped or not can be judged by comparing the coordinates of the two end points of the longest line in the third image and the coordinates of the two end points of the longest line in the fourth image under the same first coordinate system, when the two lines are overlapped, the camera does not move, the camera is located at a preset point, and if the two lines are not overlapped, the camera moves, so that the camera offset detection is realized.
As shown in fig. 5, in one embodiment, step S200 is preceded by:
and S100, acquiring the basic information of each camera based on the onvif protocol.
Here, ONVIF (Open Network Video Interface Forum) is the open network video interface forum protocol. It defines a general protocol for information exchange among network video devices, covering device discovery, device configuration, events, PTZ control, video analytics, real-time streaming media and other functions.
The camera basic information comprises the camera serial number, the camera version number, the IP address and similar items, and can therefore be used to distinguish the different cameras in the system.
The camera basic information can be acquired by automatic scanning based on the ONVIF protocol; after scanning is finished, staff can retrieve the basic information of each camera through the ONVIF protocol using that camera's password.
Based on the camera basic information, when the preset point image and the current position image are stored, the image and the camera basic information can be matched, and a worker can conveniently inquire the current position image and the preset point image based on the camera basic information. In addition, unified management of the cameras by the staff can be realized through acquisition of the basic information of the cameras.
As shown in fig. 5, in one embodiment, step S300 includes:
and S310, judging whether the image definition of the current position is qualified.
The definition of the current position image is judged based on the Laplacian operator.
Specifically, the variance of the image after filtering with the Laplacian operator reflects its definition: the sharper the image, the larger the variance. A definition threshold is therefore set, the current position image is filtered with the Laplacian operator, the variance of the filtered image is calculated, and if the variance is greater than the definition threshold the image is judged to be clear.
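The Laplacian-variance definition check can be illustrated with a pure-Python sketch; in practice OpenCV's cv2.Laplacian would do the filtering, and the 4-neighbour kernel used here is one common choice rather than the one the patent specifies:

```python
def laplacian_variance(gray):
    """Sharpness score as in step S310: variance of the image after
    filtering with a 4-neighbour Laplacian kernel. `gray` is a 2-D list
    of grayscale values. A sharper image yields a larger variance.
    """
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: -4 * centre plus the four neighbours
            responses.append(
                -4 * gray[y][x] + gray[y - 1][x] + gray[y + 1][x]
                + gray[y][x - 1] + gray[y][x + 1]
            )
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

An image would then be judged clear when this variance exceeds the chosen definition threshold.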
And S320, judging whether the brightness of the image at the current position is qualified.
Specifically, the mean value of the current position image on the grayscale map is calculated; when the brightness of the current position image is abnormal, this mean deviates from a preset mean point. Therefore, when the calculated mean of the current position image deviates from the preset mean point, the brightness of the current position image is judged to be abnormal.
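A minimal sketch of the brightness check, assuming a preset mean point of 128 and an allowed deviation of 60; the patent names neither value, so both are illustrative:

```python
def brightness_ok(gray, target=128, tolerance=60):
    """Brightness check as in step S320: the mean grayscale value is
    compared against a preset mean point. `gray` is a 2-D list of
    grayscale values; `target` and `tolerance` are assumed values.
    Returns (is_qualified, mean).
    """
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return abs(mean - target) <= tolerance, mean
```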
And S330, if the definition and the brightness are qualified, judging that the image quality at the current position is qualified.
In one embodiment, step S300 further includes:
and judging whether the current position image has color cast.
Specifically, the current position image is converted into the CIE (Commission Internationale de l'Éclairage) Lab space, the mean values of its a component and b component in CIE Lab space are calculated, and if both means deviate from the preset mean origin, the current position image is judged to have a color cast.
In one embodiment, step S300 further includes:
and judging whether the current position image is a black screen.
Specifically, the current position image is converted to grayscale and the proportion of dark pixels among all pixels is measured, from which it is inferred whether the camera that acquired the current position image is showing a black screen. Pixels with a gray value in the range 0-19 are counted as dark pixels.
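The black-screen check can be sketched as follows; the dark-pixel range 0-19 comes from the text, while the 95% ratio cut-off is an assumed value the patent does not give:

```python
def is_black_screen(gray, dark_threshold=19, ratio_threshold=0.95):
    """Black-screen check: measure the share of dark pixels (gray value
    0-19, per the text) in a grayscale image given as a 2-D list.
    `ratio_threshold` is an assumed cut-off, not taken from the patent.
    """
    flat = [v for row in gray for v in row]
    dark = sum(1 for v in flat if v <= dark_threshold)
    return dark / len(flat) >= ratio_threshold
```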
If any one of the definition, brightness, color cast or black screen checks is unqualified, the image quality at the current position is judged to be unqualified and the remaining checks are skipped.
In one embodiment, as shown in fig. 5, step S300 is followed by the steps of:
s400, if the image quality of the current position is not qualified, alarm information is sent out.
Wherein, the staff can in time adjust or overhaul the camera that acquires the current position image according to alarm information, guarantees the image quality that the camera acquireed.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential: they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a camera detection device for realizing the camera detection method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so specific limitations in one or more embodiments of the camera detection device provided below can be referred to the limitations on the camera detection method in the foregoing, and details are not repeated herein.
In one embodiment, as shown in fig. 6, there is provided a camera detection apparatus including: an image acquisition module 200, an image quality determination module 300, an edge detection module 500, a hough line transformation module 600, a first coordinate determination module 700, a second coordinate determination module 800, and an offset determination module 900, wherein:
and the image acquisition module 200 is configured to acquire a preset point image of the camera and a current position image of the camera.
And an image quality determining module 300, configured to determine whether the image quality at the current position is qualified.
And the edge detection module 500 is configured to perform edge detection on the preset point image to obtain a first image and perform edge detection on the current position image to obtain a second image if the current position image is qualified.
And a hough line transformation module 600, configured to perform hough line transformation on the first image to obtain a third image, and perform hough line transformation on the second image to obtain a fourth image.
A first coordinate determination module 700, configured to determine coordinates of two end points of a longest line in the third image.
A second coordinate determination module 800, configured to determine coordinates of two end points of a longest line in the fourth image.
An offset determining module 900, configured to determine whether the camera is offset based on the coordinates of the two end points of the longest line in the third image and the coordinates of the two end points of the longest line in the fourth image.
In one embodiment, the offset determining module 900 includes:
and the offset acquisition unit is used for calculating a first coordinate offset of the first coordinate and the third coordinate and a second coordinate offset of the second coordinate and the fourth coordinate.
The offset determination unit is used for determining that the camera does not offset if the first coordinate offset and the second coordinate offset are both (0, 0); and if any one of the first coordinate offset and the second coordinate offset is not (0,0), judging that the camera is offset.
In one embodiment, the first coordinate determination module 700 includes:
and the line determining unit is used for finding the longest line in the third image based on the pythagorean theorem.
And the coordinate system definition unit is used for defining a first coordinate system.
And the coordinate determination unit is used for determining the coordinates of two end points of the longest line in the third image under the first coordinate system.
In one embodiment, the second coordinate determination module 800 includes:
and the line determining unit is used for finding the longest line in the fourth image based on the pythagorean theorem.
And the coordinate system definition unit is used for defining a first coordinate system.
And the coordinate determination unit is used for determining the coordinates of two end points of the longest line in the fourth image under the first coordinate system.
In one embodiment, the image quality determining module 300 includes:
and the definition judging unit is used for judging whether the definition of the image at the current position is qualified or not.
And the brightness judging unit is used for judging whether the brightness of the image at the current position is qualified or not.
And the final judging unit is used for judging that the image quality at the current position is qualified if the definition and the brightness are qualified.
In one embodiment, the camera detection device further includes:
and the alarm module is used for sending alarm information if the image quality of the current position is unqualified.
All or part of each module in the camera detection device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the preset point image and the current position image. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a camera detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
S200, acquire a preset point image of the camera and an image of the camera's current position.
S300, judge whether the image quality at the current position is qualified.
S500, if the image quality is qualified, perform edge detection on the preset point image to obtain a first image, and perform edge detection on the current position image to obtain a second image.
S600, perform a Hough line transformation on the first image to obtain a third image, and perform a Hough line transformation on the second image to obtain a fourth image.
S700, determine the coordinates of the two end points of the longest line in the third image.
S800, determine the coordinates of the two end points of the longest line in the fourth image.
S900, judge whether the camera has shifted based on the two endpoint coordinates of the longest line in the third image and the two endpoint coordinates of the longest line in the fourth image.
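The voting core of the Hough line transformation used in step S600 can be sketched in pure Python (an illustrative toy, not the patent's implementation; a production system would typically rely on an image processing library, and the names `hough_lines` and `edge_pixels` below are hypothetical):

```python
import math

def hough_lines(points, theta_steps=180):
    """Accumulate (rho, theta-index) votes for a set of edge pixels.

    `points` is a list of (x, y) coordinates of edge pixels, e.g. the
    white pixels of an edge map produced by step S500.  Each pixel
    votes for every candidate line (rho, theta) passing through it.
    """
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = t * math.pi / theta_steps
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# A horizontal row of edge pixels at y == 2: every pixel votes for the
# line rho == 2 at theta index 90 (theta == 90 degrees), so that cell
# collects one vote per pixel.
edge_pixels = [(x, 2) for x in range(10)]
votes = hough_lines(edge_pixels)
```

Cells with many votes correspond to lines supported by many edge pixels; extracting the strongest segments from this accumulator yields the line images (the "third" and "fourth" images) used in steps S700 to S900.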
In one embodiment, the processor, when executing the computer program, further performs the steps of:
S910, calculate a first coordinate offset between the first coordinate and the third coordinate, and a second coordinate offset between the second coordinate and the fourth coordinate.
S920, if the first coordinate offset and the second coordinate offset are both (0,0), judge that the camera has not shifted; if either of the two offsets is not (0,0), judge that the camera has shifted.
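The S910/S920 decision rule can be sketched as follows (function and parameter names are illustrative, not taken from the patent):

```python
def camera_shifted(first, second, third, fourth):
    """Apply the S910/S920 rule.

    `first`/`second` are the endpoint coordinates of the longest line
    in the third (reference) image; `third`/`fourth` are the matching
    endpoints in the fourth (current) image, all as (x, y) tuples in
    the shared first coordinate system.
    """
    # S910: the two coordinate offsets.
    first_offset = (third[0] - first[0], third[1] - first[1])
    second_offset = (fourth[0] - second[0], fourth[1] - second[1])
    # S920: no shift only when both offsets are exactly (0, 0).
    return not (first_offset == (0, 0) and second_offset == (0, 0))
```

Identical endpoint pairs yield `False` (no shift); any displacement of either endpoint yields `True`.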
In one embodiment, the processor, when executing the computer program, further performs the steps of:
S710, find the longest line in the third image based on the Pythagorean theorem.
S720, define a first coordinate system.
S730, determine the coordinates of the two end points of the longest line in the third image under the first coordinate system.
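Steps S710 to S730 amount to selecting the segment whose endpoint differences give the greatest Euclidean length, i.e. applying the Pythagorean theorem. A minimal pure-Python sketch (the function names and the (x1, y1, x2, y2) segment format are illustrative assumptions, e.g. matching the output of a probabilistic Hough transform):

```python
import math

def longest_segment(segments):
    """Return the (x1, y1, x2, y2) segment with the greatest length,
    where length = sqrt((x2 - x1)**2 + (y2 - y1)**2)."""
    return max(segments, key=lambda s: math.hypot(s[2] - s[0], s[3] - s[1]))

def endpoints(segment):
    """Split a segment into its two endpoint coordinate pairs."""
    x1, y1, x2, y2 = segment
    return (x1, y1), (x2, y2)
```

The same routine serves steps S810 to S830 on the fourth image, provided both images share the same first coordinate system so the endpoint coordinates are comparable.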
In one embodiment, the processor, when executing the computer program, further performs the steps of:
S810, find the longest line in the fourth image based on the Pythagorean theorem.
S820, define a first coordinate system.
S830, determine the coordinates of the two end points of the longest line in the fourth image under the first coordinate system.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
S310, judge whether the definition of the image at the current position is qualified.
S320, judge whether the brightness of the image at the current position is qualified.
S330, if both the definition and the brightness are qualified, judge that the image quality at the current position is qualified.
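The embodiment leaves the concrete definition (sharpness) and brightness metrics open. One common choice, assumed here purely for illustration, is the mean gray level for brightness and the variance of local intensity differences as a crude sharpness proxy; all function names and thresholds below are hypothetical:

```python
def brightness_ok(gray, low=40, high=220):
    """Mean gray level of a 2-D list of 0-255 values must fall
    inside an acceptable band."""
    total = sum(sum(row) for row in gray)
    count = sum(len(row) for row in gray)
    return low <= total / count <= high

def sharpness_ok(gray, threshold=10.0):
    """Variance of horizontal intensity differences: blurred or
    flat images score near zero, images with edges score high."""
    diffs = [row[i + 1] - row[i] for row in gray for i in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return var >= threshold

def quality_ok(gray):
    # S330: qualified only when both checks pass.
    return sharpness_ok(gray) and brightness_ok(gray)
```

A uniform mid-gray frame passes the brightness check but fails the sharpness check, so it would trigger the S400 alarm path; a frame containing a strong edge passes both.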
In one embodiment, the processor, when executing the computer program, further performs the steps of:
S400, if the image quality at the current position is unqualified, send out alarm information.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
S200, acquire a preset point image of the camera and an image of the camera's current position.
S300, judge whether the image quality at the current position is qualified.
S500, if the image quality is qualified, perform edge detection on the preset point image to obtain a first image, and perform edge detection on the current position image to obtain a second image.
S600, perform a Hough line transformation on the first image to obtain a third image, and perform a Hough line transformation on the second image to obtain a fourth image.
S700, determine the coordinates of the two end points of the longest line in the third image.
S800, determine the coordinates of the two end points of the longest line in the fourth image.
S900, judge whether the camera has shifted based on the two endpoint coordinates of the longest line in the third image and the two endpoint coordinates of the longest line in the fourth image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
S910, calculate a first coordinate offset between the first coordinate and the third coordinate, and a second coordinate offset between the second coordinate and the fourth coordinate.
S920, if the first coordinate offset and the second coordinate offset are both (0,0), judge that the camera has not shifted; if either of the two offsets is not (0,0), judge that the camera has shifted.
In one embodiment, the computer program when executed by the processor further performs the steps of:
S710, find the longest line in the third image based on the Pythagorean theorem.
S720, define a first coordinate system.
S730, determine the coordinates of the two end points of the longest line in the third image under the first coordinate system.
In one embodiment, the computer program when executed by the processor further performs the steps of:
S810, find the longest line in the fourth image based on the Pythagorean theorem.
S820, define a first coordinate system.
S830, determine the coordinates of the two end points of the longest line in the fourth image under the first coordinate system.
In one embodiment, the computer program when executed by the processor further performs the steps of:
S310, judge whether the definition of the image at the current position is qualified.
S320, judge whether the brightness of the image at the current position is qualified.
S330, if both the definition and the brightness are qualified, judge that the image quality at the current position is qualified.
In one embodiment, the computer program when executed by the processor further performs the steps of:
S400, if the image quality at the current position is unqualified, send out alarm information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A camera detection method, the method comprising:
acquiring a preset point image of a camera and a current position image of the camera;
judging whether the image quality of the current position is qualified or not;
if the image quality is qualified, performing edge detection on the preset point image to obtain a first image, and performing edge detection on the current position image to obtain a second image;
carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image;
determining two endpoint coordinates of the longest line in the third image;
determining two endpoint coordinates of the longest line in the fourth image;
and judging whether the camera has shifted based on the coordinates of the two end points of the longest line in the third image and the coordinates of the two end points of the longest line in the fourth image.
2. The method according to claim 1, wherein the two endpoint coordinates of the longest line in the third image are a first coordinate and a second coordinate, the two endpoint coordinates of the longest line in the fourth image are a third coordinate and a fourth coordinate, the first coordinate point and the third coordinate point are located on the same side of the perpendicular of the longest line in the third image and the fourth image respectively, and the step of judging whether the camera has shifted based on the two endpoint coordinates of the longest line in the third image and the two endpoint coordinates of the longest line in the fourth image comprises:
calculating a first coordinate offset between the first coordinate and the third coordinate and a second coordinate offset between the second coordinate and the fourth coordinate;
if the first coordinate offset and the second coordinate offset are both (0,0), judging that the camera has not shifted; and if either of the first coordinate offset and the second coordinate offset is not (0,0), judging that the camera has shifted.
3. The method of claim 1, wherein the step of determining the coordinates of the two endpoints of the longest line in the third image comprises:
finding the longest line in the third image based on the pythagorean theorem;
defining a first coordinate system;
determining the coordinates of two end points of the longest line in the third image under the first coordinate system.
4. The method of claim 1, wherein the step of determining coordinates of two end points of the longest line in the fourth image comprises:
finding the longest line in the fourth image based on the pythagorean theorem;
defining a first coordinate system;
determining the coordinates of two end points of the longest line in the fourth image under the first coordinate system.
5. The method according to claim 1, wherein before acquiring the preset point image of the camera and the current position image of the camera, the method further comprises:
acquiring basic information of each camera based on the ONVIF protocol.
6. The method of claim 1, wherein the step of determining whether the current position image quality is acceptable comprises:
judging whether the image definition of the current position is qualified or not;
judging whether the brightness of the current position image is qualified or not;
and if the definition and the brightness are qualified, judging that the image quality of the current position is qualified.
7. The method according to any one of claims 1 to 6, wherein after the judging whether the image quality at the current position is qualified, the method further comprises:
if the image quality at the current position is unqualified, sending alarm information.
8. A camera inspection device, the device comprising:
the image acquisition module is used for acquiring a preset point image of the camera and a current position image of the camera;
the image quality judging module is used for judging whether the image quality at the current position is qualified or not;
the edge detection module is used for performing edge detection on the preset point image to obtain a first image and performing edge detection on the current position image to obtain a second image if the image quality is qualified;
the Hough line transformation module is used for carrying out Hough line transformation on the first image to obtain a third image and carrying out Hough line transformation on the second image to obtain a fourth image;
the first coordinate determination module is used for determining two endpoint coordinates of the longest line in the third image;
the second coordinate determination module is used for determining the coordinates of two end points of the longest line in the fourth image;
and the offset judging module is used for judging whether the camera is offset or not based on the two endpoint coordinates of the longest line in the third image and the two endpoint coordinates of the longest line in the fourth image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111418822.8A 2021-11-26 2021-11-26 Camera detection method, detection device, computer equipment and storage medium Pending CN114078124A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111418822.8A CN114078124A (en) 2021-11-26 2021-11-26 Camera detection method, detection device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111418822.8A CN114078124A (en) 2021-11-26 2021-11-26 Camera detection method, detection device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114078124A true CN114078124A (en) 2022-02-22

Family

ID=80284237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111418822.8A Pending CN114078124A (en) 2021-11-26 2021-11-26 Camera detection method, detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114078124A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116522417A (en) * 2023-07-04 2023-08-01 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment
CN116522417B (en) * 2023-07-04 2023-09-19 广州思涵信息科技有限公司 Security detection method, device, equipment and storage medium for display equipment

Similar Documents

Publication Publication Date Title
US11120565B2 (en) Image registration method, image registration device and storage medium
Mahdian et al. Blind methods for detecting image fakery
US8059154B1 (en) Systems and methods for automatic camera calibration
CN114078124A (en) Camera detection method, detection device, computer equipment and storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN112258507A (en) Target object detection method and device of internet data center and electronic equipment
CN114078161A (en) Automatic deviation rectifying method and device for preset position of camera and computer equipment
CN113838003A (en) Speckle detection method, device, medium, and computer program product for image
CN116168345B (en) Fire detection method and related equipment
CN112037128A (en) Panoramic video splicing method
CN116824166A (en) Transmission line smoke identification method, device, computer equipment and storage medium
CN116228861A (en) Probe station marker positioning method, probe station marker positioning device, electronic equipment and storage medium
CN112541853A (en) Data processing method, device and equipment
US20180150966A1 (en) System and method for estimating object size
CN116152166A (en) Defect detection method and related device based on feature correlation
CN111667539B (en) Camera calibration and plane measurement method
CN114004839A (en) Image segmentation method and device of panoramic image, computer equipment and storage medium
JP2022030615A (en) Tint correction system and tint correction method
CN111429450A (en) Corner point detection method, system, equipment and storage medium
CN116974671B (en) Vector surface pickup method and device, electronic equipment and storage medium
CN114066921A (en) Camera correction method and device, computer equipment and storage medium
CN115146686B (en) Method, device, equipment and medium for determining installation position of target object
CN117392161A (en) Calibration plate corner point for long-distance large perspective distortion and corner point number determination method
CN116883928A (en) Power operation monitoring alarm method and substation operation safety monitoring robot
TWI643498B (en) Method and image capture device for computing a lens angle in a single image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination