Disclosure of Invention
In order to solve the problems of bulky equipment, complex installation, high cost and inaccurate detection in the conventional automatic detection of the pantograph horn, the invention aims to provide a real-time detection method, a real-time detection device, computer equipment and a storage medium for the pantograph horn, which, compared with the conventional automatic detection mode, can simplify the detection system, facilitate installation and arrangement and reduce hardware cost, while preventing the detection result from being interfered with by the external environment, ensuring the accuracy of the detection result, reducing the occurrence of false alarms and avoiding extra rechecking workload for maintainers, and which are particularly suitable for scenes in urban rail transit tunnels.
In a first aspect, the present invention provides a real-time detection method for a pantograph horn, including:
acquiring a video collected by a monitoring camera in real time, wherein the monitoring camera is mounted on the roof of the vehicle such that the camera view covers the area where the pantograph is located;
performing histogram equalization processing on the latest image in the video to obtain a sample image to be detected;
importing the sample image to be detected into a trained target detection model, and identifying the region position of the horn of the pantograph in the sample image to be detected;
cropping a horn image from the sample image to be detected according to the region position;
performing binarization processing on the horn image to obtain a binarized image;
performing opening and closing operation processing on the binarized image to obtain a new horn image;
fitting a real-time geometric contour of the horn according to the new horn image;
acquiring a real-time value of a horn geometric parameter according to the real-time geometric contour, wherein the horn geometric parameter comprises horn width, horn height and/or horn area;
and judging whether the horn is qualified in the current state according to the comparison result of the real-time value of the horn geometric parameter and the design allowable variation range.
Based on the above content, the method can first identify and locate the horn region in the pantograph video acquired in real time, using a machine-vision target detection method; then perform binarization processing, opening and closing operation processing and contour fitting on the identified horn image to obtain the horn geometric contour and the real-time values of the horn geometric parameters; and finally judge whether the horn is qualified in the current state by comparing the real-time values of the horn geometric parameters with the design allowable variation range, thereby achieving real-time detection of the pantograph horn. The method reduces the occurrence of false alarms, avoids extra rechecking workload for maintainers, and is particularly suitable for scenes in urban rail transit tunnels. In addition, since the geometric parameters of the horn can be accurately measured, the detection precision is greatly improved compared with existing automatic detection modes; and the source monitoring video (namely the video) can be checked directly while detection is carried out, so that the detection result can be conveniently rechecked.
In one possible design, before importing the sample image to be detected into the trained target detection model, the method further includes:
acquiring a pantograph monitoring video historically collected by the monitoring camera in one trip;
performing histogram equalization processing on each frame of image in the pantograph monitoring video to obtain a plurality of training sample images;
acquiring horn region labeling data corresponding one-to-one to each training sample image in the plurality of training sample images;
and importing the training sample image and the horn region labeling data corresponding to the training sample image into the target detection model for training to obtain the trained target detection model.
In one possible design, importing the training sample image and the horn region labeling data corresponding to the training sample image into the target detection model for training to obtain the trained target detection model includes:
importing the training sample image and the horn region labeling data corresponding to the training sample image into a YOLO-v4 target detection model, and executing the following training steps in the YOLO-v4 target detection model:
adjusting the training sample image to a target size and dividing it into a square image of $S \times S$ grids, wherein $S$ represents a natural number not less than 5;
judging whether a target center falls in a certain grid of the $S \times S$ grids, wherein the target center is the center of the horn region of the pantograph;
if so, making the grid responsible for predicting a plurality of target bounding boxes and calculating the predicted value of each target bounding box in the plurality of target bounding boxes on bounding box parameters, wherein the bounding box parameters comprise a box center horizontal coordinate, a box center vertical coordinate, a box width, a box height, a box category and a confidence corresponding to the box category;
and performing iterative optimization of a loss function according to the predicted values and the horn region labeling data to obtain the trained target detection model.
In one possible design, performing iterative optimization of a loss function according to the predicted values and the horn region labeling data to obtain the trained target detection model includes:
extracting the real value of a real bounding box on the bounding box parameters according to the horn region labeling data, wherein the real bounding box comprises the horn labeling box in the square image;
calculating a bounding box position loss value $L_{CIoU}$ according to the predicted value and the real value by the following formula:

$$L_{CIoU} = 1 - IoU(b, b^{gt}) + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{\left(1 - IoU(b, b^{gt})\right) + v}$$

in the formula, $b$ represents the target bounding box, $b^{gt}$ represents the real bounding box, $IoU(b, b^{gt})$ represents the intersection-over-union ratio of the target bounding box and the real bounding box, $\rho(b, b^{gt})$ represents the Euclidean distance between the center of the target bounding box and the center of the real bounding box, $c$ represents the diagonal length of the minimum enclosing box that bounds the target bounding box and the real bounding box, $\alpha$ and $v$ represent calculated intermediate quantities, $w^{gt}$ represents the real value of the bounding box width among the real values, $h^{gt}$ represents the real value of the bounding box height among the real values, $w$ represents the bounding box width predicted value among the predicted values, $h$ represents the bounding box height predicted value among the predicted values, and $\pi$ represents the circumference ratio;
calculating a bounding box confidence loss value $L_{conf}$ according to the predicted value and the real value by the following formula:

$$L_{conf} = -\sum_{i=1}^{S^2}\sum_{j=1}^{B} I_{ij}^{obj}\left[C_{ij}\ln\hat{C}_{ij} + (1 - C_{ij})\ln(1 - \hat{C}_{ij})\right] - \lambda\sum_{i=1}^{S^2}\sum_{j=1}^{B}\left(1 - I_{ij}^{obj}\right)\left[C_{ij}\ln\hat{C}_{ij} + (1 - C_{ij})\ln(1 - \hat{C}_{ij})\right]$$

in the formula, $\lambda$ represents a preset weight parameter, $i$ and $j$ each represent a positive integer, $B$ represents the total number of bounding boxes among the plurality of target bounding boxes, $I_{ij}^{obj}$ represents a logic value of whether the $j$-th target bounding box corresponding to the $i$-th grid is responsible for predicting the horn, taking the value 1 if so and 0 if not, $C_{ij}$ represents the probability score that the $j$-th target bounding box contains the horn, and $\hat{C}_{ij}$ represents the predicted value of the probability score that the $j$-th target bounding box contains the horn;
calculating a category prediction loss value $L_{cls}$ according to the predicted value and the real value by the following formula:

$$L_{cls} = -\sum_{i=1}^{S^2}\sum_{j=1}^{B} I_{ij}^{obj}\sum_{c \in classes}\left[p_{ij}(c)\ln\hat{p}_{ij}(c) + \left(1 - p_{ij}(c)\right)\ln\left(1 - \hat{p}_{ij}(c)\right)\right]$$

in the formula, $i$ and $j$ each represent a positive integer, $B$ represents the total number of bounding boxes among the plurality of target bounding boxes, $I_{ij}^{obj}$ represents a logic value of whether the $j$-th target bounding box corresponding to the $i$-th grid is responsible for predicting the horn, taking the value 1 if so and 0 if not, $c$ represents a bounding box category, $classes$ represents the set of bounding box categories, $p_{ij}(c)$ represents the real coding value of the box category corresponding to the $j$-th target bounding box, and $\hat{p}_{ij}(c)$ represents the predicted coding value of the box category corresponding to the $j$-th target bounding box;
and taking the sum of the bounding box position loss value $L_{CIoU}$, the bounding box confidence loss value $L_{conf}$ and the category prediction loss value $L_{cls}$ as the loss function calculation result, so as to carry out iterative optimization of the loss function and obtain the detection model.
In one possible design, performing opening and closing operation processing on the binarized image to obtain a new horn image includes:
performing opening operation processing according to the following formula:

$$A \circ S = (A \ominus S) \oplus S$$

in the formula, $A \circ S$ represents the new horn image obtained after the opening operation processing, $A$ represents the binarized image before the opening operation processing, $S$ represents the structural element of the opening operation, $\ominus$ represents erosion processing, and $\oplus$ represents dilation processing;
and/or performing closing operation processing according to the following formula:

$$A \bullet S = (A \oplus S) \ominus S$$

in the formula, $A \bullet S$ represents the new horn image obtained after the closing operation processing, $A$ represents the binarized image before the closing operation processing, $S$ represents the structural element of the closing operation, $\ominus$ represents erosion processing, and $\oplus$ represents dilation processing.
In one possible design, after judging whether the horn is qualified in the current state according to the comparison result between the real-time value of the horn geometric parameter and the design allowable variation range, the method further includes:
if the horn is judged to be unqualified in the current state, calculating the difference between the real-time value of the horn geometric parameter and the boundary value of the design allowable variation range, and outputting warning information containing the difference.
In one possible design, the target detection model employs a Faster R-CNN target detection model, an SSD target detection model, or a YOLO target detection model.
In a second aspect, the present invention provides a real-time detection device for a pantograph horn, comprising a video acquisition module, an equalization processing module, a horn region identification module, a horn image cropping module, a binarization processing module, an opening and closing operation processing module, a horn contour fitting module, a horn parameter acquisition module and a horn qualification judgment module which are sequentially in communication connection;
the video acquisition module is used for acquiring the video collected by the monitoring camera in real time, wherein the monitoring camera is mounted on the roof of the vehicle such that the camera view covers the area where the pantograph is located;
the equalization processing module is used for performing histogram equalization processing on the latest image in the video to obtain a sample image to be detected;
the horn region identification module is used for importing the sample image to be detected into the trained target detection model and identifying the region position of the horn of the pantograph in the sample image to be detected;
the horn image cropping module is used for cropping a horn image from the sample image to be detected according to the region position;
the binarization processing module is used for performing binarization processing on the horn image to obtain a binarized image;
the opening and closing operation processing module is used for performing opening and closing operation processing on the binarized image to obtain a new horn image;
the horn contour fitting module is used for fitting a real-time geometric contour of the horn according to the new horn image;
the horn parameter acquisition module is used for acquiring a real-time value of a horn geometric parameter according to the real-time geometric contour, wherein the horn geometric parameter comprises horn width, horn height and/or horn area;
and the horn qualification judgment module is used for judging whether the horn is qualified in the current state according to the comparison result of the real-time value of the horn geometric parameter and the design allowable variation range.
In a third aspect, the present invention provides a computer device, comprising a memory, a processor and a transceiver, which are sequentially connected in communication, wherein the memory is used for storing a computer program, the transceiver is used for transceiving data, and the processor is used for reading the computer program and executing the method according to the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored thereon instructions which, when run on a computer, cause the computer to perform the method according to the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method as described above in the first aspect or any one of the possible designs of the first aspect.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely representative of exemplary embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly, a second object may be referred to as a first object, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as it may appear herein merely describes an association relationship between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" as it may appear herein describes another association relationship, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B exist simultaneously. In addition, the character "/" as it may appear herein generally means that the associated objects before and after it are in an "or" relationship.
As shown in fig. 1 to 4, the real-time detection method for a pantograph horn according to the first aspect of the present embodiment may be, but is not limited to being, executed by a computer device disposed in an interior cabinet of the vehicle and communicatively connected to a roof monitoring camera. The real-time detection method for the pantograph horn may include, but is not limited to, the following steps S1 to S9.
S1, acquiring a video acquired by a monitoring camera in real time, wherein the monitoring camera is installed on the roof of the vehicle and enables the camera view to cover the area where the pantograph is located.
In step S1, since the camera view covers the area where the pantograph is located, a complete pantograph picture can be obtained in the obtained video, as shown in fig. 2. In addition, the computer equipment is in communication connection with the monitoring camera, so that the video can be transmitted in real time after being collected.
And S2, carrying out histogram equalization processing on the latest image in the video to obtain a sample image to be detected.
In step S2, the histogram equalization processing is a conventional image preprocessing method for enhancing image contrast; its main idea is to transform the histogram distribution of an image into an approximately uniform distribution, thereby enhancing the contrast of the image.
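As a concrete illustration of this preprocessing step, histogram equalization may be sketched in NumPy as follows. This is a minimal sketch for 8-bit grayscale images (the helper name `equalize_hist` is illustrative, not part of the claims); in practice a library routine such as OpenCV's `cv2.equalizeHist` may equally be used.

```python
import numpy as np

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit grayscale image (NumPy-only sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-gray-level counts
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # first non-zero CDF value
    n = img.size
    # classic equalization mapping: scale the shifted CDF to [0, 255]
    lut = np.clip(np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# a low-contrast image (only levels 100 and 101) is stretched to full range
low_contrast = np.array([[100, 100], [101, 101]], dtype=np.uint8)
print(equalize_hist(low_contrast))
```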
And S3, importing the sample image to be detected into the trained target detection model, and identifying the area position of the horn of the pantograph in the sample image to be detected.
In step S3, the target detection model is an existing artificial intelligence recognition model for recognizing objects in a picture and marking their positions. Specifically, but not exclusively, it may adopt the Faster R-CNN (Faster Regions with Convolutional Neural Networks) target detection model, a target detection algorithm proposed in 2015 by He Kaiming et al. that won multiple first places in the 2015 ILSVRC and COCO competitions; the SSD (Single Shot MultiBox Detector) target detection model proposed by Wei Liu, one of the currently popular main detection frameworks; or the YOLO (You Only Look Once) target detection model, which has recently been developed to the V4 version and is widely applied in industry, and whose principle is to first predict a number of bounding boxes for the input image (for example, 2 boxes for each of 7×7 grids), then remove target windows of low probability according to a threshold, and finally remove redundant windows by box merging to obtain the detection result. Therefore, after recognition training of the target detection model on horn images is completed, the horn can be identified in the sample image to be detected and the horn position marked.
Before the step S3, in order to complete the training of the target detection model, the method further includes, but is not limited to, the following steps S21 to S24.
And S21, acquiring a pantograph monitoring video historically acquired by the monitoring camera in one trip.
In step S21, since the pantograph monitoring video was historically collected in one trip, it can be ensured that the horn images in the subsequent training sample images are non-static, which better matches the real-time detection conditions and ensures the real-time detection accuracy of the trained target detection model.
And S22, respectively carrying out histogram equalization processing on each frame of image in the pantograph monitoring video to obtain a plurality of training sample images.
And S23, acquiring the horn region labeling data corresponding one-to-one to each training sample image in the plurality of training sample images.
In step S23, the horn region labeling data is obtained by manual labeling.
And S24, importing the training sample image and the horn region labeling data corresponding to the training sample image into the target detection model for training to obtain the trained target detection model.
In step S24, the training sample image and the horn region labeling data corresponding to the training sample image are imported into a YOLO-V4 target detection model (namely, the V4 version of the YOLO target detection model, which includes an input layer Input, a backbone layer BackBone, a neck layer Neck and a prediction layer Prediction; the prediction layer Prediction finally outputs a target detection result including the detection target position and the confidence, where the detection target position is the coordinate region of the detection target in the whole image and the confidence is the accuracy of the detection target), and the following training steps S241 to S244 are performed in the YOLO-V4 target detection model.
S241, adjusting the training sample image to a target size and dividing it into a square image of $S \times S$ grids, wherein $S$ represents a natural number not less than 5.
In step S241, the adjustment of the image and the division of the grids are both conventional operations in the YOLO target detection model. In addition, the value of $S$ may be, for example, 7.
S242, judging whether a target center falls in a certain grid of the $S \times S$ grids, wherein the target center is the center of the horn region of the pantograph.
In step S242, the specific determination manner is a conventional manner in the YOLO target detection model.
And S243, if so, making the grid responsible for predicting a plurality of target bounding boxes and calculating the predicted value of each target bounding box in the plurality of target bounding boxes on the bounding box parameters, wherein the bounding box parameters include, but are not limited to, the box center horizontal coordinate, the box center vertical coordinate, the box width, the box height, the box category and the confidence corresponding to the box category.
In step S243, the prediction of the target bounding boxes and the calculation of the predicted values are both conventional operations in the YOLO target detection model.
And S244, performing iterative optimization of the loss function according to the predicted values and the horn region labeling data to obtain the trained target detection model.
In step S244, the iterative optimization of the loss function is a conventional operation in the YOLO target detection model. More specifically, it includes, but is not limited to, the following steps S2441 to S2445.
S2441, extracting the real value of the real bounding box on the bounding box parameters according to the horn region labeling data, wherein the real bounding box comprises the horn labeling box in the square image.
S2442, calculating the bounding box position loss value $L_{CIoU}$ according to the predicted value and the real value by the following formula:

$$L_{CIoU} = 1 - IoU(b, b^{gt}) + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{\left(1 - IoU(b, b^{gt})\right) + v}$$

in the formula, $b$ represents the target bounding box, $b^{gt}$ represents the real bounding box, $IoU(b, b^{gt})$ represents the intersection-over-union ratio of the target bounding box and the real bounding box, $\rho(b, b^{gt})$ represents the Euclidean distance between the center of the target bounding box and the center of the real bounding box, $c$ represents the diagonal length of the minimum enclosing box that bounds the target bounding box and the real bounding box, $\alpha$ and $v$ represent calculated intermediate quantities, $w^{gt}$ represents the real value of the bounding box width among the real values, $h^{gt}$ represents the real value of the bounding box height among the real values, $w$ represents the bounding box width predicted value among the predicted values, $h$ represents the bounding box height predicted value among the predicted values, and $\pi$ represents the circumference ratio.
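The bounding box position loss of step S2442 can be sketched in Python as follows, assuming boxes are given as (center-x, center-y, width, height) tuples with non-zero sizes; the function name `ciou_loss` is illustrative, and production training code would compute this over batched tensors.

```python
import math

def ciou_loss(pred, gt):
    """CIoU bounding-box position loss (sketch). Boxes are (cx, cy, w, h)."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    # corner coordinates of both boxes
    p1, p2 = (px - pw / 2, py - ph / 2), (px + pw / 2, py + ph / 2)
    g1, g2 = (gx - gw / 2, gy - gh / 2), (gx + gw / 2, gy + gh / 2)
    # intersection-over-union
    iw = max(0.0, min(p2[0], g2[0]) - max(p1[0], g1[0]))
    ih = max(0.0, min(p2[1], g2[1]) - max(p1[1], g1[1]))
    inter = iw * ih
    union = pw * ph + gw * gh - inter
    iou = inter / union
    # squared centre distance rho^2 and enclosing-box diagonal c^2
    rho2 = (px - gx) ** 2 + (py - gy) ** 2
    cw = max(p2[0], g2[0]) - min(p1[0], g1[0])
    ch = max(p2[1], g2[1]) - min(p1[1], g1[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and trade-off weight alpha
    v = (4 / math.pi ** 2) * (math.atan(gw / gh) - math.atan(pw / ph)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is zero; the further apart and the more mismatched in aspect ratio the boxes are, the larger the loss.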
S2443, calculating the bounding box confidence loss value $L_{conf}$ according to the predicted value and the real value by the following formula:

$$L_{conf} = -\sum_{i=1}^{S^2}\sum_{j=1}^{B} I_{ij}^{obj}\left[C_{ij}\ln\hat{C}_{ij} + (1 - C_{ij})\ln(1 - \hat{C}_{ij})\right] - \lambda\sum_{i=1}^{S^2}\sum_{j=1}^{B}\left(1 - I_{ij}^{obj}\right)\left[C_{ij}\ln\hat{C}_{ij} + (1 - C_{ij})\ln(1 - \hat{C}_{ij})\right]$$

in the formula, $\lambda$ represents a preset weight parameter, $i$ and $j$ each represent a positive integer, $B$ represents the total number of bounding boxes among the plurality of target bounding boxes, $I_{ij}^{obj}$ represents a logic value of whether the $j$-th target bounding box corresponding to the $i$-th grid is responsible for predicting the horn, taking the value 1 if so and 0 if not, $C_{ij}$ represents the probability score that the $j$-th target bounding box contains the horn, and $\hat{C}_{ij}$ represents the predicted value of the probability score that the $j$-th target bounding box contains the horn.
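A minimal sketch of the confidence loss of step S2443, written over a flat list of grid/box slots; the function name `conf_loss` and its argument layout are illustrative simplifications of the batched tensor computation used in practice, with the no-object slots down-weighted by the preset parameter corresponding to $\lambda$.

```python
import math

def conf_loss(obj_mask, c_true, c_pred, lam_noobj=0.5, eps=1e-9):
    """Binary cross-entropy confidence loss over grid/box slots (sketch).
    obj_mask[i] is 1 if slot i is responsible for predicting the horn, else 0;
    no-object slots are down-weighted by lam_noobj (stand-in for lambda)."""
    total = 0.0
    for m, t, p in zip(obj_mask, c_true, c_pred):
        # per-slot binary cross-entropy between true and predicted scores
        bce = -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
        total += bce if m else lam_noobj * bce
    return total
```

Perfect predictions give a loss near zero, while confident mistakes are penalized heavily.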
S2444, calculating the category prediction loss value $L_{cls}$ according to the predicted value and the real value by the following formula:

$$L_{cls} = -\sum_{i=1}^{S^2}\sum_{j=1}^{B} I_{ij}^{obj}\sum_{c \in classes}\left[p_{ij}(c)\ln\hat{p}_{ij}(c) + \left(1 - p_{ij}(c)\right)\ln\left(1 - \hat{p}_{ij}(c)\right)\right]$$

in the formula, $i$ and $j$ each represent a positive integer, $B$ represents the total number of bounding boxes among the plurality of target bounding boxes, $I_{ij}^{obj}$ represents a logic value of whether the $j$-th target bounding box corresponding to the $i$-th grid is responsible for predicting the horn, taking the value 1 if so and 0 if not, $c$ represents a bounding box category, $classes$ represents the set of bounding box categories, $p_{ij}(c)$ represents the real coding value of the box category corresponding to the $j$-th target bounding box, and $\hat{p}_{ij}(c)$ represents the predicted coding value of the box category corresponding to the $j$-th target bounding box.
S2445, taking the sum of the bounding box position loss value $L_{CIoU}$, the bounding box confidence loss value $L_{conf}$ and the category prediction loss value $L_{cls}$ as the loss function calculation result, so as to carry out iterative optimization of the loss function and obtain the trained target detection model.
The method for training the target detection model described in the foregoing steps S21-S24 may be executed on the computer device, or may be executed on another computer device, and then the trained target detection model is deployed on the computer device, so as to smoothly execute the step S3.
And S4, cropping the horn image from the sample image to be detected according to the region position.
In step S4, since the horn has been identified in the sample image to be detected by the trained target detection model and the horn position marked, the horn image can easily be obtained by cropping, as shown in fig. 3.
And S5, performing binarization processing on the horn image to obtain a binarized image.
In step S5, the binarization processing is a conventional image preprocessing method in which the gray value of each pixel on the image is set to 0 (black) or 255 (white); specifically, pixel values smaller than 127 are set to 0 and pixel values greater than or equal to 127 are set to 255, so that the whole image exhibits a distinct black-and-white effect.
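The fixed-threshold binarization described in step S5 can be sketched with NumPy as follows (the helper name `binarize` is illustrative; OpenCV's `cv2.threshold` provides the same operation):

```python
import numpy as np

def binarize(img: np.ndarray, thresh: int = 127) -> np.ndarray:
    """Fixed-threshold binarization: pixels below `thresh` become 0 (black),
    all other pixels become 255 (white)."""
    return np.where(img < thresh, 0, 255).astype(np.uint8)

gray = np.array([[10, 127, 200]], dtype=np.uint8)
print(binarize(gray))  # values >= 127 map to 255, the rest to 0
```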
And S6, performing opening and closing operation processing on the binarized image to obtain a new horn image.
In step S6, the opening and closing operation processing includes, but is not limited to, the following modes (A) and/or (B).
(A) The opening operation processing is performed according to the following formula:

$$A \circ S = (A \ominus S) \oplus S$$

in the formula, $A \circ S$ represents the new horn image obtained after the opening operation processing, $A$ represents the binarized image before the opening operation processing, $S$ represents the structural element of the opening operation, $\ominus$ represents erosion processing, and $\oplus$ represents dilation processing.
(B) The closing operation processing is performed according to the following formula:

$$A \bullet S = (A \oplus S) \ominus S$$

in the formula, $A \bullet S$ represents the new horn image obtained after the closing operation processing, $A$ represents the binarized image before the closing operation processing, $S$ represents the structural element of the closing operation, $\ominus$ represents erosion processing, and $\oplus$ represents dilation processing.
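The opening and closing operations of modes (A) and (B) can be sketched in plain NumPy as follows. The 3×3 square structuring element and the padding conventions are illustrative choices; in practice a library routine such as OpenCV's `cv2.morphologyEx` would normally be used.

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element (sketch).
    Borders are padded with 255 so the image edge is not eroded away."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=255)
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.min(axis=(2, 3))

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element (sketch)."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=0)
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.max(axis=(2, 3))

def opening(img, k=3):
    # A o S = (A erode S) dilate S : removes small bright specks (noise)
    return dilate(erode(img, k), k)

def closing(img, k=3):
    # A . S = (A dilate S) erode S : fills small dark holes in the shape
    return erode(dilate(img, k), k)
```

Opening removes an isolated bright noise pixel entirely, while closing fills an isolated dark hole inside a bright region, which is why the two are applied to the binarized horn image before contour fitting.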
And S7, fitting a real-time geometric contour of the horn according to the new horn image.
In step S7, since the horn image has been sequentially subjected to binarization processing and opening and closing operation processing, the horn contour is clearly presented in the new horn image, and the real-time geometric contour of the horn can be accurately obtained by fitting, as shown in fig. 4.
And S8, acquiring real-time values of the horn geometric parameters according to the real-time geometric contour, wherein the horn geometric parameters include, but are not limited to, horn width, horn height and/or horn area.
In step S8, as shown in fig. 4, since the real-time geometric contour of the horn has been obtained, the real-time values of the horn geometric parameters can be obtained based on conventional geometric knowledge.
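As a minimal stand-in for the contour fitting and parameter extraction of steps S7 and S8, the width, height and area can be read directly off a binary horn mask. The helper `horn_geometry` and the bounding-box-based definitions of width and height are illustrative assumptions; a full implementation would fit the contour itself (e.g. with OpenCV's `cv2.findContours`).

```python
import numpy as np

def horn_geometry(mask: np.ndarray):
    """Extract real-time width, height and area of the horn from a binary
    mask (255 = horn pixels). A NumPy-only stand-in for contour fitting."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # no horn pixels found
    width = int(xs.max() - xs.min() + 1)  # horizontal extent in pixels
    height = int(ys.max() - ys.min() + 1) # vertical extent in pixels
    area = int((mask > 0).sum())          # pixel count of the horn region
    return width, height, area
```

Pixel measurements would then be converted to physical units with the camera's known calibration before comparison with the design range.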
And S9, judging whether the horn is qualified in the current state according to the comparison result of the real-time value of the horn geometric parameter and the design allowable variation range.
In step S9, the design allowable variation range is given by the design parameters of the horn; if the real-time value is within the design allowable variation range, it can be judged that the horn is qualified in the current state and meets the design requirement, and otherwise it is judged that the horn is unqualified in the current state, thereby achieving real-time detection of the pantograph horn. In addition, after judging whether the horn is qualified in the current state according to the comparison result of the real-time value of the horn geometric parameter and the design allowable variation range, the method may further include: if the horn is judged to be unqualified in the current state, calculating the difference between the real-time value of the horn geometric parameter and the boundary value of the design allowable variation range, and outputting warning information containing the difference. In this way, the loss amount of the horn can be output and an early warning given, so that pantograph-catenary accidents are avoided and the running safety of the train is improved.
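The qualification judgment and warning difference described in step S9 can be sketched as follows, with `low` and `high` standing in for the boundaries of the design allowable variation range (the function name `check_horn` is illustrative):

```python
def check_horn(value, low, high):
    """Compare a real-time geometric value with the design allowable
    variation range [low, high]; return (qualified, deviation), where
    deviation is the distance to the nearest violated boundary (0 if ok)."""
    if low <= value <= high:
        return True, 0.0
    boundary = low if value < low else high
    # the deviation is what would be reported in the warning information
    return False, abs(value - boundary)
```

For example, a horn width inside the range is reported qualified with zero deviation, while a value outside the range yields the difference to the violated boundary for the warning message.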
Therefore, through the real-time detection scheme of the pantograph horn described in detail in the foregoing steps S1 to S9, the horn region can first be identified and located, based on a machine-vision target detection method, in the pantograph video acquired in real time from the roof; binarization processing, opening and closing operation processing and contour fitting are then performed on the identified horn image to obtain the geometric contour of the horn and the real-time values of the horn geometric parameters; and finally the real-time values of the horn geometric parameters are compared with the design allowable variation range to judge whether the horn is qualified in the current state, so as to achieve real-time detection of the pantograph horn. Compared with existing automatic detection schemes, since only a roof camera and in-vehicle computer equipment need to be configured, the detection system can be simplified, installation and arrangement facilitated and hardware cost reduced, while the detection result is prevented from being interfered with by the external environment, the accuracy of the detection result is guaranteed, the occurrence of false alarms is reduced, extra rechecking workload for maintainers is avoided, and the scheme is particularly suitable for scenes in urban rail transit tunnels.
In addition, since the geometric parameters of the horn can be accurately measured, the detection precision is greatly improved compared with existing automatic detection modes; since the loss amount of the horn can be output and an early warning given when disqualification is found, pantograph-catenary accidents can be avoided and the running safety of the train improved; and the source monitoring video (namely the video) can be checked directly while detection is carried out, so that the detection result can be conveniently rechecked.
As shown in fig. 5, a second aspect of this embodiment provides a virtual device for implementing the real-time detection method of a pantograph horn according to the first aspect or any possible design thereof, including a video acquisition module, an equalization processing module, a horn region identification module, a horn image cropping module, a binarization processing module, an opening-closing operation processing module, a horn contour fitting module, a horn parameter acquisition module, and a horn qualification judgment module, which are sequentially connected in a communication manner;
the video acquisition module is used for acquiring the video captured by the monitoring camera in real time, wherein the monitoring camera is installed on the roof of the vehicle so that the camera view covers the area where the pantograph is located;
the equalization processing module is used for carrying out histogram equalization processing on the latest image in the video to obtain a sample image to be detected;
the horn region identification module is used for importing the sample image to be detected into the trained target detection model and identifying the region position of the pantograph horn in the sample image to be detected;
the horn image cropping module is used for cropping a horn image from the sample image to be detected according to the region position;
the binarization processing module is used for carrying out binarization processing on the horn image to obtain a binarized image;
the opening-closing operation processing module is used for carrying out opening and closing operation processing on the binarized image to obtain a new horn image;
the horn contour fitting module is used for fitting a real-time geometric contour of the horn from the new horn image;
the horn parameter acquisition module is used for acquiring real-time values of the horn geometric parameters according to the real-time geometric contour, wherein the horn geometric parameters include horn width, horn height and/or horn area;
and the horn qualification judgment module is used for judging whether the horn is qualified in its current state according to the comparison of the real-time values of the horn geometric parameters with the design-allowed variation range.
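The histogram equalization performed by the equalization processing module can be illustrated with a short NumPy sketch. The 256-level lookup-table construction below is the textbook CDF-remapping form and is given only as an assumed illustration of the technique named above, not as the device's actual implementation.

```python
import numpy as np

def equalize_hist(gray):
    # Histogram equalization: remap grey levels so their cumulative
    # distribution is roughly uniform, compensating for the poor and
    # uneven lighting typical of tunnel scenes before detection.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]               # first non-zero CDF value
    denom = max(gray.size - cdf_min, 1)     # guard single-level images
    lut = np.round(np.clip((cdf - cdf_min) / denom, 0, 1) * 255)
    return lut.astype(np.uint8)[gray]
```

For a low-contrast image whose grey levels cluster in a narrow band, the remapped output spans the full 0–255 range, which helps the downstream detection and binarization stages.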
For the working process, working details and technical effects of the foregoing apparatus provided in the second aspect of this embodiment, reference may be made to the method described in the first aspect or any one of the possible designs of the first aspect, which is not described herein again.
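The "sequentially connected" arrangement of the second-aspect modules can be sketched generically: each module consumes the output of the one before it. The `chain` helper and the lambda stages below are illustrative stand-ins, not the device's actual interfaces.

```python
from typing import Any, Callable

def chain(*modules: Callable[[Any], Any]) -> Callable[[Any], Any]:
    # Connect processing modules in sequence, mirroring the
    # communicatively connected stages of Fig. 5: the output of each
    # module becomes the input of the next.
    def pipeline(x: Any) -> Any:
        for module in modules:
            x = module(x)
        return x
    return pipeline

# Stand-in stages; the real stages would be the equalization,
# identification, cropping, morphology and judgment modules.
demo = chain(lambda s: s.strip(), lambda s: s.upper())
```

Calling `demo("  horn ")` runs both stand-in stages in order and returns `"HORN"`.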
As shown in fig. 6, a third aspect of this embodiment provides a computer device for executing the real-time detection method of a pantograph horn according to the first aspect or any possible design thereof, including a memory, a processor and a transceiver that are sequentially and communicatively connected, where the memory is used for storing a computer program, the transceiver is used for transmitting and receiving data, and the processor is used for reading the computer program and executing the real-time detection method of a pantograph horn according to the first aspect or any possible design thereof. For example, the memory may include, but is not limited to, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash Memory, a First-In First-Out (FIFO) memory, and/or a First-In Last-Out (FILO) memory; the processor may be, but is not limited to, a microprocessor of the STM32F105 series. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the foregoing computer device provided in the third aspect of this embodiment, reference may be made to the method in the first aspect or any one of the possible designs in the first aspect, which is not described herein again.
A fourth aspect of the present embodiment provides a storage medium storing instructions for the real-time detection method of a pantograph horn according to the first aspect or any possible design thereof; that is, the storage medium stores instructions that, when executed on a computer, perform the real-time detection method of a pantograph horn as described in the first aspect or any possible design thereof. The storage medium is a carrier for storing data and may include, but is not limited to, a computer-readable storage medium such as a floppy disk, an optical disk, a hard disk, a flash memory, a USB flash drive and/or a Memory Stick; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
For the working process, the working details and the technical effects of the foregoing readable storage medium provided in the fourth aspect of this embodiment, reference may be made to the method in the first aspect or any one of the possible designs in the first aspect, which is not described herein again.
A fifth aspect of the present embodiment provides a computer program product containing instructions that, when run on a computer, cause the computer to execute the real-time detection method of a pantograph horn as described in the first aspect or any possible design thereof. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and anyone may derive various other forms of products in light of the present invention. The above detailed description should not be construed as limiting the scope of the invention, which is defined by the claims; the description may be used to interpret the claims.