CN112001963A - Fire fighting channel investigation method, system and computer equipment - Google Patents

Fire fighting channel investigation method, system and computer equipment

Info

Publication number
CN112001963A
CN112001963A (application CN202010761984.0A)
Authority
CN
China
Prior art keywords
fire
point cloud
image
fighting
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010761984.0A
Other languages
Chinese (zh)
Inventor
彭志蓉
潘华东
殷俊
刘明
巩海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010761984.0A
Publication of CN112001963A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a fire fighting channel investigation method, system and computer equipment. A three-dimensional point cloud of a binocular image of a fire fighting channel and the ground area of the fire fighting channel are obtained; the three-dimensional point cloud is projected onto the ground to obtain a point cloud distribution on a two-dimensional plane; the circumscribed-figure area of the articles in the binocular image is obtained on the two-dimensional plane; and whether the fire fighting channel is blocked is determined according to the ratio of the circumscribed-figure area to the ground area. This solves the problem that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed, and improves both.

Description

Fire fighting channel investigation method, system and computer equipment
Technical Field
The invention relates to the field of video monitoring, and in particular to a fire fighting channel investigation method, system and computer equipment.
Background
Whether an indoor fire fighting channel is blocked and whether fire-fighting equipment is reasonably placed and available is one of the most important items in fire safety supervision, and directly determines whether personnel can be rescued in time.
In the related art, fire safety supervision requires supervisors to carefully check whether the fire fighting channel is occupied and whether the fire-fighting equipment is compliant. Such checks demand a high inspection frequency, strong professional knowledge and a strong sense of responsibility from the supervisors; when the supervisors lack the time or the ability, the investigation efficiency and investigation quality cannot be guaranteed.
No effective solution has yet been proposed for the problem in the related art that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed.
Disclosure of Invention
In view of the problem in the related art that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed, embodiments of the invention at least solve this problem.
According to an aspect of the present invention, a fire fighting channel investigation method is provided. The method includes: acquiring a three-dimensional point cloud of a binocular image of a fire fighting channel and the ground area of the fire fighting channel; projecting the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane; acquiring, on the two-dimensional plane, the circumscribed-figure area of the articles in the binocular image; and determining whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area.
In some embodiments, after acquiring the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, the method further comprises: inputting the first-view image or the second-view image of the binocular image (i.e., the left-eye or right-eye image) into a trained fire-fighting equipment placement recognition model to recognize the positions, types and quantities of the fire-fighting equipment in the binocular image; and comparing the recognized positions, types and quantities of the fire-fighting equipment with preset positions, types and quantities to judge whether the placement of the fire-fighting equipment is compliant.
In some embodiments, the method further comprises: acquiring training pictures of fire-fighting equipment and channel-occupying articles in the fire fighting channel; labeling the fire-fighting equipment and the occupying articles in the training pictures, and training a target detection framework and a classification network with the labeled training pictures to obtain the fire-fighting equipment placement recognition model.
In some embodiments, after acquiring the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, the method further comprises: inputting the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment contamination recognition model to determine whether the fire-fighting equipment in the binocular image is contaminated.
In some embodiments, the method further comprises: acquiring training pictures of fire-fighting equipment in the fire fighting channel; labeling the contaminated fire-fighting equipment in the training pictures, and training a target detection framework and a classification network with the labeled training pictures to obtain the fire-fighting equipment contamination recognition model.
In some embodiments, acquiring the three-dimensional point cloud of the binocular image of the fire fighting channel comprises: rectifying the binocular image; calculating a disparity map of the rectified binocular image with the semi-global block matching algorithm (SGBM); acquiring a depth image from the disparity map; and acquiring the three-dimensional point cloud from the depth image.
In some embodiments, acquiring, on the two-dimensional plane, the circumscribed-figure area of the articles in the binocular image comprises: removing the points of the point cloud distribution that correspond to the boundary of the three-dimensional point cloud, and removing the points of the point cloud distribution that correspond to points of the three-dimensional point cloud whose height exceeds a preset value; and acquiring the minimum circumscribed rectangle of each article on the two-dimensional plane, summing the areas of the minimum circumscribed rectangles of the articles, and taking the sum as the circumscribed-figure area.
According to another aspect of the present invention, a fire fighting channel investigation system is provided, comprising a binocular camera and a server. The binocular camera is configured to acquire a binocular image of the fire fighting channel and send the acquired binocular image to the server. The server is configured to acquire the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, project the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane, acquire on the two-dimensional plane the circumscribed-figure area of the articles in the binocular image, and determine whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area.
In some embodiments, the server is further configured to input the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment placement recognition model to recognize the positions, types and quantities of the fire-fighting equipment in the binocular image, and to compare the recognized positions, types and quantities with preset positions, types and quantities to judge whether the placement of the fire-fighting equipment is compliant.
According to another aspect of the invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of the above when executing the computer program.
According to another aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
According to the invention, a three-dimensional point cloud of a binocular image of a fire fighting channel and the ground area of the fire fighting channel are obtained; the three-dimensional point cloud is projected onto the ground to obtain a point cloud distribution on a two-dimensional plane; the circumscribed-figure area of the articles in the binocular image is obtained on the two-dimensional plane; and whether the fire fighting channel is blocked is determined according to the ratio of the circumscribed-figure area to the ground area. This solves the problem that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed, and improves both.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic application environment diagram of a fire fighting access troubleshooting method according to an embodiment of the invention;
FIG. 2 is a flow chart of a fire fighting access troubleshooting method according to an embodiment of the invention;
FIG. 3 is a flow chart of a method of troubleshooting fire equipment according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method of troubleshooting fire equipment fouling according to an embodiment of the present invention;
fig. 5 is a flowchart of an indoor fire fighting access status identification method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The terms "a", "an", "the" and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including", "comprising", "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article or apparatus comprising a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article or apparatus. The terms "connected", "coupled" and the like are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. The terms "first", "second", "third" and the like merely distinguish similar objects and do not denote a particular ordering.
The fire fighting channel investigation system provided by the application can be applied to the application environment shown in fig. 1. Fig. 1 is a schematic diagram of the application environment of a fire fighting channel investigation method according to an embodiment of the invention, in which a binocular camera 102 and a server 104 are connected through a wired or wireless network. The binocular camera 102 is configured to acquire a binocular image of the fire fighting channel and send the acquired binocular image to the server 104. The server 104 is configured to obtain a three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, project the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane, and obtain on the two-dimensional plane the circumscribed-figure area of the articles in the binocular image. The server 104 may further determine whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area. The application thus solves the problem that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed, and improves both.
The surveillance area of the binocular camera 102 covers the area of the fire fighting channel. The server 104 may be connected to a plurality of binocular cameras 102 to perform fire fighting channel investigation. It should be noted that the server 104 may be connected to a display terminal, which displays the investigation result. The server 104 may also promptly send the investigation result to a mobile device such as a mobile phone or a tablet.
In an embodiment of the present invention, there is provided a fire fighting access troubleshooting method, and fig. 2 is a flowchart of a fire fighting access troubleshooting method according to an embodiment of the present invention, the method including the steps of:
Step S202: acquire a three-dimensional point cloud of a binocular image of the fire fighting channel and the ground area of the fire fighting channel. The binocular image may be acquired by a binocular camera, and the three-dimensional point cloud is then obtained from the binocular image by point cloud conversion software. For example, the three-dimensional point cloud of the binocular image may be obtained through the Point Cloud Library (PCL); the three-dimensional point cloud is then clustered in three dimensions, and a cluster that meets a preset condition is taken as a clustering target. The ground area is the area of the fire fighting channel in the image pixel coordinate system of the binocular image. In addition, the mature PCL can be used to obtain the three-dimensional point cloud and remove meaningless points or points with abnormal three-dimensional information, yielding a sparser point cloud and saving time. Density-based clustering may be adopted, for example DBSCAN and its various improved variants.
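For illustration only, the density-based clustering mentioned above might be sketched roughly as follows in Python, assuming the point cloud is already loaded as an N x 3 NumPy array and using scikit-learn's DBSCAN; the parameter values are placeholders, not values specified by this patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points: np.ndarray, eps: float = 0.05, min_samples: int = 30):
    """Cluster an N x 3 point cloud with DBSCAN and drop noise points.

    `eps` (neighborhood radius, metres) and `min_samples` are illustrative
    assumptions, not values taken from the patent text.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    clusters = []
    for label in set(labels):
        if label == -1:                      # -1 marks noise / meaningless points
            continue
        cluster = points[labels == label]
        if len(cluster) >= min_samples:      # keep only clusters that satisfy a preset condition
            clusters.append(cluster)
    return clusters
```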
Step S204: project the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane. The three-dimensional point cloud of the clustering target may be projected into two dimensions by obtaining a ground-based transformation matrix and applying it to the three-dimensional point cloud to form the two-dimensional point cloud distribution. The principle of converting the three-dimensional point cloud of the binocular image into a two-dimensional point cloud distribution is to transform the three-dimensional point cloud into world coordinates and project the height of the clustering target's points onto the ground, i.e. set the height value to 0, which yields the point cloud distribution of the two-dimensional plane projected onto the ground.
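A minimal sketch of this ground projection, assuming the camera-to-world rotation R and translation t are already known from extrinsic calibration (the patent does not give them), could look like this:

```python
import numpy as np

def project_to_ground(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform camera-frame points (N x 3) to world coordinates and drop the height.

    R (3 x 3) and t (3,) are assumed extrinsic parameters; the world z-axis is
    assumed to be the vertical (ground-normal) direction.
    """
    points_world = points_cam @ R.T + t      # camera frame -> world frame
    points_ground = points_world.copy()
    points_ground[:, 2] = 0.0                # set height to 0: projection onto the ground plane
    return points_ground[:, :2]              # 2-D point cloud distribution (x, y)
```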
Step S206: acquire, on the two-dimensional plane, the circumscribed-figure area of the articles in the binocular image. In the process of converting the three-dimensional point cloud into the two-dimensional point cloud distribution, the depth information of an article may be converted into a grey value, and the circumscribed-figure area of the article in the image pixel coordinate system of the binocular image is determined from the grey values. The articles may be non-fire-fighting articles such as cartons, trash cans and chairs. The basic principle of calculating the area of the projected circumscribed rectangle of the fire fighting channel or of a mapped article is that, once mapped onto the two-dimensional plane, the data is equivalent to an ordinary two-dimensional image, so the area of a connected domain can be computed directly. In addition, the category of each clustering target is obtained from detection on the monocular image, and the area of a given type of target on the two-dimensional image can be obtained with the same connected-domain method.
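For illustration only, rasterizing the projected 2-D points into an occupancy image and measuring connected-domain areas with OpenCV might look like the following sketch; the grid resolution is an assumed parameter, not part of the patent.

```python
import cv2
import numpy as np

def connected_domain_areas(points_2d: np.ndarray, resolution: float = 0.02) -> np.ndarray:
    """Rasterize projected 2-D points into a binary grid and return the area
    (square metres) of every connected domain.  `resolution` is metres per
    pixel and is an illustrative value."""
    mins = points_2d.min(axis=0)
    size = np.ceil((points_2d.max(axis=0) - mins) / resolution).astype(int) + 1
    grid = np.zeros((size[1], size[0]), dtype=np.uint8)
    idx = ((points_2d - mins) / resolution).astype(int)
    grid[idx[:, 1], idx[:, 0]] = 255
    num, labels, stats, _ = cv2.connectedComponentsWithStats(grid, connectivity=8)
    # stats[:, cv2.CC_STAT_AREA] holds the pixel area of each component; index 0 is the background
    return stats[1:, cv2.CC_STAT_AREA] * (resolution ** 2)
```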
Step S208: determine whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area. When the proportion of the fire fighting channel occupied by articles exceeds a preset value, the channel does not conform to the fire fighting channel regulations, and a reminder message is sent.
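A trivial sketch of this decision rule, with an assumed threshold of 20% (the patent does not fix a numeric value), could be:

```python
def channel_blocked(circumscribed_area: float, ground_area: float, threshold: float = 0.2) -> bool:
    """Return True when the occupied fraction of the channel exceeds the preset threshold.
    The 0.2 default is an illustrative assumption."""
    if ground_area <= 0:
        raise ValueError("ground area must be positive")
    return circumscribed_area / ground_area > threshold
```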
Through the above steps S202 to S208: background-modeling methods in the related art, which identify foreground objects by building a background model, cannot accurately identify the type and position of an article, so whether the fire fighting channel is blocked cannot be calculated. The present application converts the three-dimensional point cloud of the binocular image into a two-dimensional point cloud distribution and calculates the area occupied by articles on the fire fighting channel to determine whether the channel is blocked. The occupied area can thus be calculated more accurately, which solves the problem that the investigation efficiency and investigation quality of fire safety supervision cannot be guaranteed, and improves both.
In one embodiment, fig. 3 is a flow chart of a method of troubleshooting fire equipment according to an embodiment of the present invention, the method including the steps of:
Step S302: input the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment placement recognition model to recognize the positions, types and quantities of the fire-fighting equipment in the binocular image, where the types of fire-fighting equipment include fire hydrants, fire extinguishers and other articles. In addition, binocular positioning in binocular-vision image positioning systems is widely applied in part verification, dimension measurement and measuring devices, and is mainly applied to position recognition and positioning of ICs, chips and circuit boards, for example: positioning of punching machines, positioning of binding machines, pick-up and positioning of transistors, alignment of IC chip mounters, machine coordinate positioning, and robot positioning and orientation.
Step S304: compare the recognized positions, types and quantities of the fire-fighting equipment with preset positions, types and quantities, and judge whether the placement of the fire-fighting equipment is compliant. For example, the placement regulations for a fire fighting channel may require 1 fire hydrant and 2 fire extinguishers, which must be placed in conspicuous and easily accessible locations and must not be placed where they would obstruct safe evacuation. If only 1 fire extinguisher is recognized, the placement is not compliant; likewise, if a fire extinguisher is recognized as being placed on the evacuation passageway, the placement is not compliant.
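As a hedged illustration, the comparison of detected equipment against a preset requirement might be sketched like this; the data structures and the forbidden_zone helper are assumptions, not the patent's specification.

```python
from collections import Counter

# Assumed preset: required quantity per equipment type for this channel.
REQUIRED = {"fire_hydrant": 1, "fire_extinguisher": 2}

def placement_compliant(detections, forbidden_zone) -> bool:
    """`detections` is a list of (type, (x, y)) tuples from the recognition model;
    `forbidden_zone(x, y)` is a hypothetical callable returning True if that
    position obstructs evacuation."""
    counts = Counter(d_type for d_type, _ in detections)
    # Quantity check: every required type must be present in the required number.
    if any(counts.get(t, 0) < n for t, n in REQUIRED.items()):
        return False
    # Position check: no equipment may sit in the evacuation passageway.
    return not any(forbidden_zone(x, y) for _, (x, y) in detections)
```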
Through steps S302 to S304, the embodiments of the application can check not only whether the fire fighting channel is blocked, but also whether the positions, types and quantities of the fire-fighting equipment are compliant, which adds inspection items to fire safety supervision and further saves labour cost.
It should be noted that the method further includes: acquiring training pictures of fire-fighting equipment and channel-occupying articles in the fire fighting channel; labeling the fire-fighting equipment and the occupying articles in the training pictures, and training a target detection framework and a classification network with the labeled training pictures to obtain the fire-fighting equipment placement recognition model.
The model may be trained with YOLO or another target detection framework, and a classification network is trained for fire-fighting equipment or articles; the trained fire-fighting equipment placement recognition model can recognize the positions, types and quantities of the fire-fighting equipment or articles in the left-eye image or right-eye image of the binocular image.
The detection training framework mainly includes: preprocessing of the input image and its label information, such as colour-space augmentation, rotation and random cropping, where the label information generally includes the coordinates (width, height and centre point) of the manually drawn target box and the target category; forward computation and backward propagation functions for each layer of the neural network, for example convolutional layers, fully connected layers, concat layers, shortcut layers, activation-function layers and up/down-sampling layers; loss-function calculations, e.g. mean squared error and focal loss; and training frameworks such as Darknet, PyTorch and TensorFlow, on which the final network model can be designed and built according to the algorithm. Taking YOLOv3 as an example, the YOLOv3 network structure and loss function are built under such a framework, training images and labeled txt annotation files are fed in, deep features of the images are extracted through successive convolution, sampling and shortcut layers, and the position and category of the target are finally predicted. By comparing the predicted position and category with the ground truth, the loss function is minimized and the parameters of each layer are continually optimized, yielding the weight parameters of a better model. Running this weight file, i.e. the trained model, on a test picture predicts the position and category of the target. The classification-network training framework differs from the detection framework mainly in the training data and the network design: the classification training data are usually pictures of single objects, and the label is the object's category. The backbone feature extraction is much the same as in detection, e.g. a ResNet-18 backbone; the main differences are again the loss function and the activation function. A classification network usually applies a fully connected layer and a softmax activation to the last feature layer to obtain the probability of each class, and the class with the highest probability is the classification of the target.
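For illustration only, the ResNet-18 backbone plus fully connected layer plus softmax structure described above might be sketched in Python with PyTorch as follows; the class list and all values are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed class list; the patent only mentions fire hydrants, fire extinguishers
# and common occupying articles, so these labels are placeholders.
CLASSES = ["fire_hydrant", "fire_extinguisher", "carton", "trash_can", "chair"]

class EquipmentClassifier(nn.Module):
    """ResNet-18 backbone followed by a fully connected layer; softmax gives class probabilities."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.backbone = models.resnet18(weights=None)                         # feature extractor
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.backbone(x)
        return torch.softmax(logits, dim=1)                                   # probability of each class

# Usage sketch: probs = EquipmentClassifier()(torch.randn(1, 3, 224, 224)); predicted = probs.argmax(dim=1)
```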
In one embodiment, fig. 4 is a flow chart of a method of troubleshooting fire equipment fouling according to an embodiment of the present invention, the method comprising the steps of:
Step S402: input the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment contamination recognition model. Here the binocular image is obtained by binocular positioning: a feature point on an object is captured by two cameras fixed at different positions, the coordinates of the point on the image planes of the two cameras are obtained respectively, and as long as the precise relative positions of the two cameras are known, the coordinates of the feature point in the camera-fixed coordinate system, i.e. the position of the feature point, can be obtained geometrically;
Step S404: determine whether the fire-fighting equipment in the binocular image is contaminated, where the fire-fighting equipment includes fire hydrants, fire extinguishers and the like; for example, whether the surface of a fire hydrant is clean, stained or covered with pasted advertisements.
Through steps S402 to S404, the embodiments of the application can check not only whether the fire fighting channel is blocked but also whether the fire-fighting equipment is contaminated, which adds inspection items to fire safety supervision and further saves labour cost.
It should be noted that the preset fire-fighting equipment contamination recognition model is obtained by acquiring training pictures of the fire-fighting equipment in the fire fighting channel, labeling the fire-fighting equipment in the training pictures, and performing target-detection-framework training and classification-network training, so as to recognize whether the fire-fighting equipment is contaminated. The model first recognizes the type of the fire-fighting equipment and, after the type is determined, judges whether that piece of equipment is contaminated; for example, a fire hydrant is first recognized in the left-eye or right-eye image of the binocular image, and it is then judged whether the surface of the fire hydrant is clean or is stained or covered with advertisements.
In one embodiment, the process of acquiring the three-dimensional point cloud of the binocular image of the fire fighting channel comprises: rectifying the binocular image; calculating the disparity map of the rectified binocular image with the semi-global block matching algorithm (SGBM); acquiring the depth image from the disparity map; and acquiring the three-dimensional point cloud from the depth image.
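For illustration only, a minimal OpenCV sketch of this rectify, SGBM, depth, point-cloud chain could look as follows; the SGBM parameters and the reprojection matrix Q are assumptions that would come from stereo calibration (depth follows Z = f*B/d for focal length f, baseline B and disparity d).

```python
import cv2
import numpy as np

def binocular_to_point_cloud(left_gray, right_gray, Q):
    """Compute an SGBM disparity map for a rectified stereo pair and reproject
    it to a 3-D point cloud.  `Q` is the 4x4 reprojection matrix produced by
    stereo rectification (cv2.stereoRectify); the parameter values are illustrative."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,       # must be a multiple of 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
        uniquenessRatio=10,
    )
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)      # depth follows Z = f*B/d
    mask = disparity > 0                                  # keep only valid disparities
    return points_3d[mask].reshape(-1, 3)
```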
In one embodiment, acquiring, on the two-dimensional plane, the circumscribed-figure area of the articles in the binocular image comprises: removing the points of the point cloud distribution that correspond to the boundary of the three-dimensional point cloud, and removing the points of the point cloud distribution that correspond to points of the three-dimensional point cloud whose height exceeds a preset value; and acquiring the minimum circumscribed rectangle of each article on the two-dimensional plane, summing the areas of these minimum circumscribed rectangles, and taking the sum as the circumscribed-figure area.
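A hedged sketch of the minimum-circumscribed-rectangle summation, using OpenCV's cv2.minAreaRect on each article's projected points; the per-article grouping is assumed to come from the earlier clustering step and is not detailed in the patent.

```python
import cv2
import numpy as np

def total_circumscribed_area(article_point_sets) -> float:
    """Sum the areas of the minimum circumscribed rectangles of all articles.

    `article_point_sets` is assumed to be a list of N x 2 float arrays, one per
    article, holding the article's projected 2-D points (e.g. in metres)."""
    total = 0.0
    for pts in article_point_sets:
        (_, _), (w, h), _ = cv2.minAreaRect(pts.astype(np.float32))  # rotated bounding rectangle
        total += w * h
    return total
```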
The invention is explained in detail below with reference to a specific application scenario. Before the indoor fire fighting channel state is recognized, training pictures of fire-fighting equipment and channel-occupying articles are collected and labeled. Common occupying-article types include chairs, trash cans, cartons, boxes and the like; fire-fighting equipment types include fire hydrants, fire extinguishers and the like, where the fire hydrant is additionally labeled with whether it is stained. Training is performed with YOLO or another target detection framework, and a classification network is trained on the fire hydrant pictures. A binocular camera is installed in the fire fighting channel area so that its monitoring area covers the region of interest of the fire fighting channel.
Fig. 5 is a flowchart of an indoor fire fighting access status identification method according to an embodiment of the present invention, as shown in fig. 5, the method includes:
Step S501: after the binocular camera acquires the binocular image, select the left-eye image or the right-eye image for deep-learning detection to obtain the image coordinate positions and categories of the articles;
Step S502: rectify the binocular image, and calculate the disparity map and the three-dimensional point cloud with the Semi-Global Block Matching (SGBM) method;
Step S503: project the three-dimensional point cloud onto the ground to obtain the two-dimensional point cloud distribution, and remove the points of the original three-dimensional point cloud that lie on the image boundary or do not satisfy the height condition;
Step S504: acquire the minimum circumscribed rectangle of each article on the two-dimensional plane, obtain the actual length and width of each rectangle according to the mapping relation, and calculate the occupied proportion of the channel from the ground size;
Step S505: analyse the fire-fighting-equipment targets in the binocular image: judge with a classifier whether each detected fire hydrant is stained, count the detected fire extinguishers, and compare the count with the preset value to judge whether any fire extinguisher is missing;
Step S506: according to the types of channel-occupying articles, the total area they occupy on the ground, whether the fire hydrant is stained, and whether the number of fire extinguishers matches the preset value, raise an alarm based on the judgment result and report whether the fire fighting channel is blocked, the fire hydrant is stained, or the number of fire extinguishers does not match.
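For illustration only, the aggregation of these judgments into alarm messages might be sketched as follows; the field names and message texts are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ChannelReport:
    """Aggregated investigation result used to decide whether to raise an alarm."""
    blocked: bool
    hydrant_stained: bool
    extinguishers_found: int
    extinguishers_required: int

    def alarm_messages(self):
        msgs = []
        if self.blocked:
            msgs.append("fire fighting channel is blocked")
        if self.hydrant_stained:
            msgs.append("fire hydrant is stained")
        if self.extinguishers_found < self.extinguishers_required:
            msgs.append("fire extinguisher count below preset value")
        return msgs
```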
Through steps S501 to S506, the relevant states of the fire fighting channel are recognized, including the specific types of occupying articles and the severity of the occupation, whether the fire hydrant is stained, whether a fire extinguisher is missing, and the like.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the fire fighting channel investigation method. The display screen of the computer device may be a liquid crystal display or an electronic-ink display, and the input device may be a touch layer covering the display screen, a key, trackball or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
An embodiment of the present application provides a computer-readable storage medium. Fig. 6 is a schematic diagram of a computer device according to an embodiment of the present invention; as shown in fig. 6, a computer program is stored thereon, and when executed by a processor the program implements the fire fighting channel investigation method described above. Those skilled in the art will appreciate that the structure shown in fig. 6 is a block diagram of only the portion of the structure relevant to the present disclosure and does not limit the electronic device to which the present disclosure may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In addition, with reference to the fire fighting channel investigation method in the above embodiments, those skilled in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. A fire fighting channel investigation method, comprising:
acquiring a three-dimensional point cloud of a binocular image of a fire fighting channel and the ground area of the fire fighting channel;
projecting the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane;
acquiring, on the two-dimensional plane, the circumscribed-figure area of the articles in the binocular image;
and determining whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area.
2. The method of claim 1, wherein after the acquiring of the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, the method further comprises:
inputting a first-view image or a second-view image of the binocular image into a trained fire-fighting equipment placement recognition model to recognize the positions, types and quantities of the fire-fighting equipment in the binocular image;
and comparing the recognized positions, types and quantities of the fire-fighting equipment with preset positions, types and quantities, and judging whether the placement of the fire-fighting equipment is compliant.
3. The method of claim 2, further comprising:
acquiring training pictures of fire-fighting equipment and channel-occupying articles in the fire fighting channel;
and labeling the fire-fighting equipment and the occupying articles in the training pictures, and training a target detection framework and a classification network with the labeled training pictures to obtain the fire-fighting equipment placement recognition model.
4. The method of claim 1, wherein after the acquiring of the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, the method further comprises:
inputting the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment contamination recognition model to determine whether the fire-fighting equipment in the binocular image is contaminated.
5. The method of claim 4, further comprising:
acquiring training pictures of fire-fighting equipment in the fire fighting channel;
and labeling the contaminated fire-fighting equipment in the training pictures, and training a target detection framework and a classification network with the labeled training pictures to obtain the fire-fighting equipment contamination recognition model.
6. The method of claim 1, wherein the acquiring of the three-dimensional point cloud of the binocular image of the fire fighting channel comprises:
rectifying the binocular image;
calculating a disparity map of the rectified binocular image with the semi-global block matching algorithm (SGBM);
acquiring a depth image from the disparity map;
and acquiring the three-dimensional point cloud from the depth image.
7. The method of claim 1, wherein the acquiring, on the two-dimensional plane, of the circumscribed-figure area of the articles in the binocular image comprises:
removing the points of the point cloud distribution that correspond to the boundary of the three-dimensional point cloud, and removing the points of the point cloud distribution that correspond to points of the three-dimensional point cloud whose height exceeds a preset value;
and acquiring the minimum circumscribed rectangle of each article on the two-dimensional plane, summing the areas of the minimum circumscribed rectangles of the articles, and taking the sum as the circumscribed-figure area.
8. A fire fighting channel investigation system, comprising a binocular camera and a server;
wherein the binocular camera is configured to acquire a binocular image of the fire fighting channel and send the acquired binocular image to the server;
and the server is configured to acquire the three-dimensional point cloud of the binocular image of the fire fighting channel and the ground area of the fire fighting channel, project the three-dimensional point cloud onto the ground to obtain a point cloud distribution on a two-dimensional plane, acquire on the two-dimensional plane the circumscribed-figure area of the articles in the binocular image, and determine whether the fire fighting channel is blocked according to the ratio of the circumscribed-figure area to the ground area.
9. The system of claim 8, wherein the server is further configured to input the first-view image or the second-view image of the binocular image into a trained fire-fighting equipment placement recognition model to recognize the positions, types and quantities of the fire-fighting equipment in the binocular image, and to compare the recognized positions, types and quantities with preset positions, types and quantities to judge whether the placement of the fire-fighting equipment is compliant.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010761984.0A 2020-07-31 2020-07-31 Fire fighting channel investigation method, system and computer equipment Pending CN112001963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010761984.0A CN112001963A (en) 2020-07-31 2020-07-31 Fire fighting channel investigation method, system and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010761984.0A CN112001963A (en) 2020-07-31 2020-07-31 Fire fighting channel investigation method, system and computer equipment

Publications (1)

Publication Number Publication Date
CN112001963A true CN112001963A (en) 2020-11-27

Family

ID=73462655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010761984.0A Pending CN112001963A (en) 2020-07-31 2020-07-31 Fire fighting channel investigation method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN112001963A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204595A (en) * 2016-07-13 2016-12-07 四川大学 A kind of airdrome scene three-dimensional panorama based on binocular camera monitors method
CN107386730A (en) * 2017-07-18 2017-11-24 武汉智象机器人有限公司 A kind of intelligent underground parking garage and its parking method
US20200082207A1 (en) * 2018-09-07 2020-03-12 Baidu Online Network Technology (Beijing) Co., Ltd. Object detection method and apparatus for object detection
CN110261436A (en) * 2019-06-13 2019-09-20 暨南大学 Rail deformation detection method and system based on infrared thermal imaging and computer vision
CN110472486A (en) * 2019-07-03 2019-11-19 北京三快在线科技有限公司 A kind of shelf obstacle recognition method, device, equipment and readable storage medium storing program for executing
CN110766915A (en) * 2019-09-19 2020-02-07 重庆特斯联智慧科技股份有限公司 Alarm method and system for identifying fire fighting access state
CN110910355A (en) * 2019-11-07 2020-03-24 浙江大华技术股份有限公司 Package blocking detection method and device and computer storage medium
CN110992356A (en) * 2019-12-17 2020-04-10 深圳辰视智能科技有限公司 Target object detection method and device and computer equipment
CN111414848A (en) * 2020-03-19 2020-07-14 深动科技(北京)有限公司 Full-class 3D obstacle detection method, system and medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396040A (en) * 2021-01-19 2021-02-23 成都四方伟业软件股份有限公司 Method and device for identifying lane occupation of vehicle
CN112396040B (en) * 2021-01-19 2022-03-01 成都四方伟业软件股份有限公司 Method and device for identifying lane occupation of vehicle
CN112966619A (en) * 2021-03-15 2021-06-15 精英数智科技股份有限公司 Method and device for detecting blockage of lower opening of coal chute of coal mining belt machine head

Similar Documents

Publication Publication Date Title
CN108009543B (en) License plate recognition method and device
CN109508688B (en) Skeleton-based behavior detection method, terminal equipment and computer storage medium
CN107358149B (en) Human body posture detection method and device
US11030464B2 (en) Privacy processing based on person region depth
CN111898486B (en) Monitoring picture abnormality detection method, device and storage medium
CN112131951B (en) System for automatically identifying behaviors of illegal use of ladder in construction
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
CN112001963A (en) Fire fighting channel investigation method, system and computer equipment
CN116152863B (en) Personnel information identification method and device, electronic equipment and storage medium
CN109389105A (en) A kind of iris detection and viewpoint classification method based on multitask
CN116229560B (en) Abnormal behavior recognition method and system based on human body posture
CN110717449A (en) Vehicle annual inspection personnel behavior detection method and device and computer equipment
CN111372042B (en) Fault detection method and device, computer equipment and storage medium
CN114022846A (en) Anti-collision monitoring method, device, equipment and medium for working vehicle
US11544839B2 (en) System, apparatus and method for facilitating inspection of a target object
CN113177941A (en) Steel coil edge crack identification method, system, medium and terminal
CN116229502A (en) Image-based tumbling behavior identification method and equipment
CN106663317A (en) Morphologically processing method for digital images and digital image processing device thereof
CN115294505A (en) Risk object detection and model training method and device and electronic equipment
CN114241354A (en) Warehouse personnel behavior identification method and device, computer equipment and storage medium
CN112188151B (en) Video processing method, apparatus and computer readable storage medium
CN113449617A (en) Track safety detection method, system, device and storage medium
CN114630102A (en) Method and device for detecting angle change of data acquisition equipment and computer equipment
CN115376275B (en) Construction safety warning method and system based on image processing
CN116403165B (en) Dangerous chemical leakage emergency treatment method, dangerous chemical leakage emergency treatment device and readable storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination