CN111953933B - Method, device, medium and electronic equipment for determining fire area

Method, device, medium and electronic equipment for determining fire area

Info

Publication number
CN111953933B
Authority
CN
China
Prior art keywords
fire
key point
dimensional position
target
view camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010631909.2A
Other languages
Chinese (zh)
Other versions
CN111953933A (en)
Inventor
黄劲
黄钢
解学军
李昊然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongan Anbo Culture Technology Co., Ltd.
Original Assignee
Beijing Zhongan Anbo Culture Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongan Anbo Culture Technology Co., Ltd.
Priority to CN202010631909.2A
Publication of CN111953933A
Application granted
Publication of CN111953933B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)

Abstract

The present disclosure provides a method, apparatus, medium, and electronic device for determining a fire area. The method comprises: acquiring panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid; determining target fire key points associated with the multi-view cameras from the panoramic images; acquiring a key point distance and a key point angle for the three-dimensional position of each target fire key point based on the preset measurement basic information of each multi-view camera and its associated target fire key points; and determining a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points. By providing a three-dimensional fire model and a fire area displayed in three-dimensional space, the disclosure helps fire commanders intuitively understand the progress of a fire, assists them in quickly making decisions and directing firefighting work, and avoids fire losses caused by poor information flow when a fire occurs.

Description

Method, device, medium and electronic equipment for determining fire area
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and an electronic device for determining a fire area.
Background
Natural disasters are harmful phenomena that occur in the natural environment on which human beings depend, and the damage they cause to human society is staggering. They include sudden disasters such as earthquakes, volcanic eruptions, debris flows, tsunamis, typhoons, floods, and fires; gradual disasters that emerge over long periods, such as ground subsidence, land desertification, drought, and coastline change; and environmental disasters caused by human activities, such as ozone layer depletion, water pollution, soil erosion, and acid rain.
Taking fire as an example, a fire can develop very rapidly after it breaks out, and by the time a dispatched fire brigade arrives at the scene the fire may have grown to a different scale, so the best firefighting window may be missed for lack of firefighters at the optimal time. Conversely, the fire may be effectively controlled before the brigade arrives, wasting firefighting resources. Accurately grasping the fire situation in real time is therefore the first task of firefighting.
At present, assessment of the size and scale of a fire depends mainly on reports from on-site personnel, which are strongly subjective and often inaccurate.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
An object of the present disclosure is to provide a method, an apparatus, a medium, and an electronic device for determining a fire area, which can solve at least one of the technical problems mentioned above. The specific solution is as follows:
According to a first aspect, the present disclosure provides a method of determining a fire area, comprising:
acquiring panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid; wherein the optical axes of the camera units of each multi-view camera are arranged in parallel;
determining target fire key points associated with the multi-view camera from the panoramic image;
acquiring a key point distance and a key point angle for the three-dimensional position of each target fire key point based on the preset measurement basic information of each multi-view camera and its associated target fire key points; wherein the key point distance comprises the distance from the three-dimensional position of the center point of the multi-view camera's baseline to the three-dimensional position corresponding to the target fire key point in three-dimensional space; and the key point angle comprises the included angle between a first straight line and a second straight line, the first straight line being a line passing through the center point three-dimensional position and parallel to the camera optical axis, and the second straight line being a line passing through the center point three-dimensional position and the key point three-dimensional position; and
determining a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points.
According to a second aspect, the present disclosure provides an apparatus for determining a fire area, including:
a panoramic image acquisition unit, configured to acquire panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid; wherein the optical axes of the camera units of each multi-view camera are arranged in parallel;
a target fire key point determination unit, configured to determine target fire key points associated with the multi-view camera from the panoramic image;
a key point distance and key point angle acquisition unit, configured to acquire a key point distance and a key point angle for the three-dimensional position of each target fire key point based on the preset measurement basic information of each multi-view camera and its associated target fire key points; wherein the key point distance comprises the distance from the three-dimensional position of the center point of the multi-view camera's baseline to the three-dimensional position corresponding to the target fire key point in three-dimensional space; and the key point angle comprises the included angle between a first straight line and a second straight line, the first straight line being a line passing through the center point three-dimensional position and parallel to the camera optical axis, and the second straight line being a line passing through the center point three-dimensional position and the key point three-dimensional position; and
a target fire area determination unit, configured to determine a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points.
According to a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of determining a fire area according to any implementation of the first aspect.
According to a fourth aspect, the present disclosure provides an electronic device, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of determining a fire area according to any implementation of the first aspect.
Compared with the prior art, the scheme of the embodiment of the disclosure at least has the following beneficial effects:
the present disclosure provides a method, apparatus, medium, and electronic device for determining a fire area. The three-dimensional fire model and the fire area displayed in the three-dimensional space are provided, so that fire commanders can be helped to visually know the progress of fire, and the fire commanders can be assisted to quickly decide and command fire fighting work. Avoiding the fire loss caused by unsmooth information when a fire happens.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 shows a flow chart of a method of determining a fire area according to an embodiment of the disclosure;
FIG. 2 illustrates a block diagram of elements of an apparatus for determining a fire area according to an embodiment of the present disclosure;
fig. 3 shows an electronic device connection structure schematic according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a", "an", and "the" in this disclosure are illustrative rather than limiting, and those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Alternative embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The present disclosure provides a first embodiment: a method of determining a fire area.
The embodiments of the present disclosure are described in detail below with reference to FIG. 1.
Step S101: acquiring panoramic images of the target fire captured by at least three multi-view cameras distributed in a grid.
The fire management area is divided into multiple grid regions, which together form the fire management grid. Each safety cabinet corresponds to one grid region in the management grid and has a unique identifier, and that unique identifier corresponds to its grid region. The safety cabinet provides fire emergency supplies, on-site monitoring, wireless communication with the fire command center, and intelligent fire alarms, serving volunteers during fire-safety outreach and when a fire occurs. It includes: fire emergency supplies, a display screen, a broadcast loudspeaker, a microphone, an uninterruptible power supply, a multi-view camera, an alarm button, an information collector, and a control system.
The multi-view camera monitors changes in the field environment in real time, captures environment images in real time, and provides images of a target fire to the cloud center server so that, when a fire occurs, the server can intelligently determine the fire area.
Accordingly, the smaller the grid regions, the more densely the multi-view cameras are installed, which reduces monitoring blind spots and improves the intelligence and effectiveness of fire-area determination. Each floor of a high-rise building is provided with at least one grid region.
To avoid errors in multi-view camera ranging, the optical axes of the camera units of each multi-view camera are arranged in parallel. This reduces rectification computation and speeds up distance measurement.
To reduce unnecessary three-dimensional spatial computation and improve the efficiency with which the cloud center server determines the fire area, optionally the optical axes of the camera units of each multi-view camera are parallel and horizontal. The mounting plane of the multi-view cameras is kept parallel to the horizontal plane of the chosen three-dimensional coordinate space, so that the planes determined by any two coordinate axes are parallel or perpendicular to the intermediate computation planes, simplifying the calculation.
To determine the fire area, the multi-view cameras of the disclosed embodiments must capture a panoramic image of the target fire, that is, an image in which the target fire has a complete fire left contour line, fire right contour line, fire top contour line, and fire bottom contour line. A multi-view camera that can capture a panoramic image now may no longer be able to as the fire develops; therefore, the cloud center server continuously selects the multi-view cameras that meet this requirement.
To determine the fire area in a variety of environments, the camera units of the multi-view camera are infrared cameras, and the panoramic image they capture is a panoramic infrared image, so that the fire area can be determined at night from the panoramic infrared image.
A fire may occur on the surface of an object, for example a burning grass field, a burning open-air dump, or flames on all sides of a building. Optionally, the at least three multi-view cameras distributed in the grid include at least three multi-view cameras whose connecting lines form an acute triangle, so that panoramic images can be captured comprehensively and accurately from multiple angles and the fire area can be determined.
Step S102: determining target fire key points associated with the multi-view camera from the panoramic image.
Optionally, the target fire key points include: a fire left contour key point, a fire right contour key point, a fire highest key point, and a fire lowest key point.
Determining the target fire key points associated with the multi-view camera from the panoramic image comprises the following steps, with illustrative sketches interleaved below:
and S102-1, performing fire contour analysis on the panoramic image according to a fire contour analysis model, and acquiring a fire left contour line, a fire right contour line, a fire top contour line and a fire bottom contour line in the panoramic image.
Wherein the fire profile analysis model comprises a neural network model trained based on historical fire profile images.
The present embodiment does not describe in detail the process of training the fire profile analysis model based on the historical fire profile image, and may refer to various implementation manners in the prior art.
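Although the disclosure defers both the model architecture and its training to the prior art, the role this stage plays can be sketched minimally. In the sketch below, a fixed intensity threshold on the (infrared) panorama stands in for the trained neural network purely for illustration; the function name, the threshold value, and the use of OpenCV are assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def extract_fire_contour(panorama_gray: np.ndarray,
                         threshold: int = 200) -> np.ndarray:
    """Stand-in for the fire contour analysis model: returns the largest
    fire contour as an (N, 2) array of (x, y) pixel coordinates.

    A trained segmentation network would produce the binary fire mask;
    here a fixed intensity threshold plays that role for illustration.
    """
    _, mask = cv2.threshold(panorama_gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no fire region found in the panorama")
    return max(contours, key=cv2.contourArea).reshape(-1, 2)
```

The fire left, right, top, and bottom contour lines of steps S102-2 through S102-5 are then the corresponding arcs of this closed contour.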
Step S102-2: analyzing the fire left contour line to determine the fire left contour key point.
Optionally, the fire left contour key point comprises the connection point of the fire left contour line and the fire bottom contour line.
Step S102-3: analyzing the fire right contour line to determine the fire right contour key point.
Optionally, the fire right contour key point comprises the connection point of the fire right contour line and the fire bottom contour line.
Step S102-4: analyzing the fire top contour line to determine the fire highest key point.
Optionally, the fire highest key point comprises the highest point of the fire top contour line.
Step S102-5: analyzing the fire bottom contour line to determine the fire lowest key point.
Optionally, the fire lowest key point comprises the lowest point of the fire bottom contour line.
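Given a closed contour from the stage above, the four key points can be read off geometrically. The following is a hedged sketch under two assumptions not stated in the disclosure: image y grows downward, and the connection points with the bottom contour line are approximated by the lowest pixels at the extreme left and right of the contour.

```python
import numpy as np

def fire_keypoints(contour: np.ndarray) -> dict:
    """Select the four target fire key points from an (N, 2) pixel
    contour, following steps S102-2 through S102-5."""
    xs, ys = contour[:, 0], contour[:, 1]
    on_left = xs == xs.min()     # pixels on the fire left contour line edge
    on_right = xs == xs.max()    # pixels on the fire right contour line edge
    return {
        # Junctions with the bottom contour line, approximated by the
        # lowest pixel (maximal y) at the extreme left and right.
        "left": contour[on_left][np.argmax(ys[on_left])],
        "right": contour[on_right][np.argmax(ys[on_right])],
        "highest": contour[np.argmin(ys)],  # top of the fire top contour
        "lowest": contour[np.argmax(ys)],   # bottom of the fire bottom contour
    }
```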
Step S103: acquiring a key point distance and a key point angle for the three-dimensional position of each target fire key point based on the preset measurement basic information of each multi-view camera and its associated target fire key points.
The preset measurement basic information comprises at least a preset baseline length and preset lens focal length information.
The key point distance comprises the distance from the three-dimensional position of the center point of the multi-view camera's baseline to the three-dimensional position corresponding to the target fire key point in three-dimensional space.
The multi-view camera's baseline is the line connecting the center point of its leftmost camera unit to the center point of its rightmost camera unit. The length of this line is fixed and is the baseline length. The disclosed embodiments use the preset baseline length as one item of the preset measurement basic information for calculating the key point distance.
The key point angle comprises the included angle between a first straight line and a second straight line.
The first straight line is a line passing through the center point three-dimensional position and parallel to the camera optical axis.
The second straight line is a line passing through the center point three-dimensional position and the key point three-dimensional position.
In the disclosed embodiments, the key point distance and key point angle of each target fire key point in actual three-dimensional space are measured by multi-view camera ranging.
This embodiment does not describe the ranging process in detail; it can be implemented by reference to various prior-art implementations.
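For a binocular rig with parallel optical axes, the prior-art ranging the disclosure relies on reduces to standard triangulation, Z = f·b/d. The sketch below assumes an ideal rectified image pair and works in the horizontal plane through the optical axes (the vertical coordinate of the highest and lowest key points would be recovered analogously from the pixel row); `f`, `b`, and `cx` stand for the preset lens focal length in pixels, the preset baseline length in metres, and the principal-point column — all assumptions about the form of the preset measurement basic information.

```python
import math

def keypoint_distance_and_angle(x_left: float, x_right: float,
                                f: float, b: float, cx: float):
    """Key point distance and key point angle for one rectified
    binocular camera.

    x_left / x_right: column of the same key point in the left and
    right images. Depth follows the parallel-axis triangulation
    Z = f * b / d.
    """
    d = x_left - x_right                  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: key point not triangulable")
    z = f * b / d                         # depth along the optical axis
    x_px = (x_left + x_right) / 2.0 - cx  # pixel offset from baseline centre
    x_m = x_px * z / f                    # pixel offset -> metres
    distance = math.hypot(x_m, z)         # baseline centre point -> key point
    angle = math.atan2(x_m, z)            # vs. line parallel to optical axis
    return distance, angle
```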
Optionally, acquiring the key point distance and key point angle of each target fire key point in three-dimensional space based on the preset measurement basic information of the multi-view camera and its associated target fire key points includes the following steps (a usage sketch follows step S103-4 below):
Step S103-1: acquiring the key point distance and key point angle from the multi-view camera to the left key point three-dimensional position, based on the preset measurement basic information of the multi-view camera and the associated fire left contour key point.
The left key point three-dimensional position comprises the three-dimensional position corresponding to the fire left contour key point in three-dimensional space.
Step S103-2: acquiring the key point distance and key point angle from the multi-view camera to the right key point three-dimensional position, based on the preset measurement basic information of the multi-view camera and the associated fire right contour key point.
The right key point three-dimensional position comprises the three-dimensional position corresponding to the fire right contour key point in three-dimensional space.
Step S103-3: acquiring the key point distance and key point angle from the multi-view camera to the highest key point three-dimensional position, based on the preset measurement basic information of the multi-view camera and the associated fire highest key point.
The highest key point three-dimensional position comprises the three-dimensional position corresponding to the fire highest key point in three-dimensional space.
Step S103-4: acquiring the key point distance and key point angle from the multi-view camera to the lowest key point three-dimensional position, based on the preset measurement basic information of the multi-view camera and the associated fire bottom contour key point.
The lowest key point three-dimensional position comprises the three-dimensional position corresponding to the fire lowest key point in three-dimensional space.
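Steps S103-1 through S103-4 then amount to applying the same measurement to each of the four key points. A usage sketch continuing the hypothetical helpers above:

```python
def measure_fire_keypoints(left_gray, right_gray, f, b, cx):
    """Steps S103-1..S103-4 for one multi-view camera: locate the four
    key points in each image, then triangulate each in turn."""
    kps_l = fire_keypoints(extract_fire_contour(left_gray))
    kps_r = fire_keypoints(extract_fire_contour(right_gray))
    return {name: keypoint_distance_and_angle(kps_l[name][0],
                                              kps_r[name][0], f, b, cx)
            for name in ("left", "right", "highest", "lowest")}
```

Each entry of the returned dictionary is the (key point distance, key point angle) pair for that key point as seen from this multi-view camera.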
Step S104: determining a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points.
Specifically, this comprises the following steps (a code sketch follows this subsection):
and S104-1, respectively acquiring the three-dimensional position of the left key point based on the preset three-dimensional position of each multi-view camera, the associated key point distance and the associated key point angle.
And S104-2, respectively acquiring the three-dimensional position of the right key point based on the preset three-dimensional position of each multi-view camera, the associated key point distance and the key point angle.
And S104-3, respectively acquiring the three-dimensional position of the highest key point based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle.
And S104-4, respectively acquiring the three-dimensional position of the lowest key point based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle.
And S104-5, sequentially connecting the three-dimensional positions of the left key point, the right key point and the lowest key point which are adjacent to each other to form a closed target fire basic curve.
And S104-6, sequentially connecting the adjacent highest key points of the fire to form a closed target fire high point curve.
And S104-7, acquiring a three-dimensional fire model of the target fire based on the target fire basic curve and the target fire high-point curve.
The three-dimensional fire model is displayed, based on the three-dimensional space, on a computer at the fire command center, helping fire commanders intuitively understand the progress of the fire and quickly make decisions and direct firefighting work.
Further, acquiring the three-dimensional fire model of the target fire based on the target fire base curve and the target fire high-point curve includes the following steps:
Step S104-7-1: generating a horizontal fire projection of the target fire base curve onto the horizontal plane formed by the lowest points of the target fire base curve.
Step S104-7-2: determining the horizontal fire projection as the horizontal fire area.
In this step, the horizontal fire area is provided, based on the three-dimensional space, on a computer at the fire command center, helping fire commanders grasp the area and scale of the fire, assemble firefighters for disaster relief in time, and avoid fire losses caused by poor information flow when a fire occurs.
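Putting steps S104-1 through S104-7-2 together: each (key point distance, key point angle) pair is converted back to a three-dimensional position using the camera's preset pose, the recovered base-curve points are connected in order, and the horizontal fire area follows from the projected polygon. The pose convention (baseline centre position plus a horizontal heading for the optical axis) and the shoelace-formula area below are illustrative assumptions:

```python
import math

def keypoint_position(cam_xyz, cam_yaw, distance, angle, height=0.0):
    """Three-dimensional position of a key point from one camera's
    measurement (steps S104-1..S104-4).

    cam_xyz: preset three-dimensional position of the baseline centre
    point; cam_yaw: horizontal heading of the optical axis in radians;
    height: vertical offset, zero for base-curve key points."""
    heading = cam_yaw + angle
    return (cam_xyz[0] + distance * math.cos(heading),
            cam_xyz[1] + distance * math.sin(heading),
            cam_xyz[2] + height)

def horizontal_fire_area(base_curve):
    """Area of the closed target fire base curve projected onto the
    horizontal plane (steps S104-7-1 and S104-7-2), via the shoelace
    formula."""
    pts = [(p[0], p[1]) for p in base_curve]      # drop the z coordinate
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0
```

Connecting the left, right, and lowest key point positions recovered from all cameras in order, as in step S104-5, yields `base_curve`; feeding it to `horizontal_fire_area` gives the horizontal fire area that the cloud center server would display at the fire command center.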
Corresponding to the first embodiment provided by the present disclosure, the present disclosure also provides a second embodiment: an apparatus for determining a fire area. Since the second embodiment is basically similar to the first embodiment, its description is brief; for relevant details, refer to the corresponding description of the first embodiment. The apparatus embodiment described below is merely illustrative.
Fig. 2 illustrates an embodiment of an apparatus for determining a fire area provided by the present disclosure.
Referring to fig. 2, the present disclosure provides an apparatus for determining a fire area, including:
a panoramic image acquisition unit 201, configured to acquire panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid; wherein the optical axes of the camera units of each multi-view camera are arranged in parallel;
a target fire key point determination unit 202, configured to determine target fire key points associated with the multi-view camera from the panoramic image;
a key point distance and key point angle acquisition unit 203, configured to acquire a key point distance and a key point angle for the three-dimensional position of each target fire key point based on the preset measurement basic information of each multi-view camera and its associated target fire key points; wherein the key point distance comprises the distance from the three-dimensional position of the center point of the multi-view camera's baseline to the three-dimensional position corresponding to the target fire key point in three-dimensional space; and the key point angle comprises the included angle between a first straight line and a second straight line, the first straight line being a line passing through the center point three-dimensional position and parallel to the camera optical axis, and the second straight line being a line passing through the center point three-dimensional position and the key point three-dimensional position; and
a target fire area determination unit 204, configured to determine a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points.
Optionally, the target fire key point determination unit 202 includes:
a panoramic image analysis subunit, configured to perform fire contour analysis on the panoramic image with a fire contour analysis model to obtain the fire left contour line, fire right contour line, fire top contour line, and fire bottom contour line in the panoramic image; the fire contour analysis model comprising a neural network model trained on historical fire contour images;
a fire left contour key point determination subunit, configured to analyze the fire left contour line and determine the fire left contour key point, the fire left contour key point comprising the connection point of the fire left contour line and the fire bottom contour line;
a fire right contour key point determination subunit, configured to analyze the fire right contour line and determine the fire right contour key point, the fire right contour key point comprising the connection point of the fire right contour line and the fire bottom contour line;
a fire highest key point determination subunit, configured to analyze the fire top contour line and determine the fire highest key point; and
a fire lowest key point determination subunit, configured to analyze the fire bottom contour line and determine the fire lowest key point.
Optionally, the preset measurement basic information comprises at least a preset baseline length and preset lens focal length information;
and the key point distance and key point angle acquisition unit 203 includes:
a left key point three-dimensional position measurement subunit, configured to acquire the key point distance and key point angle from the multi-view camera to the left key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire left contour key point; the left key point three-dimensional position comprising the three-dimensional position corresponding to the fire left contour key point in three-dimensional space;
a right key point three-dimensional position measurement subunit, configured to acquire the key point distance and key point angle from the multi-view camera to the right key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire right contour key point; the right key point three-dimensional position comprising the three-dimensional position corresponding to the fire right contour key point in three-dimensional space;
a highest key point three-dimensional position measurement subunit, configured to acquire the key point distance and key point angle from the multi-view camera to the highest key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire highest key point; the highest key point three-dimensional position comprising the three-dimensional position corresponding to the fire highest key point in three-dimensional space; and
a lowest key point three-dimensional position measurement subunit, configured to acquire the key point distance and key point angle from the multi-view camera to the lowest key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire bottom contour key point; the lowest key point three-dimensional position comprising the three-dimensional position corresponding to the fire lowest key point in three-dimensional space.
Optionally, the target fire area determination unit 204 includes:
a left key point three-dimensional position acquisition subunit, configured to acquire each left key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
a right key point three-dimensional position acquisition subunit, configured to acquire each right key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
a highest key point three-dimensional position acquisition subunit, configured to acquire each highest key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
a lowest key point three-dimensional position acquisition subunit, configured to acquire each lowest key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
a target fire base curve generation subunit, configured to sequentially connect the adjacent left key point, right key point, and lowest key point three-dimensional positions to generate a closed target fire base curve;
a target fire high-point curve generation subunit, configured to sequentially connect the adjacent fire highest key points to generate a closed target fire high-point curve; and
a three-dimensional fire model acquisition subunit, configured to acquire a three-dimensional fire model of the target fire based on the target fire base curve and the target fire high-point curve.
Optionally, the three-dimensional fire model acquisition subunit includes:
a horizontal fire projection generation subunit, configured to generate a horizontal fire projection of the target fire base curve onto the horizontal plane formed by the lowest points of the target fire base curve; and
a horizontal fire area determination subunit, configured to determine the horizontal fire projection as the horizontal fire area.
Optionally, the optical axes of the camera units of each multi-view camera are parallel and horizontal.
Optionally, the at least three multi-view cameras distributed in the grid include at least three multi-view cameras whose connecting lines form an acute triangle.
The disclosed embodiments provide a three-dimensional fire model and a fire area displayed in three-dimensional space, helping fire commanders intuitively understand the progress of a fire and quickly make decisions and direct firefighting work, and avoiding fire losses caused by poor information flow when a fire occurs.
The present disclosure provides a third embodiment: an electronic device for determining a fire area, the electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of determining a fire area described in the first embodiment.
The present disclosure provides a fourth embodiment: a computer storage medium for determining a fire area, the computer storage medium storing computer-executable instructions that can perform the method of determining a fire area described in the first embodiment.
Referring now to FIG. 3, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. The RAM 303 also stores various programs and data necessary for the operation of the electronic device. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, it may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with (but not limited to) features having similar functions disclosed herein.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method of determining a fire area, comprising:
acquiring panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid; wherein the optical axes of the camera units of each multi-view camera are arranged in parallel;
determining target fire key points associated with the multi-view camera from the panoramic image;
acquiring a key point distance and a key point angle for the three-dimensional position of each target fire key point based on preset measurement basic information of each multi-view camera and its associated target fire key points; wherein the key point distance comprises the distance from the three-dimensional position of the center point of the multi-view camera's baseline to the three-dimensional position corresponding to the target fire key point in three-dimensional space; the key point angle comprises the included angle between a first straight line and a second straight line, the first straight line being a line passing through the center point three-dimensional position and parallel to the camera optical axis, and the second straight line being a line passing through the center point three-dimensional position and the key point three-dimensional position; and the preset measurement basic information comprises at least a preset baseline length and preset lens focal length information; and
determining a target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points.
2. The method of claim 1, wherein said determining target fire key points associated with the multi-view camera from the panoramic image comprises:
performing fire contour analysis on the panoramic image with a fire contour analysis model to obtain a fire left contour line, a fire right contour line, a fire top contour line, and a fire bottom contour line in the panoramic image; wherein the fire contour analysis model comprises a neural network model trained on historical fire contour images;
analyzing the fire left contour line to determine a fire left contour key point, the fire left contour key point comprising the connection point of the fire left contour line and the fire bottom contour line;
analyzing the fire right contour line to determine a fire right contour key point, the fire right contour key point comprising the connection point of the fire right contour line and the fire bottom contour line;
analyzing the fire top contour line to determine a fire highest key point; and
analyzing the fire bottom contour line to determine a fire lowest key point.
3. The method according to claim 2, wherein the preset measurement basic information comprises at least a preset baseline length and preset lens focal length information;
and wherein acquiring the key point distance and key point angle of each target fire key point in three-dimensional space based on the preset measurement basic information of the multi-view camera and its associated target fire key points comprises:
acquiring the key point distance and key point angle from the multi-view camera to the left key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire left contour key point; wherein the left key point three-dimensional position comprises the three-dimensional position corresponding to the fire left contour key point in three-dimensional space;
acquiring the key point distance and key point angle from the multi-view camera to the right key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire right contour key point; wherein the right key point three-dimensional position comprises the three-dimensional position corresponding to the fire right contour key point in three-dimensional space;
acquiring the key point distance and key point angle from the multi-view camera to the highest key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire highest key point; wherein the highest key point three-dimensional position comprises the three-dimensional position corresponding to the fire highest key point in three-dimensional space; and
acquiring the key point distance and key point angle from the multi-view camera to the lowest key point three-dimensional position based on the preset measurement basic information of the multi-view camera and the associated fire bottom contour key point; wherein the lowest key point three-dimensional position comprises the three-dimensional position corresponding to the fire lowest key point in three-dimensional space.
4. The method of claim 3, wherein determining the target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of their associated target fire key points comprises:
acquiring each left key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
acquiring each right key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
acquiring each highest key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
acquiring each lowest key point three-dimensional position based on the preset three-dimensional position of each multi-view camera and the associated key point distance and key point angle;
sequentially connecting the adjacent left key point, right key point, and lowest key point three-dimensional positions to generate a closed target fire base curve;
sequentially connecting the adjacent fire highest key points to generate a closed target fire high-point curve; and
acquiring a three-dimensional fire model of the target fire based on the target fire base curve and the target fire high-point curve.
5. The method of claim 4, wherein acquiring the three-dimensional fire model of the target fire based on the target fire base curve and the target fire high-point curve comprises:
generating a horizontal fire projection of the target fire base curve onto the horizontal plane formed by the lowest points of the target fire base curve; and
determining the horizontal fire projection as the horizontal fire area.
6. The method according to any one of claims 1 to 5, wherein the optical axes of the camera units of each multi-view camera are parallel and horizontal.
7. The method according to any one of claims 1 to 5, wherein the at least three multi-view cameras distributed in the grid include at least three multi-view cameras whose connecting lines form an acute triangle.
8. An apparatus for determining a fire area, comprising:
a panoramic image acquisition unit, configured to acquire panoramic images of a target fire captured by at least three multi-view cameras distributed in a grid, the optical axes of the cameras of each multi-view camera being arranged in parallel;
a target fire key point determination unit, configured to determine, from the panoramic images, the target fire key points associated with each multi-view camera;
a key point distance and key point angle acquisition unit, configured to acquire the key point distance and the key point angle of each target fire key point in the three-dimensional position based on the preset measurement basic information of each multi-view camera and the associated target fire key point, wherein the key point distance is the distance from the three-dimensional position of the center point of the multi-view camera baseline to the three-dimensional position of the target fire key point in three-dimensional space; the key point angle is the included angle between a first straight line and a second straight line, the first straight line passing through the three-dimensional position of the center point and running parallel to the optical axis of the camera, and the second straight line passing through the three-dimensional position of the center point and the three-dimensional position of the key point; and the preset measurement basic information includes at least a preset baseline length and preset lens focal length information;
and a target fire area determination unit, configured to determine the target fire area based on the preset three-dimensional positions of all the multi-view cameras and the key point distances and key point angles of the associated target fire key points.
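For orientation, the key point distance and key point angle that the acquisition unit derives from the preset baseline length and lens focal length correspond, in the standard rectified-stereo model, to triangulating the key point from its disparity between the two lens images. The sketch below uses that textbook model with the origin at the baseline center; the pixel-coordinate convention and the function signature are assumptions, not the patent's exact method.

    import math

    def keypoint_distance_and_angle(baseline, focal_px, x_left, x_right):
        # baseline: preset baseline length between the two lenses (metres).
        # focal_px: preset lens focal length, expressed in pixels.
        # x_left, x_right: horizontal pixel coordinate of the same key point
        # in each image, measured from that image's principal point.
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("key point must lie in front of both lenses")
        z = focal_px * baseline / disparity                    # depth along the optical axis
        x = baseline * (x_left + x_right) / (2.0 * disparity)  # offset from the baseline center
        distance = math.hypot(x, z)                            # the claim's key point distance
        angle = math.degrees(math.atan2(x, z))                 # angle to the optical axis
        return distance, angle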
9. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 7.
CN202010631909.2A 2020-07-03 2020-07-03 Method, device, medium and electronic equipment for determining fire area Active CN111953933B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010631909.2A CN111953933B (en) 2020-07-03 2020-07-03 Method, device, medium and electronic equipment for determining fire area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010631909.2A CN111953933B (en) 2020-07-03 2020-07-03 Method, device, medium and electronic equipment for determining fire area

Publications (2)

Publication Number Publication Date
CN111953933A CN111953933A (en) 2020-11-17
CN111953933B (en) 2022-07-05

Family

ID=73337650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010631909.2A Active CN111953933B (en) 2020-07-03 2020-07-03 Method, device, medium and electronic equipment for determining fire area

Country Status (1)

Country Link
CN (1) CN111953933B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPQ289099A0 (en) * 1999-09-16 1999-10-07 Silverbrook Research Pty Ltd Method and apparatus for manipulating a bayer image
CN101609589A (en) * 2008-06-17 2009-12-23 侯荣琴 Multi-frequency image fire detection system
CN107274400B (en) * 2017-06-21 2021-02-12 歌尔光学科技有限公司 Space positioning device, positioning processing method and device, and virtual reality system
CN110706447B (en) * 2019-10-14 2022-05-03 浙江大华技术股份有限公司 Disaster position determination method, disaster position determination device, storage medium, and electronic device

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298818A (en) * 2011-08-18 2011-12-28 中国科学技术大学 Binocular shooting fire detecting and positioning device and fire positioning method thereof
CN102800083A (en) * 2012-06-19 2012-11-28 中国农业大学 Crop spraying positioning method based on binocular vision gridding partition matching algorithm
CN105931409A (en) * 2016-05-30 2016-09-07 重庆大学 Infrared and visible light camera linkage-based forest fire monitoring method
KR101867469B1 (en) * 2017-02-15 2018-06-15 주식회사 동남기술단 System for monitoring forest fire using solar energy generation
CN108230383A (en) * 2017-03-29 2018-06-29 北京市商汤科技开发有限公司 Hand three-dimensional data determines method, apparatus and electronic equipment
CN108600607A (en) * 2018-03-13 2018-09-28 上海网罗电子科技有限公司 A kind of fire-fighting panoramic information methods of exhibiting based on unmanned plane
CN108876856A (en) * 2018-06-29 2018-11-23 北京航空航天大学 A kind of heavy construction fire fire source recognition positioning method and system
CN109035307A (en) * 2018-07-16 2018-12-18 湖北大学 Setting regions target tracking method and system based on natural light binocular vision
CN109303995A (en) * 2018-09-12 2019-02-05 东南大学 Fire-fighting robot fire monitor control method based on fire source fixation and recognition
CN109618108A (en) * 2019-01-07 2019-04-12 Oppo广东移动通信有限公司 Electronic equipment and mobile platform
CN110287954A (en) * 2019-06-05 2019-09-27 北京字节跳动网络技术有限公司 Target area determines training method, device and the computer readable storage medium of model
CN110705071A (en) * 2019-09-24 2020-01-17 浙江树人学院(浙江树人大学) Fire fighting three-dimensional digital plan method fusing fire prediction model
CN110837822A (en) * 2019-12-09 2020-02-25 国网智能科技股份有限公司 Fire-fighting robot injection curve adjusting method and device based on multi-view vision
CN111179279A (en) * 2019-12-20 2020-05-19 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Simulation of a method for locating trapped persons in a fire based on image processing; Gu Mengxia et al.; Computer Simulation (《计算机仿真》); 2016-07-15; full text *

Also Published As

Publication number Publication date
CN111953933A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
US11188686B2 (en) Method and apparatus for holographic display based upon position and direction
US10628617B1 (en) Method and apparatus for wireless determination of position and orientation of a smart device
Qiu et al. Enabling cloud computing in emergency management systems
US10863310B2 (en) Method, server and terminal for information interaction
US20200065433A1 (en) Method and apparatus for construction and operation of connected infrastructure
CN112488783B (en) Image acquisition method and device and electronic equipment
Kumar et al. Cost estimation of cellularly deployed IoT-enabled network for flood detection
US20200242282A1 (en) Apparatus for operation of connected infrastructure
US11481527B2 (en) Apparatus for displaying information about an item of equipment in a direction of interest
CN115631212B (en) Person accompanying track determining method and device, electronic equipment and readable medium
Ghasemi et al. A qualitative study of various aspects of the application of IoT in disaster management
CN111953933B (en) Method, device, medium and electronic equipment for determining fire area
US10659920B1 (en) Efficient discovery of survivors in disaster area using robotics and mobile networks
CN115375855A (en) Visualization method and device for engineering project, electronic equipment and readable medium
CN112734962B (en) Attendance information generation method and device, computer equipment and readable storage medium
Lee et al. The role of information technologies in crises: a review and conceptual development of IT-enabled agile crisis management
Hadiana Fog Computing Architecture for Indoor Disaster Management
US20200221261A1 (en) Visualization of spatio-temporal location
JP2022551671A (en) OBJECT DISPLAY METHOD, APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
CN111367485A (en) Method, device, medium and electronic equipment for controlling combined multimedia blackboard
Demirkan et al. Real-time perception enhancement in obscured environments for underground mine search and rescue teams
Supekar et al. Sensor data visualization on google maps using AWS, and IoT Discovery Board
CN117058319A (en) Building group model processing method, device, equipment and medium
CN112033284B (en) Memory, interactive measurement method, system and equipment based on monitoring video
CN118280068A (en) Control method, device, terminal and storage medium of electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant