CN113436338A - Three-dimensional reconstruction method and device for fire scene, server and readable storage medium - Google Patents

Three-dimensional reconstruction method and device for fire scene, server and readable storage medium

Info

Publication number
CN113436338A
Authority
CN
China
Prior art keywords
point cloud
local
dimensional model
type
dimensional
Prior art date
Legal status
Pending
Application number
CN202110792133.7A
Other languages
Chinese (zh)
Inventor
崔岩 (Cui Yan)
Current Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Priority date
2021-07-14
Filing date
2021-07-14
Publication date
2021-09-24
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN202110792133.7A
Publication of CN113436338A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data


Abstract

The application is applicable to the technical field of image and visual processing, and provides a three-dimensional reconstruction method and device for a fire scene, a server and a readable storage medium. The method comprises the following steps: acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene; generating a point cloud according to the panoramic image to be processed; reconstructing local three-dimensional models according to the types of the point cloud; and combining the local three-dimensional models to obtain a global three-dimensional model. In this way, the application models the acquired panoramic images automatically, establishes separate local three-dimensional models for the different point cloud types during modeling, and finally merges the local three-dimensional models into a global three-dimensional model, so that the differences between objects in the real fire scene are fully considered and the realism of the three-dimensional model is improved.

Description

Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
Technical Field
The application belongs to the technical field of image and visual processing, and particularly relates to a three-dimensional reconstruction method and device for a fire scene, a server and a readable storage medium.
Background
The fire rescue department is an important guarantee of public safety. Fire scene investigation is the systematic investigation work that the fire rescue department carries out, within the scope of authority specified by laws and regulations, on the fire scene, related places, articles, remains and all objects capable of proving the cause, nature and responsibility of the fire, using scientific means and investigation methods, and through which fire conclusions are drawn from on-site analysis. However, factors such as man-made damage, fire suppression and the limitations of traditional techniques increase the difficulty of fire scene investigation. In the prior art, a three-dimensional model of the fire scene is drawn manually using three-dimensional modeling software, so that the three-dimensional model can later be used for fire accident investigation. However, manual modeling has a long production cycle, the resulting model deviates from the actual scene, and accident simulation cannot be combined with the real scene environment, which hinders rapid reproduction of the accident scene and simulation analysis of the case.
Disclosure of Invention
The embodiments of the application provide a three-dimensional reconstruction method and device for a fire scene, a server and a readable storage medium, which solve the technical problems in the prior art that manual modeling of a fire scene has a long production cycle and deviates from the actual scene, so that accident simulation cannot be combined with the real scene environment, affecting rapid reproduction of the accident scene and simulation analysis of the case.
In a first aspect, an embodiment of the present application provides a method for three-dimensional reconstruction of a fire scene, including:
acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
generating a point cloud according to the panoramic image to be processed;
reconstructing according to the type of the point cloud to obtain a local three-dimensional model;
and combining the local three-dimensional models to obtain a global three-dimensional model.
In a possible implementation manner of the first aspect, generating a point cloud according to the to-be-processed panoramic image includes:
carrying out depth estimation on the panoramic image to be processed to obtain depth information;
and obtaining point cloud according to the depth information.
In a possible implementation manner of the first aspect, performing depth estimation on the to-be-processed panoramic image to obtain depth information includes:
extracting feature points of the panoramic image to be processed according to a preset feature extraction algorithm;
screening out target feature points with matching relations in the feature points;
and calculating the depth information according to the target feature points with the matching relationship.
In a possible implementation manner of the first aspect, reconstructing a local three-dimensional model according to the type of the point cloud includes:
inputting the three-dimensional coordinates of the point cloud to a pre-trained point cloud identification model to obtain the type of the point cloud;
and reconstructing a local three-dimensional model according to the type of the point cloud.
In a possible implementation manner of the first aspect, the pre-trained point cloud identification model includes a local information processing module and a global information processing module;
inputting the three-dimensional coordinates of the point cloud into a pre-trained point cloud identification model to obtain the type of the point cloud comprises:
extracting local characteristic information of the point cloud according to the three-dimensional coordinates based on the local information processing module;
and identifying semantic information of the point cloud according to the local characteristic information of the point cloud based on the global information processing module to obtain the type of the point cloud.
In one possible implementation manner of the first aspect, the point clouds are a first type point cloud, a second type point cloud, and a third type point cloud;
the local three-dimensional model comprises a first local three-dimensional model, a second local three-dimensional model and a third local three-dimensional model;
reconstructing a local three-dimensional model according to the type of the point cloud, including:
segmenting the first type of point cloud according to a preset rasterized region growing algorithm, calculating the vertex of the first type of point cloud, and reconstructing based on the vertex of the first type of point cloud to obtain a first local three-dimensional model;
dividing the second type point cloud according to a preset Euclidean distance algorithm, calculating two end points of the second type point cloud, and reconstructing based on the two end points of the second type point cloud to obtain a second local three-dimensional model;
and reconstructing the third type point cloud according to a preset greedy projection triangulation algorithm to obtain a third local three-dimensional model.
In a possible implementation manner of the first aspect, combining the local three-dimensional models to obtain a global three-dimensional model comprises:
respectively carrying out first registration on point clouds in the local three-dimensional model according to a preset ICP algorithm to obtain a registered local three-dimensional model;
determining edge point clouds among the local three-dimensional models after the first registration;
performing second registration on the edge point cloud according to a pre-trained registration neural network model;
and forming a global three-dimensional model based on the first registered local three-dimensional model and the second registered edge point cloud.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus for a fire scene, including:
the acquisition module is used for acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
the generating module is used for generating a point cloud according to the panoramic image to be processed;
the reconstruction module is used for reconstructing to obtain a local three-dimensional model according to the type of the point cloud;
and the merging module is used for merging the local three-dimensional models to obtain a global three-dimensional model.
In one possible implementation, the generating module includes:
the depth estimation submodule is used for carrying out depth estimation on the panoramic image to be processed to obtain depth information;
and the generation submodule is used for obtaining point cloud according to the depth information.
In one possible implementation, the depth estimation sub-module includes:
the extraction unit is used for extracting the feature points of the panoramic image to be processed according to a preset feature extraction algorithm;
the screening unit is used for screening out target feature points with matching relations in the feature points;
and the calculating unit is used for calculating the depth information according to the target characteristic points with the matching relation.
In one possible implementation, the reconstruction module includes:
the point cloud type identification submodule is used for inputting the three-dimensional coordinates of the point cloud into a pre-trained point cloud identification model to obtain the type of the point cloud;
and the reconstruction submodule is used for reconstructing a local three-dimensional model according to the type of the point cloud.
In one possible implementation manner, the pre-trained point cloud identification model comprises a local information processing module and a global information processing module;
the reconstruction submodule includes:
a local processing unit, configured to extract local feature information of the point cloud according to the three-dimensional coordinates based on the local information processing module;
and the global processing unit is used for identifying the semantic information of the point cloud according to the local characteristic information of the point cloud based on the global information processing module to obtain the type of the point cloud.
In one possible implementation, the point clouds are a first type of point cloud, a second type of point cloud, and a third type of point cloud;
the local three-dimensional model comprises a first local three-dimensional model, a second local three-dimensional model and a third local three-dimensional model;
the reconstruction submodule includes:
the first reconstruction unit is used for segmenting the first type of point cloud according to a preset rasterized region growing algorithm, calculating the vertex of the first type of point cloud, and reconstructing based on the vertex of the first type of point cloud to obtain a first local three-dimensional model;
the second reconstruction unit is used for segmenting the second type point cloud according to a preset Euclidean distance algorithm, calculating two end points of the second type point cloud, and reconstructing based on the two end points of the second type point cloud to obtain a second local three-dimensional model;
and the third reconstruction unit is used for reconstructing the third type point cloud according to a preset greedy projection triangulation algorithm to obtain a third local three-dimensional model.
In one possible implementation, the merging module includes:
the first registration submodule is used for respectively carrying out first registration on point clouds in the local three-dimensional model according to a preset ICP algorithm to obtain a local three-dimensional model after the first registration;
the determining module is used for determining edge point clouds among the local three-dimensional models after the first registration;
the second registration module is used for carrying out second registration on the edge point cloud according to a pre-trained registration neural network model;
and the forming module is used for forming a global three-dimensional model based on the first registered local three-dimensional model and the second registered edge point cloud.
In a third aspect, an embodiment of the present application provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
In the embodiment of the application, a panoramic image to be processed is acquired, where the panoramic image to be processed is a panoramic image shot by a depth camera at the fire scene; a point cloud is generated according to the panoramic image to be processed; local three-dimensional models are reconstructed according to the types of the point cloud; and the local three-dimensional models are combined to obtain a global three-dimensional model. In this way, the acquired panoramic images are modeled automatically, separate local three-dimensional models are established for the different point cloud types during modeling, and the local three-dimensional models are finally merged into a global three-dimensional model, so that the differences between objects in the real fire scene are fully considered and the realism of the three-dimensional model is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a three-dimensional reconstruction method for a fire scene according to an embodiment of the present disclosure;
fig. 2 is a detailed flowchart of step S104 in fig. 1 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present disclosure;
fig. 3 is a detailed flowchart of step S204 in fig. 2 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present application;
fig. 4 is a detailed flowchart of step S106 in fig. 1 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating an implementation of step S402 in fig. 4 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present application;
fig. 6 is a detailed flowchart of step S404 in fig. 4 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present application;
fig. 7 is a schematic flowchart of step S108 in fig. 1 of a method for three-dimensional reconstruction of a fire scene according to an embodiment of the present application;
fig. 8 is a block diagram of a three-dimensional reconstruction device of a fire scene according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The technical solutions provided in the embodiments of the present application will be described below with specific embodiments.
The fire scene in the embodiments of the application is an urban fire scene. An urban fire is a fire that occurs in a city, where buildings and vegetation are adjacent and intermingled. In the prior art, a three-dimensional model of the fire scene is drawn manually using three-dimensional modeling software; such a model deviates from the actual scene, accident simulation cannot be combined with the real scene environment, and rapid reproduction of the accident scene and simulation analysis of the case are affected.
Referring to fig. 1, a schematic flow chart of a three-dimensional reconstruction method for a fire scene provided in an embodiment of the present application is shown. By way of example and not limitation, the method may be applied to a server that is connected to a depth camera and to a user terminal respectively; the server may be a computing device such as a cloud server, and the user terminal may be a mobile computing device such as a mobile phone or a tablet computer. The three-dimensional reconstruction method for a fire scene may include the following steps:
and S102, acquiring a panoramic image to be processed.
The panoramic image to be processed is a panoramic image shot by a depth camera at the fire scene. The depth camera in the embodiment of the application may be an eight-lens camera composed of an upper group and a lower group of fisheye lenses; the lenses respectively collect four groups of lens images, which are stitched into a 360-degree panorama.
It can be understood that the user collects the spatial information of the fire scene through the depth camera at the fire scene, so that the server can reconstruct a three-dimensional model of the fire scene according to the spatial information of the fire scene sent by the depth camera, and the server sends the three-dimensional model of the fire scene to the user terminal for the user to view.
For example, the operation flow of the user at the fire scene may be: install and fix the depth camera, power on the depth camera, enable the WiFi connection of a mobile phone or iPad, open the applet on the mobile phone or iPad, plan a shooting route, and move the depth camera to shoot; the depth camera then stores the spatial data and automatically uploads it to the server.
And step S104, generating a point cloud according to the panoramic image to be processed.
In a specific application, as shown in fig. 2, a specific flowchart of step S104 in fig. 1 of the method for three-dimensional reconstruction of a fire scene provided in an embodiment of the present application is shown, and generating a point cloud according to a to-be-processed panoramic image includes:
and S202, carrying out depth estimation on the panoramic image to be processed to obtain depth information.
In a specific application, as shown in fig. 3, which is a detailed flowchart of step S204 in fig. 2 of the three-dimensional reconstruction method for a fire scene provided in an embodiment of the present application, performing depth estimation on the panoramic image to be processed to obtain depth information includes:
and S302, extracting the feature points of the panoramic image to be processed according to a preset feature extraction algorithm.
The preset feature extraction algorithm may be a corner detection algorithm, such as Harris corner detection or FAST corner detection, or a blob feature detection algorithm, such as the SIFT or SURF extraction algorithms.
And S304, screening out target feature points with a stable matching relation in the feature points.
In specific application, the feature points of the panoramic image to be processed are input into a pre-trained panoramic image feature point matching model, and target feature points with a stable matching relation in the feature points are screened out.
It can be understood that, due to hardware errors and other factors, the matching relationships between feature points corresponding to the same real-world point are not always stable across panoramic images shot at the same position from different viewing angles; that is, some feature points do not actually correspond to the real point. Target feature points with a stable matching relationship therefore need to be screened out, so that the accuracy of the subsequent depth estimation based on these target feature points is improved.
Illustratively, the training process of the panoramic image feature point matching model may be: acquiring a training scene data set, wherein each training scene data in the training scene data set comprises a plurality of training images shot by the same camera at different shooting positions in a training scene; determining a target truth value of a training scene data set, wherein the target truth value is a target characteristic point which is extracted from the training scene data set and has a target matching relationship; and training the panoramic image feature point matching model by taking the training scene data set as input and the target truth value as output to obtain the trained panoramic image feature point matching model.
Therefore, the target truth value of the training scene data set (namely the characteristic points with the robust matching relationship in the training scene data set) is automatically determined, the robust matching relationship does not need to be manually configured for the huge number of characteristic points in the training scene data set to serve as the target truth value, and the effect that the panoramic image characteristic point matching model obtained through training according to the training scene data set and the target truth value can accurately perform characteristic matching on the panoramic image is achieved.
And step S306, calculating depth information according to the target feature points with the matching relation.
The depth information refers to the depth value of each target feature point from the corresponding real point.
In a specific application, the target feature points with a matching relationship are processed directly according to an SfM (structure from motion) algorithm, and the depth information and the position information of the depth camera are calculated.
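By way of illustration and not limitation, the following Python sketch shows a classical version of the feature extraction and matching steps using OpenCV's SIFT detector and a ratio test. It is only a stand-in for the preset feature extraction algorithm and the pre-trained panoramic image feature point matching model described above; the function name and parameter values are assumptions.

```python
# Illustrative sketch only: SIFT extraction plus ratio-test matching as a
# classical stand-in for the patent's feature extraction and match screening.
import cv2

def extract_and_match(image_path_a, image_path_b, ratio=0.75):
    img_a = cv2.imread(image_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(image_path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()                    # preset feature extractor (SIFT as one example)
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_a, des_b, k=2)   # two best candidates per feature

    # Lowe's ratio test as a simple substitute for screening stable matches;
    # the patent instead uses a pre-trained feature point matching model.
    good = [m for m, n in candidates if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```

The matched point pairs would then be passed to the SfM stage to recover the camera positions and the depth of each target feature point.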
And step S204, obtaining point cloud according to the depth information.
Obtaining the three-dimensional coordinates of the point cloud according to the following formula:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = d \cdot K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
wherein (u, v) is the pixel coordinate of each target feature point in the panoramic image, d is the depth value of each target feature point in the panoramic image, K is the intrinsic parameter matrix of the depth camera, and (X, Y, Z) is the three-dimensional coordinate of the point cloud.
Illustratively, the intrinsic parameters of the depth camera may be calculated using Zhang's calibration method (the Zhang Zhengyou camera calibration method).
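By way of illustration and not limitation, the following Python sketch back-projects pixels into a point cloud according to the formula above. It assumes a pinhole camera model with a known intrinsic matrix K (the projection model of a panoramic image differs), and the intrinsic values shown are placeholders.

```python
# Minimal back-projection sketch: (X, Y, Z) = d * K^{-1} * (u, v, 1)^T.
import numpy as np

def backproject(u, v, d, K):
    """Back-project a pixel (u, v) with depth d to a 3-D point in the camera frame."""
    return d * np.linalg.inv(K) @ np.array([u, v, 1.0])

def depth_map_to_point_cloud(depth, K):
    """Turn a dense H x W depth map into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    points = (np.linalg.inv(K) @ pixels) * depth.reshape(1, -1)            # 3 x N camera-frame points
    return points.T                                                        # N x 3

# Placeholder intrinsics for illustration only; in practice K comes from
# Zhang's calibration (e.g. cv2.calibrateCamera on a checkerboard sequence).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
```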
And S106, reconstructing according to the type of the point cloud to obtain a local three-dimensional model.
It can be understood that, because the fire scene in the embodiment of the present application is an urban fire scene including objects such as buildings and vegetation, the point cloud generated according to the depth information obtained by the depth camera also represents different semantic information.
In a specific application, as shown in fig. 4, a specific flowchart of step S106 in fig. 1 of the method for three-dimensional reconstruction of a fire scene provided in an embodiment of the present application is shown, and a local three-dimensional model is reconstructed according to the type of point cloud, including:
and S402, inputting the three-dimensional coordinates of the point cloud into a pre-trained point cloud identification model to obtain the type of the point cloud.
The pre-trained point cloud identification model comprises a local information processing module and a global information processing module. It should be noted that the point cloud identification model may be obtained by training in advance according to an open-source data set.
In a specific application, as shown in fig. 5, which is a schematic diagram of the implementation process of step S402 in fig. 4 of the three-dimensional reconstruction method for a fire scene provided in an embodiment of the present application, inputting the three-dimensional coordinates of the point cloud into the pre-trained point cloud identification model to obtain the type of the point cloud includes:
and S502, extracting local characteristic information of the point cloud according to the three-dimensional coordinate based on the local information processing module.
Wherein, the local information processing module comprises an edge convolution layer.
In a specific application, after the three-dimensional coordinates of the point cloud are read, edge features between each center point of the point cloud and its neighbouring points are extracted, and a standard convolution operation is performed on the edge features; three edge convolutions are used in total to improve the network's ability to understand local features, and the local feature information of the point cloud is obtained.
Step S504, the global information processing module identifies semantic information of the point cloud according to the local feature information of the point cloud to obtain the type of the point cloud.
The global information processing module comprises a max-pooling layer and standard convolution layers.
In a specific application, the max-pooling layer pools the local feature information, three standard convolution layers then perform convolution processing on the pooled local feature information, and the semantic label of the point cloud, i.e. the type of the point cloud, is output.
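By way of illustration and not limitation, the following Python (PyTorch) sketch shows a DGCNN-style point cloud classifier in the spirit of the description above: edge features between each point and its nearest neighbours are processed by shared layers (the edge convolutions), and a max-pooled global feature is combined with the local features to predict a per-point semantic label. The patent does not disclose the exact architecture, so the layer sizes, the neighbourhood size k, the use of fully connected layers in place of the ordinary convolution layers and the three-class output are assumptions.

```python
import torch
import torch.nn as nn

def knn_indices(xyz, k):
    """Indices of the k nearest neighbours of every point. xyz: (B, N, 3)."""
    dist = torch.cdist(xyz, xyz)                               # pairwise distances (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop the point itself

def edge_features(feat, idx):
    """EdgeConv input [x_i, x_j - x_i] for every neighbour j of point i."""
    B, N, C = feat.shape
    k = idx.shape[-1]
    gathered = torch.gather(
        feat.unsqueeze(1).expand(B, N, N, C), 2,
        idx.unsqueeze(-1).expand(B, N, k, C))                  # neighbour features
    centre = feat.unsqueeze(2).expand(B, N, k, C)
    return torch.cat([centre, gathered - centre], dim=-1)      # (B, N, k, 2C)

class EdgeConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, feat, xyz):
        idx = knn_indices(xyz, self.k)
        return self.mlp(edge_features(feat, idx)).max(dim=2).values  # aggregate over neighbours

class PointCloudTypeNet(nn.Module):
    """Local module: three edge convolutions. Global module: max pooling plus
    fully connected layers standing in for the ordinary convolution layers."""
    def __init__(self, num_classes=3, k=16):
        super().__init__()
        self.ec1 = EdgeConv(3, 64, k)
        self.ec2 = EdgeConv(64, 64, k)
        self.ec3 = EdgeConv(64, 128, k)
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                                  nn.Linear(256, num_classes))

    def forward(self, xyz):                                    # xyz: (B, N, 3)
        f1 = self.ec1(xyz, xyz)
        f2 = self.ec2(f1, xyz)
        f3 = self.ec3(f2, xyz)
        local = torch.cat([f1, f2, f3], dim=-1)                # per-point local features (B, N, 256)
        glob = local.max(dim=1, keepdim=True).values.expand(-1, xyz.shape[1], -1)
        return self.head(torch.cat([local, glob], dim=-1))     # per-point class logits
```

In practice such a model would be trained in advance on an open-source point cloud data set, as noted above.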
And S404, reconstructing a local three-dimensional model according to the type of the point cloud.
The types of the point clouds comprise a first type point cloud, a second type point cloud and a third type point cloud, and the local three-dimensional model comprises a first local three-dimensional model, a second local three-dimensional model and a third local three-dimensional model. Illustratively, the first type of point cloud is a point cloud corresponding to a building object, the second type of point cloud is a point cloud corresponding to a strip object, and the third type of point cloud is a point cloud corresponding to a road surface object.
It can be understood that the application scenario of the embodiment of the application is an urban fire scene, and since the urban fire scene includes objects such as buildings and vegetation, the point cloud is divided into a first type point cloud containing building object semantics, a second type point cloud containing strip object semantics, and a third type point cloud containing road surface object semantics. It should be noted that the local three-dimensional model is a point cloud set of the same type.
In a specific application, referring to fig. 6, a specific flowchart of step S404 in fig. 4 of the method for three-dimensional reconstruction of a fire scene provided in the embodiment of the present application is shown, reconstructing a local three-dimensional model according to the type of point cloud, including:
step S602, segmenting the first type of point cloud according to a preset rasterized region growing algorithm, calculating the vertex of the first type of point cloud, and reconstructing based on the vertex of the first type of point cloud to obtain a first local three-dimensional model.
Wherein the region growing algorithm may be a RANSAC algorithm. It will be appreciated that the first type of point cloud data corresponding to the building object is sparse and therefore is reconstructed using the coordinates of the four vertices of the building wall.
And S604, segmenting the second type of point cloud according to a preset Euclidean distance algorithm, calculating two end points of the second type of point cloud, and reconstructing based on the two end points of the second type of point cloud to obtain a second local three-dimensional model.
It will be appreciated that the strip object is considered to be a straight line and the reconstruction is based on the coordinates of the two end points of the strip object.
And S606, reconstructing the third type point cloud according to a preset greedy projection triangulation algorithm to obtain a third local three-dimensional model.
It can be understood that, since the third type of point cloud, which corresponds to the ground object, is characterized by small height differences and a uniform data distribution, it is reconstructed with a greedy projection triangulation algorithm in the embodiments of the present application.
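By way of illustration and not limitation, the following Python sketch (using Open3D and NumPy) illustrates the first two reconstruction branches: RANSAC plane segmentation is used here as a stand-in for the rasterized region growing segmentation of the first type of point cloud, and DBSCAN clustering as a stand-in for the Euclidean distance segmentation of the second type; the function names and thresholds are assumptions. Greedy projection triangulation for the third type is not sketched (an implementation is available, for example, in the Point Cloud Library as pcl::GreedyProjectionTriangulation).

```python
import numpy as np
import open3d as o3d

def wall_vertices(pcd, distance_threshold=0.05):
    """First type (buildings): fit a dominant plane with RANSAC and take the
    corners of its oriented bounding box as the wall vertices."""
    _, inliers = pcd.segment_plane(distance_threshold=distance_threshold,
                                   ransac_n=3, num_iterations=1000)
    wall = pcd.select_by_index(inliers)
    box = wall.get_oriented_bounding_box()
    return np.asarray(box.get_box_points())          # 8 corner points of the wall slab

def strip_endpoints(points, eps=0.3, min_points=10):
    """Second type (strip objects): cluster the points, then take the two
    extreme projections onto each cluster's principal axis as its end points."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    endpoints = []
    for label in set(labels.tolist()) - {-1}:        # -1 marks noise points
        cluster = points[labels == label]
        centred = cluster - cluster.mean(axis=0)
        axis = np.linalg.svd(centred, full_matrices=False)[2][0]  # principal axis
        proj = centred @ axis
        endpoints.append((cluster[proj.argmin()], cluster[proj.argmax()]))
    return endpoints
```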
And S108, combining the local three-dimensional models to obtain a global three-dimensional model.
The global three-dimensional model is a combination of the plurality of local three-dimensional models.
In a specific application, as shown in fig. 7, which is a schematic flowchart of step S108 in fig. 1 of the three-dimensional reconstruction method for a fire scene provided in an embodiment of the present application, combining the local three-dimensional models to obtain a global three-dimensional model includes:
step S702, respectively carrying out first registration on point clouds in the local three-dimensional model according to a preset ICP algorithm to obtain the local three-dimensional model after the first registration.
It can be understood that the point clouds within the local three-dimensional models are rigid point clouds and can be registered directly according to an ICP (Iterative Closest Point) algorithm, so that the point clouds in the local three-dimensional models are unified into one coordinate system.
And step S704, determining edge point clouds among the local three-dimensional models after the first registration.
In a specific application, points belonging to different local three-dimensional models whose mutual distance is smaller than a preset distance threshold are taken as the edge point clouds.
It is understood that the registration between edge point clouds is a non-rigid registration.
And S706, carrying out second registration on the edge point cloud according to a pre-trained registration neural network model.
Step 708, forming a global three-dimensional model based on the first registered local three-dimensional model and the second registered edge point cloud.
The pre-trained registration neural network model can be a registration neural network model of a Benchmark network architecture.
It can be understood that the embodiment of the application uses the regression feature of the Benchmark network architecture, takes the non-rigid point cloud registration problem as the regression problem, and establishes the relation between edge point clouds, thereby realizing the registration of the edge point clouds.
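By way of illustration and not limitation, the following Python sketch (using Open3D) shows the rigid first registration with a point-to-point ICP and the distance-threshold selection of edge points between two registered local models. The correspondence distance and threshold values are assumptions, and the pre-trained non-rigid registration neural network used for the second registration is not sketched because its architecture is not disclosed here.

```python
import numpy as np
import open3d as o3d

def first_registration(source, target, threshold=0.05, init=np.eye(4)):
    """Rigid ICP registration of one local model's point cloud onto another."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation                     # 4x4 rigid transform

def edge_points(model_a, model_b, distance_threshold=0.1):
    """Points of model_a lying within the distance threshold of model_b are
    treated as edge points between the two registered local models."""
    distances = np.asarray(model_a.compute_point_cloud_distance(model_b))
    indices = np.where(distances < distance_threshold)[0]
    return model_a.select_by_index(indices)

# Usage sketch: register model_b onto model_a, then collect the overlap region
# that the pre-trained registration network would refine non-rigidly.
# model_a = o3d.io.read_point_cloud("local_model_a.ply")
# model_b = o3d.io.read_point_cloud("local_model_b.ply")
# model_b.transform(first_registration(model_b, model_a))
# edges = edge_points(model_a, model_b)
```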
In the embodiment of the application, a panoramic image to be processed is acquired, where the panoramic image to be processed is a panoramic image shot by a depth camera at the fire scene; a point cloud is generated according to the panoramic image to be processed; local three-dimensional models are reconstructed according to the types of the point cloud; and the local three-dimensional models are combined to obtain a global three-dimensional model. In this way, the acquired panoramic images are modeled automatically, separate local three-dimensional models are established for the different point cloud types during modeling, and the local three-dimensional models are finally merged into a global three-dimensional model, so that the differences between objects in the real fire scene are fully considered and the realism of the three-dimensional model is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 8 shows a block diagram of a three-dimensional reconstruction apparatus for a fire scene according to an embodiment of the present application, which corresponds to the three-dimensional reconstruction method for a fire scene described in the above embodiments, and only the parts related to the embodiment of the present application are shown for convenience of description.
Referring to fig. 8, the apparatus includes:
the acquiring module 81 is used for acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image obtained by shooting in a fire scene by a depth camera;
a generating module 82, configured to generate a point cloud according to the to-be-processed panoramic image;
the reconstruction module 83 is used for reconstructing to obtain a local three-dimensional model according to the type of the point cloud;
and a merging module 84, configured to merge the local three-dimensional models to obtain a global three-dimensional model.
In one possible implementation, the generating module includes:
the depth estimation submodule is used for carrying out depth estimation on the panoramic image to be processed to obtain depth information;
and the generation submodule is used for obtaining point cloud according to the depth information.
In one possible implementation, the depth estimation sub-module includes:
the extraction unit is used for extracting the feature points of the panoramic image to be processed according to a preset feature extraction algorithm;
the screening unit is used for screening out target feature points with matching relations in the feature points;
and the calculating unit is used for calculating the depth information according to the target characteristic points with the matching relation.
In one possible implementation, the reconstruction module includes:
the point cloud type identification submodule is used for inputting the three-dimensional coordinates of the point cloud into a pre-trained point cloud identification model to obtain the type of the point cloud;
and the reconstruction submodule is used for reconstructing a local three-dimensional model according to the type of the point cloud.
In one possible implementation manner, the pre-trained point cloud identification model comprises a local information processing module and a global information processing module;
the reconstruction submodule includes:
a local processing unit, configured to extract local feature information of the point cloud according to the three-dimensional coordinates based on the local information processing module;
and the global processing unit is used for identifying the semantic information of the point cloud according to the local characteristic information of the point cloud based on the global information processing module to obtain the type of the point cloud.
In one possible implementation, the point clouds are a first type of point cloud, a second type of point cloud, and a third type of point cloud;
the local three-dimensional model comprises a first local three-dimensional model, a second local three-dimensional model and a third local three-dimensional model;
the reconstruction submodule includes:
the first reconstruction unit is used for segmenting the first type of point cloud according to a preset rasterized region growing algorithm, calculating the vertex of the first type of point cloud, and reconstructing based on the vertex of the first type of point cloud to obtain a first local three-dimensional model;
the second reconstruction unit is used for segmenting the second type point cloud according to a preset Euclidean distance algorithm, calculating two end points of the second type point cloud, and reconstructing based on the two end points of the second type point cloud to obtain a second local three-dimensional model;
and the third reconstruction unit is used for reconstructing the third type point cloud according to a preset greedy projection triangulation algorithm to obtain a third local three-dimensional model.
In one possible implementation, the merging module includes:
the first registration submodule is used for respectively carrying out first registration on point clouds in the local three-dimensional model according to a preset ICP algorithm to obtain a local three-dimensional model after the first registration;
the determining module is used for determining edge point clouds among the local three-dimensional models after the first registration;
the second registration module is used for carrying out second registration on the edge point cloud according to a pre-trained registration neural network model;
and the forming module is used for forming a global three-dimensional model based on the first registered local three-dimensional model and the second registered edge point cloud.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 9, the server 9 of this embodiment includes: at least one processor 91, a memory 91 and a computer program 92 stored in the memory 91 and executable on the at least one processor 91, where the processor 91 implements the steps in any of the method embodiments described above when executing the computer program 92.
The server 9 may be a computing device such as a cloud server. The server may include, but is not limited to, a processor 91, a memory 91. Those skilled in the art will appreciate that fig. 9 is merely an example of the server 9, and does not constitute a limitation on the server 9, and may include more or less components than those shown, or combine certain components, or different components, such as input output devices, network access devices, etc.
The Processor 91 may be a Central Processing Unit (CPU), and the Processor 91 may also be another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 91 may in some embodiments be an internal storage unit of the server 9, such as a hard disk or a memory of the server 9. The memory 91 may also be an external storage device of the server 9 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the server 9. Further, the memory 91 may also include both an internal storage unit of the server 9 and an external storage device. The memory 91 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a readable storage medium, specifically a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a server, recording medium, computer Memory, Read-Only Memory (ROM), Random-Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of three-dimensional reconstruction of a fire scene, comprising:
acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
generating a point cloud according to the panoramic image to be processed;
reconstructing according to the type of the point cloud to obtain a local three-dimensional model;
and combining the local three-dimensional models to obtain a global three-dimensional model.
2. The method for three-dimensional reconstruction of a fire scene according to claim 1, wherein generating a point cloud from the panoramic image to be processed comprises:
carrying out depth estimation on the panoramic image to be processed to obtain depth information;
and obtaining point cloud according to the depth information.
3. The method of claim 2, wherein the depth estimation of the panoramic image to be processed to obtain depth information comprises:
extracting feature points of the panoramic image to be processed according to a preset feature extraction algorithm;
screening out target feature points with matching relations in the feature points;
and calculating the depth information according to the target feature points with the matching relationship.
4. The method of claim 1, wherein reconstructing a local three-dimensional model according to the type of the point cloud comprises:
inputting the three-dimensional coordinates of the point cloud to a pre-trained point cloud identification model to obtain the type of the point cloud;
and reconstructing a local three-dimensional model according to the type of the point cloud.
5. The three-dimensional reconstruction method for the fire scene as claimed in claim 4, wherein the pre-trained point cloud recognition model includes a local information processing module and a global information processing module;
inputting the three-dimensional coordinates of the point cloud into a pre-trained point cloud identification model to obtain the type of the point cloud comprises:
extracting local characteristic information of the point cloud according to the three-dimensional coordinates based on the local information processing module;
and identifying semantic information of the point cloud according to the local characteristic information of the point cloud based on the global information processing module to obtain the type of the point cloud.
6. The method of claim 4, wherein the types of the point clouds include a first type point cloud, a second type point cloud, and a third type point cloud;
the local three-dimensional model comprises a first local three-dimensional model, a second local three-dimensional model and a third local three-dimensional model;
reconstructing a local three-dimensional model according to the type of the point cloud, including:
segmenting the first type of point cloud according to a preset rasterized region growing algorithm, calculating the vertex of the first type of point cloud, and reconstructing based on the vertex of the first type of point cloud to obtain a first local three-dimensional model;
dividing the second type point cloud according to a preset Euclidean distance algorithm, calculating two end points of the second type point cloud, and reconstructing based on the two end points of the second type point cloud to obtain a second local three-dimensional model;
and reconstructing the third type point cloud according to a preset greedy projection triangulation algorithm to obtain a third local three-dimensional model.
7. The three-dimensional reconstruction method for a fire scene according to any one of claims 1 to 6, wherein combining the local three-dimensional models to obtain a global three-dimensional model comprises:
respectively carrying out first registration on point clouds in the local three-dimensional model according to a preset ICP algorithm to obtain a local three-dimensional model after the first registration;
determining edge point clouds among the local three-dimensional models after the first registration;
performing second registration on the edge point cloud according to a pre-trained registration neural network model;
and forming a global three-dimensional model based on the first registered local three-dimensional model and the second registered edge point cloud.
8. A three-dimensional reconstruction apparatus for a fire scene, comprising:
the acquisition module is used for acquiring a panoramic image to be processed, wherein the panoramic image to be processed is a panoramic image shot by a depth camera at a fire scene;
the generating module is used for generating a point cloud according to the panoramic image to be processed;
the reconstruction module is used for reconstructing to obtain a local three-dimensional model according to the type of the point cloud;
and the merging module is used for merging the local three-dimensional models to obtain a global three-dimensional model.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A readable storage medium, storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1 to 7.
CN202110792133.7A 2021-07-14 2021-07-14 Three-dimensional reconstruction method and device for fire scene, server and readable storage medium Pending CN113436338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110792133.7A CN113436338A (en) 2021-07-14 2021-07-14 Three-dimensional reconstruction method and device for fire scene, server and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110792133.7A CN113436338A (en) 2021-07-14 2021-07-14 Three-dimensional reconstruction method and device for fire scene, server and readable storage medium

Publications (1)

Publication Number Publication Date
CN113436338A true CN113436338A (en) 2021-09-24

Family

ID=77760242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110792133.7A Pending CN113436338A (en) 2021-07-14 2021-07-14 Three-dimensional reconstruction method and device for fire scene, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN113436338A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015188684A1 (en) * 2014-06-12 2015-12-17 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
CN104050715A (en) * 2014-06-23 2014-09-17 华北电力大学 High-precision three-dimensional reconstruction method for power transmission line and corridor
CN106600690A (en) * 2016-12-30 2017-04-26 厦门理工学院 Complex building three-dimensional modeling method based on point cloud data
US20180205941A1 (en) * 2017-01-17 2018-07-19 Facebook, Inc. Three-dimensional scene reconstruction from set of two dimensional images for consumption in virtual reality
CN106910242A (en) * 2017-01-23 2017-06-30 中国科学院自动化研究所 The method and system of indoor full scene three-dimensional reconstruction are carried out based on depth camera
CN110288712A (en) * 2019-03-30 2019-09-27 天津大学 The sparse multi-view angle three-dimensional method for reconstructing of indoor scene
WO2021114143A1 (en) * 2019-12-11 2021-06-17 中国科学院深圳先进技术研究院 Image reconstruction method and apparatus, terminal device and storage medium
CN111161404A (en) * 2019-12-23 2020-05-15 华中科技大学鄂州工业技术研究院 Three-dimensional reconstruction method, device and system for annular scanning morphology
CN112085844A (en) * 2020-09-11 2020-12-15 中国人民解放军军事科学院国防科技创新研究院 Unmanned aerial vehicle image rapid three-dimensional reconstruction method for field unknown environment
CN112802185A (en) * 2021-01-26 2021-05-14 合肥工业大学 Endoscope image three-dimensional reconstruction method and system facing minimally invasive surgery space perception
CN113012302A (en) * 2021-03-02 2021-06-22 北京爱笔科技有限公司 Three-dimensional panorama generation method and device, computer equipment and storage medium
CN112927362A (en) * 2021-04-07 2021-06-08 Oppo广东移动通信有限公司 Map reconstruction method and device, computer readable medium and electronic device
CN113064135A (en) * 2021-06-01 2021-07-02 北京海天瑞声科技股份有限公司 Method and device for detecting obstacle in 3D radar point cloud continuous frame data

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114092559A (en) * 2021-11-30 2022-02-25 中德(珠海)人工智能研究院有限公司 Training method and device for panoramic image feature point descriptor generation network
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium
CN115222896A (en) * 2022-09-20 2022-10-21 荣耀终端有限公司 Three-dimensional reconstruction method and device, electronic equipment and computer-readable storage medium
CN115222896B (en) * 2022-09-20 2023-05-23 荣耀终端有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer readable storage medium
WO2024066689A1 (en) * 2022-09-29 2024-04-04 华为技术有限公司 Model processing method, and apparatus
CN116866723A (en) * 2023-09-04 2023-10-10 广东力创信息技术有限公司 Pipeline safety real-time monitoring and early warning system
CN116866723B (en) * 2023-09-04 2023-12-26 广东力创信息技术有限公司 Pipeline safety real-time monitoring and early warning system
CN117889858A (en) * 2023-12-29 2024-04-16 大湾区大学(筹) Positioning method, device, system and medium for multiple fire targets

Similar Documents

Publication Publication Date Title
US10592780B2 (en) Neural network training system
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
CN109815843B (en) Image processing method and related product
US10043097B2 (en) Image abstraction system
CN113807451B (en) Panoramic image feature point matching model training method and device and server
CN111459269B (en) Augmented reality display method, system and computer readable storage medium
CN115035235A (en) Three-dimensional reconstruction method and device
Cheng et al. Extracting three-dimensional (3D) spatial information from sequential oblique unmanned aerial system (UAS) imagery for digital surface modeling
CN108492284B (en) Method and apparatus for determining perspective shape of image
CN114627244A (en) Three-dimensional reconstruction method and device, electronic equipment and computer readable medium
CN109034214B (en) Method and apparatus for generating a mark
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN113902802A (en) Visual positioning method and related device, electronic equipment and storage medium
CN117132737A (en) Three-dimensional building model construction method, system and equipment
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
US20230053952A1 (en) Method and apparatus for evaluating motion state of traffic tool, device, and medium
CN114913105A (en) Laser point cloud fusion method and device, server and computer readable storage medium
CN115393423A (en) Target detection method and device
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
CN113436332A (en) Digital display method and device for fire-fighting plan, server and readable storage medium
CN116109759A (en) Fire scene three-dimensional reconstruction method and device for laser camera and spherical screen camera
Kim et al. Vision-based all-in-one solution for augmented reality and its storytelling applications
CN111292372A (en) Target object positioning method, target object positioning device, storage medium and electronic equipment
CN117635875B (en) Three-dimensional reconstruction method, device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 2-101-1 / 2-201 / 2-501, building 2, science and Technology Innovation Park, No.1 harbor, No.1 Jintang Road, high tech Zone, Zhuhai City, Guangdong Province

Applicant after: CHINA-GERMANY (ZHUHAI) ARTIFICIAL INTELLIGENCE INSTITUTE Co.,Ltd.

Applicant after: ZHUHAI 4DAGE NETWORK TECHNOLOGY Co.,Ltd.

Address before: 519080 2-101-1 / 2-201 / 2-501, building 2, science and Innovation Park, No. 1, Gangwan, Tangjiawan Town, high tech Zone, Zhuhai, Guangdong

Applicant before: CHINA-GERMANY (ZHUHAI) ARTIFICIAL INTELLIGENCE INSTITUTE Co.,Ltd.

Applicant before: ZHUHAI 4DAGE NETWORK TECHNOLOGY Co.,Ltd.
