CN116630550A - Three-dimensional model generation method and system based on multiple pictures

Three-dimensional model generation method and system based on multiple pictures

Info

Publication number
CN116630550A
CN116630550A (application CN202310896727.1A); granted as CN116630550B
Authority
CN
China
Prior art keywords
modeling
dimensional model
feature
images
preset
Prior art date
Legal status
Granted
Application number
CN202310896727.1A
Other languages
Chinese (zh)
Other versions
CN116630550B (en)
Inventor
粟海斌
刘珺
詹柱
刘斌
欧阳宏剑
Current Assignee
Fangxin Technology Co ltd
Original Assignee
Fangxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Fangxin Technology Co ltd
Priority to CN202310896727.1A
Publication of CN116630550A
Application granted
Publication of CN116630550B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/04: Indexing scheme for image data processing or generation, in general, involving 3D image data

Abstract

The application discloses a three-dimensional model generation method and system based on multiple pictures. The three-dimensional model generation method based on multiple pictures comprises the following steps: acquiring a plurality of first images acquired by a first image acquisition device and acquiring a plurality of second images acquired by a second image acquisition device; determining a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images; generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library; a three-dimensional model of the target object is generated based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship of the first position and the second position, the first three-dimensional model, and the second three-dimensional model. The three-dimensional model generation method based on multiple pictures can improve the construction efficiency of the three-dimensional model.

Description

Three-dimensional model generation method and system based on multiple pictures
Technical Field
The application relates to the technical field of three-dimensional modeling, in particular to a three-dimensional model generation method and system based on multiple pictures.
Background
With the development of three-dimensional modeling technology, three-dimensional models are becoming more and more widely used. A three-dimensional model may be applied in various scenes, for example, an extended reality scene, a simulation scene, and the like. In these scenes, virtual objects are constructed by building three-dimensional models; based on the constructed virtual objects, interaction, simulation, and the like may be performed.
When the three-dimensional model is constructed, a construction mode based on modeling elements can be adopted; that is, modeling elements required by the three-dimensional model are determined first, and then the modeling elements are integrated to construct the three-dimensional model.
In current three-dimensional model construction scenarios, a user needs to select the modeling elements of the object to be modeled, and a corresponding modeling platform then constructs the three-dimensional model based on those elements; as a result, the construction efficiency of the three-dimensional model is low.
Disclosure of Invention
The application aims to provide a three-dimensional model generation method and system based on multiple pictures, which can improve the construction efficiency of a three-dimensional model.
To achieve the above object, an embodiment of the present application provides a three-dimensional model generating method based on multiple pictures, including: acquiring a plurality of first images acquired by a first image acquisition device and acquiring a plurality of second images acquired by a second image acquisition device; the first image acquisition device is used for acquiring an image of a first position of a target object, the second image acquisition device is used for acquiring an image of a second position of the target object, the first position is determined based on first modeling information of the target object, and the second position is determined based on second modeling information of the target object; determining a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images; generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library; a three-dimensional model of the target object is generated based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship of the first position and the second position, the first three-dimensional model, and the second three-dimensional model.
In one possible implementation manner, the acquiring the plurality of first images acquired by the first image acquisition device and the acquiring the plurality of second images acquired by the second image acquisition device includes: acquiring a plurality of images of the first position acquired by the first image acquisition equipment within a first preset duration; acquiring a plurality of images of the second position acquired by the second image acquisition equipment within a second preset time length; the first preset duration and the second preset duration comprise the same time point and different time points, and the image acquisition quantity of the first image acquisition device and the second image acquisition device at the same time point is larger than the image acquisition quantity of the different time points.
In a possible implementation manner, the determining the first modeling feature according to the plurality of first images includes: sequencing the plurality of first images according to the acquisition time to determine sequenced plurality of first images; grouping the ordered first images to determine a plurality of groups of first images; the adjacent image groups comprise the same first images with preset quantity; extracting modeling features from the multiple groups of first images according to preset rules respectively, and determining multiple groups of modeling features; the first modeling feature is determined based on the plurality of sets of modeling features.
In a possible implementation manner, the determining the first modeling feature based on the multiple sets of modeling features includes: inputting the multiple groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same first image or not; if the screened modeling features have modeling features corresponding to the same first image, extracting the modeling features of the same first image again; the first modeled feature is determined based on the re-extracted feature and the screened modeled feature.
In a possible implementation manner, the determining the second modeling feature according to the plurality of second images includes: sequencing the plurality of second images according to the acquisition time to determine sequenced plurality of second images; grouping the ordered second images to determine a plurality of groups of second images; the adjacent image groups comprise the same second images with preset numbers; extracting modeling features from the multiple groups of second images according to preset rules respectively, and determining multiple groups of modeling features; the second modeling feature is determined based on the plurality of sets of modeling features.
In a possible implementation manner, the determining the second modeling feature based on the multiple sets of modeling features includes: inputting the multiple groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same second image or not; if the screened modeling features have modeling features corresponding to the same second image, extracting the modeling features of the same second image again; the second modeled feature is determined based on the re-extracted feature and the screened modeled feature.
In a possible implementation manner, the preset first model element library includes a plurality of first preset modeling features and modeling elements corresponding to the plurality of first preset modeling features; the generating a first three-dimensional model based on the first modeling feature and a preset first model element library comprises the following steps: judging whether the first modeling feature has a corresponding first preset modeling feature or not; if the first modeling feature has a corresponding first preset modeling feature, determining modeling elements corresponding to the corresponding first preset modeling feature as elements to be modeled; if the first modeling feature does not have the corresponding first preset modeling feature, determining an element to be modeled matched with the first modeling feature from modeling elements corresponding to the plurality of first preset modeling features; and generating the first three-dimensional model based on the element to be modeled and a first modeling rule corresponding to the first position.
In a possible implementation manner, the preset second model element library includes a plurality of second preset modeling features and modeling elements corresponding to the plurality of second preset modeling features; the generating a second three-dimensional model based on the second modeling feature and a preset second model element library includes: judging whether the second modeling feature has a corresponding second preset modeling feature or not; if the second modeling feature has a corresponding second preset modeling feature, determining modeling elements corresponding to the corresponding second preset modeling feature as elements to be modeled; if the second modeling feature does not have the corresponding second preset modeling feature, determining an element to be modeled, which is matched with the second modeling feature, from modeling elements corresponding to the second preset modeling features; and generating the second three-dimensional model based on the element to be modeled and a second modeling rule corresponding to the second position.
In one possible implementation manner, the generating the three-dimensional model of the target object based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship between the first position and the second position, the first three-dimensional model and the second three-dimensional model includes: generating an initial three-dimensional model of the target object based on the positional relationship, the first three-dimensional model, the second three-dimensional model, and a base three-dimensional model of the target object; adjusting a model part corresponding to the first position in the initial three-dimensional model based on the number of the plurality of first images and a preset first adjustment model element, and adjusting a model part corresponding to the second position in the initial three-dimensional model based on the number of the plurality of second images and a preset second adjustment model element; a three-dimensional model of the target object is determined based on the adjusted initial three-dimensional model.
The embodiment of the application provides a three-dimensional model generation system based on multiple pictures, which comprises the following steps: the acquisition unit is used for acquiring a plurality of first images acquired by the first image acquisition equipment and acquiring a plurality of second images acquired by the second image acquisition equipment; the first image acquisition device is used for acquiring an image of a first position of a target object, the second image acquisition device is used for acquiring an image of a second position of the target object, the first position is determined based on first modeling information of the target object, and the second position is determined based on second modeling information of the target object; a modeling unit for: determining a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images; generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library; a three-dimensional model of the target object is generated based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship of the first position and the second position, the first three-dimensional model, and the second three-dimensional model.
Compared with the prior art, the three-dimensional model generating method and system based on multiple pictures provided by the application have the following advantages. On one hand, the three-dimensional model is built based on a plurality of automatically acquired pictures and the user does not need to select modeling elements, so the construction efficiency of the three-dimensional model can be improved. On the other hand, the plurality of automatically acquired pictures depend on the relevant modeling information of the target object, so the finally constructed three-dimensional model is better adapted to the target object, and the construction precision of the three-dimensional model is improved. Therefore, the three-dimensional model generating method and system based on multiple pictures improve the construction efficiency of the three-dimensional model while guaranteeing its construction precision, so that the construction scheme is suitable for three-dimensional model construction in various scenes and has strong applicability.
Drawings
FIG. 1 is a schematic illustration of an application scenario according to an embodiment of the present application;
FIG. 2 is a flow chart of a multi-picture based three-dimensional model generation method according to an embodiment of the present application;
FIG. 3 is a schematic structural view of a multi-picture based three-dimensional model generating apparatus according to an embodiment of the present application;
FIG. 4 is a schematic structural view of a terminal device according to an embodiment of the present application.
Detailed Description
The following detailed description of embodiments of the application should be read in conjunction with the accompanying drawings, and it is to be understood that the scope of the application is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the term "comprise" or variations thereof such as "comprises" or "comprising", etc. will be understood to include the stated element or component without excluding other elements or components.
The technical scheme provided by the embodiments of the application can be applied to various three-dimensional modeling scenes, in which a three-dimensional model of an object is constructed; based on the constructed three-dimensional model, simulation can be executed, correlation analysis can be performed on the object, and the like.
At present, in order to realize the construction of a three-dimensional model, a user needs to select modeling elements of an object to be modeled, and then a corresponding modeling platform builds the three-dimensional model based on the modeling elements, so that the construction efficiency of the three-dimensional model is low. Moreover, the construction mode is too dependent on a modeling platform, so that construction accuracy is difficult to ensure.
Based on the above, the embodiments of the application provide a construction scheme for a three-dimensional model. On one hand, the three-dimensional model is constructed based on a plurality of automatically acquired pictures and the user does not need to select modeling elements, so the construction efficiency of the three-dimensional model can be improved. On the other hand, the plurality of automatically acquired pictures depend on the relevant modeling information of the target object, so the finally constructed three-dimensional model is better adapted to the target object, and the construction precision of the three-dimensional model is improved. Therefore, on the basis of guaranteeing the construction precision of the three-dimensional model, the construction efficiency of the three-dimensional model is improved, so that the construction scheme is suitable for three-dimensional model construction in various scenes and has strong applicability.
Referring next to fig. 1, a schematic structural diagram of a three-dimensional modeling system according to an embodiment of the present application is provided, where the three-dimensional modeling system includes an image capturing device and a terminal processing device, and the image capturing device and the terminal processing device are connected in a communication manner. The image acquisition device comprises a first image acquisition device and a second image acquisition device.
In some embodiments, the first image capturing device and the second image capturing device are respectively configured to capture different images, so as to implement capturing of multiple pictures.
In some embodiments, an image acquisition device is disposed in the real scene for acquiring image data in the real scene.
In some embodiments, the terminal processing device, as a back-end processing device, may be implemented in different forms, for example: a computer, a monitoring terminal, and the like.
Therefore, the three-dimensional model generation scheme provided by the embodiments of the application can be applied to terminal processing equipment; in some embodiments, the terminal processing device may be a single device, or a system formed by a plurality of devices or modules.
Referring next to fig. 2, a flowchart of a three-dimensional model generating method based on multiple pictures according to an embodiment of the present application is provided, where the three-dimensional model generating method includes:
Step 201, acquiring a plurality of first images acquired by a first image acquisition device and acquiring a plurality of second images acquired by a second image acquisition device.
In some embodiments, the first image acquisition device is configured to acquire an image of a first location of the target object, and the second image acquisition device is configured to acquire an image of a second location of the target object, the first location being determined based on the first modeling information of the target object, the second location being determined based on the second modeling information of the target object.
In some embodiments, the first image acquisition device is configured to acquire a plurality of images of different angles of a first location of the target object and the second image acquisition device is configured to acquire a plurality of images of different angles of a second location of the target object.
In some embodiments, the first modeling information and the second modeling information belong to different modeling information, and the determination of the modeling element of the target object can be achieved by combining the two modeling information.
In some embodiments, the first modeling information is, for example, a modeling type; the second modeling information is, for example, modeling element complexity, number of modeling elements, modeling key point distribution, and the like.
Thus, based on the first modeling information, a first location for affecting the modeling result of the target object may be determined; and based on the second modeling information, a second location for affecting the modeling result of the target object may be determined.
It will be appreciated that the first location and the second location are not single location points but may each be a region; for example, the first location may be a middle region of the target object and the second location a peripheral region of the target object, or the like.
In some embodiments, the first modeling information and the second modeling information may be fed back to the relevant user, who determines the first location and the second location, respectively, based on the two pieces of information.
In other embodiments, a position determination model may be pre-trained, the training dataset of the position determination model including modeling information for a plurality of objects, and modeling positions for the plurality of objects. Thus, after training the position determination model based on the training data set, the position determination model may output the modeled position, i.e., the first position and the second position.
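For illustration only, the position determination model described above can be sketched as a simple nearest-neighbour lookup over pairs of encoded modeling information and modeling positions; the numeric encoding, the training pairs, and the region labels below are assumptions, since the application only specifies that such a model is trained on modeling information and modeling positions of a plurality of objects.

```python
# Sketch of a position-determination model (assumptions: modeling information is
# encoded as a numeric vector, modeling positions are region labels).
training_set = [
    # (encoded modeling information, modeling position)
    ([0.2, 3],  "middle region"),
    ([0.8, 12], "peripheral region"),
    ([0.3, 4],  "middle region"),
    ([0.9, 15], "peripheral region"),
]

def predict_position(modeling_info_vector):
    """Return the modeling position of the closest training example (1-NN)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, position = min(((sq_dist(modeling_info_vector, x), pos) for x, pos in training_set),
                      key=lambda t: t[0])
    return position

second_position = predict_position([0.7, 10])  # -> "peripheral region"
```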
As an alternative embodiment, step 201 includes: acquiring a plurality of images of a first position acquired by first image acquisition equipment within a first preset duration; acquiring a plurality of images of a second position acquired by second image acquisition equipment within a second preset time length; the first preset duration and the second preset duration comprise the same time point and different time points, and the image acquisition quantity of the first image acquisition device and the second image acquisition device at the same time point is larger than that of the different time points.
In some embodiments, the first preset duration and the second preset duration may be configured according to the movement frequency of the target object. For example, if the movement frequency of the target object is higher, the first preset duration and the second preset duration may be configured to be longer; otherwise, they may be configured to be shorter.
In some embodiments, the first preset duration and the second preset duration include the same time point and different time points; that is, there is a crossing time point in the first preset time period and the second preset time period.
For example, the first preset duration may be the 10 minutes preceding the current time, and the second preset duration may be the period from 15 minutes before the current time to 5 minutes before the current time, so that the two durations share some time points.
In some embodiments, at different points in time, the image acquisition angles of the first image acquisition device and the second image acquisition device for the respective positions may be different.
In some embodiments, the first image acquisition device and the second image acquisition device each acquire a greater number of images at the same point in time than at different points in time. That is, the number of images acquired by the first image acquisition device and the second image acquisition device for those intersecting points in time is greater than the number of images acquired for those non-intersecting points in time.
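For illustration, the acquisition of the two image sets over overlapping preset durations can be sketched as below; the timestamps, window bounds, and record structure are assumptions rather than requirements of the application.

```python
from datetime import datetime, timedelta

def images_in_window(records, start, end):
    """Return the image ids whose acquisition time falls inside [start, end]."""
    return [r["image_id"] for r in records if start <= r["time"] <= end]

now = datetime(2023, 7, 20, 12, 0, 0)
# First preset duration: the 10 minutes before `now`; second preset duration:
# from 15 minutes before `now` to 5 minutes before `now`.  The two windows share
# the time points between 10 and 5 minutes ago, where more images are captured.
first_window = (now - timedelta(minutes=10), now)
second_window = (now - timedelta(minutes=15), now - timedelta(minutes=5))

# Hypothetical capture records from the two image acquisition devices.
first_records = [{"image_id": f"A{i}", "time": now - timedelta(minutes=m)}
                 for i, m in enumerate([9, 8, 7, 7, 6, 2])]
second_records = [{"image_id": f"B{i}", "time": now - timedelta(minutes=m)}
                  for i, m in enumerate([14, 12, 9, 8, 7, 6])]

first_images = images_in_window(first_records, *first_window)
second_images = images_in_window(second_records, *second_window)
```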
Step 202, determining a first modeling feature from the first plurality of images and determining a second modeling feature from the second plurality of images.
As an alternative embodiment, determining the first modeling feature from the plurality of first images includes: sequencing the plurality of first images according to the acquisition time, and determining the sequenced plurality of first images; grouping the ordered first images to determine a plurality of groups of first images; the adjacent image groups comprise the same first images with preset quantity; extracting modeling features from a plurality of groups of first images according to preset rules respectively, and determining a plurality of groups of modeling features; a first modeling feature is determined based on the plurality of sets of modeling features.
In some embodiments, the plurality of first images are first sorted according to their image acquisition times. The first images are then grouped following that order.
In some embodiments, a preset number of identical first images are included in adjacent image groups. That is, at each grouping, a preset number of first images from the previous group are also placed into the current group. The preset number can be set according to the application scenario: when there are more images, it can be smaller; conversely, it may be greater.
In some embodiments, after obtaining the plurality of sets of first images, extraction of modeling features may be performed per set to obtain the plurality of sets of modeling features, thereby determining the first modeling features based on the plurality of sets of modeling features.
In some embodiments, the preset rule may be a modeling feature extraction rule configured for the first modeling information corresponding to the first location, for example: extracting pixel points whose gray values are larger than a preset gray value, extracting pixel points whose neighboring pixel distribution meets preset conditions, and the like.
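For illustration, the sorting, overlapped grouping, and rule-based feature extraction described above can be sketched as follows; the group size, the number of shared images, and the gray-value threshold are assumed values.

```python
import numpy as np

def group_with_overlap(ordered_images, group_size=4, shared=1):
    """Split time-ordered images into groups; adjacent groups share `shared` images."""
    step = group_size - shared
    return [ordered_images[i:i + group_size]
            for i in range(0, max(len(ordered_images) - shared, 1), step)]

def extract_features(image, gray_threshold=200):
    """Preset-rule example: keep coordinates of pixels whose gray value exceeds a threshold."""
    ys, xs = np.where(image > gray_threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# First images already sorted by acquisition time (hypothetical 8-bit grayscale frames).
ordered_first_images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
image_groups = group_with_overlap(ordered_first_images)
feature_groups = [[extract_features(img) for img in group] for group in image_groups]
```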
As an alternative embodiment, determining the first modeling feature based on the plurality of sets of modeling features includes: inputting a plurality of groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same first image or not; if the screened modeling features have modeling features corresponding to the same first image, extracting the modeling features of the same first image again; a first modeled feature is determined based on the re-extracted features and the screened modeled features.
In some embodiments, the training dataset of the feature screening model may include a plurality of features including modeled features that have an impact on the three-dimensional model and modeled features that have no impact on the three-dimensional model. Therefore, the trained feature screening model can identify modeling features which have no influence on the three-dimensional model.
After screening, the screened modeling features can be obtained. At this time, it is further determined whether the screened modeling features include modeling features corresponding to the same first image, that is, features extracted from the same first image; if so, modeling feature extraction is performed on that first image again to obtain re-extracted features.
Further, deduplication is performed based on the re-extracted features and the screened modeling features, and the deduplicated features are determined as the first modeling features.
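For illustration, the screening and de-duplication flow can be sketched as below; the feature representation, the screening-model interface, and the re-extraction callback are assumptions.

```python
def determine_first_modeling_features(features, screening_model, re_extract):
    """
    features: list of dicts {"image_id": ..., "value": ...} gathered from all groups.
    screening_model(feature) -> True if the feature has no influence on the
        three-dimensional model and should be dropped (assumed interface).
    re_extract(image_id) -> features extracted again from that first image.
    """
    screened = [f for f in features if not screening_model(f)]

    # Find first images that contribute more than one screened feature.
    counts = {}
    for f in screened:
        counts[f["image_id"]] = counts.get(f["image_id"], 0) + 1
    duplicated_images = {img for img, n in counts.items() if n > 1}

    # Re-extract features for those images, then deduplicate by (image_id, value).
    re_extracted = [f for img in duplicated_images for f in re_extract(img)]
    merged, seen = [], set()
    for f in screened + re_extracted:
        key = (f["image_id"], f["value"])
        if key not in seen:
            seen.add(key)
            merged.append(f)
    return merged  # taken as the first modeling features
```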
As an alternative embodiment, determining the second modeling feature from the plurality of second images includes: sequencing the plurality of second images according to the acquisition time, and determining the sequenced plurality of second images; grouping the ordered second images to determine a plurality of groups of second images; the adjacent image groups comprise the same second images with preset numbers; extracting modeling features from a plurality of groups of second images according to preset rules respectively, and determining a plurality of groups of modeling features; a second modeling feature is determined based on the plurality of sets of modeling features.
In some embodiments, the plurality of second images are first sorted according to their image acquisition times. The second images are then grouped following that order.
In some embodiments, a preset number of identical second images are included in adjacent image groups. That is, at each grouping, a preset number of second images from the previous group are also placed into the current group. The preset number can be set according to the application scenario: when there are more images, it can be smaller; conversely, it may be greater.
In some embodiments, after obtaining the plurality of sets of second images, extraction of modeling features may be performed per set to obtain the plurality of sets of modeling features, thereby determining the second modeling features based on the plurality of sets of modeling features.
In some embodiments, the preset rule may be a modeling feature extraction rule configured for the second modeling information corresponding to the second location, for example: extracting pixel points whose gray values are larger than a preset gray value, extracting pixel points whose neighboring pixel distribution meets preset conditions, and the like.
Further, determining a second modeling feature based on the plurality of sets of modeling features, comprising: inputting a plurality of groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same second image or not; if the screened modeling features have modeling features corresponding to the same second image, extracting the modeling features of the same second image again; a second modeled feature is determined based on the re-extracted features and the screened modeled features.
In some embodiments, the training dataset of the feature screening model may include a plurality of features including modeled features that have an impact on the three-dimensional model and modeled features that have no impact on the three-dimensional model. Therefore, the trained feature screening model can identify modeling features which have no influence on the three-dimensional model.
After screening, the screened modeling features can be obtained. At this time, it is further determined whether the screened modeling features include modeling features corresponding to the same second image, that is, features extracted from the same second image; if so, modeling feature extraction is performed on that second image again to obtain re-extracted features.
Further, deduplication is performed based on the re-extracted features and the screened modeling features, and the deduplicated features are determined as the second modeling features.
Step 203, generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library.
As an optional implementation manner, the preset first model element library includes a plurality of first preset modeling features and modeling elements corresponding to the plurality of first preset modeling features.
Generating a first three-dimensional model based on the first modeling feature and a preset first model element library, including: judging whether the first modeling feature has a corresponding first preset modeling feature or not; if the first modeling feature has a corresponding first preset modeling feature, determining modeling elements corresponding to the corresponding first preset modeling feature as elements to be modeled; if the first modeling feature does not have the corresponding first preset modeling feature, determining an element to be modeled, which is matched with the first modeling feature, from modeling elements corresponding to the first preset modeling features; and generating a first three-dimensional model based on the element to be modeled and a first modeling rule corresponding to the first position.
In some embodiments, the first modeling feature is compared with each first preset modeling feature; if the comparison finds an identical modeling feature, a corresponding first preset modeling feature exists.
In some embodiments, each element in the library may be preconfigured with application information, and the element to be modeled that matches the first modeling feature may be determined based on that application information. For example, if the application information of a modeling element has a higher degree of match with the first modeling information corresponding to the first location, that modeling element may be determined as the matched element to be modeled.
In some embodiments, the first modeling rule may be determined according to the first location or the first modeling information; it represents the rule by which the modeling elements generate the model, so that, based on the first modeling rule, the three-dimensional model corresponding to the first location, that is, the first three-dimensional model, can be generated.
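For illustration, matching the first modeling features against the preset first model element library can be sketched as follows; the library contents, the application-information match, and the modeling rule are assumptions.

```python
preset_first_element_library = {
    # first preset modeling feature -> modeling element with application information
    "sharp_corner": {"element": "edge_beam", "application": "structural frame"},
    "flat_surface": {"element": "wall_panel", "application": "facade"},
}

def select_elements_to_model(first_features, library, first_modeling_info):
    elements = []
    for feature in first_features:
        if feature in library:
            # The feature has a corresponding first preset modeling feature.
            elements.append(library[feature]["element"])
        else:
            # Otherwise pick the element whose application information best matches
            # the first modeling information (naive substring match here).
            best = max(library.values(),
                       key=lambda e: int(first_modeling_info in e["application"]))
            elements.append(best["element"])
    return elements

elements_to_model = select_elements_to_model(["sharp_corner", "rounded_edge"],
                                             preset_first_element_library, "facade")
# A first modeling rule (assumed) would then assemble these elements into the
# first three-dimensional model for the first position.
```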
As an optional implementation manner, the preset second model element library includes a plurality of second preset modeling features and modeling elements corresponding to the plurality of second preset modeling features.
Generating a second three-dimensional model based on the second modeling feature and a preset second model element library, including: judging whether the second modeling feature has a corresponding second preset modeling feature or not; if the second modeling feature has a corresponding second preset modeling feature, determining modeling elements corresponding to the corresponding second preset modeling feature as elements to be modeled; if the second modeling feature does not have the corresponding second preset modeling feature, determining an element to be modeled, which is matched with the second modeling feature, from modeling elements corresponding to the second preset modeling features; and generating a second three-dimensional model based on the element to be modeled and a second modeling rule corresponding to the second position.
In some embodiments, the second modeling feature is compared with each second preset modeling feature; if the comparison finds an identical modeling feature, a corresponding second preset modeling feature exists.
In some embodiments, each element in the library may be preconfigured with application information, and the element to be modeled that matches the second modeling feature may be determined based on that application information. For example, if the application information of a modeling element has a higher degree of match with the second modeling information corresponding to the second location, that modeling element may be determined as the matched element to be modeled.
In some embodiments, the second modeling rule may be determined according to the second location or the second modeling information; it represents the rule by which the modeling elements generate the model, so that, based on the second modeling rule, the three-dimensional model corresponding to the second location, that is, the second three-dimensional model, can be generated.
Step 204, generating a three-dimensional model of the target object based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship between the first position and the second position, the first three-dimensional model and the second three-dimensional model.
As an alternative embodiment, step 204 includes: generating an initial three-dimensional model of the target object based on the position relation, the first three-dimensional model, the second three-dimensional model and the basic three-dimensional model of the target object; adjusting the model part corresponding to the first position in the initial three-dimensional model based on the number of the plurality of first images and a preset first adjustment model element, and adjusting the model part corresponding to the second position in the initial three-dimensional model based on the number of the plurality of second images and a preset second adjustment model element; based on the adjusted initial three-dimensional model, a three-dimensional model of the target object is determined.
In some embodiments, an integration strategy of the first three-dimensional model and the second three-dimensional model is determined based on the position relationship, and then the first three-dimensional model and the second three-dimensional model are integrated into the basic three-dimensional model according to the integration strategy, so as to generate an initial three-dimensional model of the target object.
In some embodiments, the integration policies corresponding to the different positional relationships may be preset, so that the corresponding integration policies may be determined based on the current positional relationship.
The positional relationship may include a distance between positions, a modeling association relationship between positions, and the like.
In some embodiments, an integration model is pre-trained; its training data set may include a plurality of three-dimensional models to be integrated and the corresponding integrated models. Thus, after training with the training data set, the integration model can integrate the models.
In some embodiments, different first adjustment model elements correspond to different numbers of images, so that, based on the number of the current first images, the corresponding first adjustment model element can be determined. Moreover, each first adjustment model element corresponds to an adjustment mode, for example: being mapped as a texture, being added as a supplementary element, and the like. Adjustment of the model portion corresponding to the first position can then be achieved.
In some embodiments, different second adjustment model elements correspond to different numbers of images, so that, based on the number of the current second images, the corresponding second adjustment model element can be determined. Moreover, each second adjustment model element corresponds to an adjustment mode, for example: being mapped as a texture, being added as a supplementary element, and the like. Adjustment of the model portion corresponding to the second position can then be achieved.
Further, a manual audit or a system audit may be performed on the adjusted initial three-dimensional model to verify whether the three-dimensional model has problems, such as a missing portion or an excess portion; after verification passes, it is determined as the three-dimensional model of the target object.
If a problem is found during verification, the problem is resolved first, and the resulting three-dimensional model is then determined as the three-dimensional model of the target object.
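For illustration, step 204 can be sketched as integrating the two partial models into a base model according to the positional relationship and then adjusting each model part based on how many images were captured for it; the integration strategies, adjustment elements, and thresholds are assumptions.

```python
def integrate(base_model, first_model, second_model, positional_relationship):
    """Attach the partial models to the base model under an integration strategy
    selected from the positional relationship (assumed to be a simple lookup)."""
    strategies = {"adjacent": "merge_shared_boundary", "separate": "place_independently"}
    return {"base": base_model,
            "parts": {"first": first_model, "second": second_model},
            "strategy": strategies.get(positional_relationship, "place_independently")}

def adjustment_element(image_count):
    """Map an image count to a preset adjustment element and its adjustment mode."""
    if image_count >= 20:
        return {"element": "high_detail_texture", "mode": "map_as_texture"}
    return {"element": "basic_patch", "mode": "add_as_supplement"}

initial_model = integrate("base_mesh", "first_part_mesh", "second_part_mesh", "adjacent")
initial_model["first_adjustment"] = adjustment_element(image_count=25)
initial_model["second_adjustment"] = adjustment_element(image_count=8)
# After a manual or system audit of the adjusted model, it is taken as the
# three-dimensional model of the target object.
```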
According to the embodiments of the application, on one hand, the three-dimensional model is built based on a plurality of automatically acquired pictures and the user does not need to select modeling elements, so the construction efficiency of the three-dimensional model can be improved. On the other hand, the plurality of automatically acquired pictures depend on the relevant modeling information of the target object, so the finally constructed three-dimensional model is better adapted to the target object, and the construction precision of the three-dimensional model is improved. Therefore, on the basis of guaranteeing the construction precision of the three-dimensional model, the construction efficiency of the three-dimensional model is improved, so that the construction scheme is suitable for three-dimensional model construction in various scenes and has strong applicability.
Referring next to fig. 3, an embodiment of the present application further provides a three-dimensional model generating system based on multiple pictures, including: an acquiring unit 301, configured to acquire a plurality of first images acquired by a first image acquisition device, and acquire a plurality of second images acquired by a second image acquisition device; the first image acquisition device is used for acquiring an image of a first position of a target object, the second image acquisition device is used for acquiring an image of a second position of the target object, the first position is determined based on first modeling information of the target object, and the second position is determined based on second modeling information of the target object; a modeling unit 302 for: determining a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images; generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library; a three-dimensional model of the target object is generated based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship of the first position and the second position, the first three-dimensional model, and the second three-dimensional model.
In some embodiments, the acquisition unit 301 is further to: acquiring a plurality of images of the first position acquired by the first image acquisition equipment within a first preset duration; acquiring a plurality of images of the second position acquired by the second image acquisition equipment within a second preset time length; the first preset duration and the second preset duration comprise the same time point and different time points, and the image acquisition quantity of the first image acquisition device and the second image acquisition device at the same time point is larger than the image acquisition quantity of the different time points.
In some embodiments, the modeling unit 302 is further to: sequencing the plurality of first images according to the acquisition time to determine sequenced plurality of first images; grouping the ordered first images to determine a plurality of groups of first images; the adjacent image groups comprise the same first images with preset quantity; extracting modeling features from the multiple groups of first images according to preset rules respectively, and determining multiple groups of modeling features; the first modeling feature is determined based on the plurality of sets of modeling features.
In some embodiments, the modeling unit 302 is further to: inputting the multiple groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same first image or not; if the screened modeling features have modeling features corresponding to the same first image, extracting the modeling features of the same first image again; the first modeled feature is determined based on the re-extracted feature and the screened modeled feature.
In some embodiments, the modeling unit 302 is further to: sequencing the plurality of second images according to the acquisition time to determine sequenced plurality of second images; grouping the ordered second images to determine a plurality of groups of second images; the adjacent image groups comprise the same second images with preset numbers; extracting modeling features from the multiple groups of second images according to preset rules respectively, and determining multiple groups of modeling features; the second modeling feature is determined based on the plurality of sets of modeling features.
In some embodiments, the modeling unit 302 is further to: inputting the multiple groups of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model; the pre-trained feature screening model is used for screening modeling features which have no influence on the three-dimensional model; judging whether the screened modeling features have modeling features corresponding to the same second image or not; if the screened modeling features have modeling features corresponding to the same second image, extracting the modeling features of the same second image again; the second modeled feature is determined based on the re-extracted feature and the screened modeled feature.
In some embodiments, the modeling unit 302 is further to: judging whether the first modeling feature has a corresponding first preset modeling feature or not; if the first modeling feature has a corresponding first preset modeling feature, determining modeling elements corresponding to the corresponding first preset modeling feature as elements to be modeled; if the first modeling feature does not have the corresponding first preset modeling feature, determining an element to be modeled matched with the first modeling feature from modeling elements corresponding to the plurality of first preset modeling features; and generating the first three-dimensional model based on the element to be modeled and a first modeling rule corresponding to the first position.
In some embodiments, the modeling unit 302 is further to: judging whether the second modeling feature has a corresponding second preset modeling feature or not; if the second modeling feature has a corresponding second preset modeling feature, determining modeling elements corresponding to the corresponding second preset modeling feature as elements to be modeled; if the second modeling feature does not have the corresponding second preset modeling feature, determining an element to be modeled, which is matched with the second modeling feature, from modeling elements corresponding to the second preset modeling features; and generating the second three-dimensional model based on the element to be modeled and a second modeling rule corresponding to the second position.
In some embodiments, the modeling unit 302 is further to: generating an initial three-dimensional model of the target object based on the positional relationship, the first three-dimensional model, the second three-dimensional model, and a base three-dimensional model of the target object; adjusting a model part corresponding to the first position in the initial three-dimensional model based on the number of the plurality of first images and a preset first adjustment model element, and adjusting a model part corresponding to the second position in the initial three-dimensional model based on the number of the plurality of second images and a preset second adjustment model element; a three-dimensional model of the target object is determined based on the adjusted initial three-dimensional model.
As shown in fig. 4, the embodiment of the present application further provides a terminal device, which includes a processor 401 and a memory 402, where the processor 401 and the memory 402 are communicatively connected, and the terminal device may be used as an execution body of the foregoing multi-picture-based three-dimensional model generating method.
The processor 401 and the memory 402 are directly or indirectly electrically connected to each other to realize data transmission or interaction. For example, electrical connections may be made between these elements through one or more communication buses or signal buses. The aforementioned multi-picture-based three-dimensional model generation method includes at least one software functional module that may be stored in the memory 402 in the form of software or firmware.
The processor 401 may be an integrated circuit chip having signal processing capabilities. The processor 401 may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may store various software programs and modules, such as program instructions/modules corresponding to the image processing methods and apparatuses provided in the embodiments of the present application. The processor 401 executes various functional applications and data processing, i.e., implements the methods of embodiments of the present application, by running software programs and modules stored in the memory 402.
Memory 402 may include, but is not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and the like.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative, and that the terminal device may also include more or fewer components than shown in fig. 4, or have a different configuration than shown in fig. 4.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present application are presented for purposes of illustration and description. It is not intended to limit the application to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the application and its practical application to thereby enable one skilled in the art to make and utilize the application in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the application be defined by the claims and their equivalents.

Claims (10)

1. The three-dimensional model generation method based on multiple pictures is characterized by comprising the following steps of:
acquiring a plurality of first images acquired by a first image acquisition device and acquiring a plurality of second images acquired by a second image acquisition device; the first image acquisition device is used for acquiring an image of a first position of a target object, the second image acquisition device is used for acquiring an image of a second position of the target object, the first position is determined based on first modeling information of the target object, and the second position is determined based on second modeling information of the target object;
determining a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images;
generating a first three-dimensional model based on the first modeling feature and a preset first model element library, and generating a second three-dimensional model based on the second modeling feature and a preset second model element library;
generating a three-dimensional model of the target object based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship between the first position and the second position, the first three-dimensional model, and the second three-dimensional model.
2. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the acquiring of the plurality of first images acquired by the first image acquisition device and of the plurality of second images acquired by the second image acquisition device comprises:
acquiring a plurality of images of the first position acquired by the first image acquisition device within a first preset duration;
acquiring a plurality of images of the second position acquired by the second image acquisition device within a second preset duration; wherein the first preset duration and the second preset duration include both shared time points and distinct time points, and the number of images acquired by the first image acquisition device and the second image acquisition device at the shared time points is greater than the number acquired at the distinct time points.
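As an illustrative sketch only (not part of the claimed method), the acquisition condition of claim 2 can be checked as follows in Python; the capture lists, the time-point encoding and every name below are assumptions introduced for the example.

    from collections import Counter

    def split_by_shared_time_points(first_captures, second_captures):
        """Each capture list holds (time_point, image_id) pairs taken within its preset duration."""
        first_times = {t for t, _ in first_captures}
        second_times = {t for t, _ in second_captures}
        shared = first_times & second_times  # time points sampled by both devices
        counts = Counter()
        for t, _ in first_captures + second_captures:
            counts["shared" if t in shared else "distinct"] += 1
        return shared, counts

    # More images are taken at the shared time points than at the distinct ones,
    # matching the acquisition condition of claim 2.
    first = [(0, "a0"), (1, "a1"), (1, "a2"), (2, "a3")]
    second = [(1, "b0"), (1, "b1"), (2, "b2"), (3, "b3")]
    shared, counts = split_by_shared_time_points(first, second)
    assert counts["shared"] > counts["distinct"]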
3. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the determining of the first modeling feature from the plurality of first images comprises:
sorting the plurality of first images by acquisition time to obtain a sorted plurality of first images;
grouping the sorted first images to obtain a plurality of groups of first images, wherein adjacent image groups share a preset number of identical first images;
extracting modeling features from the plurality of groups of first images according to preset rules, respectively, to obtain a plurality of sets of modeling features;
determining the first modeling feature based on the plurality of sets of modeling features.
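A minimal Python sketch of the sorting and overlapping-grouping steps of claim 3 is given below; the group size, the overlap and the placeholder extraction rule are assumed parameters, not values taken from the claim.

    def group_images(images, group_size=4, overlap=2):
        """Sort images by acquisition time, then form groups in which adjacent
        groups share a preset number (`overlap`) of identical images."""
        ordered = sorted(images, key=lambda img: img["acquired_at"])
        step = max(group_size - overlap, 1)
        return [ordered[i:i + group_size]
                for i in range(0, max(len(ordered) - overlap, 1), step)]

    def extract_group_features(group):
        # Placeholder for the preset extraction rule; here it only records image ids.
        return [img["id"] for img in group]

    images = [{"id": k, "acquired_at": k * 0.5} for k in range(8)]
    groups = group_images(images)                       # adjacent groups share two images
    feature_sets = [extract_group_features(g) for g in groups]

Claim 5 applies the same procedure to the plurality of second images, so a separate sketch is not repeated there.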
4. The multi-picture-based three-dimensional model generation method according to claim 3, wherein the determining of the first modeling feature based on the plurality of sets of modeling features comprises:
inputting the plurality of sets of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model, the pre-trained feature screening model being used for filtering out modeling features that have no influence on the three-dimensional model;
judging whether the screened modeling features include modeling features corresponding to the same first image;
if the screened modeling features include modeling features corresponding to the same first image, re-extracting modeling features from that first image;
determining the first modeling feature based on the re-extracted features and the screened modeling features.
5. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the determining of the second modeling feature from the plurality of second images comprises:
sorting the plurality of second images by acquisition time to obtain a sorted plurality of second images;
grouping the sorted second images to obtain a plurality of groups of second images, wherein adjacent image groups share a preset number of identical second images;
extracting modeling features from the plurality of groups of second images according to preset rules, respectively, to obtain a plurality of sets of modeling features;
determining the second modeling feature based on the plurality of sets of modeling features.
6. The multi-picture-based three-dimensional model generation method according to claim 5, wherein the determining of the second modeling feature based on the plurality of sets of modeling features comprises:
inputting the plurality of sets of modeling features into a pre-trained feature screening model to obtain screened modeling features output by the pre-trained feature screening model, the pre-trained feature screening model being used for filtering out modeling features that have no influence on the three-dimensional model;
judging whether the screened modeling features include modeling features corresponding to the same second image;
if the screened modeling features include modeling features corresponding to the same second image, re-extracting modeling features from that second image;
determining the second modeling feature based on the re-extracted features and the screened modeling features.
7. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the preset first model element library comprises a plurality of first preset modeling features and modeling elements corresponding to the plurality of first preset modeling features, and the generating of the first three-dimensional model based on the first modeling feature and the preset first model element library comprises:
judging whether the first modeling feature has a corresponding first preset modeling feature;
if the first modeling feature has a corresponding first preset modeling feature, determining the modeling element corresponding to that first preset modeling feature as the element to be modeled;
if the first modeling feature has no corresponding first preset modeling feature, determining the element to be modeled that matches the first modeling feature from the modeling elements corresponding to the plurality of first preset modeling features;
generating the first three-dimensional model based on the element to be modeled and a first modeling rule corresponding to the first position.
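An illustrative Python sketch of the element lookup in claim 7 follows; representing preset modeling features as numeric tuples and using a squared-distance match are assumptions made only for the example, since the claim does not fix the matching criterion.

    def pick_element(modeling_feature, element_library):
        """element_library maps preset modeling features (tuples) to modeling elements."""
        if modeling_feature in element_library:        # a corresponding preset feature exists
            return element_library[modeling_feature]
        # otherwise pick the element whose preset feature matches the extracted feature best
        def distance(preset):
            return sum((a - b) ** 2 for a, b in zip(preset, modeling_feature))
        return element_library[min(element_library, key=distance)]

    library = {(1.0, 0.0): "cylinder", (0.0, 1.0): "panel"}
    element = pick_element((0.9, 0.1), library)   # no exact hit, so the closest preset is used

The chosen element would then be handed to the first modeling rule of the first position; claim 8 mirrors the same lookup against the second model element library.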
8. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the preset second model element library comprises a plurality of second preset modeling features and modeling elements corresponding to the plurality of second preset modeling features, and the generating of the second three-dimensional model based on the second modeling feature and the preset second model element library comprises:
judging whether the second modeling feature has a corresponding second preset modeling feature;
if the second modeling feature has a corresponding second preset modeling feature, determining the modeling element corresponding to that second preset modeling feature as the element to be modeled;
if the second modeling feature has no corresponding second preset modeling feature, determining the element to be modeled that matches the second modeling feature from the modeling elements corresponding to the plurality of second preset modeling features;
generating the second three-dimensional model based on the element to be modeled and a second modeling rule corresponding to the second position.
9. The multi-picture-based three-dimensional model generation method according to claim 1, wherein the generating of the three-dimensional model of the target object based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship between the first position and the second position, the first three-dimensional model, and the second three-dimensional model comprises:
generating an initial three-dimensional model of the target object based on the positional relationship, the first three-dimensional model, the second three-dimensional model, and a base three-dimensional model of the target object;
adjusting the model part corresponding to the first position in the initial three-dimensional model based on the number of the plurality of first images and a preset first adjustment model element, and adjusting the model part corresponding to the second position in the initial three-dimensional model based on the number of the plurality of second images and a preset second adjustment model element;
determining the three-dimensional model of the target object based on the adjusted initial three-dimensional model.
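As a rough, non-authoritative sketch of claim 9, the assembly and image-count-driven adjustment could take the following shape; the dictionary model representation and the proportional adjustment rule are assumptions, since the claim does not fix either.

    def build_target_model(base_model, first_model, second_model, offset_first_to_second,
                           n_first, n_second, first_adjust, second_adjust):
        """Place the two partial models into the base model using their positional relationship,
        then refine each part according to how many source images backed it."""
        model = dict(base_model)
        model["first_part"] = {"mesh": first_model, "offset": (0.0, 0.0, 0.0),
                               "detail": first_adjust * n_first}
        model["second_part"] = {"mesh": second_model, "offset": offset_first_to_second,
                                "detail": second_adjust * n_second}
        return model

    target = build_target_model(base_model={"name": "target_object"},
                                first_model="first_mesh", second_model="second_mesh",
                                offset_first_to_second=(1.2, 0.0, 0.4),
                                n_first=12, n_second=8,
                                first_adjust=0.05, second_adjust=0.05)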
10. A multi-picture-based three-dimensional model generation system, characterized by comprising:
an acquisition unit, configured to acquire a plurality of first images acquired by a first image acquisition device and a plurality of second images acquired by a second image acquisition device, wherein the first image acquisition device is used for acquiring images of a first position of a target object, the second image acquisition device is used for acquiring images of a second position of the target object, the first position is determined based on first modeling information of the target object, and the second position is determined based on second modeling information of the target object; and
a modeling unit, configured to:
determine a first modeling feature from the plurality of first images and a second modeling feature from the plurality of second images;
generate a first three-dimensional model based on the first modeling feature and a preset first model element library, and generate a second three-dimensional model based on the second modeling feature and a preset second model element library; and
generate a three-dimensional model of the target object based on the number of the plurality of first images, the number of the plurality of second images, the positional relationship between the first position and the second position, the first three-dimensional model, and the second three-dimensional model.
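Finally, a minimal sketch of how the system of claim 10 could be decomposed into the two claimed units is shown below; the device interface, the placeholder helpers and all identifiers are hypothetical and only illustrate the division of work between the units.

    # Placeholder helpers standing in for the method steps of claims 1 to 9.
    def determine_modeling_feature(images):
        return len(images)

    def generate_partial_model(feature, library):
        return {"feature": feature, "library": library}

    def combine_models(n_first, n_second, relation, model_1, model_2):
        return {"parts": (model_1, model_2), "relation": relation, "counts": (n_first, n_second)}

    class AcquisitionUnit:
        def __init__(self, first_device, second_device):
            self.first_device, self.second_device = first_device, second_device

        def acquire(self):
            return self.first_device.capture_all(), self.second_device.capture_all()

    class ModelingUnit:
        def __init__(self, first_library, second_library):
            self.first_library, self.second_library = first_library, second_library

        def build(self, first_images, second_images, positional_relationship):
            f1 = determine_modeling_feature(first_images)
            f2 = determine_modeling_feature(second_images)
            m1 = generate_partial_model(f1, self.first_library)
            m2 = generate_partial_model(f2, self.second_library)
            return combine_models(len(first_images), len(second_images),
                                  positional_relationship, m1, m2)

    class _FakeDevice:
        def __init__(self, n):
            self.n = n

        def capture_all(self):
            return ["img_%d" % i for i in range(self.n)]

    first_images, second_images = AcquisitionUnit(_FakeDevice(3), _FakeDevice(2)).acquire()
    model = ModelingUnit({}, {}).build(first_images, second_images, positional_relationship=(1.2, 0.0, 0.4))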
CN202310896727.1A 2023-07-21 2023-07-21 Three-dimensional model generation method and system based on multiple pictures Active CN116630550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310896727.1A CN116630550B (en) 2023-07-21 2023-07-21 Three-dimensional model generation method and system based on multiple pictures

Publications (2)

Publication Number Publication Date
CN116630550A true CN116630550A (en) 2023-08-22
CN116630550B CN116630550B (en) 2023-10-20

Family

ID=87621560

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310896727.1A Active CN116630550B (en) 2023-07-21 2023-07-21 Three-dimensional model generation method and system based on multiple pictures

Country Status (1)

Country Link
CN (1) CN116630550B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1544800A2 (en) * 2003-12-17 2005-06-22 United Technologies Corporation CAD modeling system and method
WO2015200782A1 (en) * 2014-06-27 2015-12-30 A9.Com, Inc. 3-d model generation
CN108876907A (en) * 2018-05-31 2018-11-23 大连理工大学 A kind of active three-dimensional rebuilding method of object-oriented object
CN113313832A (en) * 2021-05-26 2021-08-27 Oppo广东移动通信有限公司 Semantic generation method and device of three-dimensional model, storage medium and electronic equipment
WO2021193672A1 (en) * 2020-03-27 2021-09-30 パナソニックIpマネジメント株式会社 Three-dimensional model generation method and three-dimensional model generation device
CN114332435A (en) * 2020-09-29 2022-04-12 北京初速度科技有限公司 Image labeling method and device based on three-dimensional reconstruction
WO2022095514A1 (en) * 2020-11-06 2022-05-12 北京迈格威科技有限公司 Image detection method and apparatus, electronic device, and storage medium
CN114519764A (en) * 2020-11-20 2022-05-20 株式会社理光 Three-dimensional model construction method and device and computer readable storage medium
CN115661371A (en) * 2022-12-14 2023-01-31 深圳思谋信息科技有限公司 Three-dimensional object modeling method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jiulong, Hu Zhengguo: "Photo-based three-dimensional face modeling", Journal of Northwest University (Natural Science Edition), no. 05, pages 21-22 *
Shu Bo; Qiu Xianjie; Wang Zhaoqi: "A survey of image-based geometric modeling techniques", Journal of Computer Research and Development, no. 03, pages 175-186 *

Also Published As

Publication number Publication date
CN116630550B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN109840500B (en) Three-dimensional human body posture information detection method and device
CN108875013B (en) Method and device for processing map data
CN109272016A (en) Object detection method, device, terminal device and computer readable storage medium
CN107592474A (en) A kind of image processing method and device
CN109636786B (en) Verification method and device of image recognition module
CN115393181A (en) Training and generating method of head portrait generating model with beautiful and romantic style and electronic equipment
CN116630550B (en) Three-dimensional model generation method and system based on multiple pictures
CN110852224A (en) Expression recognition method and related device
CN110427998A (en) Model training, object detection method and device, electronic equipment, storage medium
CN109821233A (en) A kind of data analysing method and device
CN110781084B (en) Method and device for determining stuck identification parameter, storage medium and electronic device
CN111179408B (en) Three-dimensional modeling method and equipment
CN111028322A (en) Game animation expression generation method and device and electronic equipment
CN112527573A (en) Interface testing method, device and storage medium
CN116524135B (en) Three-dimensional model generation method and system based on image
CN110008940B (en) Method and device for removing target object in image and electronic equipment
CN115205736A (en) Video data identification method and device, electronic equipment and storage medium
CN110490950B (en) Image sample generation method and device, computer equipment and storage medium
CN111369612B (en) Three-dimensional point cloud image generation method and device
CN111354082B (en) Method and device for generating surface map, electronic equipment and storage medium
CN109815796B (en) Method and device for testing influence factors of face recognition passing rate
CN112102205A (en) Image deblurring method and device, electronic equipment and storage medium
CN115830242B (en) 3D heterogeneous modeling method and device
CN116628818A (en) BIM-based simulation piling method and system
CN116126151B (en) Method, system, storage medium and equipment for drawing motor cortex region of upper hyoid muscle group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant