CN115168547A - Scene construction method, device, equipment and storage medium

Scene construction method, device, equipment and storage medium

Info

Publication number: CN115168547A
Application number: CN202210916555.5A
Authority: CN (China)
Prior art keywords: scene, dimensional, keyword, category, determining
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 乔慧丽
Current assignee: Beijing Xintang Sichuang Educational Technology Co Ltd
Original assignee: Beijing Xintang Sichuang Educational Technology Co Ltd
Application filed by Beijing Xintang Sichuang Educational Technology Co Ltd
Priority to CN202210916555.5A
Publication of CN115168547A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/3331: Query processing
    • G06F 16/334: Query execution
    • G06F 16/3344: Query execution using natural language analysis
    • G06F 40/00: Handling natural language data
    • G06F 40/10: Text processing
    • G06F 40/166: Editing, e.g. inserting or deleting
    • G06F 40/20: Natural language analysis
    • G06F 40/279: Recognition of textual entities
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods

Abstract

The present disclosure relates to a scene construction method, apparatus, device, storage medium, and program product. The method comprises: extracting at least one scene keyword from received text information; for each scene keyword, acquiring scene parameters matched with the scene keyword from a preset scene parameter library; determining a three-dimensional material matched with the scene keyword based on the scene parameters; and constructing a three-dimensional scene matched with the text information based on the at least one three-dimensional material. By extracting the scene keywords from the text information, the three-dimensional scene is constructed automatically, manual construction is avoided, and the construction efficiency of the three-dimensional scene is improved.

Description

Scene construction method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for scene construction.
Background
With the continuous development of virtual reality and/or augmented reality technology, more and more applications use three-dimensional models. By constructing three-dimensional scenes from these three-dimensional models, a user can perceive many scenes far more intuitively, which improves the user experience.
In the prior art, a professional builds a three-dimensional scene with an editor: according to the position of each object in an original drawing and the desired scene effect, the professional manually selects the corresponding three-dimensional models from a material library, manually places them in the scene, and combines them to build the corresponding three-dimensional scene.
However, this manual construction approach incurs high labor costs.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present disclosure provide a scene construction method, apparatus, device, and storage medium that construct a three-dimensional scene automatically, avoiding manual construction, improving the construction efficiency of the three-dimensional scene, and reducing labor costs.
In a first aspect, an embodiment of the present disclosure provides a scene construction method, including:
extracting scene keywords from text information in response to receiving the text information;
determining the category of the scene keyword;
determining a three-dimensional material matched with the scene keyword according to the category of the scene keyword;
and constructing a three-dimensional scene corresponding to the text information based on the three-dimensional materials.
In a second aspect, an embodiment of the present disclosure provides a scene constructing apparatus, including:
the scene keyword extraction module is used for responding to the received text information and extracting scene keywords from the text information;
the belonging category determining module is used for determining the belonging category of the scene keyword;
the three-dimensional material determining module is used for determining a three-dimensional material matched with the scene keyword according to the category of the scene keyword;
and the three-dimensional scene construction module is used for constructing a three-dimensional scene corresponding to the text information based on the three-dimensional material.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the scene construction method according to any one of the first aspect above.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the scene construction method according to any one of the above first aspects.
The present disclosure relates to a scene construction method, apparatus, device, storage medium, and program product. The method comprises: in response to receiving text information, extracting scene keywords from the text information; determining the category to which each scene keyword belongs; determining a three-dimensional material matched with the scene keyword according to that category; and constructing a three-dimensional scene corresponding to the text information based on the three-dimensional materials. By extracting the scene keywords from the text information, the three-dimensional scene is constructed automatically, manual construction is avoided, and the construction efficiency of the three-dimensional scene is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a scene construction method in an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a three-dimensional material determination method in an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a three-dimensional material determination method in an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a three-dimensional material determination method in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a scene constructing apparatus in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more complete and thorough understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The scene construction method proposed in the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a scene construction method in the embodiment of the present disclosure. The embodiment is applicable to scene construction situations; the method may be executed by a scene construction apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device.
For example: the electronic device may be a mobile terminal, a fixed terminal, or a portable terminal, such as a mobile handset, a station, a unit, a device, a multimedia computer, a multimedia tablet, an internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a Personal Communication Systems (PCS) device, a personal navigation device, a Personal Digital Assistant (PDA), an audio/video player, a digital camera/camcorder, a positioning device, a television receiver, a radio broadcast receiver, an electronic book device, a gaming device, or any combination thereof, including accessories and peripherals of these devices, or any combination thereof.
As another example: the electronic device may be a server, where the server may be an entity server or a cloud server, and may be a single server or a server cluster.
As shown in fig. 1, the scene construction method provided by the embodiment of the present disclosure mainly includes steps S101 to S104.
S101, responding to the received text information, and extracting scene keywords from the text information.
Specifically, the text information refers to the specific text received, for example: "a campus with a playground in autumn". The scene keywords are words or phrases in that text that describe the scene.
In one embodiment of the present disclosure, receiving the text information may be receiving text input by a user through an input device, where the input includes pasting, editing, and other input methods. Receiving the text information may also include: capturing the user's audio through a sound pickup, performing speech recognition on the audio, and taking the recognition result as the received text information. Receiving the text information may further include: receiving a picture input by the user, performing image recognition on the picture, extracting the character information in the picture, and determining the extracted character information as the received text information. The present disclosure is not particularly limited in this respect.
In one embodiment of the present disclosure, extracting the scene keywords from the text information includes: extracting the scene keywords from the text information using word segmentation in Natural Language Processing (NLP).
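For illustration only, the following is a minimal sketch of such keyword extraction; it is not part of the claimed method, and the choice of the jieba segmentation library and the stop-word list are assumptions.

    # Illustrative sketch only: extracting candidate scene keywords by word
    # segmentation. The jieba library and the stop-word list are assumptions,
    # not part of the disclosure.
    import jieba

    def extract_scene_keywords(text: str) -> list[str]:
        # Words that merely connect or modify and carry no scene content.
        stop_words = {"的", "有", "在"}
        tokens = jieba.lcut(text)  # segment the text into words
        return [t for t in tokens if t not in stop_words and len(t) > 1]

    print(extract_scene_keywords("秋天的有操场的校园"))
    # expected, e.g.: ['秋天', '操场', '校园'] ("autumn", "playground", "campus")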
Natural language processing is the use of computers to process, understand, and apply human language. NLP is a branch of artificial intelligence and linguistics that studies how to process and use natural language; it involves many aspects and steps, basically including cognition, understanding, and generation. Natural language cognition and understanding convert the input language into symbols and relations through the computer, which are then processed according to a preset purpose; natural language generation converts computer data into natural language.
In the embodiment of the present disclosure, a scene keyword may be a word describing the content of the scene; specifically, the environment of the scene, the characters and/or objects included in the scene, features of the characters or objects, and so on. For example: after word segmentation, the text information "a campus with a playground in autumn" yields three scene keywords: "autumn", "playground", and "campus".
S102, determining the category of the scene keyword.
In the embodiment of the present disclosure, the belonging category may be understood as the superordinate concept of a scene keyword. The belonging categories mainly include four: season, object, place, and character. It should be noted that the embodiment of the present disclosure describes the belonging categories only by way of example; other classification schemes are also possible, for example: buildings, appliances, etc.
For example: the category of "autumn" is "season"; the category of "playground" is "object"; the category of "campus" is "place".
In one embodiment of the present disclosure, there may be a plurality of scene keywords extracted from the text information, and when there are a plurality of scene keywords, step S102 and step S103 are performed for each scene keyword.
In one embodiment of the present disclosure, correspondences between keywords and their belonging categories may be established in advance and stored in a database. After the scene keywords are extracted from the text information, the database is queried for each scene keyword, and the queried category is determined as the belonging category corresponding to that keyword.
Further, if the belonging category of a scene keyword is not found in the database, a category input by the user may be received and determined as the category corresponding to that scene keyword. A correspondence between the scene keyword and its belonging category is then established and stored in the database for subsequent queries.
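As a minimal sketch of this keyword-to-category lookup with a user-input fallback (the in-memory dictionary stands in for the database described above, and all names are hypothetical):

    # Illustrative sketch: query a pre-built keyword -> category mapping and
    # fall back to user input when the keyword is unknown.
    CATEGORY_DB = {
        "autumn": "season",
        "playground": "object",
        "campus": "place",
    }

    def lookup_category(keyword: str) -> str:
        category = CATEGORY_DB.get(keyword)
        if category is None:
            # Category not found: ask the user and store the answer
            # for subsequent queries, as described above.
            category = input(f"Category for '{keyword}'? ")
            CATEGORY_DB[keyword] = category
        return category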
In one embodiment of the present disclosure, determining the category to which a scene keyword belongs includes: performing part-of-speech tagging on the scene keyword according to part-of-speech tagging rules to obtain a scene keyword with a part-of-speech tag; and inputting the scene keyword with the part-of-speech tag into a pre-trained category recognition model to obtain the category to which the scene keyword belongs.
Here, parts of speech include nouns, adverbs, adjectives, verbs, and so on. Specifically, after the text information is processed with natural language processing (NLP) to obtain the scene keywords, part-of-speech tagging is performed on the scene keywords according to the part-of-speech tagging rules. For example: "autumn", "playground" and "campus" are tagged as nouns, while "with" is tagged as an adverb used mainly for modification.
In an embodiment of the disclosure, the scene keywords with part-of-speech tags are processed by the pre-trained category recognition model to obtain the category to which each scene keyword belongs. The training process of the category recognition model includes: obtaining a large number of words, labeling each word with its belonging category, and training a neural network model with the words and the labeled categories to obtain the category recognition model. It should be noted that this embodiment does not limit the specific training process of the category recognition model.
Since adjectives and/or adverbs may appear among the scene keywords, and such words mainly qualify the environment or objects of the scene, they do not need to be classified. In the embodiment of the disclosure, the scene keywords with part-of-speech tags are input into the category recognition model, which can readily determine each keyword's part of speech and decide whether further category recognition is needed. Specifically, when the part of speech of a scene keyword is an adverb, adjective, verb, or the like, the model only outputs the part of speech, without further category recognition. When the part of speech is a noun, category recognition is performed to obtain the belonging category. In this way, the category recognition model only needs to classify the noun keywords, which improves the processing speed.
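A minimal sketch of this part-of-speech gate is given below; the stub model and all names are hypothetical stand-ins for the pre-trained category recognition model:

    # Illustrative sketch: only nouns are passed to the category recognition
    # model; other parts of speech are returned without a category.
    class StubCategoryModel:
        # Hypothetical stand-in for the pre-trained neural classifier.
        def predict(self, word: str) -> str:
            return {"campus": "place", "playground": "object"}.get(word, "object")

    def classify_keyword(word: str, pos_tag: str, model: StubCategoryModel) -> dict:
        if pos_tag != "noun":
            # Modifier words qualify the scene and need no category.
            return {"word": word, "pos": pos_tag, "category": None}
        # Nouns go through the category recognition model.
        return {"word": word, "pos": pos_tag, "category": model.predict(word)}

    print(classify_keyword("campus", "noun", StubCategoryModel()))
    # {'word': 'campus', 'pos': 'noun', 'category': 'place'}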
The scene parameter library includes preset correspondences between scene keywords and scene parameters, where a correspondence may be one-to-one or one-to-many; the embodiment of the present disclosure does not specifically limit this. The correspondences between scene keywords and scene parameters in the scene parameter library can be preset by a worker.
The scene parameters include at least one of the following: environment parameters, terrain parameters, object parameters, relative position relationships between objects, character parameters, and relative position relationships between characters.
In one embodiment of the present disclosure, the object parameters may include an object color parameter, an object texture parameter, an object shape parameter, relative position relationships between objects, and the like. The objects may be one or more objects extracted from the text information, for example: buildings, trees, cars, roads, playgrounds, and the like. The character parameters may include a character color parameter, a character texture parameter, a character shape parameter, relative position relationships between characters, and the like. The characters may be one or more characters extracted from the text information; a character can be real or virtual, for example: a character in a game or a movie. The character color parameter may be understood as the color of the character's skin or clothing.
In one embodiment of the present disclosure, the environment parameters include meteorological parameters and/or ground parameters. Meteorological parameters characterize the weather conditions, such as: sunny days, cloudy days, cloud cover, lighting conditions, etc. Ground parameters characterize the ground conditions, such as: hill parameters, plain parameters, lake parameters, and the like.
In an embodiment of the present disclosure, acquiring the scene parameters matched with a scene keyword from the preset scene parameter library includes: querying the preset scene parameter library based on the exact word sense of the scene keyword, and, if the keyword is found, determining the corresponding scene parameters as the scene parameters matched with the scene keyword; if the keyword is not found, querying the preset scene parameter library based on the fuzzy word sense of the scene keyword, and, if the keyword is then found, determining the corresponding scene parameters as the scene parameters matched with the scene keyword; and if the keyword is still not found, determining the system default scene parameters as the scene parameters matched with the scene keyword.
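The exact-then-fuzzy-then-default order can be sketched as follows (the library contents, the substring-based fuzzy rule, and the default parameters are assumptions for illustration):

    # Illustrative sketch of the exact -> fuzzy -> default lookup order.
    SCENE_PARAM_DB = {
        "sunny day": {"lighting": "bright", "cloud_cover": 0.1},
        "rainy day": {"lighting": "dim", "raindrops": True},
    }
    DEFAULT_PARAMS = {"lighting": "neutral"}  # system default scene parameters

    def match_scene_params(keyword: str) -> dict:
        # 1. Query by the exact word sense.
        if keyword in SCENE_PARAM_DB:
            return SCENE_PARAM_DB[keyword]
        # 2. Query by fuzzy word sense, here approximated by substring overlap.
        for known, params in SCENE_PARAM_DB.items():
            if keyword in known or known in keyword:
                return params
        # 3. Fall back to the system default scene parameters.
        return DEFAULT_PARAMS

    print(match_scene_params("rainy"))  # matches "rainy day" fuzzily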
S103, determining the three-dimensional materials matched with the scene keywords according to the categories of the scene keywords.
A three-dimensional material can be understood as a material required for constructing the three-dimensional scene; it may be a three-dimensional object model, or an environment effect set based on the scene parameters. The three-dimensional material matched with a scene keyword is the three-dimensional object model corresponding to the scene keyword, or a three-dimensional model of an object likely to appear in the scene the keyword describes. For example: the text information describes a "playground"; objects such as students and flagpoles may appear on a playground, so the three-dimensional models of students, flagpoles and the like are three-dimensional materials matched with the scene keyword "playground".
There may be one or more three-dimensional materials matched with a scene keyword; the embodiment of the disclosure does not limit their number.
In one embodiment of the disclosure, according to the category of the scene keyword, different methods are adopted to determine the three-dimensional material matched with the scene keyword.
As shown in fig. 2, as a specific embodiment of S103, when the category to which the scene keyword belongs is a place category, the determining a three-dimensional material matched with the scene keyword according to the category to which the scene keyword belongs mainly includes steps S201 to S202.
S201, in response to determining that the category to which the scene keyword belongs is the place category, determining the objects matched with the scene keyword according to a preset information completion rule, and determining the names of those objects as the material names corresponding to the scene keyword.
The preset information completion rules can be set according to common-sense knowledge and are stored in the database. For example: a campus includes teaching buildings and students, and a river includes a bridge.
In the embodiment of the present disclosure, the scene keyword is used as a keyword to query the database, and the queried objects are taken as the objects matched with the scene keyword. For example: for the scene keyword "campus", the matched objects include "playground", "teaching building", "student", and so on. The names of the matched objects are taken as the material names corresponding to the scene keyword.
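A minimal sketch of such a completion-rule table follows; the rule contents are the examples above, while the structure and names are assumptions:

    # Illustrative sketch: map a place keyword to objects commonly found there;
    # the object names become material names.
    COMPLETION_RULES = {
        "campus": ["playground", "teaching building", "student"],
        "river": ["bridge"],
    }

    def complete_place(keyword: str) -> list[str]:
        return COMPLETION_RULES.get(keyword, [])

    print(complete_place("campus"))
    # ['playground', 'teaching building', 'student']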
S202, taking the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
In one embodiment of the disclosure, a three-dimensional model corresponding to a material name is acquired from a pre-established three-dimensional model library based on the material name.
In the embodiment of the present disclosure, the three-dimensional model library includes a plurality of three-dimensional stereo models, and each three-dimensional model has a corresponding model name. The three-dimensional model corresponding to a material name may be a three-dimensional model whose model name is identical to the material name, or one whose model name belongs to the same class. Further, a three-dimensional model refers to a three-dimensional representation of an object in three-dimensional space.
Further, the material name is used as a keyword to query the pre-built three-dimensional model library; if a queried model name is consistent with the object name, the three-dimensional model corresponding to that model name is determined as the three-dimensional model corresponding to the object name.
In the embodiment of the present disclosure, the query in the three-dimensional model library may use an exact search mode or a fuzzy search mode; this embodiment is not particularly limited. For example: for the material name "playground", "playground" is used as the keyword to query the three-dimensional model library, and if a model named "playground" is found, the corresponding three-dimensional model is used as the three-dimensional model for "playground". For the material name "teaching building", "teaching building" is used as the keyword to query the library; if a model named "teaching building" is found, its three-dimensional model is used as the three-dimensional model for "teaching building". If no such model name is found, "teaching building" can be relaxed by fuzzy processing, "building" is used as the keyword to query the library instead, and the queried three-dimensional model is used as the three-dimensional model corresponding to the material name "teaching building".
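The exact-then-fuzzy model lookup can be sketched as below; MODEL_LIBRARY and the relaxation rule (dropping leading words, e.g. "teaching building" to "building") are assumptions for illustration:

    # Illustrative sketch of exact search followed by fuzzy search in a
    # hypothetical three-dimensional model library.
    MODEL_LIBRARY = {
        "playground": "mesh_playground.obj",
        "building": "mesh_building.obj",
    }

    def find_model(material_name: str):
        # Exact search first.
        if material_name in MODEL_LIBRARY:
            return MODEL_LIBRARY[material_name]
        # Fuzzy search: relax the name by dropping leading words.
        words = material_name.split()
        for i in range(1, len(words)):
            relaxed = " ".join(words[i:])
            if relaxed in MODEL_LIBRARY:
                return MODEL_LIBRARY[relaxed]
        return None

    print(find_model("teaching building"))  # mesh_building.obj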
As shown in fig. 3, as a specific implementation manner of S103, when the category to which the scene keyword belongs is an environment category, the determining a three-dimensional material matched with the scene keyword according to the category to which the scene keyword belongs mainly includes steps S301 to S303.
S301, in response to determining that the category to which the scene keyword belongs is the environment category, determining the environment information matched with the scene keyword according to a preset information completion rule.
The preset information completion rules can be set according to common-sense knowledge. For example: the autumn sky is blue, the evening sky has a sunset, and the sky has raindrops on rainy days.
In the embodiment of the present disclosure, the scene keyword is used as a keyword to query the database, and the queried result is taken as the environment information matched with the scene keyword. For example: for the scene keyword "autumn", the matched environment information includes trees, a weather type, lighting data, cloud mask data, and the like.
In an embodiment of the present disclosure, the environment information includes sky information and/or surface information. The sky information may include a weather type and weather parameters, where the weather type may be any one of cloudy, sunny, rainy, windy, hail, and the like. The weather parameters may be parameters that express additional elements in the sky, such as: lighting data, raindrop data, snowfall data, fog data, and/or cloud mask data. The surface information may include terrain parameters and texture information, where the terrain parameters may be plain, lake, mountain, desert, and the like, and the texture information may be lawn, water surface, sand, and the like.
It should be noted that the environment information determined according to the preset information completion rule may also include objects, for example: the environment information for autumn includes trees. In this case, the object name is used as a material name, and the three-dimensional model corresponding to the material name is used as a three-dimensional material matched with the scene keyword. Reference may be made to the description in the above embodiments; the embodiment of the present disclosure is not limited in this respect.
S302, adding the environment information to a preset position of a three-dimensional scene editor so that the three-dimensional scene editor can set an environment effect corresponding to the environment information in a three-dimensional scene.
In one embodiment of the present disclosure, the environment information is input into a three-dimensional scene editor, and the three-dimensional scene editor automatically sets an environmental effect of the three-dimensional scene based on the received environment information.
Specifically, the environment information is added to preset positions of the scene editor, for example, the weather type, the weather parameter, the terrain parameter and the texture information are input to the corresponding positions of the scene editor, and the three-dimensional scene editor automatically sets the environment effect of the three-dimensional scene based on the received environment information.
In one embodiment of the present disclosure, the setting, by the three-dimensional scene editor, an environmental effect corresponding to the environmental information in a three-dimensional scene includes: acquiring a target sky box background matched with the weather type; rendering the target sky box background into the three-dimensional scene; acquiring weather materials matched with the weather parameters; and adding the weather material into a three-dimensional scene with the rendered target sky box background to obtain an environment effect corresponding to the sky information.
The weather material is a 3D material and may be static or dynamic; the embodiment of the present disclosure does not limit this. Weather materials are created as follows: create the weather material with three-dimensional animation software, customize its weather configuration parameters, and store the weather material together with the corresponding weather parameters in a weather database.
In the embodiment of the present disclosure, the sky box background is first set to sky blue; when the sky color in the environment information is black, the sky box background is changed to a night sky.
Furthermore, the weather types and their corresponding sky box backgrounds are preset and stored in the weather database. The weather database is queried by weather type, and the queried sky box background is taken as the target sky box background. Sky box backgrounds can be set with three-dimensional software, for example: the sky box background corresponding to a cloudy day has lower brightness, and the one corresponding to a sunny day has higher brightness.
In the embodiment of the disclosure, after the background of the target sky box is obtained, the background of the target sky box is rendered into the three-dimensional scene, the weather parameter is queried in a weather database, a weather material corresponding to the weather parameter is obtained, and the weather material is added to the three-dimensional scene with the background of the target sky box rendered, so as to obtain an environmental effect corresponding to the sky information. For example: lighting information, cloud mask data, etc. are added to the target sky box background.
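A minimal sketch of this sky-effect flow is given below; the two databases and the Scene object are hypothetical stand-ins for the three-dimensional scene editor's internals:

    # Illustrative sketch: pick a sky box background by weather type, then add
    # weather materials for each weather parameter.
    SKYBOX_DB = {"sunny": "skybox_bright", "cloudy": "skybox_dim", "night": "skybox_night"}
    WEATHER_MATERIAL_DB = {"raindrops": "fx_rain", "cloud_cover": "fx_clouds"}

    class Scene:
        # Minimal stand-in for the editor's scene object.
        def render_skybox(self, name: str) -> None:
            print(f"render target sky box background: {name}")

        def add_material(self, name: str, value) -> None:
            print(f"add weather material: {name} = {value}")

    def apply_sky_effect(scene: Scene, weather_type: str, weather_params: dict) -> None:
        # Render the target sky box background matched with the weather type.
        scene.render_skybox(SKYBOX_DB.get(weather_type, "skybox_bright"))
        # Add each matching weather material on top of the rendered sky box.
        for param, value in weather_params.items():
            material = WEATHER_MATERIAL_DB.get(param)
            if material is not None:
                scene.add_material(material, value)

    apply_sky_effect(Scene(), "cloudy", {"raindrops": True})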
In the disclosed embodiments, sky information is added to a three-dimensional scene to make the three-dimensional scene more realistic.
In one embodiment of the present disclosure, the setting, by the three-dimensional scene editor, an environmental effect corresponding to the environmental information in a three-dimensional scene includes: acquiring a pre-stored basic height field; processing the basic height field based on the terrain parameters to obtain a target height field matched with the terrain parameters; rendering the target height field into the three-dimensional scene; acquiring a texture map matched with the texture information; and adding the texture map into the three-dimensional scene with the rendered target height field to obtain an environmental effect corresponding to the surface information.
The base height field characterizes a base height distribution for each position in the three-dimensional scene area. In the embodiment of the present disclosure, all positions may share one base height; for example, the height value with the largest occurrence count (or probability) in actual three-dimensional scenes may be empirically determined as the base height of each position, which reduces the workload to some extent.
After the base height field is obtained, it can be processed according to the terrain of at least part of the scene area represented by the terrain parameters, yielding a height field that represents the terrain height in the three-dimensional scene. The base height field is processed differently for different terrains.
For example, the portion of the base height field corresponding to lake terrain may be processed to excavate surface channels (i.e., the terrain height is reduced), and the portion corresponding to mountain terrain may be raised into ridges to generate mountains (i.e., the terrain height is increased).
The texture information mainly includes one or more of: sand covering texture, grass covering texture, snow covering texture, mountain covering texture, river covering texture, and road covering texture. Texture maps corresponding to the various textures are designed in advance with three-dimensional software and stored in a surface database.
After the target height field is obtained, it is rendered into the three-dimensional scene, and the surface database is queried with the extracted texture information to obtain the corresponding texture map. The acquired texture map is added to the corresponding position in the three-dimensional scene to set the surface environment effect.
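A minimal terrain sketch follows; the base height, the numeric offsets, and the database contents are assumptions for illustration:

    # Illustrative sketch: start from a base height field, lower it for lake
    # terrain and raise it for mountain terrain, then look up a texture map.
    BASE_HEIGHT = 10.0  # assumed most frequent height in the scene area

    def build_height_field(size: int, terrain: str) -> list:
        field = [[BASE_HEIGHT] * size for _ in range(size)]
        for row in field:
            for x in range(size // 4, size // 2):
                if terrain == "lake":
                    row[x] = BASE_HEIGHT - 5.0   # excavate: reduce terrain height
                elif terrain == "mountain":
                    row[x] = BASE_HEIGHT + 20.0  # raise a ridge: increase height
        return field

    TEXTURE_DB = {"grass": "tex_grass.png", "water": "tex_water.png"}
    target_field = build_height_field(8, "lake")
    texture_map = TEXTURE_DB.get("water")  # texture map matched with texture info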
In the embodiment of the disclosure, an environment effect corresponding to the environment information extracted from the text information is set in the three-dimensional scene, which improves the quality of the three-dimensional scene.
And S303, taking the environment effect corresponding to the environment information as a three-dimensional material matched with the scene keyword.
As shown in fig. 4, as a specific implementation manner of S103, when the category to which the scene keyword belongs is an object category or a character category, the determining a three-dimensional material matched with the scene keyword according to the category to which the scene keyword belongs mainly includes steps S401 to S402.
S401, in response to determining that the category to which the scene keyword belongs is the object category or the character category, taking the scene keyword as the corresponding material name.
In the embodiment of the present disclosure, when the scene keyword belongs to the object category, the scene keyword is used directly as the corresponding material name, for example: for the scene keyword "car", the corresponding material name is "car". When the scene keyword belongs to the character category, the scene keyword is likewise used directly as the corresponding material name, for example: for the scene keyword "girl", the corresponding material name is "girl"; for the scene keyword "Orychophragmus violaceus", the corresponding material name is "Orychophragmus violaceus".
S402, taking the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
Step S402 has the same execution flow as step S202 in the foregoing embodiment; reference may be made to the description above, which is not repeated here.
And S104, constructing a three-dimensional scene corresponding to the text information based on the three-dimensional material.
Specifically, after the three-dimensional materials matched with the scene keywords are determined, the three-dimensional materials are combined based on the relative position relationships among them to build the three-dimensional scene.
In one embodiment of the present disclosure, constructing the three-dimensional scene corresponding to the text information based on the three-dimensional materials includes: determining the relative position relationships between the scene characters and/or scene objects; determining the position information of each three-dimensional material based on those relative position relationships; and placing each three-dimensional material at the position corresponding to its position information.
Specifically, the three-dimensional materials should cover the content of the text information as completely as possible; for example, the environment parameters include information such as season and weather, and the terrain parameters include information such as desert and plain. Beyond the information in the text, the relative position relationships between materials can also convey atmosphere, such as the relationship between characters and their environment. Furthermore, the position relationships must be within a reasonable range: for example, the sky should be above, the playground should be on the ground, and characters should stand on the ground rather than float in the air.
The relative position relationships between the scene characters and/or scene objects may be determined according to preset position rules, which are set based on common-sense knowledge. For example: a computer is placed on a desk, and a car travels on a road or is parked on the ground. In the embodiment of the present disclosure, the relative position of a scene character or scene object is determined according to the set position rules; for example, if the text information says a red flag flies in the wind, the red-flag model is placed at the top of the flagpole.
In one embodiment of the present disclosure, the relative position relationships between the scene characters and/or scene objects may also be determined from direction words or other implicit words in the text information. For example: if the text information says the football is at the child's feet, the three-dimensional material corresponding to the football is determined to be at the feet of the three-dimensional material corresponding to the child.
In one embodiment of the present disclosure, the three-dimensional materials related to the ground may be placed first, for example, and the position information of the remaining three-dimensional materials is then determined from the positions of the placed materials and the relative position relationships. For example, the relative position relationship between the child and the playground is that the child runs on the playground, so a position on the playground is selected as the child's position information, and the three-dimensional material corresponding to the child is placed at the position corresponding to that position information.
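A minimal placement sketch is given below; the rule table, the coordinate convention, and the material records are assumptions for illustration:

    # Illustrative sketch: place a material relative to an anchor material
    # according to a preset position rule.
    POSITION_RULES = {
        ("computer", "desk"): "on_top",
        ("red flag", "flagpole"): "on_top",
        ("football", "child"): "at_feet",
    }

    def place(materials: dict, subject: str, anchor: str) -> None:
        rule = POSITION_RULES.get((subject, anchor))
        ax, ay, az = materials[anchor]["position"]
        if rule == "on_top":
            # Place the subject on top of the anchor object (y is up).
            materials[subject]["position"] = (ax, ay + materials[anchor]["height"], az)
        elif rule == "at_feet":
            # Place the subject on the ground beside the anchor.
            materials[subject]["position"] = (ax + 0.3, ay, az)

    materials = {
        "flagpole": {"position": (0.0, 0.0, 0.0), "height": 8.0},
        "red flag": {"position": (0.0, 0.0, 0.0), "height": 1.0},
    }
    place(materials, "red flag", "flagpole")
    print(materials["red flag"]["position"])  # (0.0, 8.0, 0.0)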
On the basis of the foregoing embodiments, the embodiment of the present disclosure further optimizes the scene construction method. After the three-dimensional scene corresponding to the text information is constructed based on the three-dimensional materials, the optimized method further includes the following steps: in response to an operation on a three-dimensional material in the three-dimensional scene, acquiring the adjustment information corresponding to the three-dimensional material; and modifying the rendering effect of the three-dimensional material in the three-dimensional scene based on the adjustment information.
In the embodiment of the present disclosure, after the three-dimensional scene corresponding to the text information is constructed, the constructed three-dimensional scene is displayed in a scene preview interface of a three-dimensional scene editor, and the three-dimensional scene editor further includes a parameter editing interface.
In this embodiment, in response to a user's trigger operation on a three-dimensional material in the three-dimensional scene, that material is determined as the selected three-dimensional material. Alternatively, in response to the user's trigger operation on a scene parameter in the parameter editing interface, the three-dimensional material corresponding to the trigger operation is determined as the selected three-dimensional material.
In one embodiment of the disclosure, in response to the user's input operation, the input adjustment information is received, and the rendering effect of the three-dimensional material in the three-dimensional scene is modified based on the adjustment information. For example, the selected three-dimensional model is model 4, and the input parameter is a 3 cm shift to the left; model 4 is then moved 3 cm to the left in the three-dimensional space based on the input parameter.
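The adjustment step can be sketched as follows; the material record, the operation encoding, and the axis convention are assumptions for illustration:

    # Illustrative sketch: apply user adjustment information (e.g. "move the
    # selected model 3 cm to the left") to the selected three-dimensional material.
    def apply_adjustment(material: dict, adjustment: dict) -> None:
        x, y, z = material["position"]
        if adjustment.get("op") == "move_left":
            # Shift along the negative x axis by the given distance in metres.
            x -= adjustment["distance"]
        material["position"] = (x, y, z)
        # A real editor would now re-render the scene with the new transform.

    model_4 = {"position": (1.0, 0.0, 2.0)}
    apply_adjustment(model_4, {"op": "move_left", "distance": 0.03})  # 3 cm
    print(model_4["position"])  # (0.97, 0.0, 2.0)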
In the embodiment of the disclosure, after the three-dimensional scene is automatically constructed from the text information, the scene can be adjusted according to the user's operations, allowing secondary creation and yielding a higher-quality three-dimensional scene.
Fig. 5 is a schematic structural diagram of a scene constructing apparatus in the embodiment of the present disclosure, which is applicable to a scene constructing situation, the scene constructing apparatus may be implemented in a software and/or hardware manner, and the scene constructing apparatus may be configured in an electronic device.
As shown in fig. 5, a scene constructing apparatus 50 provided in the embodiment of the present disclosure mainly includes: a scene keyword extraction module 51, an belonging category determination module 52, a three-dimensional material determination module 53 and a three-dimensional scene construction module 54.
The scene keyword extraction module 51 is configured to, in response to receiving text information, extract a scene keyword from the text information; the belonging category determining module 52 is configured to determine a belonging category of the scene keyword; a three-dimensional material determining module 53, configured to determine, according to the category to which the scene keyword belongs, a three-dimensional material that matches the scene keyword; and a three-dimensional scene constructing module 54, configured to construct a three-dimensional scene corresponding to the text information based on the three-dimensional material.
In one possible embodiment, the belonging category determining module 52 includes: a part-of-speech tagging unit, configured to perform part-of-speech tagging on the scene keywords according to the part-of-speech tagging rules to obtain scene keywords with part-of-speech tags; and a belonging category determining unit, configured to input the scene keywords with part-of-speech tags into the pre-trained category recognition model to obtain the category to which each scene keyword belongs.
In one possible embodiment, the three-dimensional material determining module 53 includes: a first material name determining unit configured to determine an object matching the scene keyword according to a preset information completion rule in response to determining that the category to which the scene keyword belongs is a place category, and determine a name of the object as a material name corresponding to the scene keyword; and the three-dimensional material determining unit is used for taking the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
In one possible embodiment, the three-dimensional material determining module 53 includes: an environment information determining unit, configured to determine, in response to determining that the category to which the scene keyword belongs is the environment category, the environment information matched with the scene keyword according to a preset information completion rule; an environment effect setting unit, configured to add the environment information to a preset position of the three-dimensional scene editor so that the three-dimensional scene editor sets an environment effect corresponding to the environment information in the three-dimensional scene; and the three-dimensional material determining unit, further configured to take the environment effect corresponding to the environment information as the three-dimensional material matched with the scene keyword.
In one possible embodiment, the three-dimensional material determining module 53 includes: the material name determining unit, further configured to take the scene keyword as the corresponding material name in response to determining that the category to which the scene keyword belongs is the object category or the character category; and the three-dimensional material determining unit, further configured to take the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
In one possible embodiment, the three-dimensional materials include scene characters and/or scene objects, and the three-dimensional scene construction module 54 includes: a relative position relationship determining unit, configured to determine the relative position relationships between the scene characters and/or scene objects; a position information determining unit, configured to determine the position information of the three-dimensional materials based on the relative position relationships; and a three-dimensional material placing unit, configured to place each three-dimensional material at the position corresponding to its position information.
In one possible embodiment, the apparatus further includes: an adjustment information acquiring unit, configured to acquire, in response to an operation on a three-dimensional material in the three-dimensional scene, the adjustment information corresponding to the three-dimensional material; and a rendering effect modifying unit, configured to modify the rendering effect of the three-dimensional material in the three-dimensional scene based on the adjustment information.
The scene construction device provided in the embodiment of the present disclosure may perform the steps performed in the scene construction method provided in the embodiment of the present disclosure, and the steps and the beneficial effects are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring now specifically to fig. 6, a schematic structural diagram is shown that is suitable for use to implement an electronic device 600 in embodiments of the present disclosure. The electronic device 600 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), a wearable terminal device, and the like, and fixed terminals such as a digital TV, a desktop computer, a smart home device, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphic processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603 to implement the scene construction method of the embodiments as described in the present disclosure. In the RAM 603, various programs and data necessary for the operation of the terminal apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the terminal device 600 to perform wireless or wired communication with other devices to exchange data. While fig. 6 illustrates a terminal apparatus 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart, thereby implementing the scene construction method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: in response to receiving text information, extracting scene keywords from the text information; determining the category of the scene keyword; determining a three-dimensional material matched with the scene keyword according to the category of the scene keyword; and constructing a three-dimensional scene corresponding to the text information based on the three-dimensional materials.
Optionally, when the one or more programs are executed by the terminal device, the terminal device may further perform other steps described in the above embodiments.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, a technical solution may be formed by replacing the features described above with (but not limited to) features with similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A scene construction method, comprising:
in response to receiving text information, extracting scene keywords from the text information;
determining the category of the scene keyword;
determining a three-dimensional material matched with the scene keyword according to the category of the scene keyword;
and constructing a three-dimensional scene corresponding to the text information based on the three-dimensional materials.
2. The method of claim 1, wherein the determining the category to which the scene keyword belongs comprises:
performing part-of-speech tagging on the scene keyword according to a part-of-speech tagging rule to obtain the scene keyword with the part-of-speech tagging;
and inputting the scene keywords with the part-of-speech labels into a pre-trained category determination model to obtain the category of the scene keywords.
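As a concrete (non-limiting) picture of claim 2, the following minimal sketch assumes a stub part-of-speech tagging rule and a stand-in for the pre-trained category determination model; every name and rule is hypothetical, not claim language.

```python
# Illustrative sketch of claim 2; the tagger and the "model" are stubs.

def pos_tag(keyword: str) -> tuple[str, str]:
    # Stub part-of-speech tagging rule: a tiny lookup with a noun default.
    tags = {"sunny": "ADJ", "classroom": "NOUN", "teacher": "NOUN"}
    return keyword, tags.get(keyword, "NOUN")

class CategoryModel:
    """Stand-in for the pre-trained category determination model."""
    def predict(self, tagged_keyword: tuple[str, str]) -> str:
        word, tag = tagged_keyword
        if tag == "ADJ":
            return "environment"
        return "place" if word.endswith("room") else "object"

model = CategoryModel()
print(model.predict(pos_tag("classroom")))  # -> "place"
print(model.predict(pos_tag("sunny")))      # -> "environment"
```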
3. The method according to claim 1 or 2, wherein the determining the three-dimensional material matching the scene keyword according to the category of the scene keyword comprises:
in response to determining that the category to which the scene keyword belongs is a place category, determining an object matched with the scene keyword according to a preset information filling rule, and determining the name of the object as the material name corresponding to the scene keyword;
and taking the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
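One way such an "information filling rule" for place keywords could look, sketched with an invented lookup table and invented model paths (none of which appear in the claims):

```python
# Hypothetical filling rule: a place maps to the objects it typically
# contains, and each object name becomes a material name keyed to a 3D model.

PLACE_FILLING_RULE = {
    "classroom": ["desk", "chair", "blackboard"],
    "park": ["bench", "tree", "lamp"],
}

def materials_for_place(place_keyword: str) -> list[str]:
    objects = PLACE_FILLING_RULE.get(place_keyword, [])
    # Each object name doubles as a material name resolving to a 3D model.
    return [f"models/{name}.glb" for name in objects]

print(materials_for_place("classroom"))
# ['models/desk.glb', 'models/chair.glb', 'models/blackboard.glb']
```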
4. The method according to claim 1 or 2, wherein the determining the three-dimensional material matched with the scene keyword according to the category of the scene keyword comprises:
in response to determining that the category to which the scene keyword belongs is an environment category, determining environment information matched with the scene keyword according to a preset information filling rule;
adding the environment information to a preset position of a three-dimensional scene editor so that the three-dimensional scene editor sets an environment effect corresponding to the environment information in a three-dimensional scene;
and taking the environment effect corresponding to the environment information as the three-dimensional material matched with the scene keyword.
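A sketch of claim 4 under assumed names: an environment keyword expands into environment information, which a scene-editor stub applies as an environment effect. The editor API and the parameter values are invented for illustration.

```python
# Illustrative only; not the actual scene editor interface.

ENVIRONMENT_FILLING_RULE = {
    "sunny": {"light": "directional", "intensity": 1.2},
    "rainy": {"particles": "rain", "fog_density": 0.4},
}

class SceneEditorStub:
    def __init__(self):
        # Models the editor's "preset position" as a settings dictionary.
        self.environment: dict = {}

    def set_environment(self, info: dict) -> None:
        # The editor applies the effect corresponding to this information.
        self.environment.update(info)

editor = SceneEditorStub()
editor.set_environment(ENVIRONMENT_FILLING_RULE["sunny"])
print(editor.environment)  # {'light': 'directional', 'intensity': 1.2}
```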
5. The method according to claim 1 or 2, wherein the determining the three-dimensional material matching the scene keyword according to the category of the scene keyword comprises:
in response to determining that the category to which the scene keyword belongs is an object category or a role category, taking the scene keyword as a corresponding material name;
and taking the three-dimensional model corresponding to the material name as the three-dimensional material matched with the scene keyword.
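Claim 5 is the simplest branch: the keyword itself serves as the material name. A minimal sketch, with a purely illustrative directory layout:

```python
# For object and role keywords, the keyword doubles as the material name.

def material_for_keyword(keyword: str, category: str) -> str:
    if category not in {"object", "role"}:
        raise ValueError("this branch applies to object and role categories only")
    return f"models/{category}/{keyword}.glb"

print(material_for_keyword("teacher", "role"))  # models/role/teacher.glb
print(material_for_keyword("desk", "object"))   # models/object/desk.glb
```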
6. The method according to claim 1 or 2, wherein the three-dimensional material comprises scene roles and/or scene objects,
the constructing of the three-dimensional scene corresponding to the text information based on the three-dimensional material comprises:
determining a relative position relation between the scene roles and/or the scene objects;
determining the position information of the three-dimensional material based on the relative position relation;
and placing the three-dimensional material at a position corresponding to the position information.
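The placement step of claim 6 can be pictured as resolving relative position relations into coordinates. In the sketch below, the relations, anchor points, and offsets are all invented examples:

```python
# Resolve relative position relations between roles/objects into coordinates.

RELATIONS = [("teacher", "beside", "desk")]   # (subject, relation, anchor)
ANCHORS = {"desk": (0.0, 0.0, 0.0)}           # known object positions
OFFSETS = {"beside": (1.0, 0.0, 0.0),
           "behind": (0.0, 0.0, -1.0)}

def resolve_positions() -> dict:
    positions = dict(ANCHORS)
    for subject, relation, anchor in RELATIONS:
        ax, ay, az = positions[anchor]
        dx, dy, dz = OFFSETS[relation]
        # Place the subject relative to its anchor, then record its position.
        positions[subject] = (ax + dx, ay + dy, az + dz)
    return positions

print(resolve_positions())
# {'desk': (0.0, 0.0, 0.0), 'teacher': (1.0, 0.0, 0.0)}
```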
7. The method of claim 1 or 2, further comprising:
in response to an operation on the three-dimensional material in the three-dimensional scene, acquiring adjustment information corresponding to the three-dimensional material;
and modifying a rendering effect of the three-dimensional material in the three-dimensional scene based on the adjustment information.
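A sketch of claim 7: a user operation on a placed material yields adjustment information, which then modifies the material's rendering state. The field names and the state layout are assumptions, not disclosed details.

```python
# Apply adjustment information to a material's rendering parameters.

material_state = {"teacher": {"scale": 1.0, "rotation": 0.0}}

def apply_adjustment(material: str, adjustment: dict) -> None:
    # Merge the adjustment information into the rendering parameters.
    material_state[material].update(adjustment)

apply_adjustment("teacher", {"scale": 1.5})  # e.g., the user drags to enlarge
print(material_state["teacher"])             # {'scale': 1.5, 'rotation': 0.0}
```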
8. A scene building apparatus, comprising:
the scene keyword extraction module is used for extracting scene keywords from text information in response to receiving the text information;
the category determining module is used for determining the category to which the scene keyword belongs;
the three-dimensional material determining module is used for determining a three-dimensional material matched with the scene keyword according to the category of the scene keyword;
and the three-dimensional scene construction module is used for constructing a three-dimensional scene corresponding to the text information based on the three-dimensional material.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202210916555.5A 2022-08-01 2022-08-01 Scene construction method, device, equipment and storage medium Pending CN115168547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210916555.5A CN115168547A (en) 2022-08-01 2022-08-01 Scene construction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210916555.5A CN115168547A (en) 2022-08-01 2022-08-01 Scene construction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115168547A true CN115168547A (en) 2022-10-11

Family

ID=83477319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210916555.5A Pending CN115168547A (en) 2022-08-01 2022-08-01 Scene construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115168547A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115878867A (en) * 2023-02-22 2023-03-31 湖南视觉伟业智能科技有限公司 AI automatic virtual scene construction experience system and method based on the metaverse


Similar Documents

Publication Publication Date Title
CN109618222B (en) Splicing video generation method, device, terminal device and storage medium
CN108919944B (en) Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model
CN106845470B (en) Map data acquisition method and device
CN103003847A (en) Method and apparatus for rendering a location-based user interface
CN101909073A (en) Intelligent guide system, portable tour guide device and tour guide system
EP3996378A1 (en) Method and system for supporting sharing of experiences between users, and non-transitory computer-readable recording medium
CN109059901B (en) AR navigation method based on social application, storage medium and mobile terminal
CN108955715A (en) navigation video generation method, video navigation method and system
CN114125310B (en) Photographing method, terminal device and cloud server
CN115147554A (en) Three-dimensional scene construction method, device, equipment and storage medium
CN104281595A (en) Weather condition displaying method, device and system
CN115168547A (en) Scene construction method, device, equipment and storage medium
CN114004905B (en) Method, device, equipment and storage medium for generating character style pictogram
CN111696549A (en) Picture searching method and device, electronic equipment and storage medium
CN104572830A (en) Method and method for processing recommended shooting information
CN114140588A (en) Digital sand table creating method and device, electronic equipment and storage medium
CN111710017A (en) Display method and device and electronic equipment
CN117237511A (en) Cloud image processing method, cloud image processing device, computer and readable storage medium
CN115430142A (en) Game scene editing method, device, equipment and medium
CN116975170A (en) Map display method, map data generation method, map display device and electronic equipment
CN116883708A (en) Image classification method, device, electronic equipment and storage medium
CN113989404A (en) Picture processing method, device, equipment, storage medium and program product
CN107945201B (en) Video landscape processing method and device based on self-adaptive threshold segmentation
CN112036517A (en) Image defect classification method and device and electronic equipment
CN111127664A (en) Virtual reality method for realizing high-simulation gardens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination