CN115082627A - 3D stage model generation method and device, electronic equipment and readable storage medium - Google Patents

3D stage model generation method and device, electronic equipment and readable storage medium

Info

Publication number
CN115082627A
Authority
CN
China
Prior art keywords
stage
model
virtual
basic
stage object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210866180.6A
Other languages
Chinese (zh)
Inventor
欧阳霁
马牧野
肖征宇
鲁建福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Mango Vision Technology Co ltd
Hunan Mango Wuji Technology Co ltd
Original Assignee
Hunan Mango Vision Technology Co ltd
Hunan Mango Wuji Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Mango Vision Technology Co ltd, Hunan Mango Wuji Technology Co ltd filed Critical Hunan Mango Vision Technology Co ltd
Priority to CN202210866180.6A
Publication of CN115082627A

Classifications

    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N 3/02, G06N 3/08 - Neural networks; learning methods
    • G06V 10/764 - Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82 - Image or video recognition or understanding using neural networks
    • G06V 2201/07 - Indexing scheme: target detection


Abstract

The embodiment of the invention discloses a 3D stage model generation method and device, electronic equipment and a readable storage medium, wherein the method comprises the following steps: acquiring a stage art design image; identifying the stage art design image according to a target stage object identification model to obtain a plurality of stage objects; generating a basic virtual stage design scheme according to the stage objects and their corresponding configuration parameters; and restoring all the stage objects in the virtual stage according to the basic virtual stage design scheme and a stage object model library to obtain a virtual stage basic model. Through the construction of the stage object identification model and the stage object model library, the restored model of each stage object in the virtual stage can be obtained directly from the stage art design image, so that the construction efficiency of the virtual stage model is effectively improved.

Description

3D stage model generation method and device, electronic equipment and readable storage medium
Technical Field
The invention relates to the technical field of virtual stages, in particular to a 3D stage model generation method and device, electronic equipment and a computer-readable storage medium.
Background
Stage modeling is a key link in virtual stage rehearsal technology; by carrying out stage art design in advance in a virtual stage, a large amount of stage construction cost can be saved.
In the prior art, construction of a virtual stage model usually requires a large amount of stage modeling work and stage rendering work. For each virtual stage model construction, stage designers are required to repeatedly carry out simulation design of the real scene, which is time-consuming and labor-intensive.
Therefore, a fast and efficient virtual stage model generation scheme is needed.
Disclosure of Invention
In order to solve the above technical problem, an embodiment of the present application provides a method and an apparatus for generating a 3D stage model, an electronic device, and a readable storage medium, and the specific scheme is as follows:
in a first aspect, an embodiment of the present application provides a 3D stage model generation method, including:
acquiring a stage art design image;
identifying the stage art design image according to a target stage object identification model to obtain a plurality of stage objects;
generating a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
and restoring all the stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library to obtain a virtual stage basic model.
According to a specific implementation manner of the embodiment of the application, the step of constructing the target stage object recognition model includes:
acquiring a stage object training data set, wherein the stage object training data set comprises training data corresponding to multiple types of stage objects;
performing model training on the stage object training data set according to a preset image recognition algorithm to obtain a basic stage object recognition model;
and updating the basic stage object identification model according to the model adjustment parameters to obtain the target stage object identification model.
According to a specific implementation manner of an embodiment of the present application, the step of obtaining a stage object training data set includes:
acquiring a basic image dataset of a stage object;
classifying the basic image data set according to the type of the stage object to obtain basic image data subsets corresponding to different types of stage objects;
associating corresponding configuration parameters for each basic image data subset to obtain multiple types of stage object training data;
and integrating various types of stage object training data to obtain the stage object training data set.
According to a specific implementation manner of the embodiment of the present application, the preset image recognition algorithm is a Faster R-CNN algorithm.
According to a specific implementation manner of the embodiment of the application, the step of constructing the stage object model library includes:
identifying three-dimensional model association information of a plurality of stage objects based on a preset 3D digital tool, wherein the three-dimensional model association information comprises a three-dimensional space origin coordinate, a three-dimensional space coordinate orientation and a model three-dimensional space coordinate;
and storing all the three-dimensional model associated information in a model database in a classified manner according to the stage object type to obtain the stage object model database.
According to a specific implementation manner of the embodiment of the present application, the step of restoring all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library to obtain a virtual stage basic model includes:
calculating restoration parameters of the corresponding stage according to preset shooting parameters of the stage object and a similarity transformation algorithm;
traversing all the stage objects to be restored in the basic virtual stage design scheme, and acquiring restoration parameters, type information and configuration parameters of the stage objects to be restored;
restoring a target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and the configuration parameters of the stage object to be restored and the three-dimensional model association information in the stage object model library, wherein the target three-dimensional model is a three-dimensional model with a real size;
and importing all the target three-dimensional models into a virtual preview system to obtain the virtual stage basic model.
According to a specific implementation manner of the embodiment of the present application, the step of restoring the target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and the configuration parameters of the stage object to be restored, and the three-dimensional model association information in the stage object model library includes:
inquiring corresponding target three-dimensional model association information in the stage object model library according to the type information of the stage object to be restored;
processing the target three-dimensional model association information according to a preset restoration algorithm and the restoration parameters to obtain a basic three-dimensional model of the stage object to be restored;
and adjusting the basic three-dimensional model according to the configuration parameters of the stage object to be restored to obtain a target three-dimensional model of the stage object to be restored.
In a second aspect, an embodiment of the present application provides a 3D stage model generation apparatus, including:
the design image acquisition module is used for acquiring a stage art design image;
the stage object identification module is used for identifying the stage art design image according to a target stage object identification model so as to obtain a plurality of stage objects;
the stage scheme generating module is used for generating a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
and the stage object reduction module is used for reducing all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library so as to obtain a virtual stage basic model.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, wherein the memory stores a computer program that, when executed on the processor, performs the 3D stage model generation method according to the first aspect or any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed on a processor, performs the 3D stage model generation method according to the first aspect or any implementation manner of the first aspect.
The embodiment of the application provides a 3D stage model generation method and device, electronic equipment and a readable storage medium, wherein the method comprises the following steps: acquiring a stage art design image; identifying the stage art design image according to a target stage object identification model to obtain a plurality of stage objects; generating a basic virtual stage design scheme according to the stage objects and their corresponding configuration parameters; and restoring all the stage objects in the virtual stage according to the basic virtual stage design scheme and a stage object model library to obtain a virtual stage basic model. Through the construction of the stage object identification model and the stage object model library, the restored model of each stage object in the virtual stage can be obtained directly from the stage art design image, so that the construction efficiency of the virtual stage model is effectively improved.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention. Like components are numbered similarly in the various figures.
Fig. 1 illustrates a method flow diagram of a 3D stage model generation method provided in an embodiment of the present application;
fig. 2 is an interactive flow diagram illustrating a 3D stage model generation method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating the principle of a preset restoration algorithm of a 3D stage model generation method according to an embodiment of the present application;
fig. 4 shows a schematic device module diagram of a 3D stage model generation device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Hereinafter, the terms "including", "having", and their derivatives, as used in various embodiments of the present invention, are only intended to indicate specific features, numbers, steps, operations, elements, components, or combinations of the foregoing, and should not be construed as excluding the existence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the present invention belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in various embodiments of the present invention.
Referring to fig. 1, a method flow diagram of a 3D stage model generation method provided in an embodiment of the present application is shown, and as shown in fig. 1, the 3D stage model generation method provided in the embodiment of the present application includes:
step S101, obtaining a stage art design image;
in a specific embodiment, the stage art design image is image data obtained by laying out each stage device according to stage requirements. For example, the stage art design image may be a live stage photograph taken after the layout is completed, or a stage effect drawing designed digitally on an electronic device; the specific form of the stage art design image is not limited here.
The user inputs the stage art design image into the virtual stage rehearsal system where the 3D stage model generation device is located, so that the device executes the subsequent model generation steps according to the stage art design image.
Step S102, identifying the stage art design image according to a target stage object identification model to obtain a plurality of stage objects;
in a specific embodiment, the stage object recognition model is a deep learning model, and is configured to process a preset stage art design image according to a preset image recognition algorithm to obtain all stage objects in the stage art design image.
Specifically, the stage objects include stage devices such as stage lamps, stage scenery, cameras, stage props and stage wire rigs. The present embodiment does not specifically limit the types of the stage objects.
The image recognition algorithm may be a recognition algorithm supported by various types of computer image recognition technologies, such as an image classification recognition technology, an object detection recognition technology, a semantic segmentation recognition technology, an instance segmentation recognition technology, and a panorama segmentation recognition technology.
The image classification and identification technology is used for identifying the type of each detection object in the image; the target detection and identification technology is used for identifying which detection objects are included in the image and a rectangular area occupied by each detection object in the image; the semantic segmentation recognition technology is used for marking pixel points occupied by different types of detection objects in the image by different colors; the example segmentation identification technology is used for marking pixel points occupied by different individuals of the same kind of detection object in the image by different colors; the panorama segmentation technology is used for detecting and segmenting all detection objects contained in an image.
It should be noted that the preset image recognition algorithm in this embodiment can recognize the type of each stage object in the preset stage art design image and the area size occupied by each stage object in the image.
Preferably, the preset image recognition algorithm is a Faster R-CNN algorithm.
Specifically, the Faster R-CNN algorithm is an image recognition algorithm based on a target detection recognition technology, and in the process of recognizing the stage objects, the Faster R-CNN algorithm determines the area occupied by each stage object in the image, and then determines which type of stage object the objects in the area belong to.
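As a concrete illustration of this detection step, the sketch below post-processes the kind of output a Faster R-CNN detector (for example, torchvision's fasterrcnn_resnet50_fpn) produces, namely boxes, class labels and confidence scores, into stage-object records. The label names and threshold are assumptions for illustration, not the patent's actual configuration.

```python
# Hypothetical post-processing of Faster R-CNN detector output.
# A detector such as torchvision's fasterrcnn_resnet50_fpn returns,
# per image, boxes [x1, y1, x2, y2], integer class labels, and
# confidence scores; the label-name mapping below is an assumption.

LABEL_NAMES = {1: "spotlight", 2: "camera", 3: "stage-prop"}

def extract_stage_objects(boxes, labels, scores, threshold=0.5):
    """Keep detections above the confidence threshold and convert them
    into stage-object records (type + occupied rectangular area)."""
    objects = []
    for box, label, score in zip(boxes, labels, scores):
        if score < threshold:
            continue  # discard low-confidence detections
        x1, y1, x2, y2 = box
        objects.append({
            "category": LABEL_NAMES.get(label, "unknown"),
            "area": {"x": x1, "y": y1, "width": x2 - x1, "height": y2 - y1},
            "score": score,
        })
    return objects

detected = extract_stage_objects(
    boxes=[[10, 20, 110, 220], [300, 40, 360, 100]],
    labels=[1, 2],
    scores=[0.92, 0.31],  # the second detection falls below the threshold
)
```

Only the high-confidence spotlight detection survives; its record carries both the stage object type and the rectangular area it occupies in the image, which is exactly the information the later design-scheme generation step consumes.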
According to a specific implementation manner of the embodiment of the application, the step of constructing the target stage object recognition model includes:
acquiring a stage object training data set, wherein the stage object training data set comprises training data corresponding to multiple types of stage objects;
performing model training on the stage object training data set according to a preset image recognition algorithm to obtain a basic stage object recognition model;
and updating the basic stage object identification model according to the model adjustment parameters to obtain the target stage object identification model.
In a specific embodiment, as shown in fig. 2, the 3D stage model generation apparatus includes a stage object training module, and the stage object training module is configured to perform model training according to a stage object training data set input by a user to obtain a target stage object recognition model.
Specifically, during model training, new stage object training data or model adjustment parameters can be added to the stage object training module in real time according to the training condition of the basic stage object recognition model; after receiving the adjustment parameters, the stage object training module retrains the basic stage object recognition model until a target stage object recognition model meeting the user's prediction purpose is obtained.
A stage object recognition model meeting the user's prediction purpose is one that can completely recognize, in the stage art design image, each stage object designed by the user and the area it occupies.
According to a specific implementation manner of an embodiment of the present application, the step of obtaining a stage object training data set includes:
acquiring a basic image dataset of a stage object;
classifying the basic image data set according to the stage object types to obtain basic image data subsets corresponding to different types of stage objects;
associating corresponding configuration parameters for each basic image data subset to obtain multiple types of stage object training data;
and integrating various types of stage object training data to obtain the stage object training data set.
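The four dataset-construction steps above can be sketched as follows; all field names and the data layout are assumptions for illustration, not the patent's actual format.

```python
# Illustrative sketch of the dataset-construction steps: classify the
# basic image data by stage object type, associate configuration
# parameters with each subset, then integrate everything into one set.

def build_training_set(base_images, config_params):
    """base_images: list of (image_url, category) pairs;
    config_params: dict mapping category -> configuration parameters."""
    # Classify the basic image data set by stage object type.
    subsets = {}
    for url, category in base_images:
        subsets.setdefault(category, []).append(url)
    # Associate configuration parameters with each subset and integrate
    # the per-type training data into a single training data set.
    return [
        {"category": cat, "images": urls, "config": config_params.get(cat, {})}
        for cat, urls in subsets.items()
    ]

training_set = build_training_set(
    base_images=[("img1.png", "spotlight"), ("img2.png", "spotlight"),
                 ("img3.png", "camera")],
    config_params={"spotlight": {"direction": "backdown-80"}},
)
```

Each resulting entry corresponds to one stage object classification and pairs a group of pictures with its marking information, mirroring the structure described below.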
In a specific embodiment, the basic image data may be obtained directly from the Internet, or each stage object in the actual application scene may be shot or scanned with a camera or a 3D scanner; this embodiment does not specifically limit the manner of obtaining the basic image data.
Specifically, one stage object training data set is composed of a plurality of stage object training data, wherein one stage object training data corresponds to one stage object classification, and one stage object training data is composed of a group of pictures and a group of mark information.
The marking information includes a stage object type to which the stage object training data belongs and a configuration parameter of the stage object.
The configuration parameters include a setting position, a setting orientation, and control information of the stage object.
It should be noted that the configuration parameters of different stage objects can be customized according to the use requirements of the actual application scenario; the content of the configuration parameters is not uniquely limited here.
Alternatively, the orientation of the stage object may be marked using a horizontal direction and a vertical direction, wherein the horizontal direction includes front left, front right, back left, back right, and the like, and the vertical direction includes front upper, front lower, back upper, back lower, and the like.
Different types of stage objects have different orientation requirements. For example, the proscenium (stage-opening) object only needs attention to inclination angles within 45 degrees to the left and right of straight ahead in the horizontal direction, while a lamp object needs training data covering 60 degrees front-to-back and left-to-right in the horizontal direction and 45 degrees up-and-down in the vertical direction.
The training data set can be defined using an XML-like markup format, as shown in the following example:
<training-data-set>
<stage-object>
<images>
<image>image_url1</image>
<image>image_url2</image>
<image>...</image>
......
</images>
<annotation>
<category>spotlight-huaneng</category>
<direction>backdown-80-backright-30</direction>
</annotation>
</stage-object>
<stage-object>...</stage-object>
<stage-object>...</stage-object>
......
</training-data-set>
in the above example, a spotlight object classified as "spotlight-huaneng" is defined, oriented 80 degrees back and down in the vertical direction and 30 degrees back and right in the horizontal direction.
In an actual application process, the configuration parameters of different stage objects may be the same or different, and this embodiment is not specifically limited to this.
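For illustration, a small parser for the compound orientation label used in the sample above ("backdown-80-backright-30") might look as follows; the exact encoding is an assumption inferred from the sample data.

```python
# Parses the compound orientation label from the training-data example,
# e.g. "backdown-80-backright-30" -> vertical (backdown, 80 degrees)
# and horizontal (backright, 30 degrees). The encoding is an assumption.

def parse_direction(label):
    parts = label.split("-")
    if len(parts) != 4:
        raise ValueError("expected '<vert>-<deg>-<horiz>-<deg>'")
    return {
        "vertical": (parts[0], int(parts[1])),
        "horizontal": (parts[2], int(parts[3])),
    }

orientation = parse_direction("backdown-80-backright-30")
```

Keeping the vertical and horizontal components separate matches the marking scheme described above, where the two directions are annotated independently.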
Step S103, generating a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
in a specific embodiment, after the stage art design image is processed based on the stage object recognition model, setting areas, object types, and configuration parameters of a plurality of stage objects can be obtained.
The 3D stage model generation device may generate a basic virtual stage design scheme in an XML-like markup format according to the recognition result, as shown in the following example:
<virtual-stage-draft>
<data>...</data>
<instantiation>
<param name="origin"/>
<param name="direction"/>
</instantiation>
<stage-objects>
<stage-object>
<category>spotlight-huaneng</category>
<pic-height>500</pic-height>
<pic-width>300</pic-width>
</stage-object>
<stage-object>...</stage-object>
<stage-object>...</stage-object>
......
</stage-objects>
</virtual-stage-draft>
in the above embodiment, one basic virtual stage scheme < virtual-stage-draft > having a plurality of stage objects </stage-object > is generated. And each stage object has type information such as spotlight-huang and configuration parameters.
And S104, restoring all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library to obtain a virtual stage basic model.
In a specific embodiment, the 3D stage model generating device reads the basic virtual stage design scheme to obtain all stage objects to be restored. And acquiring three-dimensional model associated information corresponding to each stage object to be restored from the stage object model library, and processing the three-dimensional model associated information according to a preset model restoration algorithm to obtain a three-dimensional model of each stage object to be restored.
Specifically, according to the coordinate information of the three-dimensional model of each stage object to be restored, the three-dimensional model of each stage object may be added to a preset virtual stage to form a virtual stage base model with all the stage objects.
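The assembly step described above, adding each restored three-dimensional model to the virtual stage at its coordinate, can be sketched as follows; the field names are illustrative assumptions, not the patent's actual data format.

```python
# Minimal sketch of assembling the virtual stage base model: each
# restored three-dimensional model is placed into the virtual stage at
# its restored origin coordinate, with its setting orientation.

def assemble_virtual_stage(restored_models):
    """restored_models: list of dicts with 'category', 'origin' (x, y, z)
    and 'direction'; returns a virtual stage base model holding them all."""
    stage = {"objects": []}
    for model in restored_models:
        stage["objects"].append({
            "category": model["category"],
            "origin": tuple(model["origin"]),      # placement coordinate
            "direction": model.get("direction"),   # setting orientation
        })
    return stage

base_model = assemble_virtual_stage([
    {"category": "spotlight-huaneng", "origin": (1.5, 4.0, 6.2),
     "direction": "backdown-80-backright-30"},
])
```

The resulting base model is what a virtual preview system would load for stage preview processing.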
The user can perform stage preview processing according to the virtual stage base model, and the embodiment does not specifically limit the stage preview processing performed by using the virtual stage base model.
According to a specific implementation manner of the embodiment of the application, the step of constructing the stage object model library includes:
identifying three-dimensional model associated information of a plurality of stage objects based on a preset 3D digital tool, wherein the three-dimensional model associated information comprises a three-dimensional space origin coordinate, a three-dimensional space coordinate orientation and a model three-dimensional space coordinate;
and storing all the three-dimensional model associated information in a model database in a classified manner according to the stage object type to obtain the stage object model database.
In this embodiment, the three-dimensional model association information of a stage object can be obtained directly by scanning the actual stage object with a 3D scanner; alternatively, two-dimensional images of the actual stage object at multiple angles can be shot with a camera, and three-dimensional analysis carried out on these multi-angle two-dimensional images to obtain the three-dimensional model association information of the stage object.
After the two-dimensional image data of a plurality of stage objects are obtained, analysis and calculation are carried out according to a preset imaging principle, so that the three-dimensional space origin coordinate, the three-dimensional space coordinate orientation and the model three-dimensional space coordinates of each stage object can be calculated. From the three-dimensional model association information, the specific position, setting orientation and occupied spatial area size of each stage object in the virtual stage can be obtained.
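One possible reading of the "shooting parameters plus similarity transformation" computation is the pinhole-camera similar-triangle relation, where real size equals image size times distance divided by focal length; the sketch below uses hypothetical parameter values and is an interpretation, not the patent's stated formula.

```python
# Assumed interpretation of the similarity-transformation restoration:
# under a pinhole camera model, similar triangles give
#   real_size = image_size * distance / focal_length.
# All parameter values below are hypothetical.

def restore_real_size(pixel_size, pixel_pitch_mm, distance_mm, focal_mm):
    """pixel_size: object extent in pixels; pixel_pitch_mm: sensor pixel
    pitch in mm per pixel; distance_mm: camera-to-object distance;
    focal_mm: focal length. Returns the real-world extent in mm."""
    image_size_mm = pixel_size * pixel_pitch_mm      # extent on the sensor
    return image_size_mm * distance_mm / focal_mm    # similar triangles

# A 500 px tall object shot at 10 m with a 50 mm lens and 0.01 mm pixels:
height_mm = restore_real_size(500, 0.01, 10_000, 50)
```

Under these assumed shooting parameters the object restores to a 1 m real-world height, which is the kind of "real size" the target three-dimensional model described below is required to carry.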
In an actual application process, when the stage object model library is constructed, the user may also bind corresponding configuration information, such as the type of the stage object, the control scheme of the stage object in the stage, and the like, to the three-dimensional model association information of the stage object.
After obtaining the three-dimensional model associated information of each stage object, the user may perform adaptive adjustment on the coordinates and the orientation of each three-dimensional model associated information.
After the three-dimensional model association information of the stage objects is preprocessed, the three-dimensional model association information of all the stage objects is classified and stored in the prepared database, and the construction of the stage object model library is then complete.
After acquiring the three-dimensional model association information of each stage object, the 3D stage model generation device stores it into the model library in an XML-like markup format.
An exemplary stage object model in this format is as follows:
<stage-object-model>
<data>...</data>
<instantiation>
<param name="origin"/>
<param name="direction"/>
</instantiation>
<asso-info>
<category>spotlight-huaneng</category>
<entity-height>500</entity-height>
<entity-width>300</entity-width>
<virtual-stage-relatives>
......
</virtual-stage-relatives>
</asso-info>
</stage-object-model>
In the above-described embodiment, the stage object model <stage-object-model> includes the pieces of three-dimensional image association information <instantiation>.
It should be noted that the three-dimensional image association information <instantiation> of the stage object model is also bound with the corresponding stage object type information. When the three-dimensional image association information of a stage object is extracted from the stage object model library, an index can be built on the stage object type information so that the corresponding three-dimensional model association information is accurately extracted from the library.
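Because the description states that models are stored in json while the example above is written in markup form, a json equivalent might look like the following sketch (all field names simply mirror the tags and are illustrative, not mandated by the source):

```python
import json

# Hypothetical json counterpart of the markup example above; field names
# mirror the tags and are illustrative only.
stage_object_model = {
    "data": "...",  # model geometry payload, elided in the source
    "instantiation": {"origin": None, "direction": None},
    "asso-info": {
        "category": "spotlight-huaneng",
        "entity-height": 500,
        "entity-width": 300,
        "virtual-stage-relatives": {},
    },
}

serialized = json.dumps(stage_object_model, indent=2)
```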
It should be noted that, when the three-dimensional model association information is retrieved from the stage object model library, multiple pieces of three-dimensional model association information of the same type as the stage object can be returned, allowing the user to select the most suitable one.
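A minimal sketch of such a type-keyed index over the model library, assuming the records are held in memory (the record fields are hypothetical):

```python
from collections import defaultdict

# Hypothetical model-library records; "category" stands in for the stage
# object type information bound to each piece of association information.
model_library = [
    {"category": "spotlight", "model_id": "spotlight-huaneng"},
    {"category": "spotlight", "model_id": "spotlight-basic"},
    {"category": "led-screen", "model_id": "led-screen-4x3"},
]

# Index the records by stage object type so that all candidates of the
# same type can be retrieved in a single lookup.
index = defaultdict(list)
for record in model_library:
    index[record["category"]].append(record)

candidates = index["spotlight"]  # both spotlight records, for user selection
```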
According to a specific implementation manner of the embodiment of the present application, the step of restoring all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library to obtain a virtual stage basic model includes:
calculating reduction parameters of the corresponding stage according to shooting parameters of a preset stage object and a similarity transformation algorithm;
traversing all the stage objects to be restored in the basic virtual stage design scheme, and acquiring restoration parameters, type information and configuration parameters of the stage objects to be restored;
restoring a target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and the configuration parameters of the stage object to be restored and the three-dimensional model association information in the stage object model library, wherein the target three-dimensional model is a three-dimensional model with a real size;
and importing all the target three-dimensional models into a virtual preview system to obtain the virtual stage basic model.
In a specific embodiment, the reduction parameter is a similarity transformation matrix.
The preset stage object can be any stage object on the stage to be constructed; preferably, the preset stage object is the stage opening object.
The shooting parameters are shooting distances between the preset stage object and the camera.
As shown in fig. 3, the actual stage object lies at a distance d1 from the camera, while the area occupied by the stage object in the stage art design image corresponds to a distance d2 from the camera.
Specifically, d1 and d2 are related as follows:
d2 = d1 · ||S2|| / ||S1||
wherein ||S1|| is the length of the diagonal of the rectangular area of the corresponding angle of the actual stage object, ||S2|| is the length of the diagonal of the rectangular area occupied by the stage object in the stage art design image, and d1 is the shooting distance between the camera and the actual stage object on the stage. The shooting distance can be estimated by deep learning or taken from stage photography experience, and can be adapted to the actual application scenario.
With the knowledge of d1, d2 can be obtained by the conversion method.
According to the imaging principle, the image of each stage object formed on the stage art design image is mathematically similar to the real object. Therefore, from d2, the position and size of the image, and the real object size, the three-dimensional coordinates of the real stage object on the stage can be calculated by similarity transformation, and a three-dimensional model at real size can be obtained.
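Under the similar-triangles relation just described, the distance conversion can be sketched as follows (the function and variable names are illustrative, not from the source):

```python
def image_plane_distance(d1: float, s1_diag: float, s2_diag: float) -> float:
    """Convert the real shooting distance d1 into the image-side distance d2,
    using the similarity relation ||S1|| / d1 == ||S2|| / d2."""
    return d1 * s2_diag / s1_diag

# Example: a stage object 5.0 m from the camera whose 4.0 m bounding
# diagonal occupies a 0.02 m diagonal in the design image.
d2 = image_plane_distance(d1=5.0, s1_diag=4.0, s2_diag=0.02)
# d2 → 0.025
```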
The virtual preview system may be any virtual stage simulation system capable of simulating a stage model from input three-dimensional model data in the actual application process.
After the 3D stage model generation apparatus of this embodiment obtains the target three-dimensional models of all stage objects, it inputs them into the virtual preview system; the virtual stage base model is then obtained through simulation by the virtual preview system.
According to a specific implementation manner of the embodiment of the present application, the step of restoring the target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and the configuration parameters of the stage object to be restored, and the three-dimensional model association information in the stage object model library includes:
inquiring corresponding target three-dimensional model correlation information in the stage object model library according to the type information of the stage object to be restored;
processing the target three-dimensional model association information according to a preset reduction algorithm and the reduction parameters to obtain a basic three-dimensional model of the stage object to be reduced;
and adjusting the basic three-dimensional model according to the configuration parameters of the stage object to be restored to obtain a target three-dimensional model of the stage object to be restored.
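The three sub-steps above can be sketched as follows (a minimal illustration; every field name, the dictionary-based model library, and the scalar reduction factor are hypothetical simplifications, not the patent's exact data structures):

```python
# Minimal sketch of the three restoration sub-steps; all names are
# hypothetical stand-ins, and the reduction parameter is simplified
# to a single scale factor.
def restore_stage_object(obj, reduction_scale, model_library):
    # Sub-step 1: query the target model association info by type.
    model_info = model_library[obj["type"]]
    # Sub-step 2: apply the reduction (similarity) transform to obtain
    # the basic three-dimensional model at real size.
    basic_model = {
        "model": model_info,
        "origin": [c * reduction_scale for c in obj["origin"]],
    }
    # Sub-step 3: adjust the basic model with the object's configuration
    # parameters to obtain the target three-dimensional model.
    basic_model.update(obj.get("config", {}))
    return basic_model

model_library = {"spotlight": {"model_id": "spotlight-huaneng"}}
obj = {"type": "spotlight", "origin": [1.0, 2.0, -0.5],
       "config": {"direction": [0.0, -1.0, 0.0]}}
target = restore_stage_object(obj, 10.0, model_library)
# target["origin"] → [10.0, 20.0, -5.0]
```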
In a specific embodiment, the preset restoration algorithm is an image restoration algorithm based on similarity transformation.
Each stage object in the basic virtual stage design scheme is traversed and denoted reconstruction_obj, and reconstruction_obj is restored according to the following steps:
step 1, reading two-dimensional area information occupied by the reconstruction _ obj, synthesizing origin coordinates of a three-dimensional picture of an object to be restored, and recording the origin coordinates as O reconstruction Wherein O is reconstruction =[x y -d2]。
Wherein x and y are the abscissa and the ordinate of the object to be restored on the stage art design image.
Step 2, performing similarity transformation on the three-dimensional picture origin coordinates to obtain the virtual stage origin coordinates of the restored stage object, denoted O_reconstructed, where O_reconstructed = O_reconstruction · T_reduction, and T_reduction is the similarity transformation matrix:
T_reduction = (||S_physical|| / ||S_imaging||) · I₃
wherein ||S_physical|| is the length of the diagonal of the rectangular area of the corresponding angle of the real stage object to be restored, ||S_imaging|| is the length of the diagonal of the rectangular area of the corresponding angle of the stage object to be restored in the stage art design image, and I₃ is the 3×3 identity matrix.
Step 3, querying the model library with the classification information of reconstruction_obj as the index to obtain the stage object model reconstruction_obj_model, and instantiating reconstruction_obj_model with O_reconstructed and the orientation information reconstruction_obj.direction as parameters to obtain the restored stage object, denoted reconstructed_obj; the three-dimensional image association information in reconstruction_obj_model is then copied to reconstructed_obj.
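Steps 1 and 2 can be sketched in plain Python (names such as s_physical and s_imaging are illustrative stand-ins for ||S_physical|| and ||S_imaging||; the input values are made up for illustration):

```python
def similarity_matrix(s_physical: float, s_imaging: float):
    """T_reduction = (||S_physical|| / ||S_imaging||) * I, as a 3x3 list."""
    s = s_physical / s_imaging
    return [[s, 0.0, 0.0], [0.0, s, 0.0], [0.0, 0.0, s]]

def reconstruct_origin(x: float, y: float, d2: float, t):
    """Step 1 composes O_reconstruction = [x, y, -d2]; step 2
    right-multiplies it by the similarity matrix t."""
    o = [x, y, -d2]
    return [sum(o[k] * t[k][j] for k in range(3)) for j in range(3)]

t = similarity_matrix(4.0, 0.02)                     # scale factor 200
origin = reconstruct_origin(120.0, 80.0, 0.025, t)   # ~[24000.0, 16000.0, -5.0]
```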
Generating a virtual stage base model by using all the restored stage objects, and storing the virtual stage base model in a json format (the json listing itself is not reproduced in this text).
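A sketch of what the json serialization of the restored objects might look like (the schema and values are hypothetical; the patent's own json listing is not reproduced in this text):

```python
import json

# Hypothetical restored stage objects making up the virtual stage base model.
reconstructed_objs = [
    {"category": "spotlight", "origin": [24000.0, 16000.0, -5.0],
     "direction": [0.0, -1.0, 0.0]},
    {"category": "led-screen", "origin": [0.0, 3000.0, -10.0],
     "direction": [0.0, 0.0, 1.0]},
]

virtual_stage_base_model = {"stage-objects": reconstructed_objs}
serialized = json.dumps(virtual_stage_base_model)
```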
Finally, the json-format virtual stage base model is converted into a format accepted by the virtual preview system and imported into that system as the basis for further design.
After obtaining the virtual stage base model, the user may further input preset adjustment parameters to the 3D stage model generation device to update the virtual stage base model, where this embodiment does not specifically limit subsequent update processing of the virtual stage base model.
The embodiment of the application provides a 3D stage model generation method. By constructing a stage object recognition model and a stage object model library in advance, the 3D model of each stage object can be generated and assembled directly from an input stage art design image, which effectively improves the efficiency of virtual stage construction. In addition, each recognition model in this embodiment is built with a deep learning algorithm, so that during application of the method the stage object model library continuously accumulates model content and the stage object recognition model is continuously updated, making the recognition model more accurate and predictive and allowing the user to construct virtual stages more efficiently in subsequent use.
Referring to fig. 4, an apparatus module schematic diagram of a 3D stage model generating apparatus 400 provided in an embodiment of the present application is shown, and as shown in fig. 4, the 3D stage model generating apparatus 400 provided in the embodiment of the present application includes:
a design image acquisition module 401, configured to acquire a stage art design image;
a stage object recognition module 402, configured to recognize the stage art design image according to a target stage object recognition model to obtain a plurality of stage objects;
a stage scheme generating module 403, configured to generate a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
and a stage object restoration module 404, configured to restore all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library, so as to obtain a virtual stage basic model.
As shown in fig. 2, according to a specific implementation manner of the embodiment of the present application, the 3D stage model generating apparatus 400 further includes: the stage object training module is used for acquiring a stage object training data set, wherein the stage object training data set comprises training data corresponding to various types of stage objects;
performing model training on the stage object training data set according to a preset image recognition algorithm to obtain a basic stage object recognition model;
and updating the basic stage object identification model according to the model adjustment parameters to obtain the target stage object identification model.
As shown in fig. 2, according to a specific implementation manner of the embodiment of the present application, the 3D stage model generating apparatus 400 further includes: the stage object modeling module is used for identifying three-dimensional model associated information of a plurality of stage objects based on a preset 3D digital tool, wherein the three-dimensional model associated information comprises a three-dimensional space origin coordinate, a three-dimensional space coordinate orientation and a model three-dimensional space coordinate;
and storing all the three-dimensional model associated information in a model database in a classified manner according to the stage object type to obtain the stage object model database.
The 3D stage model generation apparatus provided in this embodiment obtains the target stage object recognition model through training by the stage object training module and obtains the stage object model library through the stage object modeling module, so that stage object 3D models can be constructed simply by inputting a stage art design image into the apparatus. Because each constructed stage object 3D model carries three-dimensional coordinate information and three-dimensional orientation information, the models can be placed directly in the virtual stage in one-to-one correspondence with the actual stage, building the virtual stage base model. This effectively improves the efficiency of virtual stage construction and saves a large amount of stage construction cost.
In addition, an electronic device is further provided in an embodiment of the present application, and the electronic device includes a processor and a memory, where the memory stores a computer program, and the computer program executes the 3D stage model generation method in the foregoing embodiment when running on the processor.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed on a processor, the computer program performs the 3D stage model generation method in the foregoing embodiment.
For specific implementation processes of the 3D stage model generation apparatus, the electronic device, and the computer-readable storage medium mentioned in the foregoing embodiments, reference may be made to the specific implementation processes of the foregoing method embodiments, which are not described in detail herein.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (10)

1. A3D stage model generation method is characterized by comprising the following steps:
acquiring a stage art design image;
identifying the stage art design image according to a target stage object identification model to obtain a plurality of stage objects;
generating a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
and restoring all the stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library to obtain a virtual stage basic model.
2. The 3D stage model generation method according to claim 1, wherein the step of constructing the target stage object recognition model includes:
acquiring a stage object training data set, wherein the stage object training data set comprises training data corresponding to multiple types of stage objects;
performing model training on the stage object training data set according to a preset image recognition algorithm to obtain a basic stage object recognition model;
and updating the basic stage object identification model according to the model adjustment parameters to obtain the target stage object identification model.
3. The 3D stage model generation method according to claim 2, wherein the step of obtaining a stage object training dataset includes:
acquiring a basic image dataset of a stage object;
classifying the basic image data set according to the type of the stage object to obtain basic image data subsets corresponding to different types of stage objects;
associating corresponding configuration parameters for each basic image data subset to obtain multiple types of stage object training data;
and integrating various types of stage object training data to obtain the stage object training data set.
4. The 3D stage model generation method according to claim 2, characterized in that the preset image recognition algorithm is the Fast R-CNN algorithm.
5. The 3D stage model generation method according to claim 1, wherein the stage object model library building step includes:
identifying three-dimensional model associated information of a plurality of stage objects based on a preset 3D digital tool, wherein the three-dimensional model associated information comprises three-dimensional space origin coordinates, three-dimensional space coordinate orientation, model three-dimensional space coordinates and actual size proportion information;
and storing all the three-dimensional model associated information in a model database in a classified manner according to the stage object type to obtain the stage object model database.
6. The 3D stage model generation method according to claim 1, wherein the step of restoring all stage objects in a virtual stage according to the basic virtual stage design solution and the stage object model library to obtain a virtual stage basic model comprises:
calculating reduction parameters of the corresponding stage according to shooting parameters of a preset stage object and a similarity transformation algorithm;
traversing all the stage objects to be restored in the basic virtual stage design scheme, and acquiring the type information and configuration parameters of each stage object to be restored;
restoring a target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and the configuration parameters of the stage object to be restored and the three-dimensional model association information in the stage object model library, wherein the target three-dimensional model is a three-dimensional model with a real size;
and importing all the target three-dimensional models into a virtual preview system to obtain the virtual stage basic model.
7. The 3D stage model generation method according to claim 6, wherein the step of restoring the target three-dimensional model of each stage object to be restored according to the restoration parameters, the type information and configuration parameters of the stage object to be restored, and the three-dimensional model association information in the stage object model library includes:
inquiring corresponding target three-dimensional model association information in the stage object model library according to the type information of the stage object to be restored;
processing the target three-dimensional model association information according to a preset reduction algorithm and the reduction parameters to obtain a basic three-dimensional model of the stage object to be reduced;
and adjusting the basic three-dimensional model according to the configuration parameters of the stage object to be restored to obtain a target three-dimensional model of the stage object to be restored.
8. A 3D stage model generation apparatus, comprising:
the design image acquisition module is used for acquiring stage art design images;
the stage object identification module is used for identifying the stage art design image according to a target stage object identification model so as to obtain a plurality of stage objects;
the stage scheme generation module is used for generating a basic virtual stage design scheme according to the stage object and the configuration parameters thereof;
and the stage object reduction module is used for reducing all stage objects in the virtual stage according to the basic virtual stage design scheme and the stage object model library so as to obtain a virtual stage basic model.
9. An electronic device, characterized in that it comprises a processor and a memory, said memory storing a computer program which, when run on said processor, executes the 3D stage model generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when run on a processor, performs the 3D stage model generation method of any one of claims 1 to 7.
CN202210866180.6A 2022-07-22 2022-07-22 3D stage model generation method and device, electronic equipment and readable storage medium Pending CN115082627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210866180.6A CN115082627A (en) 2022-07-22 2022-07-22 3D stage model generation method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210866180.6A CN115082627A (en) 2022-07-22 2022-07-22 3D stage model generation method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115082627A true CN115082627A (en) 2022-09-20

Family

ID=83242963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210866180.6A Pending CN115082627A (en) 2022-07-22 2022-07-22 3D stage model generation method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115082627A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150012322A (en) * 2013-07-10 2015-02-04 성균관대학교산학협력단 Apparatus and method for providing virtual reality of stage
CN108492356A (en) * 2017-02-13 2018-09-04 苏州宝时得电动工具有限公司 Augmented reality system and its control method
CN109191369A (en) * 2018-08-06 2019-01-11 三星电子(中国)研发中心 2D pictures turn method, storage medium and the device of 3D model



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 410000, east side of Building 1, 3001, Phase I, Malanshan Mango Cultural and Creative Plaza, No. 30 Yazipu Road, Yuehu Street, Kaifu District, Changsha City, Hunan Province

Applicant after: Hunan Mango Rongchuang Technology Co.,Ltd.

Applicant after: Hunan mango Vision Technology Co.,Ltd.

Address before: Room 4, Room 96, No. 1, Yazipu Road, Yuehu Street, Kaifu District, Changsha City, Hunan Province 410000

Applicant before: Hunan Mango Wuji Technology Co.,Ltd.

Applicant before: Hunan mango Vision Technology Co.,Ltd.