CN112396688B - Three-dimensional virtual scene generation method and device - Google Patents

Three-dimensional virtual scene generation method and device

Info

Publication number
CN112396688B
Authority
CN
China
Prior art keywords
dimensional virtual
target
point
object model
dimensional
Prior art date
Legal status
Active
Application number
CN201910747186.XA
Other languages
Chinese (zh)
Other versions
CN112396688A (en)
Inventor
林耀冬
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910747186.XA
Publication of CN112396688A
Application granted
Publication of CN112396688B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The application provides a method and a device for generating a three-dimensional virtual scene. The method comprises the following steps: generating a corresponding three-dimensional virtual environment model for the target area; generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area; and, based on the set placement rule of each target object in the target area, placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene. With this method, a three-dimensional virtual scene meeting the user's expectations can be built automatically even when no real scene exists.

Description

Three-dimensional virtual scene generation method and device
Technical Field
The present application relates to the field of three-dimensional modeling technologies, and in particular, to a method and an apparatus for generating a three-dimensional virtual scene.
Background
At present, with the rapid development of three-dimensional modeling technology, three-dimensional virtual scenes can reproduce real scenes vividly and realistically, giving users a good visual experience; they are therefore widely applied across many industries. Based on these advantages, how to build a three-dimensional virtual scene accurately and efficiently is attracting increasing attention and research.
Current three-dimensional modeling software, such as 3D Max, is based on a real scene; that is, only on the premise that a real scene exists can a three-dimensional virtual scene nearly identical to that real scene be simulated from it. This limits the current application of three-dimensional virtual scenes.
Disclosure of Invention
In view of the above, the present application provides a method and an apparatus for generating a three-dimensional virtual scene.
According to a first aspect of an embodiment of the present application, there is provided a method for generating a three-dimensional virtual scene, the method including:
generating a corresponding three-dimensional virtual environment model for the target area;
generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and based on the set placement rule of each target object in the target area, placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene.
According to a second aspect of an embodiment of the present application, there is provided a generating apparatus for a three-dimensional virtual scene, the apparatus including:
the first generation module is used for generating a corresponding three-dimensional virtual environment model for the target area;
the second generation module is used for generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and the third generation module is used for placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene.
According to a third aspect of embodiments of the present application, there is provided an electronic device comprising a readable storage medium and a processor;
wherein the readable storage medium is for storing machine executable instructions;
the processor is configured to read the machine-executable instructions on the readable storage medium and execute the instructions to implement the steps of the method for generating a three-dimensional virtual scene provided by the embodiment of the present application.
According to a fourth aspect of the embodiment of the present application, there is provided a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program when executed by a processor implements the steps of the method for generating a three-dimensional virtual scene provided by the embodiment of the present application.
According to the embodiments of the present application, a three-dimensional virtual environment model corresponding to the target area and a three-dimensional virtual object model corresponding to each target object are constructed, and the three-dimensional virtual object model corresponding to each target object is placed in the three-dimensional virtual environment model based on the placement rule of each target object in the target area, forming a three-dimensional virtual scene.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of another method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application;
FIG. 3 is a schematic view of the effect of placing a three-dimensional virtual object model corresponding to each shelf in a three-dimensional virtual environment model;
FIG. 4 is a schematic illustration of a three-dimensional spatial coordinate system established for an item placement area;
FIG. 5 is an example of a target image;
FIG. 6 is a flowchart of an exemplary process for generating a target image according to an exemplary embodiment of the present application;
FIG. 7 is an example of camera parameters of a virtual camera;
FIG. 8 is another example of a target image;
FIG. 9 is a flowchart of an embodiment of a process for labeling position information of an object in a target image according to an exemplary embodiment of the present application;
FIG. 10 is an example of a simple target image shown for ease of illustration of labeling location information of items in the target image;
FIG. 11 is an example of a target image with annotation information;
FIG. 12 is a block diagram of an embodiment of a three-dimensional virtual scene generating apparatus according to an exemplary embodiment of the present application;
Fig. 13 is a hardware configuration diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
To solve the above problems, the present application provides a method for generating a three-dimensional virtual scene. The method does not need to take a real scene as a premise: even if there is no real scene, a three-dimensional virtual scene meeting the user's expectations can be built automatically. The following embodiments illustrate the process in detail.
First, the following Embodiment 1 describes the method for generating a three-dimensional virtual scene provided by the present application as a whole:
Embodiment 1
Referring to fig. 1, a flowchart of an embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application may include the following steps:
step 101: and generating a corresponding three-dimensional virtual environment model for the target area.
In the embodiment of the application, three-dimensional modeling software such as 3D MAX or Maya may be used to generate a corresponding three-dimensional model for the target area according to parameters such as a two-dimensional plan view, spatial data, and spatial features of the target area. For convenience of description, this three-dimensional model is referred to as the three-dimensional virtual environment model.
As for the specific process of generating the three-dimensional virtual environment model, those skilled in the art may refer to the related descriptions in the prior art; the details are not repeated in the present application.
Step 102: and generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area.
In the embodiment of the present application, each target object to be placed in the target area, for example various articles, may be predefined, and a corresponding three-dimensional model is generated for each target object. For convenience of description, this three-dimensional model is referred to as the three-dimensional virtual object model.
As one example, a corresponding three-dimensional virtual object model may be generated for each target object using a three-dimensional scanning technique. Specifically, taking a certain target object as an example, a three-dimensional scanner can be used for scanning the target object to obtain parameters such as spatial data and spatial characteristics of the target object, and then three-dimensional modeling software is used for generating a three-dimensional virtual object model of the target object according to the parameters such as the spatial data and the spatial characteristics obtained by scanning.
It should be noted that, the above specific implementation manner of generating the corresponding three-dimensional virtual object model for each target object is merely taken as an example, and in practical application, the corresponding three-dimensional virtual object model may also be generated for each target object by other manners, which is not limited in this application.
Step 103: and based on the set placement rule of each target object in the target area, placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene model.
In the embodiment of the application, the three-dimensional virtual object model corresponding to each target object may be placed in the three-dimensional virtual environment model based on the placement rule of each target object in the target area; for convenience of description, the three-dimensional model obtained after this processing is referred to as the three-dimensional virtual scene model. The three-dimensional virtual scene model is then rendered to form the three-dimensional virtual scene.
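The overall flow of steps 101 to 103 can be sketched in a few lines. The class names and the shape of the placement rule below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    name: str
    position: tuple = (0.0, 0.0, 0.0)  # filled in when the model is placed

@dataclass
class SceneModel:
    environment: str                   # stand-in for the 3D virtual environment model
    placed: list = field(default_factory=list)

def build_scene(environment, objects, placement_rules):
    """Place each target object's model into the environment model according
    to the placement rule configured for that object, yielding a scene model."""
    scene = SceneModel(environment)
    for obj in objects:
        obj.position = placement_rules[obj.name]  # here a rule is simply a target position
        scene.placed.append(obj)
    return scene

scene = build_scene("supermarket",
                    [ObjectModel("shelf-A"), ObjectModel("shelf-B")],
                    {"shelf-A": (0.0, 0.0, 0.0), "shelf-B": (2.0, 0.0, 0.0)})
print([o.name for o in scene.placed])  # ['shelf-A', 'shelf-B']
```

In a real implementation a placement rule would be richer (orientation, spacing, counts, as in Embodiment 2 below), and the resulting scene model would then be rendered.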
As one example, placement rules for different types of target objects in the target area are different.
As an example, a rendering texture may be determined for the three-dimensional virtual environment model corresponding to the target area, and a rendering texture may be determined for the three-dimensional virtual object model corresponding to each target object. Then, in the three-dimensional virtual scene model, the three-dimensional virtual environment model is rendered according to the rendering texture corresponding to it, and each three-dimensional virtual object model is rendered according to the rendering texture corresponding to that model.
In the three-dimensional virtual scene, there is an overlapping region between the three-dimensional virtual environment model and the three-dimensional virtual object model, and when rendering the three-dimensional virtual environment model, only the region other than the three-dimensional virtual object model in the three-dimensional virtual environment model may be rendered. By such processing, texture conflict caused by repeated rendering can be avoided.
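One way to read the exclusion just described is a render pass that skips environment regions already covered by an object model. The grid-cell representation below is an assumption for illustration, not the patent's actual renderer:

```python
def render_scene(env_cells, object_cells):
    """Render object cells first, then render only the environment cells not
    covered by any object, so the overlapping region is never rendered twice
    (avoiding the texture conflict mentioned above)."""
    rendered = {}
    for cell, texture in object_cells.items():
        rendered[cell] = texture
    for cell, texture in env_cells.items():
        if cell not in object_cells:   # skip the overlapping region
            rendered[cell] = texture
    return rendered

env = {(0, 0): "floor", (0, 1): "floor", (1, 0): "floor"}
objs = {(0, 1): "shelf"}
out = render_scene(env, objs)
print(out[(0, 1)])  # "shelf" — the overlap is rendered exactly once
```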
As one example, the target area and each target object may be scanned with a three-dimensional scanner to determine the rendering textures corresponding to the three-dimensional virtual environment model and to each three-dimensional virtual object model.
As another example, the rendering textures corresponding to the three-dimensional virtual environment model and to each three-dimensional virtual object model may be determined from two-dimensional images of the target area and of each target object.
It should be noted that, the specific implementation manner of determining the rendering textures corresponding to the three-dimensional virtual environment model and the three-dimensional virtual object model described above is merely an example, and in practical application, the three-dimensional virtual environment model and the rendering textures corresponding to the three-dimensional virtual object model may be determined in other manners, which is not limited in this application.
As can be seen from the above embodiment, the three-dimensional virtual environment model corresponding to the target area and the three-dimensional virtual object model corresponding to each target object are constructed, and, based on the set placement rule of each target object in the target area, the three-dimensional virtual object model corresponding to each target object is placed in the three-dimensional virtual environment model to form the three-dimensional virtual scene. Generating the three-dimensional virtual scene in this way does not require a real scene as a premise: even without a real scene, a three-dimensional virtual scene meeting the user's expectations can be built automatically.
Thus, the description of the first embodiment is completed.
Next, the following Embodiment 2 takes the automatic construction of a three-dimensional virtual supermarket scene as an example to further explain the method for generating a three-dimensional virtual scene provided by the present application:
Embodiment 2
Referring to fig. 2, a flowchart of an embodiment of another method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application may include the following steps:
step 201: and generating a corresponding three-dimensional virtual environment model for the target area.
The detailed description of this step can be referred to the related description of step 101 in the above embodiment one, and will not be repeated here.
Step 202: and generating a corresponding three-dimensional virtual object model for each goods shelf to be placed in the target area, and generating a corresponding three-dimensional virtual object model for each article to be placed in the target area.
In the application scenario of automatically building a three-dimensional virtual supermarket scene, the target objects may include at least shelves and articles.
As one example, corresponding to a real scene, the target objects may include at least multiple types of shelves, where different types of shelves differ in size, shape, and number of layers; accordingly, the target objects may include at least multiple types of articles, where different types of articles differ in size and shape, such as bottled beverages, canned beverages, bagged foods, and various daily necessities.
For a specific process of generating a corresponding three-dimensional virtual object model for each shelf to be placed in the target area and generating a corresponding three-dimensional virtual object model for each object to be placed in the target area, reference may be made to the description of step 102 in the above embodiment, which is not repeated herein.
Further, as an example, a supermarket database may be pre-established, which may be used to store shelf information, item information, price tag information, etc. Wherein, the shelf information may include: the type, name, size, layer number of the goods shelf, the size of the accommodation space of each layer, the three-dimensional virtual object model corresponding to the goods shelf and the like; the item information may include: the type, name, size, price, shelf life, three-dimensional virtual object model corresponding to the article, etc.; the price tag information may include: the type, size, price tag, three-dimensional virtual object model corresponding to the price tag, etc.
Based on this example, the user can add entries to, delete entries from, and modify entries in the supermarket database. For example, when the supermarket stocks a new article, the information of the article can be entered into the supermarket database; when the supermarket replaces a shelf, the information of the retired shelf can be deleted from the supermarket database and the information of the new shelf entered.
Based on the example, in the process of automatically setting up the three-dimensional virtual scene model of the supermarket, the three-dimensional virtual object model corresponding to each of the goods shelf and the articles can be obtained from the supermarket database.
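A supermarket database of this kind could be sketched with SQLite; the table and column names here are assumptions for illustration only:

```python
import sqlite3

# In-memory stand-in for the supermarket database; the schema is illustrative.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE shelf (
    type TEXT, name TEXT, layers INTEGER, layer_height_cm REAL, model_path TEXT)""")
db.execute("""CREATE TABLE item (
    type TEXT, name TEXT, height_cm REAL, price REAL, model_path TEXT)""")

# Stocking a new article adds a record referencing its 3D virtual object model
db.execute("INSERT INTO item VALUES (?, ?, ?, ?, ?)",
           ("beverage", "bottled cola", 25.0, 3.5, "models/cola.obj"))

# Replacing a shelf: delete the retired shelf's record, enter the new shelf's
db.execute("INSERT INTO shelf VALUES (?, ?, ?, ?, ?)",
           ("A", "retired shelf", 4, 30.0, "models/shelf_old.obj"))
db.execute("DELETE FROM shelf WHERE name = ?", ("retired shelf",))
db.execute("INSERT INTO shelf VALUES (?, ?, ?, ?, ?)",
           ("A", "new shelf", 5, 25.0, "models/shelf_new.obj"))

print(db.execute("SELECT name FROM shelf").fetchall())  # [('new shelf',)]
```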
Step 203: and placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rules.
In the embodiment of the application, the shelf placement rules can be preconfigured, and the shelf placement rules can be used for defining the placement direction of the shelves in the supermarket, the distance interval between adjacent shelves, the number of the shelves and the like.
As one example, different types of shelves may correspond to different shelf placement rules. For example, one type of shelf may allow articles to be placed on only one side, so shelves of that type may be placed against a wall; another type may allow articles to be placed on both sides, so shelves of that type may not be placed against a wall.
As another example, a unified shelf placement rule may be pre-configured, which may be used to define the type of shelf placed in the supermarket, the number of shelves of each type, the distance spacing between adjacent shelves, the direction of placement of shelves of each type within the supermarket, etc.
In the embodiment of the application, the three-dimensional virtual object model corresponding to each shelf can be placed in the three-dimensional virtual environment model according to the set shelf placement rule and according to the two-dimensional plan of the supermarket.
Taking the above unified shelf placement rule as an example, assume that the unified shelf placement rule specifies: each type of shelf is arranged in the supermarket in a north-south direction, the distance interval between adjacent shelves is 1 meter, and 1 class-A shelf, 2 class-B shelves, and 2 class-C shelves are arranged in order from west to east. Then, based on this shelf placement rule, an effect schematic after placing the three-dimensional virtual object model of each shelf in the three-dimensional virtual environment model may be as shown in fig. 3.
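A unified rule of this form determines each shelf's west-east coordinate directly. A minimal sketch, with made-up shelf widths (in meters):

```python
def layout_shelves(order, widths, spacing=1.0):
    """Return (shelf_type, west_east_position) pairs, placing shelves west to
    east with a fixed aisle spacing between neighbouring shelves."""
    positions, x = [], 0.0
    for shelf_type in order:
        positions.append((shelf_type, x))
        x += widths[shelf_type] + spacing  # advance past this shelf plus one aisle
    return positions

# 1 class-A, 2 class-B, 2 class-C shelves, west to east, 1 m apart
plan = layout_shelves(["A", "B", "B", "C", "C"],
                      widths={"A": 1.0, "B": 1.2, "C": 0.8})
print(plan[1])  # ('B', 2.0): placed after the A shelf plus a 1 m aisle
```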
Step 204: and placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each goods shelf according to the set article placement rules.
In this step, the article placement areas in the three-dimensional virtual object model corresponding to each shelf are first identified; then, for each article placement area, the target article to be placed in that area is determined; finally, the three-dimensional virtual object model corresponding to the target article is placed in the article placement area according to the set article placement rules.
As one example, a target item to be placed may be specified by a user for each item placement area.
As another example, a target item to be placed may be automatically determined for each item placement area. In one example, taking a certain article placement area as an example, the size information of the virtual accommodating space of the article placement area may be matched with the size information of the three-dimensional virtual object model corresponding to each article, and the target article to be placed in the article placement area is determined according to the matching result. For example, assuming that the height of the object placement area is 20cm, the height of the three-dimensional virtual object model corresponding to bottled cola is 25cm, and the height of the three-dimensional virtual object model corresponding to barreled instant noodles is 15cm, the barreled instant noodles can be determined as target objects to be placed in the object placement area.
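The size-matching step can be sketched as a simple filter over candidate item heights; the figures reuse the example just given:

```python
def fit_items(area_height_cm, item_heights_cm):
    """Match the accommodation-space height of an article placement area
    against each item model's height; items taller than the space are rejected."""
    return [name for name, h in item_heights_cm.items() if h <= area_height_cm]

# 20 cm placement area: 25 cm bottled cola is rejected, 15 cm instant noodles fit
candidates = fit_items(20.0, {"bottled cola": 25.0,
                              "barreled instant noodles": 15.0})
print(candidates)  # ['barreled instant noodles']
```

A fuller implementation would match all three dimensions of the virtual accommodation space, not just the height.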
As one example, the above-described item placement rules may be preconfigured, which may be used to define the direction of placement of items on a shelf, the distance spacing of adjacent items on a shelf, the number of items placed per layer of a shelf, and so forth.
As one example, different types of items may correspond to different item placement rules, e.g., one type of item may not be stacked in a vertical direction, e.g., bottled beverages, while another type of item may be stacked in a vertical direction, e.g., items having an approximately rectangular parallelepiped shape in their outer package.
As another example, at least one article display schematic may be generated according to size information of the virtual accommodation space of the article placement area and size information of the three-dimensional virtual object model corresponding to the target article, and the at least one article display schematic may be output. And then, acquiring a target object display schematic diagram selected by a user, and generating an object placement rule according to the target object display schematic diagram.
As an example, the specific process of "placing the three-dimensional virtual object model corresponding to the target article in the article placement area according to the set article placement rules" may include the following. First, a corresponding three-dimensional space coordinate system is established for the article placement area, as shown for example in fig. 4, where the larger cuboid represents the virtual accommodation space of the article placement area, the X-axis direction of the three-dimensional space coordinate system corresponds to the horizontal direction of the article placement area, the Y-axis direction corresponds to the vertical direction of the article placement area, and the Z-axis direction is perpendicular to both the X-axis and Y-axis directions. Then, starting from the coordinate origin of the three-dimensional space coordinate system, three-dimensional virtual object models corresponding to the target article are placed in the article placement area along the X-axis, Y-axis, and Z-axis directions until a preset article placement condition is met.
As an example, the above-described preset article placement conditions may include: the number of the placed articles reaches the preset number, or the article placing area is full.
In one example, taking the target article to be an article whose outer package is approximately rectangular parallelepiped in shape (the smaller cuboids in fig. 4, for example, represent three-dimensional virtual object models corresponding to the target article), and taking the preset article placement condition to be that the article placement area is full, the three-dimensional virtual object model corresponding to the target article may be placed in the article placement area according to the following procedure:
First, the coordinate origin O of the three-dimensional space coordinate system in fig. 4 is taken as the first current point, and a three-dimensional virtual object model corresponding to the target article is placed at the first current point. After a successful placement, the first current point is shifted along the X-axis direction by a first distance value S1, where S1 is the length of the three-dimensional virtual object model corresponding to the target article; in fig. 4, for example, P1 is the first current point after one shift. It is then judged whether the distance between the first current point and a first designated point is greater than or equal to S1, where the coordinate information of the first designated point is (E1, 0, 0) and E1 is the length of the virtual accommodation space of the article placement area; in fig. 4, for example, P2 is the first designated point. If so, the article placement area can still accommodate the target article in the X-axis direction, so the flow returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the first current point. Once the distance between the first current point and the first designated point is judged to be smaller than S1, the article placement area can no longer accommodate the target article in the X-axis direction, and the current flow ends. This completes the placement of the target article along the X-axis direction of the article placement area.
Next, a second designated point is taken as the second current point, where the coordinate information of the second designated point is (0, S2, 0) and S2 is the height of the three-dimensional virtual object model corresponding to the target article; in fig. 4, for example, P3 is the second designated point. A three-dimensional virtual object model corresponding to the target article is placed at the second current point, and after a successful placement the second current point is shifted by S2 along the Y-axis direction; in fig. 4, for example, P4 is the second current point after one shift. It is then judged whether the distance between the second current point and a third designated point is greater than or equal to S2, where the coordinate information of the third designated point is (0, E2, 0) and E2 is the height of the virtual accommodation space of the article placement area; in fig. 4, for example, P5 is the third designated point. If so, the article placement area can still accommodate the target article in the Y-axis direction, so the flow returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the second current point. Once the distance between the second current point and the third designated point is judged to be smaller than S2, the article placement area can no longer accommodate the target article in the Y-axis direction, and the current flow ends. This completes the placement of the target article along the Y-axis direction of the article placement area.
Finally, a fourth designated point is taken as the third current point, where the coordinate information of the fourth designated point is (0, 0, S3) and S3 is the width of the three-dimensional virtual object model corresponding to the target article; in fig. 4, for example, P6 is the fourth designated point. A three-dimensional virtual object model corresponding to the target article is placed at the third current point, and after a successful placement the third current point is shifted by S3 along the Z-axis direction; in fig. 4, for example, P7 is the third current point after one shift. It is then judged whether the distance between the third current point and a fifth designated point is greater than or equal to S3, where the coordinate information of the fifth designated point is (0, 0, E3) and E3 is the width of the virtual accommodation space of the article placement area; in fig. 4, for example, P8 is the fifth designated point. If so, the article placement area can still accommodate the target article in the Z-axis direction, so the flow returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the third current point. Once the distance between the third current point and the fifth designated point is judged to be smaller than S3, the article placement area can no longer accommodate the target article in the Z-axis direction, and the current flow ends. This completes the placement of the target article along the Z-axis direction of the article placement area.
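Taken together, the per-axis loops above amount to counting how many item models fit along each axis (keep shifting the current point by the item size while the distance to the designated end point is at least that size) and filling the accommodation space as a grid. A minimal sketch with made-up dimensions; the grid fill is one straightforward combined reading of the axis-by-axis procedure:

```python
def axis_count(extent, size):
    """Count placements along one axis: shift the current point by `size`
    while the distance to the designated end point is >= `size`."""
    count, pos = 0, 0.0
    while extent - pos >= size:
        count += 1
        pos += size
    return count

def fill_positions(space, item):
    """Grid-fill the virtual accommodation space with item-sized boxes,
    starting from the coordinate origin, along the X, Y and Z axes."""
    nx, ny, nz = (axis_count(e, s) for e, s in zip(space, item))
    return [(i * item[0], j * item[1], k * item[2])
            for k in range(nz) for j in range(ny) for i in range(nx)]

# accommodation space E1 x E2 x E3 = 100 x 60 x 40; item S1 x S2 x S3 = 25 x 20 x 15
pos = fill_positions((100.0, 60.0, 40.0), (25.0, 20.0, 15.0))
print(len(pos))  # 4 * 3 * 2 = 24 placements
```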
Step 205: and determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each shelf.
Step 206: and determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each article.
Step 207: in the three-dimensional virtual scene model, aiming at the three-dimensional virtual object model corresponding to each object, rendering the three-dimensional virtual object model by utilizing rendering textures corresponding to the three-dimensional virtual object model.
Step 208: and rendering the areas, except for the three-dimensional virtual object model corresponding to the object, on the three-dimensional virtual object model according to the rendering texture corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each goods shelf.
The detailed descriptions of steps 205 to 208 may be referred to the related descriptions of step 103 and step 104 in the above-mentioned embodiment one, and will not be repeated here.
It should be noted that the sequence of steps 201 to 208 above is merely an example, and in practical applications other execution sequences conforming to correct logic may exist; for example, step 204 may be executed first and then step 203, and step 207 may be executed synchronously with step 208, or step 207 may be executed first and then step 208. The present application does not enumerate further examples in this regard.
According to this embodiment, a three-dimensional virtual environment model of the supermarket, a three-dimensional virtual object model of each shelf, and a three-dimensional virtual object model of each article are constructed; the three-dimensional virtual object model of each shelf is placed in the constructed three-dimensional virtual environment model according to the set shelf placement rules, and the three-dimensional virtual object model of each article is placed in the three-dimensional virtual object models of the shelves according to the set article placement rules, so as to obtain a three-dimensional virtual scene model of the supermarket; the three-dimensional virtual scene model is then rendered to generate the three-dimensional virtual scene.
Thus, the description of the second embodiment is completed.
In addition, in the application, after the three-dimensional virtual scene is generated, the three-dimensional virtual scene can be displayed to the user.
As an example, the electronic device may control the display process of the three-dimensional virtual scene by interacting with a user, and specifically, the user may control the display angle of the three-dimensional virtual scene by setting camera parameters of the virtual camera in the three-dimensional virtual scene model, so as to observe the three-dimensional virtual scene under different viewing angles. It will be appreciated by those skilled in the art that the electronic device displays the three-dimensional virtual scene to the user in the form of a two-dimensional image, and for convenience of description, in the embodiment of the present application, the displayed image of the three-dimensional virtual scene at a certain viewing angle is referred to as a target image, for example, as shown in fig. 5, which is an example of the target image.
The generation process of the target image is described below with reference to the following third embodiment:
Third embodiment,
Referring to fig. 6, a flowchart of an embodiment of a process for generating a target image according to an exemplary embodiment of the present application includes the following steps:
step 601: and acquiring camera parameters of the virtual camera in the set three-dimensional virtual scene model.
As one example, the camera parameters described above may include: pitch, yaw, and scale (the distance between the camera and the object); for example, fig. 7 shows one example of camera parameters of a virtual camera.
Step 602: and determining a conversion matrix for converting the world coordinate system and the image coordinate system according to the camera parameters, wherein the world coordinate system is a coordinate system corresponding to the three-dimensional virtual scene.
In the embodiment of the present application, the conversion matrix View for converting between the world coordinate system and the image coordinate system may be determined according to the camera parameters, for example, the View may be calculated by the following formula (one):
the eye in the above formula (one) can be calculated by the following formula (two); forward can be calculated by the following formula (three); right can be calculated by the following formula (four); head can be calculated by the following formula (five).
head = forward × right    formula (five)
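The bodies of formulas (one) through (five) appear as images in the published patent. The construction they describe, an eye point derived from pitch, yaw, and scale plus an orthonormal forward/right/head basis assembled into View, can be sketched as follows; the spherical convention, the world-up vector, and the sign choices are assumptions, not the patent's exact formulas:

```python
import math

def view_matrix(pitch, yaw, scale, target=(0.0, 0.0, 0.0)):
    """Assumed reconstruction of formulas (one) to (five): place the eye on
    a sphere of radius `scale` (the camera-object distance) around `target`
    using pitch and yaw, derive the (forward, right, head) basis, and
    assemble the basis rows plus a translation into a 4x4 View matrix."""
    # eye (formula two): spherical coordinates around the target point.
    eye = (target[0] + scale * math.cos(pitch) * math.sin(yaw),
           target[1] + scale * math.sin(pitch),
           target[2] + scale * math.cos(pitch) * math.cos(yaw))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(a):
        n = math.sqrt(sum(x * x for x in a))
        return tuple(x / n for x in a)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    up = (0.0, 1.0, 0.0)                                            # assumed world up
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))  # formula (three)
    right = normalize(cross(up, forward))                           # formula (four), convention assumed
    head = cross(forward, right)                                    # formula (five): head = forward x right

    # View: rotate world axes onto (right, head, forward), then translate by -eye.
    return [list(right) + [-dot(right, eye)],
            list(head) + [-dot(head, eye)],
            list(forward) + [-dot(forward, eye)],
            [0.0, 0.0, 0.0, 1.0]]
```

With pitch = yaw = 0 and scale = 5, the eye sits at (0, 0, 5) looking back along the Z axis toward the origin.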
Step 603: and converting coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by using a conversion matrix.
In the embodiment of the present application, the coordinate position information, in the world coordinate system, of each object in the three-dimensional virtual scene may be converted into target position information in the image coordinate system by using the conversion matrix illustrated in the above formula (one), for example, by the following formula (six):
in the above formula (six), w represents the horizontal axis coordinate value of the object in the image coordinate system, h represents the vertical axis coordinate value of the object in the image coordinate system, (w, h) is the above target position information, and depth represents the distance from the camera; x represents the x-axis coordinate value of the object in the world coordinate system, y represents the y-axis coordinate value of the object in the world coordinate system, z represents the z-axis coordinate value of the object in the world coordinate system, and (x, y, z) is the coordinate position information.
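Assuming the View matrix maps world coordinates into camera space, the mapping of formula (six) from (x, y, z) to (w, h) plus depth can be sketched as a standard perspective projection; the focal length and the pixel-centering convention below are assumptions, as the exact formula appears as an image in the published patent:

```python
def world_to_image(view, point, width, height, focal=1.0):
    """Project a world-space point onto a width x height image: transform it
    into camera space with the first three rows of `view`, then perform a
    perspective divide. Returns (w, h, depth) as in formula (six)."""
    x, y, z = point
    # Camera-space coordinates: dot each view row with (x, y, z, 1).
    cam = [sum(row[c] * v for c, v in enumerate((x, y, z, 1.0)))
           for row in view[:3]]
    depth = cam[2]                                           # distance from the camera
    w = width / 2 + focal * cam[0] / depth * (width / 2)     # horizontal axis coordinate
    h = height / 2 - focal * cam[1] / depth * (height / 2)   # vertical axis (image y is flipped)
    return w, h, depth
```

With an identity view matrix, a point on the camera axis lands at the image center, as expected.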
Step 604: and mapping each object in the three-dimensional virtual scene to a corresponding position on the two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain a target image.
In the embodiment of the present application, for each object in the three-dimensional virtual scene, the object is mapped to a corresponding position on the two-dimensional image configuration surface according to the target position information of the object in the image coordinate system, so as to obtain the target image illustrated in fig. 5.
As can be seen from the above embodiments, a conversion matrix for converting between a world coordinate system and an image coordinate system is determined according to camera parameters of a virtual camera in a set three-dimensional virtual scene model; converting coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by utilizing a conversion matrix; for each object in the three-dimensional virtual scene, mapping the object to a corresponding position on the two-dimensional image configuration surface according to target position information of the object in the image coordinate system to obtain a target image, so that a user can control the display angle of the three-dimensional virtual scene by setting camera parameters of a virtual camera in the three-dimensional virtual scene model to observe the three-dimensional virtual scene under different view angles, and user experience is improved.
Thus, the description of the third embodiment is completed.
In addition, in some scenarios involving training of the image recognition model, when training the image recognition model based on the target image obtained in the third embodiment, first, position information, type information, and/or quantity information of an item in the target image (it will be understood by those skilled in the art that the item in the target image is not a real item but a virtual item) may be labeled, and then, the image recognition model may be trained according to at least one of the position information, type information, and quantity information of each labeled item.
The process of labeling the position information of the three-dimensional virtual object model corresponding to an article in the target image is described below with reference to the following fourth embodiment:
first, for ease of understanding, the preconditions for realizing the labeling process will be described:
as an example, in the second embodiment, the rendering texture of the three-dimensional virtual object model corresponding to the object may be a solid color texture, that is, a texture having a single color value, and the rendering textures of different objects have different single color values. By this processing, it is possible to realize that different objects are represented in different colors in the three-dimensional virtual scene, and in this rendering manner, the target image obtained by the third embodiment described above can be shown in fig. 8.
Fourth embodiment,
Referring to fig. 9, a flowchart of an embodiment of a process for labeling position information of an object in a target image according to an exemplary embodiment of the present application includes the following steps:
step 901: in the target image, selecting target pixel points which are not included in any set as current pixel points, wherein the target pixel points are used for representing the object.
Step 902: determining whether a target pixel point which has the same color value as the current pixel point and is not included in any pixel point set exists among the pixel points adjacent to the current pixel point; if so, executing step 903; if not, executing step 904;
Step 903: classifying the target pixel point and the current pixel point into the same pixel point set, taking the target pixel point as the current pixel point, and returning to execute step 902;
the following describes the above steps 901 to 903:
first, in the embodiment of the present application, for convenience of description, a pixel point in a target image, which is used to represent a three-dimensional virtual object model corresponding to an object, is referred to as a target pixel point.
As an example, a target pixel point that is not included in any set may be selected as the current pixel point in the target image; for example, the Q point in fig. 10 is selected as the current pixel point. Then, it is determined whether a target pixel point which is adjacent to the current pixel point, has the same color value as the current pixel point, and is not included in any pixel point set exists; if so, the target pixel point and the current pixel point are classified into the same pixel point set, the target pixel point is taken as the current pixel point, and step 902 is executed again, until no target pixel point which has the same color value as the current pixel point and is not classified into any pixel point set exists among the pixel points adjacent to the current pixel point, so that a complete pixel point set is obtained. For example, as shown in fig. 10, all the pixel points in the region represented by the ellipse constitute one pixel point set, and this pixel point set corresponds to the three-dimensional virtual object model of one article.
Step 904: for each pixel point set, labeling, in the target image, the article represented by the target pixel points in the pixel point set with a circumscribed rectangular frame.
In this step, for each pixel point set, the article represented by the target pixel points in the set may be marked in the target image with a circumscribed rectangular frame; for example, fig. 11 shows an example of a target image with labeling information.
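Steps 901 to 904 amount to a flood fill over same-color pixels followed by a circumscribed-rectangle computation. A minimal sketch, assuming 4-neighbour adjacency, where `image` is a 2-D grid of color values and `background` marks non-article pixels (both conventions are illustrative):

```python
from collections import deque

def label_items(image, background):
    """Group adjacent pixels with identical color values into sets (flood
    fill) and return the circumscribed rectangle of each set as
    (left, top, right, bottom), in scan order."""
    rows, cols = len(image), len(image[0])
    visited = [[False] * cols for _ in range(rows)]
    boxes = []
    for r0 in range(rows):
        for c0 in range(cols):
            if visited[r0][c0] or image[r0][c0] == background:
                continue
            color = image[r0][c0]
            # Flood fill: collect all connected target pixels of this color.
            queue = deque([(r0, c0)])
            visited[r0][c0] = True
            left, top, right, bottom = c0, r0, c0, r0
            while queue:
                r, c = queue.popleft()
                left, right = min(left, c), max(right, c)
                top, bottom = min(top, r), max(bottom, r)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not visited[nr][nc]
                            and image[nr][nc] == color):
                        visited[nr][nc] = True
                        queue.append((nr, nc))
            boxes.append((left, top, right, bottom))
    return boxes
```

This relies on the solid-color rendering described above: because each article has a unique single color value, one connected same-color region corresponds to one article.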
According to this embodiment, adjacent pixel points with the same color value are classified into the same pixel point set, and for each pixel point set, the article represented by the target pixel points in the set is labeled in the target image with a circumscribed rectangular frame; in this way, the labeling of the position information of the articles in the target image can be realized, which facilitates the subsequent training of the image recognition model based on the target image.
Thus, the description of the fourth embodiment is completed.
The application also provides an embodiment of a three-dimensional virtual scene generating device corresponding to the embodiment of the three-dimensional virtual scene generating method.
Referring to fig. 12, a block diagram of an embodiment of a three-dimensional virtual scene generating apparatus according to an exemplary embodiment of the present application may include: a first generation module 121, a second generation module 122, and a third generation module 123.
The first generating module 121 may be configured to generate a corresponding three-dimensional virtual environment model for the target area;
the second generating module 122 may be configured to generate a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
the third generating module 123 may be configured to place the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area, so as to form a three-dimensional virtual scene.
In an embodiment, the target object comprises at least a shelf, an item;
the third generation module 123 may include (not shown in fig. 12):
the first placement sub-module is used for placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rules;
and the second placement sub-module is used for placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each goods shelf according to the set article placement rules.
In an embodiment, the second placement sub-module may include (not shown in fig. 12):
the identification sub-module is used for identifying an article placement area in the three-dimensional virtual object model corresponding to each shelf;
The article determining submodule is used for determining a target article to be placed in each article placing area according to each article placing area;
and the object placement sub-module is used for placing a three-dimensional virtual object model corresponding to the target object in the object placement area according to the set object placement rules.
In one embodiment, the article determination submodule is specifically configured to:
and matching the size information of the virtual accommodating space of the article placing area with the size information of the three-dimensional virtual object model corresponding to each article, and determining the target article to be placed in the article placing area according to the matching result.
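One hypothetical way to realize this matching step: keep only the items whose model dimensions fit inside the virtual accommodating space and pick the one leaving the least unused volume. The least-waste criterion and the data layout are illustrative assumptions; the patent only requires that the size information be matched:

```python
def choose_target_item(space_dims, items):
    """space_dims: (length, height, width) of the virtual accommodating
    space. items: mapping of item name to its model's (length, height,
    width). Returns the fitting item that wastes the least volume, or
    None if nothing fits."""
    best, best_waste = None, None
    sl, sh, sw = space_dims
    for name, (l, h, w) in items.items():
        if l <= sl and h <= sh and w <= sw:          # item fits the space
            waste = sl * sh * sw - l * h * w         # unused volume
            if best_waste is None or waste < best_waste:
                best, best_waste = name, waste
    return best
```
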
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
the output module is used for generating at least one article display schematic diagram according to the size information of the virtual accommodating space of the article placement area and the size information of the three-dimensional virtual object model corresponding to the target article, and outputting the at least one article display schematic diagram;
and the rule setting module is used for acquiring the target object display schematic diagram selected by the user and generating object placement rules according to the target object display schematic diagram.
In one embodiment, the item placement sub-module may include (not shown in fig. 12):
The coordinate system construction submodule is used for establishing a corresponding three-dimensional space coordinate system for the article placement area, the origin of coordinates of the three-dimensional space coordinate system corresponds to an end point of one end of the article placement area, the X-axis direction of the three-dimensional space coordinate system corresponds to the horizontal direction of the article placement area, the Y-axis direction corresponds to the vertical direction of the article placement area, and the Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction;
the first processing sub-module is used for placing a three-dimensional virtual object model corresponding to the target object on the object placing area along the X-axis direction, the Y-axis direction and the Z-axis direction of the three-dimensional space coordinate system from the coordinate origin of the three-dimensional space coordinate system until the preset object placing condition is met.
In an embodiment, the first processing sub-module is specifically configured to:
firstly taking the origin of coordinates as a first current point, placing a three-dimensional virtual object model corresponding to the target object at the first current point, shifting the first current point along the X-axis direction by a first distance value S1 after the placement is successful, wherein S1 is the length of the three-dimensional virtual object model corresponding to the target object, and judging whether the distance between the first current point and a first designated point is greater than or equal to S1, wherein the coordinate information of the first designated point is (E1, 0, 0), and E1 is the length of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the first current point, and if not, ending the current flow;
Taking a second designated point as a second current point, wherein the coordinate information of the second designated point is (0, S2, 0), and S2 is the height of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the second current point, shifting the second current point by S2 along the Y-axis direction after the placement is successful, and judging whether the distance between the second current point and a third designated point is greater than or equal to S2, wherein the coordinate information of the third designated point is (0, E2, 0), and E2 is the height of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the second current point, and if not, ending the current flow;
and taking a fourth designated point as a third current point, wherein the coordinate information of the fourth designated point is (0, 0, S3), and S3 is the width of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the third current point, shifting the third current point by S3 along the Z-axis direction after the placement is successful, and judging whether the distance between the third current point and a fifth designated point is greater than or equal to S3, wherein the coordinate information of the fifth designated point is (0, 0, E3), and E3 is the width of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the third current point, and if not, ending the current flow.
In an embodiment, the third generating module 123 may include (not shown in fig. 12):
the third placement sub-module is used for placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the placement rule of each set target object in the target area to form a three-dimensional virtual scene model;
and the first rendering sub-module is used for rendering the three-dimensional virtual scene model to form a three-dimensional virtual scene.
In an embodiment, the first rendering sub-module may include (not shown in fig. 12):
the first texture determining submodule is used for determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each shelf;
the second texture determining submodule is used for determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each article;
the second rendering sub-module is used for rendering the three-dimensional virtual object model by utilizing rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each object in the three-dimensional virtual scene model; and rendering the areas of the three-dimensional virtual object model except the three-dimensional virtual object model corresponding to the object according to the rendering texture corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each goods shelf.
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
the parameter acquisition module is used for acquiring camera parameters of the virtual camera in the set three-dimensional virtual scene model;
the matrix determining module is used for determining a conversion matrix for converting between a world coordinate system and an image coordinate system according to the camera parameters, wherein the world coordinate system is a coordinate system corresponding to the three-dimensional virtual scene;
the information conversion module is used for converting coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by utilizing the conversion matrix;
and the mapping module is used for mapping each object in the three-dimensional virtual scene to a corresponding position on the two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain a target image.
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
the labeling module is used for labeling the position information, the type information and/or the quantity information of the three-dimensional virtual object model corresponding to the object in the target image;
the model training module is used for training an image recognition model according to at least one item of position information, type information and quantity information of each marked article.
In one embodiment, the labeling module may include (not shown in FIG. 12):
a selecting sub-module, configured to select, in the target image, a target pixel point that is not included in any set as a current pixel point, where the target pixel point is used to represent an article;
the judging submodule is used for determining whether target pixel points which have the same color value as the current pixel point and are not included in any pixel point set exist in all pixel points adjacent to the current pixel point;
the collection classification sub-module is used for classifying the target pixel point and the current pixel point into the same pixel point set if such a target pixel point exists, taking the target pixel point as the current pixel point, and returning to the step of determining whether a target pixel point which has the same color value as the current pixel point and is not included in any pixel point set exists among the pixel points adjacent to the current pixel point;
the second processing sub-module is used for labeling, in the target image, the article represented by the target pixel points in each pixel point set with a circumscribed rectangular frame.
Referring to fig. 13, the present application further provides an electronic device including a processor 1301, a communication interface 1302, a memory 1303, and a communication bus 1304.
Wherein the processor 1301, the communication interface 1302, and the memory 1303 communicate with each other through a communication bus 1304;
a memory 1303 for storing a computer program;
the processor 1301 is configured to execute a computer program stored in the memory 1303, where the processor 1301 implements the steps of the method for generating a three-dimensional virtual scene provided by the embodiment of the present application when executing the computer program.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for generating a three-dimensional virtual scene provided by the embodiment of the present application.
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the device embodiments, since they essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant points. The device embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art will understand and implement the present application without undue burden.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather to enable any modification, equivalent replacement, improvement or the like to be made within the spirit and principles of the application.

Claims (9)

1. A method for generating a three-dimensional virtual scene, the method comprising:
generating a corresponding three-dimensional virtual environment model for the target area;
generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area; the target object at least comprises a goods shelf and an article;
based on the set placement rule of each target object in the target area, placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene, wherein the three-dimensional virtual scene comprises:
placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rules; identifying an article placement area in the three-dimensional virtual object model corresponding to each shelf; determining a target object to be placed in each object placement area according to each object placement area; establishing a corresponding three-dimensional space coordinate system for the article placing area, wherein the origin of coordinates of the three-dimensional space coordinate system corresponds to an end point of one end of the article placing area, the X-axis direction of the three-dimensional space coordinate system corresponds to the horizontal direction of the article placing area, the Y-axis direction corresponds to the vertical direction of the article placing area, and the Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction;
Firstly taking the origin of coordinates as a first current point, placing a three-dimensional virtual object model corresponding to the target object at the first current point, shifting the first current point along the X-axis direction by a first distance value S1 after the placement is successful, wherein S1 is the length of the three-dimensional virtual object model corresponding to the target object, and judging whether the distance between the first current point and a first designated point is greater than or equal to S1, wherein the coordinate information of the first designated point is (E1, 0, 0), and E1 is the length of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the first current point, and if not, ending the current flow;
taking a second designated point as a second current point, wherein the coordinate information of the second designated point is (0, S2, 0), and S2 is the height of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the second current point, shifting the second current point by S2 along the Y-axis direction after the placement is successful, and judging whether the distance between the second current point and a third designated point is greater than or equal to S2, wherein the coordinate information of the third designated point is (0, E2, 0), and E2 is the height of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the second current point, and if not, ending the current flow;
And taking a fourth designated point as a third current point, wherein the coordinate information of the fourth designated point is (0, 0, S3), and S3 is the width of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the third current point, shifting the third current point by S3 along the Z-axis direction after the placement is successful, and judging whether the distance between the third current point and a fifth designated point is greater than or equal to S3, wherein the coordinate information of the fifth designated point is (0, 0, E3), and E3 is the width of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the third current point, and if not, ending the current flow.
2. The method of claim 1, wherein determining the target item to be placed by the item placement area comprises:
and matching the size information of the virtual accommodating space of the article placing area with the size information of the three-dimensional virtual object model corresponding to each article, and determining the target article to be placed in the article placing area according to the matching result.
3. The method of claim 1, wherein the item placement rules are set by:
Generating at least one article display schematic diagram according to the size information of the virtual accommodating space of the article placement area and the size information of the three-dimensional virtual object model corresponding to the target article, and outputting the at least one article display schematic diagram;
and acquiring a target object display schematic diagram selected by a user, and generating an object placement rule according to the target object display schematic diagram.
4. The method according to claim 1, wherein the placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area, and forming the three-dimensional virtual scene comprises:
based on the set placement rule of each target object in the target area, placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene model;
rendering the three-dimensional virtual scene model to form a three-dimensional virtual scene.
5. The method of claim 4, wherein the rendering the three-dimensional virtual scene model comprises:
for the three-dimensional virtual object model corresponding to each shelf, determining a rendering texture corresponding to that three-dimensional virtual object model;
for the three-dimensional virtual object model corresponding to each article, determining a rendering texture corresponding to that three-dimensional virtual object model;
in the three-dimensional virtual scene model, for the three-dimensional virtual object model corresponding to each article, rendering that three-dimensional virtual object model with its corresponding rendering texture; and for the three-dimensional virtual object model corresponding to each shelf, rendering the areas of that three-dimensional virtual object model other than the three-dimensional virtual object models corresponding to the articles with the rendering texture corresponding to the shelf's three-dimensional virtual object model.
6. The method according to claim 1, characterized in that the method further comprises:
acquiring camera parameters of a virtual camera in the set three-dimensional virtual scene model;
determining a conversion matrix for converting between a world coordinate system and an image coordinate system according to the camera parameters, wherein the world coordinate system is a coordinate system corresponding to the three-dimensional virtual scene;
converting coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by utilizing the conversion matrix;
and mapping each object in the three-dimensional virtual scene to a corresponding position on a two-dimensional image plane according to the target position information of the object in the image coordinate system, to obtain a target image.
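The world-to-image conversion described in claim 6 is, in standard computer-vision terms, a pinhole projection built from the camera parameters. The sketch below is an illustration under that assumption, not the patent's implementation; the intrinsic matrix `K`, rotation `R`, and translation `t` values are made up for the example.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D world points into 2D image pixel coordinates.

    K -- 3x3 intrinsic matrix; R, t -- world-to-camera rotation and translation.
    Returns an (N, 2) array of pixel coordinates.
    """
    pts = np.asarray(points_world, dtype=float)  # (N, 3) world coordinates
    cam = pts @ R.T + t                          # world -> camera frame
    uvw = cam @ K.T                              # camera -> homogeneous image
    return uvw[:, :2] / uvw[:, 2:3]              # perspective divide

# Illustrative camera parameters (assumed, not from the patent):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                  # camera axes aligned with the world axes
t = np.array([0.0, 0.0, 5.0])  # world origin 5 units in front of the camera

uv = project_points([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], K, R, t)
# the world origin lands on the principal point (320, 240)
```

Applying this projection to every object's world coordinates yields the target position information in the image coordinate system that the mapping step consumes.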
7. The method of claim 6, further comprising, after the mapping of each object to a corresponding position on the two-dimensional image plane according to the target position information of the object in the image coordinate system to obtain the target image:
labeling the position information, the type information and/or the quantity information of the objects in the target image;
and training an image recognition model according to at least one item of position information, type information and quantity information of each marked article.
8. The method of claim 7, wherein labeling the location information of the item in the target image comprises:
selecting a target pixel point which does not belong to any set as a current pixel point in the target image, wherein the target pixel point is used for representing an article;
determining whether target pixel points which have the same color value as the current pixel point and are not included in any pixel point set exist in all pixel points adjacent to the current pixel point;
if the pixel point exists, the target pixel point and the current pixel point are classified into the same pixel point set; the target pixel point is used as a current pixel point, and a step of determining whether target pixel points which have the same color value as the current pixel point and are not included in any pixel point set exist in all pixel points adjacent to the current pixel point is returned; if not, ending the current flow;
and for each pixel point set, labeling, in the target image, a circumscribed rectangular frame of the article corresponding to the target pixel points in that pixel point set.
9. A three-dimensional virtual scene generation apparatus, the apparatus comprising:
the first generation module is used for generating a corresponding three-dimensional virtual environment model for the target area;
the second generation module is used for generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area; the target object at least comprises a goods shelf and an article;
the third generating module is configured to place, in the three-dimensional virtual environment model, a three-dimensional virtual object model corresponding to each target object based on a placement rule of each target object in the target area, to form a three-dimensional virtual scene, and includes:
placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rules; identifying an article placement area in the three-dimensional virtual object model corresponding to each shelf; determining a target object to be placed in each object placement area according to each object placement area; establishing a corresponding three-dimensional space coordinate system for the article placing area, wherein the origin of coordinates of the three-dimensional space coordinate system corresponds to an end point of one end of the article placing area, the X-axis direction of the three-dimensional space coordinate system corresponds to the horizontal direction of the article placing area, the Y-axis direction corresponds to the vertical direction of the article placing area, and the Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction;
Firstly, taking the origin of coordinates as a first current point, placing a three-dimensional virtual object model corresponding to the target object at the first current point, and shifting the first current point along the X-axis direction by a first distance value S1 after the placement succeeds, wherein S1 is the length of the three-dimensional virtual object model corresponding to the target object; judging whether the distance between the first current point and a first designated point is greater than or equal to S1, wherein the coordinate information of the first designated point is (E1, 0, 0) and E1 is the length of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the first current point, and if not, ending the current flow;
taking a second designated point as a second current point, wherein the coordinate information of the second designated point is (0, S2, 0) and S2 is the height of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the second current point, and shifting the second current point along the Y-axis direction by S2 after the placement succeeds; judging whether the distance between the second current point and a third designated point is greater than or equal to S2, wherein the coordinate information of the third designated point is (0, E2, 0) and E2 is the height of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the second current point, and if not, ending the current flow;
and taking a fourth designated point as a third current point, wherein the coordinate information of the fourth designated point is (0, 0, S3) and S3 is the width of the three-dimensional virtual object model corresponding to the target object; placing a three-dimensional virtual object model corresponding to the target object at the third current point, and shifting the third current point along the Z-axis direction by S3 after the placement succeeds; judging whether the distance between the third current point and a fifth designated point is greater than or equal to S3, wherein the coordinate information of the fifth designated point is (0, 0, E3) and E3 is the width of the virtual accommodating space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the third current point, and if not, ending the current flow.
CN201910747186.XA 2019-08-14 2019-08-14 Three-dimensional virtual scene generation method and device Active CN112396688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910747186.XA CN112396688B (en) 2019-08-14 2019-08-14 Three-dimensional virtual scene generation method and device

Publications (2)

Publication Number Publication Date
CN112396688A CN112396688A (en) 2021-02-23
CN112396688B true CN112396688B (en) 2023-09-26

Family

ID=74602733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910747186.XA Active CN112396688B (en) 2019-08-14 2019-08-14 Three-dimensional virtual scene generation method and device

Country Status (1)

Country Link
CN (1) CN112396688B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114065334A (en) * 2020-08-04 2022-02-18 广东博智林机器人有限公司 Method and device for determining measurement position of virtual guiding rule and storage medium
CN113763113B (en) * 2021-03-04 2024-07-16 北京沃东天骏信息技术有限公司 Article display method and device
CN113760389A (en) * 2021-04-19 2021-12-07 北京沃东天骏信息技术有限公司 Shelf display method, equipment, storage medium and program product based on three dimensions
CN113066183A (en) * 2021-04-28 2021-07-02 腾讯科技(深圳)有限公司 Virtual scene generation method and device, computer equipment and storage medium
CN115048001A (en) * 2022-06-16 2022-09-13 亮风台(云南)人工智能有限公司 Virtual object display method and device, electronic equipment and storage medium
CN117495666B (en) * 2023-12-29 2024-03-19 山东街景智能制造科技股份有限公司 Processing method for generating 2D data based on 3D drawing
CN117876642B (en) * 2024-03-08 2024-06-11 杭州海康威视系统技术有限公司 Digital model construction method, computer program product and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609934A (en) * 2011-12-22 2012-07-25 中国科学院自动化研究所 Multi-target segmenting and tracking method based on depth image
CN103425825A (en) * 2013-08-02 2013-12-04 苏州两江科技有限公司 3D supermarket displaying method based on CAD graphic design drawing
CN106991548A (en) * 2016-01-21 2017-07-28 阿里巴巴集团控股有限公司 Warehouse storage location planning method and device, and electronic device
CN107393017A (en) * 2017-08-11 2017-11-24 北京铂石空间科技有限公司 Image processing method, device, electronic equipment and storage medium
CN108427820A (en) * 2017-08-12 2018-08-21 中民筑友科技投资有限公司 Shelf simulation management method and system based on BIM
CN108804061A (en) * 2017-05-05 2018-11-13 上海盟云移软网络科技股份有限公司 The virtual scene display method of virtual reality system
CN109685905A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 Cell planning method and system based on augmented reality


Also Published As

Publication number Publication date
CN112396688A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112396688B (en) Three-dimensional virtual scene generation method and device
CN108062784B (en) Three-dimensional model texture mapping conversion method and device
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
RU2215326C2 (en) Image-based hierarchic presentation of motionless and animated three-dimensional object, method and device for using this presentation to visualize the object
CN104322060B (en) System, method and apparatus that low latency for depth map is deformed
CN111563923A (en) Method for obtaining dense depth map and related device
WO2017154705A1 (en) Imaging device, image processing device, image processing program, data structure, and imaging system
CN109795830A (en) It is automatically positioned the method and device of logistics tray
JP2009080578A (en) Multiview-data generating apparatus, method, and program
US20140368542A1 (en) Image processing apparatus, image processing method, program, print medium, and print-media set
CN110210328A (en) The method, apparatus and electronic equipment of object are marked in image sequence
US11209277B2 (en) Systems and methods for electronic mapping and localization within a facility
CN109711472B (en) Training data generation method and device
CN109559349A (en) A kind of method and apparatus for calibration
CN111881892B (en) Ordered point cloud 5D texture grid data structure generation method, device, equipment and medium
CN111161388B (en) Method, system, device and storage medium for generating retail commodity shelf images
US11360068B2 (en) Monitoring plants
JP4649559B2 (en) 3D object recognition apparatus, 3D object recognition program, and computer-readable recording medium on which the same is recorded
CN115641322A (en) Robot grabbing method and system based on 6D pose estimation
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
JP2019211981A (en) Information processor, information processor controlling method and program
US20120313942A1 (en) System and method for digital volume processing with gpu accelerations
CN113126944B (en) Depth map display method, display device, electronic device, and storage medium
KR102662058B1 (en) An apparatus and method for generating 3 dimension spatial modeling data using a plurality of 2 dimension images acquired at different locations, and a program therefor
CN110211243B (en) AR equipment and entity labeling method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant