WO2023151271A1 - Model presentation method and apparatus, electronic device, and storage medium - Google Patents

Model presentation method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023151271A1
WO2023151271A1 · PCT/CN2022/118201 · CN2022118201W
Authority
WO
WIPO (PCT)
Prior art keywords: information, image, model, projection, area
Application number
PCT/CN2022/118201
Other languages
English (en)
Chinese (zh)
Inventor
范涛
周玉杰
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023151271A1

Classifications

    • G06T3/06

Definitions

  • the present disclosure relates to the technical field of three-dimensional models, and in particular, to a model display method, device, electronic equipment, and storage medium.
  • each 3D model can be roamed.
  • two virtual cameras can be set.
  • the positions of the two virtual cameras are the same on the horizontal plane, but differ in the direction perpendicular to the horizontal plane.
  • the two virtual cameras move synchronously; during the movement, the first virtual camera collects a perspective view of the 3D model and the second virtual camera collects a top view of the 3D model, thereby completing the roaming of each 3D model.
  • however, the method of using two virtual cameras to collect images of the 3D model suffers from a multi-camera synchronization problem, and the operation process is cumbersome and inefficient.
  • the present disclosure at least provides a model display method, device, electronic equipment and storage medium.
  • the present disclosure provides a model presentation method, including:
  • wherein the overlay information includes: the overlay position and/or display size on the image overlay area;
  • the thumbnail corresponding to each three-dimensional model is superimposed on the image superposition area of the projected image to generate a target display image.
  • the projection image is collected by using the projection camera, and the thumbnail corresponding to each 3D model in the projection image is obtained; the thumbnail can represent the shape of the 3D model;
  • according to the superimposition information corresponding to the thumbnails, the thumbnail corresponding to each 3D model is superimposed on the image superposition area of the projected image to generate a target display image.
  • in this way, the target display image includes not only the projection image of the 3D model but also the thumbnail representing the shape of the 3D model, which realizes the roaming of each 3D model with a single virtual camera, alleviates the problem of multi-camera synchronization, and improves the efficiency of model roaming.
  • in addition, using one frame of target display image to display both the projected image and the thumbnails alleviates the resource waste caused by generating multiple frames of images during the roaming of the 3D model.
  • generating a thumbnail corresponding to each 3D model includes:
  • a thumbnail corresponding to the three-dimensional model is generated.
  • the plan view of the 3D model in the target acquisition direction can more accurately represent the shape of the 3D model, so the thumbnail generated based on this plan view can represent the 3D model more accurately.
  • in addition, the thumbnail does not need to be drawn manually, so its generation process is relatively simple and efficient.
  • if the superimposition information includes a superimposition position, determining the overlay information of the thumbnail corresponding to the 3D model includes: determining the superimposition position of the thumbnail corresponding to the 3D model.
  • in this way, the superimposed position of the thumbnail corresponding to each three-dimensional model is determined such that the relative positional relationship of the multiple three-dimensional models on the projected image matches the relative positional relationship of their corresponding thumbnails in the image superposition area, thereby improving the accuracy of the determined superimposition positions.
  • determining the position information of the three-dimensional model on the projection image includes:
  • the display position information is converted into the view coordinate system corresponding to the projection image, so as to obtain the position information of the three-dimensional model on the projection image.
  • the position information of the 3D model on the projected image can be flexibly determined through the above two methods, so as to provide data support for subsequent determination of the superimposition position of the thumbnail corresponding to the 3D model on the image superimposition area.
  • determining the overlay position of the thumbnail corresponding to the 3D model includes:
  • based on the abscissa information in the position information of the 3D model on the projected image and the width ratio in the size ratio information, the lateral distance information of the thumbnail corresponding to the 3D model can be determined simply and efficiently;
  • based on the ordinate information in that position information and the height ratio in the size ratio information, the longitudinal distance information of the thumbnail corresponding to the 3D model can likewise be determined simply and efficiently; in this way, the position information of each 3D model in the projection image is used to determine the superposition position of each corresponding thumbnail more accurately.
  • if the overlay information includes a display size, determining, based on the projection information of the 3D model in the projection image and the area information, the overlay information of the thumbnail corresponding to the 3D model includes: determining the display size of the thumbnail corresponding to the 3D model.
  • in this way, the display size of the thumbnail corresponding to the 3D model is determined such that the ratio between the thumbnail's display size and the image size of the three-dimensional model matches the size ratio between the image superposition area and the projected image, thereby improving the display effect of the thumbnail.
  • determining, based on the size ratio information and the projection area information of the 3D model in the projection image indicated by the projection information, the display size of the thumbnail corresponding to the 3D model includes:
  • determining the display width and the display height in the display size.
  • in this way, the display size of the thumbnail corresponding to the 3D model can be determined more accurately, so that the size of the thumbnail relative to the size of the 3D model matches the size of the image superposition area relative to the size of the projected image; the thumbnail corresponding to the 3D model can then be overlaid in the image overlay area more accurately based on this display size.
  • the present disclosure further provides a model display device, including:
  • An acquisition module configured to acquire a projection image captured by the projection camera and a thumbnail corresponding to each 3D model included in the projection image
  • a first determination module configured to determine the area information of the image superposition area set in the projected image
  • a second determination module configured to determine the overlay information of the thumbnail corresponding to the three-dimensional model based on the projection information of the three-dimensional model in the projected image and the area information; wherein the overlay information includes: the overlay position and/or display size on the image overlay area;
  • the first generating module is configured to superimpose the thumbnail corresponding to each 3D model on the image superposition area of the projected image according to the superimposition information of the thumbnail to generate a target display image.
  • the present disclosure provides an electronic device, including: a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate with each other through the bus, and when the machine-readable instructions are executed by the processor, the steps of the model presentation method as described in the first aspect or any implementation manner thereof are executed.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the model presentation method as described in the first aspect or any implementation manner thereof are executed.
  • FIG. 1 shows a schematic flowchart of a model presentation method provided by an embodiment of the present disclosure;
  • FIG. 2 shows a schematic diagram of multiple acquisition directions corresponding to a 3D model in a model presentation method provided by an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of a target display image provided by an embodiment of the present disclosure;
  • FIG. 4 shows a schematic structural diagram of a model display device provided by an embodiment of the present disclosure;
  • FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • each 3D model can be roamed.
  • two virtual cameras can be set.
  • the positions of the two virtual cameras are the same on the horizontal plane, but differ in the direction perpendicular to the horizontal plane.
  • the two virtual cameras move synchronously; during the movement, the first virtual camera collects a perspective view of the 3D model and the second virtual camera collects a top view of the 3D model, thereby completing the roaming of each 3D model.
  • however, the method of using two virtual cameras to collect images of the 3D model suffers from a multi-camera synchronization problem, and the operation process is cumbersome and inefficient.
  • in addition, using the first virtual camera to collect the perspective view of the 3D model and using the second virtual camera to collect the top view of the 3D model generates multiple frames of images with the same resolution, resulting in a waste of resources.
  • embodiments of the present disclosure provide a model display method, device, electronic device, and storage medium.
  • the execution subject of the model presentation method provided by the embodiments of the present disclosure is generally a computer device with certain computing capabilities, and the computer device includes, for example: a terminal device or a server; the server may be, for example, a local server, a cloud server, and the like.
  • the terminal device may be, for example, a mobile phone, a tablet, augmented reality (AR) glasses, a personal digital assistant (PDA), or another such device.
  • the model presentation method can be implemented by a processor calling computer-readable instructions stored in a memory.
  • FIG. 1 is a schematic flowchart of a model display method provided by an embodiment of the present disclosure.
  • the method includes steps S101 to S104, specifically:
  • the projection image is collected by using the projection camera, and the thumbnail corresponding to each 3D model in the projection image is obtained; the thumbnail can represent the shape of the 3D model;
  • according to the superimposition information corresponding to the thumbnails, the thumbnail corresponding to each 3D model is superimposed on the image superposition area of the projected image to generate a target display image.
  • in this way, the target display image includes not only the projection image of the 3D model but also the thumbnail representing the shape of the 3D model, which realizes the roaming of each 3D model with a single virtual camera, alleviates the problem of multi-camera synchronization, and improves the efficiency of model roaming.
  • in addition, using one frame of target display image to display both the projected image and the thumbnails alleviates the resource waste caused by generating multiple frames of images during the roaming of the 3D model.
  • the projection camera can be a virtual projection camera in the model rendering engine; alternatively, a camera generation module can be called to generate a virtual projection camera, and the generated virtual projection camera is used as the projection camera.
  • the projection camera may be a perspective projection camera or an orthographic projection camera.
  • if the projection camera is a perspective projection camera, the projection image is a perspective projection image; if it is an orthographic projection camera, the projection image is an orthographic projection image.
  • the projection camera corresponds to pose data
  • the pose data includes camera position and camera pose
  • the camera position can be the coordinate information of the projection camera in the world coordinate system corresponding to the virtual scene
  • the camera pose can be the Euler angle information of the projection camera.
  • the projection camera may be used to collect a projection image of at least one three-dimensional model included in the virtual scene.
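  • As a minimal sketch of the data described above (field names here are illustrative, not prescribed by the method), the projection camera's pose data can be held in a small structure:

    from dataclasses import dataclass

    @dataclass
    class ProjectionCameraPose:
        # camera position: coordinates of the projection camera in the world
        # coordinate system corresponding to the virtual scene
        position: tuple          # (x, y, z)
        # camera pose: Euler angle information of the projection camera
        euler_angles: tuple      # (pitch, yaw, roll)
        # whether the camera projects perspectively or orthographically
        perspective: bool = True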
  • a thumbnail corresponding to each three-dimensional model in the projected image is obtained.
  • the thumbnail corresponding to each three-dimensional model may be pre-generated before the projection image is collected, or may be generated in real time after the projection image is collected.
  • the thumbnail image corresponding to the 3D model may be an image capable of representing the shape of the 3D model.
  • the thumbnail of the 3D model may be a front view, a top view, etc. of the 3D model.
  • generating a thumbnail corresponding to each 3D model may include: determining a plan view of the 3D model in a target collection direction; and generating a thumbnail corresponding to the 3D model based on the plan view.
  • the target collection direction corresponding to the 3D model can be set as required, wherein different 3D models can correspond to one target collection direction, or different 3D models can also correspond to different target collection directions.
  • the target acquisition direction can be any one of the top view direction, the bottom view direction, the front view direction, the back view direction, the left view direction, and the right view direction. See Figure 2 for a schematic diagram of multiple acquisition directions.
  • the target acquisition direction corresponding to a three-dimensional model can be determined according to the structural characteristics of that model. For example, if the form displayed in the front view direction of 3D model 1 has iconic characteristics, the target acquisition direction corresponding to 3D model 1 can be the front view direction; for another 3D model, the target acquisition direction can be the top view (overlooking) direction, and so on.
  • the plan view of the 3D model in the target collection direction can then be determined.
  • for example, a newly generated virtual camera can be used to collect the plan view of the 3D model in the target collection direction.
  • alternatively, the projection camera may also be used to collect the plan view of the 3D model in the target collection direction.
  • the plan view may be used directly as the thumbnail corresponding to the 3D model.
  • alternatively, pixel information such as the size and brightness of the plan view may be adjusted, and the adjusted plan view used as the thumbnail corresponding to the 3D model.
  • the plan view of the 3D model in the target acquisition direction can more accurately represent the shape of the 3D model, so the thumbnail generated based on this plan view can represent the 3D model more accurately.
  • in addition, the thumbnail does not need to be drawn manually, so its generation process is relatively simple and efficient.
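  • A minimal sketch of this thumbnail-generation step is shown below; render_plan_view is a hypothetical hook standing in for whichever virtual camera (newly created or the projection camera) renders the plan view, and the size/brightness adjustment is only one possible post-processing choice:

    from PIL import ImageEnhance

    def generate_thumbnail(model, render_plan_view, target_direction="top",
                           thumb_size=(64, 64), brightness=1.0):
        # plan view of the 3D model in the target collection direction
        plan_view = render_plan_view(model, target_direction)
        # adjust pixel information such as size (and optionally brightness)
        thumb = plan_view.resize(thumb_size)
        if brightness != 1.0:
            thumb = ImageEnhance.Brightness(thumb).enhance(brightness)
        return thumb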
  • an image overlay area can be set in the projected image, so that the adjusted thumbnail can be overlaid on the image overlay area later.
  • the size of the image superposition area may be determined according to the size of the projected image; for example, the size of the image superposition area may be one-tenth, one-twentieth, etc. of the projected image. That is, if the size of the projected image is 510 ⁇ 510, the size of the image superposition area may be 51 ⁇ 51.
  • the proportional relationship between the width and the height of the projected image may also be determined according to the size information of the projected image; based on the proportional relationship, the size of the image superposition area is determined.
  • for example, if the size of the projected image is 1024×1024, the size of the image overlay area can be 128×128, 64×64, etc.; if the size of the projected image is 1024×512, the size of the image overlay area can be 128×64, 64×32, etc.
  • the position of the image superposition area can be determined according to the position of the blank area included in the projected image. For example, if the blank area at the upper left of the projected image is relatively large, an image superposition area of a specific size may be set at the upper left of the projected image. Alternatively, one of the upper left position, lower left position, upper right position, and lower right position of the projected image may be arbitrarily selected as the position of the image superposition area.
  • area information of the image superimposed area in the projected image can be determined.
  • the area information may be position information, size information, etc. of the image superposition area in the projected image.
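  • A minimal sketch of deriving the image overlay area from the projected image, under the assumption that the area keeps the projected image's width-to-height proportion and is anchored at one corner (the fraction and the corner choice are illustrative, not fixed by the method):

    def compute_overlay_area(image_width, image_height, fraction=1/16, corner="bottom_left"):
        # area size keeps the projected image's proportions, e.g. 1024x512 -> 64x32
        area_w, area_h = int(image_width * fraction), int(image_height * fraction)
        corners = {
            "top_left": (0, 0),
            "top_right": (image_width - area_w, 0),
            "bottom_left": (0, image_height - area_h),
            "bottom_right": (image_width - area_w, image_height - area_h),
        }
        # area information: position (top-left origin) and size inside the projected image
        return {"origin": corners[corner], "size": (area_w, area_h)}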
  • after the thumbnail corresponding to each 3D model is generated, the thumbnail should be superimposed on the image superposition area of the projected image.
  • when superimposing the thumbnail on the image superimposition area, it is necessary to determine the display size and/or superimposing position of the thumbnail on that area.
  • the projection information may include: the image size of the 3D model in the projected image, the position information of the 3D model on the projected image, that is, the coordinate information in the view coordinate system corresponding to the projected image.
  • the area information may include an area size of the image overlay area.
  • the image proportion occupied by the three-dimensional model in the projected image can be determined according to the image size of the three-dimensional model in the projected image and the size information of the projected image;
  • then, according to this image proportion and the area size of the overlay area, the display size of the thumbnail corresponding to the 3D model is determined, so that the image proportion occupied by the 3D model in the projected image matches the area proportion occupied by the thumbnail corresponding to the 3D model in the image overlay area.
  • similarly, the superimposed position of the thumbnail corresponding to the 3D model in the image superposition area can be determined, so that the position of the 3D model in the projected image and the position of the corresponding thumbnail in the image overlay area match.
  • if the superimposition information includes a superimposition position, determining, based on the projection information of the 3D model in the projection image and the area information, the superimposition information of the thumbnail corresponding to the 3D model may include:
  • Step A1: based on the area size indicated by the area information and the image size corresponding to the projection image, determine the size ratio information between the image superposition area and the projection image;
  • Step A2: based on the size ratio information and the position information of the 3D model on the projection image indicated by the projection information corresponding to the 3D model, determine the superimposition position of the thumbnail corresponding to the 3D model.
  • In step A1, based on the area size indicated by the area information of the image superposition area and the image size corresponding to the projected image, the size ratio information between the image superimposed area and the projected image is determined.
  • the size ratio information may include a width ratio and a height ratio. For example, a first ratio between the width value in the area size of the image superposition area and the width value in the image size of the projected image is determined as the width ratio; and a second ratio between the height value in the area size of the image overlay area and the height value in the image size of the projected image is determined as the height ratio.
  • In step A2, the position information of the 3D model on the projected image can be determined first; based on the size ratio information and this position information, the superimposed position of the thumbnail corresponding to the 3D model is then determined.
  • for example, the position information can be multiplied by the size ratio information, and the resulting product used to determine the superposition position of the thumbnail corresponding to the 3D model, so that the first ratio (the distance between the 3D model's position and the lower boundary of the projected image, over the height of the projected image) matches the second ratio (the distance between the thumbnail's superimposed position and the lower boundary of the image superimposed area, over the height of the image superimposed area), and the third ratio (the distance between the 3D model's position and the left boundary of the projected image, over the width of the projected image) matches the fourth ratio (the distance between the thumbnail's superimposed position and the left border of the image superimposed area, over the width of the image superimposed area).
  • suppose the width value of the area size of the image superposition area is a1, the height value is b1, the width value of the image size corresponding to the projected image is a2, and the height value is b2.
  • in this way, the superimposed position of the thumbnail corresponding to each three-dimensional model is determined such that the relative positional relationship of the multiple three-dimensional models on the projected image matches the relative positional relationship of their corresponding thumbnails in the image superposition area, thereby improving the accuracy of the determined superimposition positions.
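  • Steps A1 and A2 reduce to a ratio computation followed by a scale-and-offset; the sketch below assumes the overlay position is expressed in the projected image's view coordinate system, with the overlay area's top-left origin supplied as the offset:

    def size_ratio(area_size, image_size):
        # step A1: width ratio and height ratio between overlay area and projected image
        (area_w, area_h), (img_w, img_h) = area_size, image_size
        return area_w / img_w, area_h / img_h

    def overlay_position(model_pos, ratios, area_origin):
        # step A2: scale the model's position on the projected image by the ratios
        # (lateral / longitudinal distance) and offset by the overlay area's origin
        (x, y), (rw, rh), (ox, oy) = model_pos, ratios, area_origin
        return ox + x * rw, oy + y * rh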
  • the position information of the three-dimensional model on the projection image may be determined according to the following two methods:
  • Mode 1: based on the projection area information of the 3D model in the projection image, determine the center point of the projection area corresponding to the 3D model, and determine the position information of the center point as the position information of the 3D model on the projected image.
  • Mode 2: obtain the display position information of the 3D model in the set world coordinate system; use the determined coordinate system conversion matrix to convert the display position information into the view coordinate system corresponding to the projected image, and obtain the position information of the 3D model on the projected image.
  • in Mode 1, the projection area information of the 3D model in the projected image is determined first.
  • the projection area can be the outline area of the 3D model on the projected image, or the detection frame corresponding to that outline area; that is, the projection area information may include the position information of the outline area on the projection image, or the position information of the detection frame corresponding to the outline area on the projection image, etc.
  • the center point of the projection area corresponding to the three-dimensional model may be determined, and the position information of the center point may be determined as the position information of the three-dimensional model on the projection image.
  • the display pose of the 3D model in a real scene may be determined, and the display pose includes display position information and pose information. Therefore, the display position information of the 3D model in the set world coordinate system can be obtained.
  • the world coordinate system may be a coordinate system constructed with the target scene position in the real scene as the origin.
  • specifically, the determined first coordinate system conversion matrix can be used to convert the display position information of the 3D model in the world coordinate system into the camera coordinate system corresponding to the projection camera, obtaining intermediate position information; the second coordinate system conversion matrix is then used to transform this intermediate position information into the view coordinate system corresponding to the projection image, obtaining the position information of the 3D model on the projection image.
  • the first coordinate system conversion matrix may be determined according to information such as the camera extrinsic parameters and the position of the projection camera.
  • the second coordinate system conversion matrix may be determined according to information such as the camera intrinsic parameters and the position of the projection camera.
  • the position information of the 3D model on the projected image can be flexibly determined through the above two methods, so as to provide data support for subsequent determination of the superimposition position of the thumbnail corresponding to the 3D model on the image superimposition area.
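  • For Mode 2, a rough sketch of the two-stage conversion is given below; the 4×4 extrinsic matrix and the 3×3 intrinsic-style matrix stand in for the first and second coordinate system conversion matrices, whose construction from the camera parameters is engine-specific:

    import numpy as np

    def world_to_view(p_world, world_to_camera, intrinsics):
        # world coordinate system -> camera coordinate system (intermediate position)
        p = np.append(np.asarray(p_world, dtype=float), 1.0)
        p_cam = (world_to_camera @ p)[:3]
        # camera coordinate system -> view coordinate system of the projected image
        uvw = intrinsics @ p_cam
        return uvw[0] / uvw[2], uvw[1] / uvw[2]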
  • determining, based on the size ratio information and the position information of the 3D model on the projection image indicated by the projection information corresponding to the 3D model, the superposition position of the thumbnail corresponding to the 3D model may include:
  • the abscissa position in the superimposed position can be determined based on the lateral distance information and the area information;
  • similarly, the ordinate position in the superimposed position can be determined based on the longitudinal distance information and the area information.
  • the abscissa position and the ordinate position may be coordinate information in a view coordinate system corresponding to the projected image. Alternatively, it may also be coordinate information in the view coordinate system corresponding to the image superposition area.
  • for example, suppose the position information of the three-dimensional model D in the view coordinate system corresponding to the projection image is (768, 128), that is, the abscissa information is 768 and the ordinate information is 128; the width ratio in the size ratio information is a1/a2, that is, 1/16; and the height ratio in the size ratio information is b1/b2, that is, 1/16.
  • the width ratio (1/16) is multiplied by the abscissa information (768) of D, giving lateral distance information of 48; the height ratio (1/16) is multiplied by the ordinate information (128) of D, giving longitudinal distance information of 8; then the abscissa position and the ordinate position of the superposition position can be determined according to the determined lateral distance information, longitudinal distance information and area information.
  • the superimposed position of the thumbnail corresponding to the three-dimensional model D may be (48, 488), and the superimposed position is position information in the view coordinate system corresponding to the projected image.
  • based on the abscissa information in the position information of the 3D model on the projected image and the width ratio in the size ratio information, the lateral distance information of the thumbnail corresponding to the 3D model can be determined simply and efficiently;
  • based on the ordinate information in that position information and the height ratio in the size ratio information, the longitudinal distance information of the thumbnail corresponding to the 3D model can likewise be determined simply and efficiently; in this way, the position information of each 3D model in the projection image is used to determine the superposition position of each corresponding thumbnail more accurately.
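  • Plugging the example numbers into the earlier sketch reproduces the stated result; the 1024×512 projected image, the 64×32 overlay area, and the bottom-left area origin of (0, 480) are assumptions chosen so that the width and height ratios are both 1/16:

    ratios = size_ratio((64, 32), (1024, 512))            # -> (1/16, 1/16)
    pos = overlay_position((768, 128), ratios, (0, 480))  # lateral 48, longitudinal 8
    print(pos)                                            # (48.0, 488.0)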
  • if the superimposition information includes a display size, determining, based on the projection information of the 3D model in the projection image and the area information, the superimposition information of the thumbnail corresponding to the 3D model may include:
  • Step B1: based on the area size indicated by the area information and the image size corresponding to the projected image, determine the size ratio information between the image superposition area and the projected image;
  • Step B2: based on the size ratio information and the projection area information of the 3D model in the projection image indicated by the projection information, determine the display size of the thumbnail corresponding to the 3D model.
  • In step B1, the size ratio information between the image superposition region and the projected image can be determined based on the region size indicated by the region information of the image superposition region and the image size corresponding to the projected image; the size ratio information includes a width ratio and a height ratio.
  • In step B2, based on the size ratio information and the projection area information of each 3D model in the projection image indicated by the projection information, the display size of the thumbnail corresponding to each 3D model is determined.
  • the projection area information may include the detection frame corresponding to the contour area of the three-dimensional model on the projection image, and its size information on the projection image.
  • target detection may be performed on the projection image, and projection area information corresponding to each three-dimensional model in the projection image may be determined.
  • the trained neural network can be used to perform object detection on the projected image.
  • the width value indicated by the projection area information can be multiplied by the width ratio in the size ratio information to obtain the width value in the display size;
  • the height value indicated by the projection area information can be multiplied by the height ratio in the size ratio information to obtain the height value in the display size.
  • for example, if the width ratio is a1/a2 and the height ratio is b1/b2, and the projection area information corresponding to the three-dimensional model D indicates a width of m2 and a height of n2, the display size of the thumbnail D' corresponding to the 3D model can be determined as: width a1/a2 × m2 and height b1/b2 × n2.
  • in this way, the display size of the thumbnail corresponding to the 3D model is determined such that the ratio between the thumbnail's display size and the image size of the three-dimensional model matches the size ratio between the image superposition area and the projected image, thereby improving the display effect of the thumbnail.
  • determining, based on the size ratio information and the projection area information of the 3D model in the projection image indicated by the projection information, the display size of the thumbnail corresponding to the 3D model can include:
  • the display width in the display size is determined; for example, the display width can be obtained by multiplying the width information in the projection area information by the width ratio.
  • the display height in the display size is determined; for example, the display height can be obtained by multiplying the height information in the projection area information by the height ratio.
  • for example, suppose the projection information indicates that the three-dimensional model D has a width of m2 and a height of n2 on the projection image, the width ratio in the size ratio information is a1/a2, and the height ratio in the size ratio information is b1/b2.
  • the width ratio a1/a2 (1/16) is multiplied by the width m2 of the 3D model D, giving a display width of 4 for the corresponding thumbnail; the height ratio b1/b2 (1/16) is multiplied by the height n2 (16) of the 3D model D, that is, (1/16) × 16, giving a display height of 1. That is, the display size of the thumbnail D' corresponding to the 3D model D includes: a display width of 4 and a display height of 1.
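  • Step B2 is the same scale operation applied to the projection-area size; in the sketch below the width m2 is taken to be 64 (an assumption consistent with the stated display width of 4) and the height n2 is 16 as above:

    def display_size(projection_size, ratios):
        # step B2: scale the model's projection-area size by the width/height ratios
        (w, h), (rw, rh) = projection_size, ratios
        return w * rw, h * rh

    print(display_size((64, 16), (1/16, 1/16)))   # -> (4.0, 1.0): width 4, height 1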
  • in this way, the display size of the thumbnail corresponding to the 3D model can be determined more accurately, so that the size of the thumbnail relative to the size of the 3D model matches the size of the image superposition area relative to the size of the projected image; the thumbnail corresponding to the 3D model can then be overlaid in the image overlay area more accurately based on this display size.
  • after the overlay position and/or display size of the thumbnail corresponding to each 3D model in the image overlay area has been determined, the thumbnail corresponding to each 3D model can be superimposed on the image superposition area of the projected image according to that overlay position and/or display size, and the target display image is obtained.
  • if the overlay information includes the overlay position but does not include the display size, the thumbnail can be superimposed at that overlay position in the image overlay area with a preset display size;
  • if the overlay information includes neither the overlay position nor the display size, the thumbnail can be superimposed with a preset size at a preset position in the area to generate a target display image;
  • the preset size and preset position can be set as required;
  • if the overlay information includes both the overlay position and the display size, the thumbnail is superimposed at that overlay position with that display size.
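  • A minimal sketch of this compositing step using PIL is shown below; the dictionary layout of the overlay information, the preset size, and the preset offset within the overlay area are illustrative choices, not fixed by the method:

    from PIL import Image

    def compose_target_image(projected_image: Image.Image, thumbnails, area_origin,
                             preset_size=(32, 32), preset_offset=(0, 0)):
        # paste each model's thumbnail into the image overlay area of the projected image
        target = projected_image.copy()
        ox, oy = area_origin
        for info in thumbnails.values():
            size = info.get("display_size", preset_size)          # fall back to preset size
            pos = info.get("overlay_position",
                           (ox + preset_offset[0], oy + preset_offset[1]))  # or preset position
            thumb = info["thumbnail"].resize((int(size[0]), int(size[1])))
            target.paste(thumb, (int(pos[0]), int(pos[1])))
        return target                                             # the target display image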
  • the display screen of any device may be controlled to display the target display image.
  • the device may be a mobile phone, a computer, and the like.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • the embodiment of the present disclosure also provides a model display device, as shown in FIG. 4, which includes: an acquisition module 401, a first determination module 402, a second determination module 403 and a first generation module 404; specifically:
  • An acquisition module 401 configured to acquire a projection image captured by the projection camera and a thumbnail corresponding to each 3D model included in the projection image;
  • the first determination module 402 is configured to determine the area information of the image superposition area set in the projected image
  • the second determination module 403 is configured to determine, based on the projection information of the 3D model in the projection image and the area information, the overlay information of the thumbnail corresponding to the 3D model; wherein the overlay information includes: the overlay position and/or display size on the image overlay area;
  • the first generation module 404 is configured to superimpose the thumbnail corresponding to each 3D model on the image superposition area of the projected image according to the superimposition information of the thumbnail to generate a target display image.
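  • The four modules above compose into a straight pipeline; a rough sketch follows, where the module bodies are placeholders wired in by the caller and the naming mirrors FIG. 4 rather than any required API:

    class ModelDisplayDevice:
        def __init__(self, acquire, determine_area, determine_overlay, generate):
            self.acquisition_module = acquire                      # module 401
            self.first_determination_module = determine_area       # module 402
            self.second_determination_module = determine_overlay   # module 403
            self.first_generation_module = generate                # module 404

        def run(self):
            projection_image, thumbnails = self.acquisition_module()
            area_info = self.first_determination_module(projection_image)
            overlay_info = {
                model: self.second_determination_module(model, projection_image, area_info)
                for model in thumbnails
            }
            return self.first_generation_module(projection_image, thumbnails, overlay_info)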
  • the device further includes: a second generation module 405, configured to generate a thumbnail corresponding to each 3D model according to the following steps:
  • a thumbnail corresponding to the three-dimensional model is generated.
  • if the superimposition information includes a superimposition position, the second determination module 403, when determining the overlay information of the thumbnail corresponding to the 3D model based on the projection information of the 3D model in the projection image and the area information, is configured to: determine the superimposition position of the thumbnail corresponding to the 3D model.
  • the second determination module 403, when determining the position information of the 3D model on the projection image, is configured to:
  • the display position information is converted into the view coordinate system corresponding to the projection image, so as to obtain the position information of the three-dimensional model on the projection image.
  • the second determination module 403, when determining the overlay position of the thumbnail corresponding to the 3D model based on the size ratio information and the position information of the 3D model on the projection image indicated by the projection information corresponding to the 3D model, is configured to:
  • if the superimposition information includes a display size, the second determination module 403, when determining the overlay information of the thumbnail corresponding to the 3D model based on the projection information of the 3D model in the projection image and the area information, is configured to: determine the display size of the thumbnail corresponding to the 3D model.
  • the second determination module 403, when determining the display size of the thumbnail corresponding to the 3D model based on the size ratio information and the projection area information of the 3D model in the projection image indicated by the projection information, is configured to: determine the display width and the display height in the display size.
  • the functions of the device provided by the embodiments of the present disclosure, or the modules it includes, can be used to execute the methods described in the above method embodiments; for the specific implementation, reference may be made to the description of those method embodiments, which is not repeated here for brevity.
  • an embodiment of the present disclosure also provides an electronic device.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure, which includes a processor 501, a memory 502, and a bus 503.
  • the memory 502 is used to store execution instructions, including a memory 5021 and an external memory 5022; the memory 5021 here is also called an internal memory, and is used to temporarily store calculation data in the processor 501 and exchange data with an external memory 5022 such as a hard disk.
  • the processor 501 exchanges data with the external memory 5022 through the memory 5021.
  • the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
  • wherein the overlay information includes: the overlay position and/or display size on the image overlay area;
  • the thumbnail corresponding to each three-dimensional model is superimposed on the image superposition area of the projected image to generate a target display image.
  • an embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the model demonstration method described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • the embodiment of the present disclosure also provides a computer program product; the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the model presentation method described in the above method embodiments; for details, refer to those method embodiments, which are not repeated here.
  • the above-mentioned computer program product may be specifically realized by means of hardware, software or a combination thereof.
  • in one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the embodiments of the present disclosure relate to the field of augmented reality.
  • the target object may involve faces, limbs, gestures, actions, etc. related to the human body, or markers and markers related to objects, or sand tables, display areas or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • specific applications can involve not only interactive scenes such as guided tours, navigation, explanation, reconstruction, virtual effect overlay and display related to real scenes or objects, but also special effects processing related to people, such as makeup beautification, body beautification, special effect display and virtual model display, and other interactive scenarios.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the functions are realized in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of the present disclosure is essentially or the part that contributes to the prior art or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and other media that can store program code.

Abstract

The present disclosure provides a model presentation method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring a projection image collected by a projection camera, and a thumbnail corresponding to each three-dimensional model comprised in the projection image; determining area information of an image overlay area set in the projection image; on the basis of projection information of the three-dimensional models in the projection image and the area information, determining overlay information of the thumbnails corresponding to the three-dimensional models, the overlay information comprising overlay positions and/or display sizes on the image overlay area; and, according to the overlay information of the thumbnails, overlaying, on the image overlay area of the projection image, the thumbnails corresponding to the three-dimensional models, so as to generate a target display image.
PCT/CN2022/118201 2022-02-10 2022-09-09 Procédé et appareil de présentation de modèle, et dispositif électronique et support de stockage WO2023151271A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210125905.6 2022-02-10
CN202210125905.6A CN114463167A (zh) 2022-02-10 2022-02-10 模型展示方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023151271A1 true WO2023151271A1 (fr) 2023-08-17

Family

ID=81412779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118201 WO2023151271A1 (fr) 2022-02-10 2022-09-09 Procédé et appareil de présentation de modèle, et dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114463167A (fr)
WO (1) WO2023151271A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463167A (zh) * 2022-02-10 2022-05-10 北京市商汤科技开发有限公司 模型展示方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689082A (zh) * 2016-08-03 2018-02-13 腾讯科技(深圳)有限公司 一种数据投影方法以及装置
US20180241986A1 (en) * 2016-03-09 2018-08-23 Tencent Technology (Shenzhen) Company Limited Image processing method and device
CN112672139A (zh) * 2021-03-16 2021-04-16 深圳市火乐科技发展有限公司 投影显示方法、装置及计算机可读存储介质
CN113645396A (zh) * 2020-04-27 2021-11-12 成都术通科技有限公司 图像叠加方法、装置、设备及存储介质
CN114463167A (zh) * 2022-02-10 2022-05-10 北京市商汤科技开发有限公司 模型展示方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN114463167A (zh) 2022-05-10

Similar Documents

Publication Publication Date Title
JP7231306B2 (ja) イメージ内のターゲットオブジェクトに自動的にアノテーションするための方法、装置およびシステム
US11393173B2 (en) Mobile augmented reality system
US11869205B1 (en) Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US11170561B1 (en) Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
US10977818B2 (en) Machine learning based model localization system
CN107993216B (zh) 一种图像融合方法及其设备、存储介质、终端
CN109615703B (zh) 增强现实的图像展示方法、装置及设备
Tian et al. Handling occlusions in augmented reality based on 3D reconstruction method
CN107484428B (zh) 用于显示对象的方法
Zhang et al. Framebreak: Dramatic image extrapolation by guided shift-maps
US10223839B2 (en) Virtual changes to a real object
WO2019035155A1 (fr) Système de traitement d'image, procédé de traitement d'image et programme
JP2006053694A (ja) 空間シミュレータ、空間シミュレート方法、空間シミュレートプログラム、記録媒体
EP3467788A1 (fr) Système de génération de modèle tridimensionnel, procédé de génération de modèle tridimensionnel et programme
US11854228B2 (en) Methods and systems for volumetric modeling independent of depth data
US11900552B2 (en) System and method for generating virtual pseudo 3D outputs from images
CN107330980A (zh) 一种基于无标志物的虚拟家具布置系统
Placitelli et al. Low-cost augmented reality systems via 3D point cloud sensors
US20200118333A1 (en) Automated costume augmentation using shape estimation
WO2023151271A1 (fr) Procédé et appareil de présentation de modèle, et dispositif électronique et support de stockage
CN114399610A (zh) 基于引导先验的纹理映射系统和方法
Li et al. Outdoor augmented reality tracking using 3D city models and game engine
Englert et al. Enhancing the ar experience with machine learning services
US11770551B2 (en) Object pose estimation and tracking using machine learning
Saran et al. Augmented annotations: Indoor dataset generation with augmented reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22925630

Country of ref document: EP

Kind code of ref document: A1