CN115810100B - Method, device and storage medium for determining object placement plane - Google Patents


Info

Publication number
CN115810100B
Authority
CN
China
Prior art keywords
plane
preset
placement
virtual object
image information
Prior art date
Legal status
Active
Application number
CN202310065054.5A
Other languages
Chinese (zh)
Other versions
CN115810100A (en)
Inventor
蔡羽 (Cai Yu)
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202310065054.5A
Publication of CN115810100A
Application granted
Publication of CN115810100B
Legal status: Active
Anticipated expiration

Landscapes

  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method, a device, a storage medium and a program product for determining an object placement plane. The method includes: in response to a placement operation on a virtual object, acquiring image information of a three-dimensional scene; judging whether a placement plane is detected from the image information within a preset time length; if no placement plane is detected from the image information within the preset time length, obtaining plane parameters corresponding to the virtual object and camera pose information corresponding to the image information; constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters; and driving the virtual object to be placed on the preset plane of the three-dimensional scene. In this way, when the system fails to detect a placement plane in time, a preset plane is constructed from the interaction behavior and the category of the virtual object, which reduces the influence of the system's plane detection capability on the XR experience, effectively improves service availability, and improves the user's interactive experience.

Description

Method, device and storage medium for determining object placement plane
Technical Field
The present application relates to the field of computer technology, and in particular, to a method, apparatus, storage medium, and program product for determining an object placement plane.
Background
XR refers to a human-computer interactive environment, combining the virtual and the real, generated by computer technology and wearable devices; it includes VR (Virtual Reality), AR (Augmented Reality), and the like.
In an actual scene, AR is a three-dimensional space combining the virtual and the real. When a virtual object is placed in an AR scene, finding a plane and determining a placement point on that plane is the first step of every AR placement scene. Plane detection relies on the plane detection capability provided by the terminal system, and this capability varies greatly between device models. Taking mobile phones as an example, they are limited by hardware capability: the older the model, the weaker its plane detection, and some older phones can hardly detect a plane at all, forcing the user to keep moving the camera to search. Even phones with better performance take about 10 s on average to find a plane, longer still in poor light, and the detected plane area is often very small.
In a practical scenario, if the user opens the AR space and the first step of finding a plane already takes too much time, the experience is very poor, which may directly lead to user churn and abandonment of the service.
Disclosure of Invention
The main object of the embodiments of the present application is to provide a method, a device, a storage medium and a program product for determining an object placement plane, so that when the system fails to detect a placement plane in time, a preset plane is constructed from the interaction behavior and the category of the virtual object, which reduces the influence of the system's plane detection capability on the XR experience, effectively improves service availability, and improves the user's interactive experience.
In a first aspect, an embodiment of the present application provides a method for determining an object placement plane, including: in response to a placement operation on a virtual object, acquiring image information of a three-dimensional scene; judging whether a placement plane is detected from the image information within a preset time length, where the placement plane is a plane in the three-dimensional scene; if no placement plane is detected from the image information within the preset time length, obtaining plane parameters corresponding to the virtual object and camera pose information corresponding to the image information; constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters; and driving the virtual object to be placed on the preset plane of the three-dimensional scene.
In an embodiment, the three-dimensional scene is a real three-dimensional space scene, and acquiring the image information of the three-dimensional scene in response to the placement operation on the virtual object includes: in response to the placement operation on the virtual object, starting an image acquisition device, and acquiring the image information of the real three-dimensional space through the image acquisition device.
In an embodiment, the three-dimensional scene is a virtual three-dimensional space scene, and acquiring the image information of the three-dimensional scene in response to the placement operation on the virtual object includes: in response to the placement operation on the virtual object, starting a virtual camera, and acquiring the image information of the virtual three-dimensional space through the virtual camera.
In an embodiment, obtaining the plane parameters corresponding to the virtual object if the placement plane is not detected from the image information within the preset time length includes: if the placement plane is not detected from the image information within the preset time length, determining a type identifier of the virtual object; and reading the plane parameters corresponding to the type identifier from a database, where the plane parameters corresponding to the type identifier are preconfigured in the database.
In an embodiment, the plane parameters include: a relative positional relationship between the preset plane and a preset reference plane, and size information of the preset plane.
In an embodiment, constructing the preset plane in the three-dimensional scene according to the camera pose information and the plane parameters includes: determining a center point position of the preset plane according to the camera pose information and the relative positional relationship; and constructing the preset plane in the three-dimensional scene, centered on the center point position, according to the relative positional relationship and the size information.
In an embodiment, the plane parameters include: a preset distance between the preset plane and the camera position. Determining the center point position of the preset plane according to the camera pose information and the relative positional relationship includes: determining, according to the camera pose information, a ray that takes the camera position as its endpoint and the camera direction as its emission direction; determining, according to the relative positional relationship, a transition plane that is parallel to the reference plane and lies at the preset distance from the camera position, and determining the position of the intersection point of the ray and the transition plane; if the intersection point position is within a preset area range, determining the intersection point position as the center point position of the preset plane; and if the intersection point position is not within the preset area range, sending prompt information for prompting the user to adjust the camera pose, and, after the user adjusts the camera pose, continuing to detect the adjusted intersection point position until the adjusted intersection point position falls within the preset area range and is determined as the center point position of the preset plane.
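For illustration, the intersection described above can be written in closed form; the following is a sketch under the assumption that the reference plane is horizontal, with the preset distance denoted h (these symbols are introduced here only for explanation and are not part of the embodiments). Let the camera position be \(\mathbf{c}\), the unit camera direction \(\mathbf{d}\), and the upward unit normal of the reference plane \(\mathbf{n}\); the transition plane then passes through the point \(\mathbf{q} = \mathbf{c} - h\,\mathbf{n}\). The ray is \(\mathbf{p}(t) = \mathbf{c} + t\,\mathbf{d}\), \(t \ge 0\), and it meets the transition plane at
\[
t^{*} = \frac{(\mathbf{q} - \mathbf{c}) \cdot \mathbf{n}}{\mathbf{d} \cdot \mathbf{n}} = \frac{-h}{\mathbf{d} \cdot \mathbf{n}}, \qquad \mathbf{P} = \mathbf{c} + t^{*}\,\mathbf{d},
\]
which is valid only when \(\mathbf{d} \cdot \mathbf{n} < 0\) (the camera points toward the transition plane) so that \(t^{*} > 0\); whether \(\mathbf{P}\) then lies within the preset area range decides whether it is accepted as the center point position.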
In an embodiment, driving the virtual object to be placed on the preset plane of the three-dimensional scene includes: driving the virtual object to be placed at the center point position on the preset plane.
In an embodiment, the method further includes: if the placement plane is detected from the image information within the preset time length, driving the virtual object to be placed on the placement plane.
In an embodiment, after driving the virtual object to be placed on the preset plane of the three-dimensional scene, the method further includes: in response to an interactive operation on the virtual object, acquiring current image information of the three-dimensional scene; judging whether a placement plane is detected from the current image information; if a placement plane is detected from the current image information, driving the virtual object to perform the interactive operation on the placement plane and removing the preset plane; and if no placement plane is detected from the current image information, driving the virtual object to perform the interactive operation on the preset plane.
In a second aspect, an embodiment of the present application provides a method for determining an object placement plane, including: in response to a user's placement operation on a commodity virtual model on an interactive interface, acquiring image information of a three-dimensional scene; judging whether a placement plane is detected from the image information within a preset time length, where the placement plane is a plane in the three-dimensional scene; if no placement plane is detected from the image information within the preset time length, acquiring plane parameters corresponding to the commodity virtual model and camera pose information corresponding to the image information; constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters; and driving the commodity virtual model to be placed on the preset plane of the three-dimensional scene, and displaying, on the interactive interface, the state of the commodity virtual model placed in the three-dimensional scene.
In a third aspect, an embodiment of the present application provides an apparatus for determining an object placement plane, including:
the first acquisition module is used for responding to the placement operation of the virtual object and acquiring the image information of the three-dimensional scene;
the first judging module is used for judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is a plane in the three-dimensional scene;
the second acquisition module is used for acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information if the placement plane is not detected from the image information within the preset time length;
the construction module is used for constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters;
and the driving module is used for driving the virtual object to be placed on a preset plane of the three-dimensional scene.
In an embodiment, the three-dimensional scene is a real three-dimensional space scene; the first acquisition module is used for responding to the placement operation of the virtual object, starting the image acquisition equipment and acquiring the image information of the real three-dimensional space through the image acquisition equipment.
In an embodiment, the three-dimensional scene is a virtual three-dimensional space scene; the first acquisition module is used for responding to the placement operation of the virtual object, starting the virtual camera, and acquiring the image information of the virtual three-dimensional space through the virtual camera.
In an embodiment, the second obtaining module is configured to determine a type identifier of the virtual object if the placement plane is not detected from the image information within the preset duration; and reading plane parameters corresponding to the type identifiers from a database, wherein the plane parameters corresponding to the type identifiers are preconfigured in the database.
In one embodiment, the plane parameters include: the relative position relation between the preset plane and the preset reference plane and the size information of the preset plane.
In an embodiment, the building module is configured to determine a location of a center point of the preset plane according to the camera pose information and the relative positional relationship; and constructing the preset plane in the three-dimensional scene by taking the center point position as the center according to the relative position relation and the size information.
In an embodiment, the plane parameters include: a preset distance between the preset plane and the camera position. Determining the center point position of the preset plane according to the camera pose information and the relative positional relationship includes: determining, according to the camera pose information, a ray that takes the camera position as its endpoint and the camera direction as its emission direction; determining, according to the relative positional relationship, a transition plane that is parallel to the reference plane and lies at the preset distance from the camera position, and determining the position of the intersection point of the ray and the transition plane; if the intersection point position is within a preset area range, determining it as the center point position of the preset plane; and if the intersection point position is not within the preset area range, sending prompt information for prompting the user to adjust the camera pose, and, after the user adjusts the camera pose, continuing to detect the adjusted intersection point position until it falls within the preset area range, at which point it is determined as the center point position of the preset plane.
In an embodiment, the driving module is configured to drive the virtual object to be placed at the center point position on the preset plane.
In an embodiment, the driving module is further configured to drive the virtual object to be placed on the placement plane if the placement plane is detected from the image information within the preset time period.
In an embodiment, the apparatus further includes: a third acquisition module, configured to, after the virtual object is driven to be placed on the preset plane of the three-dimensional scene, acquire current image information of the three-dimensional scene in response to an interactive operation on the virtual object; and a second judging module, configured to judge whether a placement plane is detected from the current image information. The driving module is further configured to, if a placement plane is detected from the current image information, drive the virtual object to perform the interactive operation on the placement plane and remove the preset plane; and the driving module is further configured to, if no placement plane is detected from the current image information, drive the virtual object to perform the interactive operation on the preset plane.
In a fourth aspect, an embodiment of the present application provides a method for determining an object placement plane, including: in response to a user's placement operation on a virtual object on an interactive interface, acquiring image information of the current scene; judging whether a placement plane is detected from the image information within a preset time length, where the placement plane is a plane in the current scene; if no placement plane is detected from the image information within the preset time length, constructing a preset plane in a three-dimensional space corresponding to the current scene; and driving the virtual object to be placed on the preset plane in the three-dimensional space, and displaying, on the interactive interface, a state image of the virtual object placed in the current scene.
In an embodiment, constructing the preset plane in the three-dimensional space corresponding to the current scene includes: acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information; and constructing the preset plane in the three-dimensional space corresponding to the current scene according to the camera pose information and the plane parameters.
In an embodiment, the plane parameters include: a preset distance between the preset plane and the camera position, a relative positional relationship between the preset plane and a preset reference plane, and size information of the preset plane. Constructing the preset plane in the three-dimensional space corresponding to the current scene according to the camera pose information and the plane parameters includes: determining, according to the camera pose information, a ray that takes the camera position as its endpoint and the camera direction as its emission direction; determining, according to the relative positional relationship, a transition plane that is parallel to the reference plane and lies at the preset distance from the camera position, and determining the position of the intersection point of the ray and the transition plane; if the intersection point position is within a preset area range, determining it as the center point position of the preset plane; if the intersection point position is not within the preset area range, sending prompt information for prompting the user to adjust the camera pose, and, after the user adjusts the camera pose, continuing to detect the adjusted intersection point position until it falls within the preset area range and is determined as the center point position of the preset plane; and constructing the preset plane in the three-dimensional space corresponding to the current scene, centered on the center point position, according to the relative positional relationship and the size information.
In a fifth aspect, embodiments of the present application provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to cause the electronic device to perform the method of any of the above aspects.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored therein computer executable instructions that when executed by a processor implement the method of any one of the above aspects.
In a seventh aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the above aspects.
According to the method, device, storage medium and program product for determining an object placement plane provided by the present application, when a user places a virtual object in an XR scene, images of the three-dimensional scene are acquired in real time, and it is first judged whether the system detects a placement plane from the images within a preset time length. If not, a preset plane is constructed in the scene according to the preset plane height preconfigured for the virtual object and the current camera pose, and the virtual object is placed on this preset plane. In this way, when the system fails to detect a placement plane in time, a preset plane is constructed promptly from the interaction behavior and the category of the virtual object, sparing the user the long wait caused by uneven plane detection capability across devices, reducing the influence of the system's plane detection capability on the XR experience, effectively improving service availability, and improving the user's interactive experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It will be apparent to those of ordinary skill in the art that the drawings in the following description are of some embodiments of the invention and that other drawings may be derived from them without inventive faculty.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a system for determining an object placement plane according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of a method for determining an object placement plane according to an embodiment of the present application;
Fig. 4 is a schematic diagram of operation input for a virtual object according to an embodiment of the present application;
Fig. 5 is a schematic diagram of constructing a preset plane in an AR scene according to an embodiment of the present application;
Fig. 6 is a schematic flow chart of a method for determining an object placement plane according to an embodiment of the present application;
Fig. 7A is a schematic diagram of the initial interface for placing a virtual object in an AR scene according to an embodiment of the present application;
Fig. 7B is a schematic diagram of the interface after a preset plane is constructed in an AR scene according to an embodiment of the present application;
Fig. 7C is a schematic diagram of the interface after a virtual object is placed on a preset plane in an AR scene according to an embodiment of the present application;
Fig. 7D is a schematic diagram of the interface after the virtual object is switched to the placement plane in an AR scene according to an embodiment of the present application;
Fig. 8 is a schematic flow chart of a method for determining an object placement plane according to an embodiment of the present application;
Fig. 9A is a flow chart of a method for determining an object placement plane according to an embodiment of the present application;
Fig. 9B is a flow chart of a method for determining an object placement plane according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an apparatus for determining an object placement plane according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application.
The term "and/or" is used herein to describe an association between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
In order to clearly describe the technical solutions of the embodiments of the present application, firstly, the terms referred to in the present application are explained:
XR: a human-computer interactive environment, combining the virtual and the real, generated by computer technology and wearable devices; it includes VR (Virtual Reality), AR (Augmented Reality), and the like.
AR: Augmented Reality.
VR: Virtual Reality.
3D: 3-dimension, three-dimensional.
Plane detection: extracting planes such as the ground, walls and table tops from the real-world video stream captured by the camera of an XR device, using the device's XR detection capability, so that XR virtual-real interactions such as placing a virtual object on a plane can be performed.
Plane placement: in XR, placing a virtual object on a plane in the three-dimensional scene; the plane may be the real-world floor, a wall, and so on.
Android: an operating system based on the free and open-source Linux kernel.
ARCore: a software platform for building augmented reality applications on Android.
iOS: a mobile operating system.
ARKit: a software platform for building augmented reality applications on iOS.
As shown in fig. 1, this embodiment provides an electronic device 1 including: at least one processor 11 and a memory 12; one processor is taken as an example in fig. 1. The processor 11 and the memory 12 are connected by a bus 10. The memory 12 stores instructions executable by the processor 11, and the instructions are executed by the processor 11 so that the electronic device 1 can execute all or part of the flow of the methods in the following embodiments, thereby reducing the influence of the system's plane detection capability on the XR experience, effectively improving service availability, and improving the user's interactive experience.
In an embodiment, the electronic device 1 may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a large computing system composed of a plurality of computers.
Fig. 2 is a schematic diagram of a system 200 for determining a placement plane of an object according to an embodiment of the present application. As shown in fig. 2, the system includes: server 210 and terminal 220, wherein:
The server 210 may be a data platform that provides plane detection services, such as an e-commerce shopping platform that provides AR interaction. In a practical scenario, one e-commerce shopping platform may have multiple servers 210; one server 210 is shown in fig. 2 as an example.
The terminal 220 may be a computer, a mobile phone, a tablet, AR glasses, or other devices used when the user logs in to the e-commerce shopping platform, or a plurality of terminals 220 may be provided, and 2 terminals 220 are illustrated in fig. 2 as an example.
Information transmission between the terminal 220 and the server 210 may be performed through the internet, so that the terminal 220 may access data on the server 210. The terminal 220 and/or the server 210 may be implemented by the electronic device 1.
The method for determining the object placement plane can be applied to any field needing to detect the plane. Such as a 3D commodity detail scene, a digital space store commodity scene, an AR plane placement scene, and the like.
XR refers to a human-computer interactive environment, combining the virtual and the real, created by computer technology and wearable devices; it includes VR, AR, and the like. Taking AR as an example, AR is a technology that fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing: virtual information generated by a computer, such as text, images, three-dimensional models, music, and video, is simulated and then applied to the real world, and the two kinds of information complement each other, thereby achieving an enhancement of the real world.
In an actual scene, AR is a three-dimensional space combining the virtual and the real. When a virtual object is placed in an AR scene, for a plane placement scene, plane detection depends on the ARCore/ARKit plane detection capability provided by the Android/iOS system of the AR terminal, and this capability differs greatly across device models. Taking mobile phones as an example, they are limited by hardware capability: the older the model, the weaker its plane detection; some older phones can hardly detect a plane and require the user to keep moving the camera to search, and an ordinary phone takes about 10 s on average to find a plane, longer in poor light, with the detected plane area often being very small.
Finding a plane and determining placement points on it is the first step of every AR placement scene. If, after opening the AR space, the user already spends a long time on this first step, the experience is very poor, which may directly lead to user churn and abandonment of the service.
To solve the above problem, the embodiments of the present application provide a solution for determining an object placement plane. When a user places a virtual object in an XR scene, an image of the three-dimensional scene is acquired in real time, and it is first judged whether the system detects a placement plane from the image within a preset time length. If not, a preset plane is constructed in the image according to the preset plane height preconfigured for the virtual object and the current camera pose, and the virtual object is placed on this preset plane. Thus, when the system fails to detect a placement plane in time, a preset plane is constructed promptly from the interaction behavior and the category of the virtual object, sparing the user the long wait caused by uneven plane detection capability, reducing the influence of the system's plane detection capability on the XR experience, effectively improving service availability, and improving the user's interactive experience.
In an actual scene, the object in XR is virtual and its coordinates in the three-dimensional world are relative; a user is insensitive to three-dimensional coordinate errors of a virtual object within a certain range, so a certain error in the plane on which the virtual object is placed can be tolerated. Therefore, an approximate plane position can be estimated quickly by some method, and service availability is guaranteed first.
The above-mentioned scheme for determining the object placement plane may be deployed on the server 210, or on the terminal 220, or partially on the server 210 and partially on the terminal 220. The actual deployment may be chosen based on actual requirements, which is not limited in this embodiment.
When the scheme for determining the object placement plane is fully or partially deployed on the server 210, a call interface may be opened to the terminal 220 to provide algorithmic support to the terminal 220.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. In the case where there is no conflict between the embodiments, the following embodiments and features in the embodiments may be combined with each other. In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
Please refer to fig. 3, which shows a method for determining an object placement plane according to an embodiment of the present application. The method may be executed by the electronic device 1 shown in fig. 1 and may be applied to the XR application scenario shown in fig. 2, so as to reduce the influence of the system's plane detection capability on the XR experience, effectively improve service availability, and improve the user's interaction experience. In this embodiment, the terminal 220 is taken as the executing end as an example, and the method includes the following steps:
Step 301: and responding to the placement operation of the virtual object, and acquiring the image information of the three-dimensional scene.
In this step, the virtual object may be a virtual model established based on an actual object, such as a three-dimensional model of an actual object such as a household appliance, a household object, an animal or plant, and in an e-commerce scene, the virtual object may be a three-dimensional virtual model of an on-sale commodity. The virtual object may also be a virtual model built using computer technology, such as a character model or prop model in an AR game, etc. The placing operation is used for driving the virtual object to be placed on a plane. The placement operation may be directly input by the user through the interactive interface of the terminal 220, as shown in fig. 4, where the interactive interface displaying the virtual object is displayed on the touch screen of the terminal 220, and the virtual object may be placed at the point a by directly performing a sliding operation on the virtual object by the touch screen 221 of the terminal 220, where the sliding operation is captured by the touch screen 221, so as to generate a driving instruction to drive the virtual object to be placed at the point a. The interactive interface of the AR can be projected onto other entity surfaces, and a user can input the placement operation through the interactive interface, for example, in AR projection and AR wall-mounted scenes, the user can directly input the placement operation through the interactive interface projected on the wall.
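Purely as an illustration of how such a touch interaction could be turned into a placement request (the embodiments do not prescribe any particular implementation), the following Kotlin sketch forwards the point where the finger lifts to a placement callback; PlacementController and ScreenPoint are hypothetical names that do not appear in the application:

```kotlin
import android.view.MotionEvent
import android.view.View

// Hypothetical screen-space point standing in for point A on the interactive interface.
data class ScreenPoint(val x: Float, val y: Float)

// Captures touches on the view showing the virtual object and reports the
// final finger position as the requested placement point.
class PlacementController(private val onPlace: (ScreenPoint) -> Unit) {
    fun attachTo(view: View) {
        view.setOnTouchListener { _, event ->
            if (event.actionMasked == MotionEvent.ACTION_UP) {
                onPlace(ScreenPoint(event.x, event.y)) // end of the sliding operation
            }
            true // consume the gesture so the whole slide is received
        }
    }
}
```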
In an embodiment, the placement operation on the virtual object may also be triggered by changing the position of the terminal 220 in the actual three-dimensional space; for example, while holding the mobile phone to view the placement effect of a refrigerator model in an AR scene, the user may move the phone to trigger and adjust the placement position of the refrigerator model in the three-dimensional scene.
In an embodiment, the three-dimensional scene may be a real three-dimensional space scene. Step 301 may specifically include: in response to the placement operation on the virtual object, starting the image acquisition device and acquiring image information of the real three-dimensional space through the image acquisition device.
In this embodiment, the three-dimensional scene may refer to a spatial region in an actual scene, for example a real indoor scene in an AR setting. When the user wants to place a virtual object in the real three-dimensional space scene, the image acquisition device is first turned on to capture image information of the real three-dimensional space; the image acquisition device may be an independent camera device or the camera of the AR device, such as a phone camera.
In an embodiment, the three-dimensional scene may be a virtual three-dimensional space scene. Step 301 may specifically include: in response to the placement operation on the virtual object, starting a virtual camera and acquiring image information of the virtual three-dimensional space through the virtual camera.
In this embodiment, the three-dimensional scene may be virtual, for example the virtual three-dimensional scene of an electronic game in a VR setting. When the user wants to place a virtual object in the virtual three-dimensional scene, for example a virtual prop in a VR game, the virtual camera corresponding to the virtual three-dimensional scene is started in response to the user's operation, and image information of the virtual three-dimensional space is acquired through the virtual camera. Thus, whether the three-dimensional scene is real or virtual, the image information corresponds to certain camera pose information and can provide reliable parameters for subsequent calculation.
Step 302: and judging whether the placement plane is detected from the image information within a preset time period.
In this step, the placement plane may be a plane in any direction in the three-dimensional space region, including but not limited to a horizontal plane, and is also applicable to a vertical plane or a plane in any direction, such as a floor, a wall, a table top, etc. of an indoor scene, and may be extracted from image information of the three-dimensional scene by using a plane detection technique. The preset duration may be a tolerance duration for the user to wait for the detection plane process, and may be set based on actual requirements. After the image information of the three-dimensional scene is acquired, firstly, the system performs plane detection on the image information, and no matter what the plane detection capability of the terminal 220 is used by the user, firstly, the detection result of the terminal 220 system on the placement plane is waited, wherein the detection of the placement plane can depend on the plane detection capability of the terminal 220 system, such as the plane detection capability of ARCore/ARKit provided by the Android/iOS system. If the proper placement plane is not detected within the preset time period, and the user tolerance time is reached, step 303 may be entered to avoid the image user experience, otherwise, if the proper placement plane is detected within the preset time period, step 306 may be entered.
In an embodiment, the preset time length may be configured based on the hardware parameters of the terminal 220, with the user's tolerance time as the main consideration. In an actual scenario, users expect plane detection to take longer on a low-end terminal 220, so a longer preset time length can be configured to give the system more detection time; a high-end terminal 220 may be configured with a short preset time length, or even zero, so that the experience does not fall below the user's expectation, improving the interaction experience on the terminal 220.
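As an illustration only (the embodiments do not prescribe any particular implementation), the following Kotlin sketch waits for the system's plane detection for at most a device-dependent preset time length and falls back to constructing a preset plane on timeout; DeviceTier, detectPlacementPlane and buildPresetPlane are hypothetical names, and the durations are illustrative values:

```kotlin
import kotlinx.coroutines.withTimeoutOrNull

// Hypothetical result type covering both the system-detected placement plane
// and the constructed fallback preset plane.
interface Plane

enum class DeviceTier { LOW_END, MID_RANGE, HIGH_END }

// Tolerance time per device tier; the concrete values are illustrative only.
fun presetTimeLengthMs(tier: DeviceTier): Long = when (tier) {
    DeviceTier.LOW_END -> 5_000L   // give a weak detector more time
    DeviceTier.MID_RANGE -> 3_000L
    DeviceTier.HIGH_END -> 1_000L  // could even be 0 to fall back immediately
}

suspend fun resolvePlacementPlane(
    tier: DeviceTier,
    detectPlacementPlane: suspend () -> Plane, // system plane detection (e.g. ARCore/ARKit)
    buildPresetPlane: () -> Plane              // fallback of steps 303-304
): Plane =
    withTimeoutOrNull(presetTimeLengthMs(tier)) { detectPlacementPlane() }
        ?: buildPresetPlane()
```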
Step 303: and acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information.
In this step, if the system fails to detect a proper placement plane from the image information within a preset time period, and the user tolerance time is reached, a preset plane may be actively constructed to avoid the image user experience. Specifically, first, plane parameters corresponding to the virtual object and camera pose information corresponding to image information of the three-dimensional scene are acquired. The plane parameters are parameters of preset planes to be constructed, and corresponding plane parameters can be preset for different types of virtual objects. The camera pose information is camera pose information corresponding to capturing image information of a three-dimensional scene, so that a preset plane can be adapted to the image information.
In one embodiment, step 303 may specifically include: if the placement plane is not detected from the image information within the preset time length, determining the type identifier of the virtual object, and reading the plane parameters corresponding to the type identifier from a database, where the plane parameters corresponding to the type identifier are preconfigured in the database.
In this embodiment, corresponding plane parameters may be configured for different types of virtual objects in advance, and an association relationship between a type identifier of the virtual object and the corresponding plane parameters may be stored in a database. If the system fails to detect a proper placement plane from the image information within the preset time, the plane parameters corresponding to the type identifier of the current virtual object can be directly read from the database, so that the method is convenient and quick, and the data calculation amount is reduced.
In one embodiment, the plane parameters include, but are not limited to: the relative positional relationship between the preset plane and the preset reference plane and the size information of the preset plane.
In this embodiment, the reference plane may be selected based on actual requirements and is used to calibrate the position of the preset plane. For example, for virtual objects that need to be placed horizontally (virtual models of a refrigerator, washing machine, microwave oven, etc.), the user generally operates the AR device while standing or sitting, so the horizontal ground can be selected as the reference plane, and the relative positional relationship between the preset plane and the reference plane may then be that the preset plane is parallel to the horizontal ground. In addition, in order to define the placement position of the virtual object more precisely, appropriate size information can be configured for the preset plane. For example, the size information of the preset plane can be set based on the size of the virtual object, ensuring that the preset plane is large enough to accommodate it; if the virtual object is a refrigerator model, the corresponding size information of the preset plane may be 3 m × 3 m.
In one embodiment, the plane parameters include: a preset distance between the preset plane and the camera position. In this embodiment, the plane parameters corresponding to the virtual object may further describe the expected height of the preset plane, and to improve the accuracy of the preset plane, the preset distance between the preset plane and the camera position may be used to represent this height. A specific configuration may be as follows: when the operation medium is the handheld terminal 220, the camera of the terminal 220 is usually at the position of the user's hand, and the distance is configured according to the category of the virtual object. For goods placed on the floor, such as a refrigerator or a washing machine, the preset plane lies in the horizontal ground, and the estimated height of the user's hand (the camera of the terminal 220) above the soles of the feet (the preset plane) can be configured as the preset distance, for example 1 meter. For goods placed on a table top, such as an electric rice cooker or a microwave oven, the corresponding preset plane is generally on the table top, which is usually at the user's waist, so the estimated height of the hand (the camera of the terminal 220) above the waist can be configured as the preset distance.
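Purely as an illustration of such a preconfigured mapping (the application does not specify a data model), the following Kotlin sketch associates a virtual object's type identifier with its plane parameters; PlaneParams, the field names and the concrete values are assumptions, not part of the embodiments:

```kotlin
// Hypothetical data model for the preconfigured plane parameters.
data class PlaneParams(
    val parallelToGround: Boolean,  // relative positional relationship to the reference plane
    val widthMeters: Float,         // size information of the preset plane
    val depthMeters: Float,
    val cameraDistanceMeters: Float // preset distance between the preset plane and the camera position
)

// "Database" of type identifier -> plane parameters, here a simple in-memory map.
val planeParamsByType: Map<String, PlaneParams> = mapOf(
    // Floor-standing goods: plane in the horizontal ground, hand roughly 1 m above it.
    "refrigerator" to PlaneParams(true, 3.0f, 3.0f, 1.0f),
    "washing_machine" to PlaneParams(true, 3.0f, 3.0f, 1.0f),
    // Table-top goods: plane roughly at waist height, closer to the hand.
    "microwave_oven" to PlaneParams(true, 1.5f, 1.5f, 0.3f)
)

fun lookupPlaneParams(typeId: String): PlaneParams? = planeParamsByType[typeId]
```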
Step 304: and constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters.
In this step, after the pose information and plane parameters of the camera are determined, a suitable preset plane may be constructed in the three-dimensional scene, where the preset plane is a virtual plane, and may coincide with or be similar to a certain plane of the three-dimensional scene displayed in the image information, for example, the preset plane may coincide with or be similar to the ground or a desktop, and in the sense of the user, the virtual object is placed on the preset plane, and the effect on the plane placed in the three-dimensional scene is the same or similar.
In one embodiment, step 304 may specifically include: determining the center point position of the preset plane according to the camera pose information and the relative positional relationship, and constructing the preset plane in the three-dimensional scene, centered on the center point position, according to the relative positional relationship and the size information.
In this embodiment, based on the camera pose information and the relative positional relationship between the preset plane and the reference plane, the center point position of the preset plane is determined first, and a suitable preset plane is then constructed around it. For example, in a horizontal placement scene the user generally stands or sits while operating, so the hand is within a fixed range of heights above the ground, say 1 m; a placement point for the virtual object can then be calculated from the current direction of the camera to obtain the plane center point, and a preset plane of a certain size is constructed around that point. The preset plane is a virtual plane constructed in the three-dimensional scene and is blended into the image information, so it does not affect the display of the image information of the three-dimensional scene.
In an embodiment, determining the center point position of the preset plane according to the camera pose information and the relative positional relationship includes: determining, according to the camera pose information, a ray that takes the camera position as its endpoint and the camera direction as its emission direction; determining, according to the relative positional relationship, a transition plane that is parallel to the reference plane and lies at the preset distance from the camera position, and determining the position of the intersection point of the ray and the transition plane; if the intersection point position is within a preset area range, determining the intersection point position as the center point position of the preset plane; and if the intersection point position is not within the preset area range, sending prompt information for prompting the user to adjust the camera pose, and, after the user adjusts the camera pose, continuing to detect the adjusted intersection point position until it falls within the preset area range and is determined as the center point position of the preset plane.
In this embodiment, taking a virtual object of the horizontal, floor-placed category as an example, as shown in fig. 5, assume that the terminal 220 is the mobile phone used by the user and the camera pose information is the pose of the phone's camera, which may include the position and direction (forward) of the camera. In an actual scene the user generally stands or sits while operating, the preset plane lies in the horizontal ground, and the hand is assumed to be 1 m above the ground, i.e. the preset distance between the preset plane and the camera position is 1 m; the horizontal ground is then the transition plane. First, a ray is determined that takes the camera position as its endpoint and the camera direction (forward) as its emission direction; the position P where this ray intersects the horizontal ground is the center point position of the preset plane, and a preset plane of a certain size is then constructed around point P. Specifically, if the intersection point P is within the preset area range, it is neither too far from nor too close to the user and is therefore well suited for observing the placement state of the virtual object, so P is determined as the center point position of the preset plane. If P is not within the preset area range, it is too far from or too close to the user, which makes observing the virtual object inconvenient; in that case prompt information is sent to ask the user to adjust the camera pose, for example to change the position of the phone, and after the user adjusts the camera pose, the adjusted intersection point position is detected again until it falls within the preset area range. This ensures that the determined preset plane is accurate and improves the user's interaction experience.
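A minimal, self-contained Kotlin sketch of the geometry described above, under the assumption that the transition plane is horizontal and lies at the preset distance below the camera; the vector type, the helper names and the range bounds are illustrative and not taken from the application:

```kotlin
import kotlin.math.hypot

// Minimal 3D vector; the y axis is taken as "up", so a horizontal plane has constant y.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
}

data class PresetPlane(val center: Vec3, val widthMeters: Float, val depthMeters: Float)

// Intersects the camera ray with a horizontal transition plane lying
// cameraDistanceMeters below the camera and accepts the intersection P as the
// preset plane's center only when its horizontal distance from the camera lies
// within [minRange, maxRange] (standing in for the "preset area range").
// Returns null when the user should be prompted to adjust the camera pose.
fun buildPresetPlane(
    cameraPos: Vec3,
    cameraForward: Vec3,          // assumed to be normalized
    cameraDistanceMeters: Float,  // e.g. 1 m for floor-standing goods
    planeWidthMeters: Float,
    planeDepthMeters: Float,
    minRange: Float = 0.5f,       // illustrative bounds, not values from the application
    maxRange: Float = 5.0f
): PresetPlane? {
    val planeY = cameraPos.y - cameraDistanceMeters
    if (cameraForward.y >= 0f) return null            // ray never reaches the plane below the camera
    val t = (planeY - cameraPos.y) / cameraForward.y  // ray parameter at the intersection
    val p = cameraPos + cameraForward * t             // intersection point P (center point candidate)
    val horizontalDist = hypot(p.x - cameraPos.x, p.z - cameraPos.z)
    if (horizontalDist !in minRange..maxRange) return null
    return PresetPlane(center = p, widthMeters = planeWidthMeters, depthMeters = planeDepthMeters)
}
```

For a floor-standing refrigerator model, for example, one might call buildPresetPlane(cameraPos, cameraForward, 1.0f, 3.0f, 3.0f); a null result corresponds to the branch in which prompt information is sent and the intersection is re-detected after the user adjusts the camera pose.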
In addition, some behavioral guidance can be given to the user to improve the success rate of plane detection: for example, prompting about ambient brightness so that the user shoots the scene in a brighter place, or, when the phone's pose is too close to vertical, prompting the user to aim at the ground. This helps the user obtain more accurate image information and, in turn, a more accurate preset plane position.
Step 305: the virtual object is driven to be placed on a preset plane of the three-dimensional scene.
In this step, after the preset plane is determined, the virtual object can be driven to be placed on the preset plane of the three-dimensional scene; for example, in response to the user's click within the preset plane, the virtual object is placed at a position on the preset plane. In this way the virtual object is placed on the ground within the user's tolerance time, and the availability of the XR service is guaranteed first.
In one embodiment, step 305 may specifically include: driving the virtual object to be placed at the center point position on the preset plane.
In this embodiment, as shown in fig. 5, point P is the center point position; its distance from the user matches the user's viewing habits, so placing the virtual object at point P presents a sense of the scene that fits those habits and improves the user's interaction with the XR scene.
Step 306: the driving virtual object is placed on the placement plane.
In the step, if the placement plane is detected from the image information within the preset time, the plane detection capability of the system is relatively rapid, a proper placement plane in the image information can be rapidly found, and the virtual object can be directly placed on the placement plane found by the system.
In one embodiment, after step 305, if the determination system detects a placement plane from the image information of the three-dimensional scene, the virtual object may be switched onto the placement plane. So as to ensure the smooth proceeding of the subsequent interaction process. Specifically, in order to ensure that the user does not feel when switching planes, there may be interactive operations of the user to trigger the switching planes.
According to the method for determining an object placement plane described above, when a user places a virtual object in an XR scene, images of the three-dimensional scene are acquired in real time, and it is first judged whether the system detects a placement plane from the current image within a preset time length. If not, a preset plane is constructed in the current image according to the preset plane height preconfigured for the virtual object and the current camera pose, and the virtual object is placed on this preset plane. Thus, when the system fails to detect a placement plane in time, a preset plane is constructed promptly from the interaction behavior and the category of the virtual object, sparing the user the long wait caused by uneven plane detection capability and not delaying the user's placement operation, which reduces the influence of the system's plane detection capability on the XR experience, effectively improves service availability, and improves the user's interaction experience. The scheme does not depend on external algorithms, is not affected by environmental factors such as illumination, and therefore has a wide application range.
Please refer to fig. 6, which shows a method for determining an object placement plane according to an embodiment of the present application. The method may be executed by the electronic device 1 shown in fig. 1 and may be applied to the XR application scenario shown in fig. 2, so as to reduce the influence of the system's plane detection capability on the XR experience, effectively improve service availability, and improve the user's interaction experience. In this embodiment, the terminal 220 is taken as the executing end as an example; compared with the previous embodiment, this embodiment further includes the step of replacing the preset plane with the placement plane detected by the system during subsequent interactive operations. The method includes the following steps:
step 601: and responding to the placement operation of the virtual object, and acquiring the image information of the three-dimensional scene. The detailed procedure can be seen from the description of step 301 in the previous embodiment.
Step 602: and judging whether the placement plane is detected from the image information within a preset time period. If yes, go to step 606, otherwise go to step 603. The detailed procedure may be found in the description of step 302 in the previous embodiments.
Step 603: and if the placement plane is not detected from the image information within the preset time length, acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information. The detailed procedure can be seen from the description of step 303 in the previous embodiment.
Step 604: and constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters. The detailed procedure can be seen from the description of step 304 in the previous embodiments.
Step 605: the virtual object is driven to be placed on a preset plane of the three-dimensional scene. The detailed procedure can be seen from the description of step 305 in the previous embodiment.
Step 606: and if the placement plane is detected from the image information within the preset time period, driving the virtual object to be placed on the placement plane. The detailed procedure can be found in the description of step 306 in the previous embodiment.
Step 607: and responding to the interactive operation of the virtual object, and acquiring the current image information of the three-dimensional scene.
In this step, after the virtual object has been placed on the preset plane, the user may continue to interact with the AR scene, for example by moving the virtual object or by moving the terminal 220. Either action changes the position of the virtual object in the AR scene, so the current image information of the three-dimensional scene needs to be acquired in real time, for example by capturing the live video stream of the indoor scene with the mobile phone camera.
Step 608: it is determined whether a placement plane is detected from the current image information. If yes go to step 609, otherwise go to step 610.
In this step, the plane detection capability provided by the system of the terminal 220 may continue to extract planes from the current image information. Limited by the hardware performance of the terminal 220, it may not be able to detect the placement plane in time, so it is necessary to judge in real time whether the system has detected a placement plane from the current image information; if so, step 609 is performed, and if not, step 610 is performed.
Step 609: and driving the virtual object to perform interactive operation on the placement plane, and removing the preset plane.
In this step, if the system detects the placement plane from the current image information, the virtual object may be switched onto that placement plane, that is, the interactive operation is performed on the placement plane returned by the system, for example moving the virtual object on the placement plane. In an actual scene, the user generally expects the virtual object to move during the interactive operation, so switching planes at this moment matches the user's expectation, and the virtual object can be switched onto the system-detected placement plane without the user perceiving it, ensuring that the interactive operation proceeds smoothly. The preset plane may then be removed, for example by deleting its in-memory data, to reduce resource consumption.
Step 610: the virtual object is driven to perform interactive operation on the preset plane.
In this step, if the system still has not detected a placement plane from the image information, the preset plane may continue to be used in order to ensure that the interactive operation proceeds smoothly, that is, the interactive operation between the user and the virtual object is performed on the preset plane, reducing the poor experience caused by insufficient system plane detection capability.
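The switching logic of steps 607 to 610 can be sketched as a continuation of the illustrative PlacementController above; the method name and the offset parameter are assumptions made for the example, not part of this application.

```swift
// Continuation of the illustrative PlacementController sketch above (steps 607 to 610).
extension PlacementController {
    // Called in response to an interactive operation such as dragging the virtual object.
    func handleInteraction(objectOffset: SIMD3<Float>) {
        if let plane = systemPlane {
            // Step 609: a system-detected plane is now available; run the interaction
            // on it and remove the preset plane to release its resources.
            placeObject(on: DetectedPlane(center: plane.center + objectOffset,
                                          extent: plane.extent))
            presetPlane = nil
        } else if let plane = presetPlane {
            // Step 610: still no system plane; keep the interaction on the preset plane
            // so the operation is not interrupted.
            placeObject(on: DetectedPlane(center: plane.center + objectOffset,
                                          extent: plane.extent))
        }
    }
}
```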
Take as an example an interaction between a user and the commodity model of a washing machine in an AR scene, with a mobile phone as the operation medium. When the user opens the AR commodity details of the washing machine, as shown in fig. 7A, the mobile phone camera captures the image information of the three-dimensional scene, the system detects planes based on the image information of the current three-dimensional scene, and the interactive interface may show a prompt asking the user to move the mobile phone to identify a plane. Assuming the preset duration configured on the user's mobile phone is 3 seconds, if the system does not return a detected placement plane within 3 seconds, steps 603 to 604 are triggered and a preset plane is constructed, as shown in fig. 7B; a prompt such as "click to place the commodity" may be displayed near the preset plane to prompt the user to place the washing machine on the preset plane, without the user being aware that this plane is a fallback. After the user clicks to place the washing machine, the interface display effect is shown in fig. 7C. Assuming the system detects the placement plane at this moment, it can be seen in fig. 7C that the placement plane partially coincides with the preset plane; when the user moves the washing machine, the washing machine can be switched onto the placement plane, and the display effect after the switch is shown in fig. 7D.
According to the method for determining the object placement plane, once the time spent waiting for the system to determine the object placement plane reaches the tolerance time, a preset plane is automatically built at a suitable position for the user to place the virtual object. If the virtual object is moved during the subsequent interaction between the user and the virtual object and a placement plane detected by the system is encountered, the virtual object is switched onto that placement plane to execute the interactive operation and the preset plane is cleared; if no system-detected placement plane is ever obtained, the preset plane keeps the interaction running smoothly. In this way the impact of the system plane detection capability on the interaction process is reduced without the user perceiving it, and the user's interactive experience is improved.
Please refer to fig. 8, which shows a method for determining an object placement plane according to an embodiment of the present application. The method may be executed by the electronic device 1 shown in fig. 1 and applied to the XR application scenario shown in fig. 2, so as to reduce the influence of the system plane detection capability on the XR use experience, effectively improve service availability, and improve the user's interactive experience. In this embodiment, the terminal 220 is taken as the execution end, and compared with the previous embodiment, a mobile phone is taken as an example, that is, the system plane detection capability provided by the ARKit platform is used. The method includes the following steps:
S1: and the user opens the mobile phone camera to shoot the image information of the current three-dimensional scene.
S2: ARKit continuously detects planes based on image information.
S3: the vertical ray emitted by the center of the mobile phone screen tries to generate an intersection point 1 with the system detection plane, and the process can be triggered by the mobile phone pose of the user.
S4: judging whether the intersection point 1 appears within 3 seconds, if so, proceeding to S11, otherwise proceeding to S5
S5: the method comprises the steps of obtaining the preset plane height, wherein the preset plane is the preset plane, and the preset plane height is the preset distance between the preset plane and the camera position, namely obtaining the plane parameters of the preset plane, such as that the preset plane is parallel to the horizontal ground, and the preset distance between the preset plane and the camera position of the mobile phone is 1m.
S6: and calculating the intersection distance between the vertical ray at the center of the screen and the estimated plane, namely calculating the intersection 2 between the vertical ray emitted by the center of the mobile phone screen and the preset plane, and determining the distance between the intersection 2 and the mobile phone.
S7: and judging whether the distance between the intersection point 2 and the mobile phone is within a limited range, if so, entering S9, otherwise, entering S8.
S8: prompting the user to adjust the position and the posture of the mobile phone, and returning to the step S3 after the user adjusts the position and the posture of the mobile phone.
S9: a virtual plane (i.e., a preset plane) of a certain area is constructed, and specifically, a preset plane is constructed according to set size information, such as a preset plane of 3m×3m is constructed.
S10: and acquiring an intersection point 2 of a ray vertically emitted by the center of the mobile phone screen and a preset plane.
S11: and displaying the placement positions at the intersection points, and prompting the user to click to place the virtual object.
S12: after clicking by the user, the virtual object is placed at the intersection point position.
S13: the user moves the virtual object, triggering a movement operation.
S14: and determining a contact point operated on the mobile phone screen, judging whether a ray emitted from the contact point on the mobile phone screen at a camera angle has an intersection point with an ARKit detection plane (namely a placement plane), if so, entering S15, and otherwise, entering S16.
S15: and removing all preset planes, returning to the step S12, switching the virtual object to the placement plane, and executing the moving operation.
S16: and judging whether the rays emitted from the contacts on the mobile phone screen at the camera angle have intersection points with a preset plane, if so, returning to the step S12, otherwise, entering the step S17.
S17: the user is prompted that the virtual object cannot be placed currently, such as marking the virtual object red, or a text prompt.
In this embodiment, after the user opens the camera, ARKit continuously detects planes while the user keeps adjusting the camera to aim at a certain plane. If this process lasts for a preset period of time, for example 3 seconds, and ARKit still has not found a plane, the user is most likely already aiming at a suitable floor area in the real three-dimensional scene, so a preset plane can be actively constructed. For a horizontal placement scene, the user generally operates while standing or sitting, so the hand holding the phone is within a certain height range above the ground, assumed here to be 1 m; the plane distance can therefore be derived easily from the current camera direction, the plane center point obtained, and a preset plane of a certain size constructed around that center point (see the sketch below). If the center position is too far or too close, the condition is not met and the user is prompted to adjust the pose. In this way the user's placement operation is not delayed; if the system later detects a placement plane, the subsequent interactive operation is corrected to run on the system-detected plane, ensuring that the interaction proceeds smoothly.
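Under the assumption of S5 (a preset plane parallel to the ground, 1 m below the camera), the construction in S6 to S9 reduces to a ray-plane intersection, a distance check, and building a fixed-size plane around the intersection point. The sketch below works through that geometry with plain simd math; the distance limits of 0.3 m and 5 m, and all helper names, are illustrative assumptions rather than values taken from this application.

```swift
import simd

// Standalone geometric sketch of S5 to S9, assuming a horizontal preset plane a fixed
// height below the camera.
struct PresetPlane {
    let center: SIMD3<Float>
    let halfExtent: Float                       // half of the 3 m side length
    var normal: SIMD3<Float> { SIMD3<Float>(0, 1, 0) }
}

enum PlaneResult {
    case plane(PresetPlane)
    case adjustPose                             // S8: ask the user to move the phone
}

func buildPresetPlane(cameraPosition: SIMD3<Float>,
                      cameraForward: SIMD3<Float>,   // ray perpendicular to the screen
                      presetHeight: Float = 1.0,
                      minDistance: Float = 0.3,
                      maxDistance: Float = 5.0) -> PlaneResult {
    let planeY = cameraPosition.y - presetHeight

    // S6: intersect the screen-centre ray with the horizontal plane y = planeY.
    // For a plane with normal n through point p0, the ray o + t*d intersects at
    // t = dot(p0 - o, n) / dot(d, n); with n = (0, 1, 0) this reduces to the y terms.
    let denom = cameraForward.y
    guard abs(denom) > 1e-4 else { return .adjustPose }   // camera looking level: no hit
    let t = (planeY - cameraPosition.y) / denom
    guard t > 0 else { return .adjustPose }                // intersection behind the camera

    let intersection = cameraPosition + t * cameraForward  // intersection point 2

    // S7: check that intersection point 2 lies within the allowed distance range.
    let distance = simd_length(intersection - cameraPosition)
    guard distance >= minDistance && distance <= maxDistance else { return .adjustPose }

    // S9: build a 3 m x 3 m preset plane centred on the intersection point.
    return .plane(PresetPlane(center: intersection, halfExtent: 1.5))
}

// Example usage: camera at the origin, tilted 45 degrees downwards.
let result = buildPresetPlane(cameraPosition: SIMD3<Float>(0, 0, 0),
                              cameraForward: simd_normalize(SIMD3<Float>(0, -1, -1)))
print(result)
```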
Please refer to fig. 9A, which shows a method for determining an object placement plane according to an embodiment of the present application. The method may be executed by the electronic device 1 shown in fig. 1 and applied to the XR application scenario shown in fig. 2, so as to reduce the influence of the system plane detection capability on the XR use experience, effectively improve service availability, and improve the user's interactive experience. In this embodiment, the terminal 220 is taken as the execution end as an example, and the case in which a user purchases goods in an AR scene on an e-commerce shopping platform is taken as an example. The method includes the following steps:
Step 901: and responding to the placement operation of the user on the commodity virtual model on the interactive interface, and acquiring the image information of the three-dimensional scene.
In this step, the commodity virtual model is taken as the virtual object. When a user purchases a commodity on the e-commerce shopping platform, the user can enter an AR scene from the 3D commodity detail page and drag the commodity virtual model in the AR scene to view the commodity details from multiple angles, which assists the purchase decision and improves the user's interactive experience. The detailed procedure can be seen in the description of step 301 in the previous embodiment.
Step 902: and judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is a plane in the three-dimensional scene. If yes, go to step 906, otherwise go to step 903. The detailed procedure may be found in the description of step 302 in the previous embodiments.
Step 903: and if the placement plane is not detected from the image information within the preset time length, acquiring plane parameters corresponding to the commodity virtual model and camera pose information corresponding to the image information. The detailed procedure can be seen from the description of step 303 in the previous embodiment.
Step 904: and constructing a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters. The detailed procedure can be seen from the description of step 304 in the previous embodiments.
Step 905: and driving the commodity virtual model to be placed on a preset plane of the three-dimensional scene, and displaying the state of the commodity virtual model placed in the three-dimensional scene on the interactive interface. The detailed procedure can be seen from the description of step 305 in the previous embodiment.
Step 906: and driving the commodity virtual model to be placed on the placement plane, and displaying the state of the commodity virtual model placed in the three-dimensional scene on the interactive interface. The detailed procedure can be found in the description of step 306 in the previous embodiment.
According to the method for determining the object placement plane, when the user's tolerance time is reached, a suitable preset plane is established based on estimates of user behavior, virtual object type, and the like, which guarantees the initial availability of the service, assists the user in purchasing commodities, improves the interaction performance of the terminal 220, and improves the user's interactive experience. The scheme does not depend on external algorithms, is not affected by environmental factors such as illumination, and has a wide application range. In the practical deployment of an online service of an e-commerce platform, taking a mobile phone as an example, the average plane detection time was reduced from the original 16 s to the current 3 s, and page dwell time also increased.
Please refer to fig. 9B, which is a method for determining an object placement plane according to an embodiment of the present application, the method may be executed by the electronic device 1 shown in fig. 1 and may be applied to an application scenario of AR shown in fig. 2, so as to reduce an influence of system plane detection capability on AR use experience, effectively improve service availability, and improve user interaction experience. In this embodiment, taking the terminal 220 as an executing terminal as an example, and taking interaction between a user and an AR scene as an example, the method includes the following steps:
Step 1101: and responding to the placement operation of the user on the virtual object on the interactive interface, and acquiring the image information of the current scene.
In this step, taking an AR scene as an example, the current scene may be the physical environment in which the user is currently located; for details, refer to the related description of the foregoing embodiments.
Step 1102: and judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is the plane in the current scene. If yes, go to step 1105, otherwise go to step 1103. The detailed procedure may be found in the description of step 302 in the previous embodiments.
Step 1103: if the placement plane is not detected from the image information within the preset time length, a preset plane is built in a three-dimensional space corresponding to the current scene.
In this step, the current scene is a physical space scene, a three-dimensional space corresponding to the current scene can be reconstructed based on the image information of the current scene, the preset plane is a virtual plane established in the three-dimensional space, and the detailed process can refer to the description of step 303 in the foregoing embodiment.
In an embodiment, the step 1103 may specifically include: and acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information. And constructing a preset plane in a three-dimensional space corresponding to the current scene according to the camera pose information and the plane parameters. The detailed procedure can be found in the description of step 303 in the previous embodiments.
In one embodiment, the plane parameters may include: the preset distance between the preset plane and the camera position, the relative position relation between the preset plane and the preset reference plane and the size information of the preset plane. According to the camera pose information and plane parameters, a preset plane is constructed in a three-dimensional space corresponding to the current scene, and the method comprises the following steps: and determining rays taking the camera position as an end point and the camera direction as an emission direction according to the camera pose information. And determining a transition plane which is parallel to the reference plane and has a preset distance from the camera position according to the relative position relation, and determining the intersection point position of the ray and the transition plane. If the intersection point position is in the preset area range, determining the intersection point position as the center point position of the preset plane. If the intersection point position is not in the preset area range, sending prompt information, wherein the prompt information is used for prompting a user to adjust the camera pose, and after the user adjusts the camera pose, continuously detecting the adjusted intersection point position until the adjusted intersection point position is in the preset area range, and determining the intersection point position as the center point position of the preset plane. And constructing a preset plane in a three-dimensional space corresponding to the current scene by taking the position of the central point as the center according to the relative position relation and the size information.
In this embodiment, in the AR scene, if the intersection point is found to be too far or too close when the preset plane is constructed, the user is reminded to adjust the pose of the mobile phone; refer to the description of step 304 in the foregoing embodiment, and see the sketch below for the retry behavior.
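The pose-adjustment branch described above amounts to a retry loop around the same construction: while the intersection point falls outside the allowed range, the user is prompted and the test is repeated with the new pose. The sketch below reuses the illustrative buildPresetPlane helper from the earlier geometry sketch; promptUser, currentCameraPose, and the attempt limit are assumptions made for the example.

```swift
import simd

// Illustrative camera pose; position and forward direction of the (virtual) camera.
struct CameraPose {
    var position: SIMD3<Float>
    var forward: SIMD3<Float>
}

// Determine the center point of the preset plane, prompting the user to adjust the
// camera pose whenever the intersection point lies outside the preset area range.
func determineCenterPoint(currentCameraPose: () -> CameraPose,
                          promptUser: (String) -> Void,
                          maxAttempts: Int = 10) -> PresetPlane? {
    for _ in 0..<maxAttempts {
        let pose = currentCameraPose()
        switch buildPresetPlane(cameraPosition: pose.position,
                                cameraForward: pose.forward) {
        case .plane(let plane):
            return plane                      // intersection within the preset range
        case .adjustPose:
            promptUser("Please move the phone so it points at a nearby floor area")
            // A real app would wait for the next pose update here; this sketch simply
            // reads the pose again on the next loop iteration.
        }
    }
    return nil
}
```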
Step 1104: and driving the virtual object to be placed on a preset plane in the three-dimensional space, and displaying a state image of the virtual object placed in the current scene on the interactive interface. The detailed procedure can be seen from the description of step 305 in the previous embodiment.
Step 1105: and driving the virtual object to be placed on the placement plane, and displaying the state of the virtual object placed in the three-dimensional scene on the interactive interface. The detailed procedure can be found in the description of step 306 in the previous embodiment.
In the method for determining the object placement plane, in the AR scene, when the user's tolerance time is reached, a suitable preset plane is constructed based on estimates of user behavior, virtual object type, and the like, which ensures the availability of AR interaction, improves the interaction performance of the terminal 220, and improves the user's interactive experience.
Please refer to fig. 10, which is an apparatus 1000 for determining an object placement plane according to an embodiment of the present application, where the apparatus may be applied to the electronic device 1 shown in fig. 1 and may be applied to an application scenario of XR shown in fig. 2, so as to reduce an influence of system plane detection capability on XR usage experience, effectively improve service availability, and improve interactive experience of a user. The device comprises: the functional principles of the first obtaining module 1001, the first judging module 1002, the second obtaining module 1003, the constructing module 1004 and the driving module 1005 are as follows:
The first obtaining module 1001 is configured to obtain image information of a three-dimensional scene in response to a placement operation of a virtual object.
A first determining module 1002, configured to determine whether a placement plane is detected from the image information within a preset period of time, where the placement plane is a plane in the three-dimensional scene.
The second obtaining module 1003 is configured to obtain a plane parameter corresponding to the virtual object and camera pose information corresponding to the image information if the placement plane is not detected from the image information within a preset time period.
A construction module 1004 is configured to construct a preset plane in the three-dimensional scene according to the camera pose information and the plane parameters.
A driving module 1005, configured to drive the virtual object to be placed on a preset plane of the three-dimensional scene.
In one embodiment, the three-dimensional scene is a real three-dimensional space scene. The first obtaining module 1001 is configured to start the image capturing device in response to a placement operation of the virtual object, and capture image information of the real three-dimensional space through the image capturing device.
In one embodiment, the three-dimensional scene is a virtual three-dimensional space scene. The first obtaining module 1001 is configured to start the virtual camera in response to a placement operation of the virtual object, and collect image information of the virtual three-dimensional space through the virtual camera.
In an embodiment, the second obtaining module 1003 is configured to determine the type identifier of the virtual object if the placement plane is not detected from the image information within the preset time period. And reading plane parameters corresponding to the type identifiers from a database, wherein the plane parameters corresponding to the type identifiers are preconfigured in the database.
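Reading plane parameters by type identifier can be pictured as a simple preconfigured lookup table. The sketch below is a minimal illustration; the parameter fields follow the ones named in this document, while the type identifiers, concrete values, and the in-memory "database" are assumptions.

```swift
import simd

// Plane parameters as described in this document: distance to the camera, relative
// position to the reference plane, and size of the preset plane.
struct PlaneParameters {
    let presetDistance: Float        // preset distance between plane and camera, in metres
    let parallelToGround: Bool       // relative position to the preset reference plane
    let size: SIMD2<Float>           // width and depth of the preset plane
}

// Hypothetical preconfigured mapping from type identifier to plane parameters.
let planeParameterDatabase: [String: PlaneParameters] = [
    "floor_standing_appliance": PlaneParameters(presetDistance: 1.0,
                                                parallelToGround: true,
                                                size: SIMD2<Float>(3, 3)),
    "tabletop_item":            PlaneParameters(presetDistance: 0.4,
                                                parallelToGround: true,
                                                size: SIMD2<Float>(1, 1)),
]

func planeParameters(forTypeIdentifier typeId: String) -> PlaneParameters? {
    planeParameterDatabase[typeId]   // nil if the type identifier is not configured
}
```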
In one embodiment, the plane parameters include: the relative positional relationship between the preset plane and the preset reference plane and the size information of the preset plane.
In one embodiment, the constructing module 1004 is configured to determine a center point position of the preset plane according to the pose information and the relative positional relationship of the camera. And constructing a preset plane in the three-dimensional scene by taking the position of the central point as the center according to the relative position relation and the size information.
In one embodiment, the plane parameters include: a preset distance between the preset plane and the camera position. According to the camera pose information and the relative position relation, determining the position of the central point of the preset plane comprises the following steps: and determining rays taking the camera position as an end point and the camera direction as an emission direction according to the camera pose information. And determining a transition plane which is parallel to the reference plane and has a preset distance from the camera position according to the relative position relation, and determining the intersection point position of the ray and the transition plane. If the intersection point position is in the preset area range, determining the intersection point position as the center point position of the preset plane. If the intersection point position is not in the preset area range, sending prompt information, wherein the prompt information is used for prompting a user to adjust the camera pose, and after the user adjusts the camera pose, continuously detecting the adjusted intersection point position until the adjusted intersection point position is in the preset area range, and determining the intersection point position as the center point position of the preset plane.
In one embodiment, the driving module 1005 is configured to drive the virtual object to be placed at a center point position on a preset plane of the three-dimensional scene.
In an embodiment, the driving module 1005 is further configured to drive the virtual object to be placed on the placement plane if the placement plane is detected from the image information within the preset time period.
In one embodiment, the method further comprises: and the third acquisition module is used for responding to the interactive operation of the virtual object after the virtual object is driven to be placed on the preset plane of the three-dimensional scene, and acquiring the current image information of the three-dimensional scene. And the second judging module is used for judging whether the placement plane is detected from the current image information. The driving module 1005 is further configured to, if a placement plane is detected from the current image information, drive the virtual object to perform an interactive operation on the placement plane, and remove the preset plane. The driving module 1005 is further configured to, if the placement plane is not detected from the image information, drive the virtual object to perform an interactive operation on the preset plane.
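Purely as a structural illustration, the five modules of apparatus 1000 could be grouped behind a single interface, as sketched below; the protocol name and method signatures are assumptions, and PlaneParameters, CameraPose, and PresetPlane refer to the earlier illustrative sketches rather than to any real SDK.

```swift
import Foundation

// Structural sketch mirroring the module split of apparatus 1000.
protocol ObjectPlacementPlaneDetermining {
    /// First obtaining module 1001: acquire image information in response to a placement operation.
    func acquireImageInformation() -> Data
    /// First judging module 1002: has a placement plane been detected within the preset duration?
    func placementPlaneDetected(in image: Data, within duration: TimeInterval) -> Bool
    /// Second obtaining module 1003: plane parameters for the object plus the camera pose for the image.
    func planeParametersAndPose(forObjectType typeId: String, image: Data) -> (PlaneParameters, CameraPose)
    /// Construction module 1004: build the preset plane from pose and parameters.
    func constructPresetPlane(pose: CameraPose, parameters: PlaneParameters) -> PresetPlane
    /// Driving module 1005: drive the virtual object onto the given plane.
    func drive(objectWithType typeId: String, onto plane: PresetPlane)
}
```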
For a detailed description of the above apparatus 1000 for determining an object placement plane, please refer to the description of the related method steps in the above embodiments; the principles and technical effects are similar and are not repeated here.
The embodiment of the application further provides a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and when the processor executes the computer executable instructions, the method of any of the foregoing embodiments is implemented.
Embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the preceding embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of modules is merely a logical function division, and there may be additional divisions of actual implementation, e.g., multiple modules may be combined or integrated into another system, or some features may be omitted or not performed.
The integrated modules, which are implemented in the form of software functional modules, may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or processor to perform some steps of the methods of the various embodiments of the present application.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU for short), other general purpose processors, digital signal processor (Digital Signal Processor, DSP for short), application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in a processor for execution. The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory, and may also be a U-disk, a removable hard disk, a read-only memory, a magnetic disk or optical disk, etc.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuits, ASIC for short). The processor and the storage medium may also reside as discrete components in an electronic device or a master device.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are for description only and do not represent the relative merits of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method of the embodiments of the present application.
In the technical solution of the present application, the collection, storage, use, processing, transmission, provision, and disclosure of user data and other related information all comply with the requirements of relevant laws and regulations and do not violate public order and good customs.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (14)

1. A method of determining an object placement plane, the method comprising:
responding to the placement operation of the virtual object, and acquiring image information of the three-dimensional scene;
judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is a plane in the three-dimensional scene;
if the placement plane is not detected from the image information within the preset time period, obtaining plane parameters corresponding to the virtual object and camera pose information corresponding to the image information;
according to the camera pose information and the plane parameters, a preset plane is constructed in the three-dimensional scene;
driving the virtual object to be placed on a preset plane in the three-dimensional scene;
the plane parameters include: the relative position relation between the preset plane and the preset reference plane and the size information of the preset plane.
2. The method of claim 1, wherein the three-dimensional scene is a real three-dimensional space scene; the responding to the placement operation of the virtual object, obtaining the image information of the three-dimensional scene, comprises the following steps:
and responding to the placement operation of the virtual object, starting an image acquisition device, and acquiring the image information of the real three-dimensional space through the image acquisition device.
3. The method of claim 1, wherein the three-dimensional scene is a virtual three-dimensional spatial scene; the responding to the placement operation of the virtual object, obtaining the image information of the three-dimensional scene, comprises the following steps:
and responding to the placement operation of the virtual object, starting a virtual camera, and acquiring the image information of the virtual three-dimensional space through the virtual camera.
4. The method of claim 1, wherein if no placement plane is detected from the image information within the preset time period, obtaining the plane parameter corresponding to the virtual object includes:
if the placement plane is not detected from the image information within the preset time period, determining the type identifier of the virtual object;
and reading plane parameters corresponding to the type identifiers from a database, wherein the plane parameters corresponding to the type identifiers are preconfigured in the database.
5. The method of claim 1, wherein constructing a preset plane in the three-dimensional scene from the camera pose information and the plane parameters comprises:
determining the position of a central point of the preset plane according to the camera pose information and the relative position relation;
And constructing the preset plane in the three-dimensional scene by taking the center point position as the center according to the relative position relation and the size information.
6. The method of claim 5, wherein the plane parameters comprise: a preset distance between the preset plane and the camera position; the determining the position of the center point of the preset plane according to the camera pose information and the relative position relation comprises the following steps:
determining rays taking the camera position as an endpoint and the camera direction as an emission direction according to the camera pose information;
determining a transition plane which is parallel to the reference plane and has the preset distance from the camera position according to the relative position relation, and determining the intersection point position of the ray and the transition plane;
if the intersection point position is in the preset area range, determining the intersection point position as the center point position of the preset plane;
and if the intersection point position is not in the preset area range, sending prompt information, wherein the prompt information is used for prompting a user to adjust the camera pose, and continuously detecting the adjusted intersection point position after the user adjusts the camera pose until the adjusted intersection point position is in the preset area range, and determining the intersection point position as the center point position of the preset plane.
7. The method of claim 5, wherein said driving the virtual object to be placed on a preset plane of the three-dimensional scene comprises:
and driving the virtual object to be placed at the center point position on the preset plane.
8. The method as recited in claim 1, further comprising:
and if the placement plane is detected from the image information within the preset time, driving the virtual object to be placed on the placement plane.
9. The method of claim 1, further comprising, after said driving said virtual object to be placed on a preset plane of said three-dimensional scene: responding to the interactive operation of the virtual object, and acquiring current image information of the three-dimensional scene;
judging whether a placement plane is detected from the current image information;
if a placement plane is detected from the current image information, driving the virtual object to execute the interactive operation on the placement plane, and removing the preset plane;
and if the placement plane is not detected from the image information, driving the virtual object to execute the interactive operation on the preset plane.
10. A method of determining an object placement plane, comprising:
responding to the placement operation of a user on the commodity virtual model on the interactive interface, and acquiring the image information of the three-dimensional scene;
judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is a plane in the three-dimensional scene;
if the placement plane is not detected from the image information within the preset time period, acquiring plane parameters corresponding to the commodity virtual model and camera pose information corresponding to the image information;
according to the camera pose information and the plane parameters, a preset plane is built in the three-dimensional scene;
driving the commodity virtual model to be placed on a preset plane of the three-dimensional scene, and displaying the state of the commodity virtual model placed in the three-dimensional scene on the interactive interface;
the plane parameters include: the relative position relation between the preset plane and the preset reference plane and the size information of the preset plane.
11. A method of determining an object placement plane, comprising:
responding to the placement operation of a user on the interactive interface on the virtual object, and acquiring the image information of the current scene;
Judging whether a placement plane is detected from the image information within a preset time length, wherein the placement plane is the plane in the current scene;
if the placement plane is not detected from the image information within the preset time length, constructing a preset plane in a three-dimensional space corresponding to the current scene;
driving the virtual object to be placed on a preset plane in the three-dimensional space, and displaying a state image of the virtual object placed in the current scene on the interactive interface;
the constructing a preset plane in the three-dimensional space corresponding to the current scene comprises the following steps:
acquiring plane parameters corresponding to the virtual object and camera pose information corresponding to the image information;
according to the camera pose information and the plane parameters, a preset plane is built in a three-dimensional space corresponding to the current scene;
the plane parameters include: the method comprises the steps of presetting a distance between the preset plane and a camera position, a relative position relation between the preset plane and a preset reference plane and size information of the preset plane.
12. The method according to claim 11, wherein constructing a preset plane in the three-dimensional space corresponding to the current scene according to the camera pose information and the plane parameters comprises:
Determining rays taking the camera position as an endpoint and the camera direction as an emission direction according to the camera pose information;
determining a transition plane which is parallel to the reference plane and has the preset distance from the camera position according to the relative position relation, and determining the intersection point position of the ray and the transition plane;
if the intersection point position is in the preset area range, determining the intersection point position as the center point position of the preset plane;
if the intersection point position is not in the preset area range, sending prompt information, wherein the prompt information is used for prompting a user to adjust the camera pose, and continuously detecting the adjusted intersection point position after the user adjusts the camera pose until the adjusted intersection point position is in the preset area range, and determining the intersection point position as the center point position of the preset plane;
and constructing the preset plane in a three-dimensional space corresponding to the current scene by taking the central point position as the center according to the relative position relation and the size information.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
Wherein the memory stores instructions executable by the at least one processor to cause the electronic device to perform the method of any one of claims 1-12.
14. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor implement the method of any of claims 1-12.
CN202310065054.5A 2023-02-06 2023-02-06 Method, device and storage medium for determining object placement plane Active CN115810100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310065054.5A CN115810100B (en) 2023-02-06 2023-02-06 Method, device and storage medium for determining object placement plane

Publications (2)

Publication Number Publication Date
CN115810100A CN115810100A (en) 2023-03-17
CN115810100B true CN115810100B (en) 2023-05-05

Family

ID=85487523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310065054.5A Active CN115810100B (en) 2023-02-06 2023-02-06 Method, device and storage medium for determining object placement plane

Country Status (1)

Country Link
CN (1) CN115810100B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107665506A (en) * 2016-07-29 2018-02-06 成都理想境界科技有限公司 Realize the method and system of augmented reality
CN115482359A (en) * 2021-05-31 2022-12-16 华为技术有限公司 Method for measuring size of object, electronic device and medium thereof

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9367961B2 (en) * 2013-04-15 2016-06-14 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for implementing augmented reality
CN107665508B (en) * 2016-07-29 2021-06-01 成都理想境界科技有限公司 Method and system for realizing augmented reality
CN108492363B (en) * 2018-03-26 2020-03-10 Oppo广东移动通信有限公司 Augmented reality-based combination method and device, storage medium and electronic equipment
CN108876900A (en) * 2018-05-11 2018-11-23 重庆爱奇艺智能科技有限公司 A kind of virtual target projective techniques merged with reality scene and system
CN108961423B (en) * 2018-07-03 2023-04-18 百度在线网络技术(北京)有限公司 Virtual information processing method, device, equipment and storage medium
US11054896B1 (en) * 2019-02-07 2021-07-06 Facebook, Inc. Displaying virtual interaction objects to a user on a reference plane
CN110533780B (en) * 2019-08-28 2023-02-24 深圳市商汤科技有限公司 Image processing method and device, equipment and storage medium thereof
CN111242908B (en) * 2020-01-07 2023-09-15 青岛小鸟看看科技有限公司 Plane detection method and device, plane tracking method and device
CN111141217A (en) * 2020-04-03 2020-05-12 广东博智林机器人有限公司 Object measuring method, device, terminal equipment and computer storage medium
CN113048980B (en) * 2021-03-11 2023-03-14 浙江商汤科技开发有限公司 Pose optimization method and device, electronic equipment and storage medium
CN114329747B (en) * 2022-03-08 2022-05-10 盈嘉互联(北京)科技有限公司 Virtual-real entity coordinate mapping method and system for building digital twins


Also Published As

Publication number Publication date
CN115810100A (en) 2023-03-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant