CN111862345A - Information processing method and device, electronic equipment and computer readable storage medium - Google Patents

Information processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111862345A
Authority
CN
China
Prior art keywords
virtual object
virtual
initial
real
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010694217.2A
Other languages
Chinese (zh)
Other versions
CN111862345B (en)
Inventor
吕烨华
张嘉益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010694217.2A priority Critical patent/CN111862345B/en
Publication of CN111862345A publication Critical patent/CN111862345A/en
Application granted granted Critical
Publication of CN111862345B publication Critical patent/CN111862345B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an information processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a point cloud of a real object in an AR scene; and constructing a corresponding first virtual object for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object. This technical scheme makes the AR scene more authentic and reliable and improves the user's experience of the AR scene.

Description

Information processing method and device, electronic equipment and computer readable storage medium
[ technical field ]
The present invention relates to the field of augmented reality technologies, and in particular, to an information processing method and apparatus, an electronic device, and a computer-readable storage medium.
[ background of the invention ]
With the development of science and technology, AR (Augmented Reality) technology has entered people's lives. When AR technology is applied, virtual information is simulated and then displayed on a display device together with information from the real world. For example, in a game applying AR technology, a virtual backboard is placed on a real wall, and a user can throw a virtual basketball at the virtual backboard through a touch or gesture operation, producing a collision between the virtual basketball and the virtual backboard.
However, this solution only enables interaction between virtual objects. In the above example, if the direction in which the user throws the virtual basketball deviates, the basketball misses the virtual backboard and instead hits the real wall. Because no provision is made for interaction between a virtual object and a real object, the accurate path of the virtual basketball after colliding with the real wall cannot be obtained; in the actual display, the picture may freeze, the virtual basketball may be shown penetrating the real wall, or a wrong post-collision path may be shown. The interaction between the virtual object and the real object therefore cannot be displayed clearly and accurately, a high-quality AR effect cannot be achieved, and the user experience is poor.
Therefore, how to obtain a better AR effect is a technical problem that urgently needs to be solved.
[ summary of the invention ]
The embodiments of the invention provide an information processing method and apparatus, an electronic device, and a computer-readable storage medium, to solve the technical problem in the related art that a high-quality AR effect cannot be achieved because the interaction between a virtual object and a real object cannot be displayed clearly and accurately.
In a first aspect, an embodiment of the present invention provides an information processing method, including: acquiring a point cloud of a real object in an AR scene; and constructing a corresponding first virtual object for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
In the above embodiment of the present invention, optionally, the step of constructing a corresponding first virtual object for the real object based on the point cloud of the real object includes: constructing an initial first virtual object for the real object based on the point cloud of the real object, where the initial first virtual object is located at the position where the real object is displayed in the AR scene, and shape information of the initial first virtual object is consistent with shape information of the real object; and reducing the number of faces of the initial first virtual object based on the face information of the initial first virtual object, to obtain the first virtual object with a simplified shape.
In the above embodiment of the present invention, optionally, the step of reducing the number of faces of the initial first virtual object based on the face information of the initial first virtual object includes: for any face of the initial first virtual object, deleting the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number; and, after a plurality of faces are deleted, performing extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
In the above embodiment of the present invention, optionally, the method further includes: determining a motion path of the second virtual object based on the acquired initial motion information of the second virtual object and the plurality of faces of the first virtual object involved in the motion path; and rendering the motion interaction process of the second virtual object and the first virtual object based on the motion path of the second virtual object and the faces of the first virtual object involved in the motion path.
In the above embodiment of the present invention, optionally, the method further includes: displaying the first virtual object as a mask of the real object in the AR scene; or displaying the real object as a mask of the first virtual object in the AR scene.
In a second aspect, an embodiment of the present invention provides an information processing apparatus, including: a point cloud obtaining unit, configured to obtain a point cloud of a real object in an AR scene; and an actual motion interaction object generating unit, configured to construct a corresponding first virtual object for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
In the above embodiment of the present invention, optionally, the actual motion interaction object generating unit includes: an initial first virtual object constructing unit, configured to construct an initial first virtual object for the real object based on the point cloud of the real object, where the initial first virtual object is located at the position where the real object is displayed in the AR scene, and shape information of the initial first virtual object is consistent with shape information of the real object; and a face simplification unit, configured to reduce the number of faces of the initial first virtual object based on the face information of the initial first virtual object, to obtain the first virtual object with a simplified shape.
In the above embodiment of the present invention, optionally, the face simplification unit is configured to: for any face of the initial first virtual object, delete the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number; and, after a plurality of faces are deleted, perform extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
In the above embodiment of the present invention, optionally, the apparatus further includes: a motion path obtaining unit, configured to determine a motion path of the second virtual object based on the obtained initial motion information of the second virtual object and the plurality of faces of the first virtual object involved in the motion path; and a motion rendering unit, configured to render the motion interaction process of the second virtual object and the first virtual object based on the motion path of the second virtual object and the faces of the first virtual object involved in the motion path.
In the above embodiment of the present invention, optionally, the apparatus further includes: a first display unit, configured to display the first virtual object as a mask of the real object in the AR scene; or a second display unit, configured to display the real object as a mask of the first virtual object in the AR scene.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; where the memory stores instructions executable by the at least one processor, the instructions being configured to perform the method of any of the first aspect above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions for performing the method flow described in any one of the first aspect.
According to the above technical scheme, for the technical problem in the related art that a high-quality AR effect cannot be achieved because the interaction between a virtual object and a real object cannot be displayed clearly and accurately, a corresponding virtual object is obtained by modeling the real object, so that the motion interaction between the virtual object and the real object is displayed by displaying motion interaction between virtual objects.
Specifically, first, in an AR scene, a point cloud of a real object is obtained. A real object here refers to an object that will be, or is predicted to be, the motion interaction object of the second virtual object, where motion interaction includes, but is not limited to, contact, collision, separation, and the like. The point cloud of the real object is a collection of point data of the real object. The AR scene may be a three-dimensional scene or a scene of any other dimensionality, and correspondingly the point cloud of the real object may consist of three-dimensional data or point data of any other dimensionality. The point cloud reflects the shape, size, color, depth, and other information of the real object as displayed in the AR scene, and thus reflects the real situation of the motion interaction object of the second virtual object.
Then, based on the point cloud of the real object, a first virtual object with the same position is constructed for the real object in the AR scene. Because the first virtual object is derived from the point cloud of the real object, its shape, size, color, depth, and other information are the same as or similar to those of the real object.
In the related art, because the motion interaction between the second virtual object and the real object is random, no virtual data capable of interacting with the second virtual object is distributed at the display position of the real object in the AR scene, so the motion interaction between the second virtual object and the real object is not controllable. Once the second virtual object collides with the real object in the AR scene, the second virtual object may pass through the real object, the picture may freeze, and so on, resulting in wrong display content and harming the user's experience of the AR scene.
Therefore, in the above technical scheme, a corresponding first virtual object is set for the real object, so that when the second virtual object collides with the real object, the first virtual object is actually distributed at the position of the real object. That is, the second virtual object in fact collides with the first virtual object, and information such as the motion trajectory and collision situation produced by motion interaction between virtual objects can be calculated. Thus, when the second virtual object collides with the real object, the motion interaction process and result with the first virtual object corresponding to the real object can be displayed, so that object motion in the AR scene obeys natural laws of motion. This effectively increases how well the AR scene matches the actual scene, makes the AR scene more authentic and reliable, and improves the user's experience of the AR scene.
[ description of the drawings ]
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a flow diagram of an information processing method according to one embodiment of the invention;
FIG. 2 shows a flow diagram of an information processing method according to another embodiment of the invention;
FIG. 3 shows a flow diagram of an information processing method according to yet another embodiment of the invention;
FIG. 4 shows a block diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 5 shows a block diagram of an electronic device according to an embodiment of the invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
AR technology fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation, so that the virtual information and real-world information complement each other, thereby "augmenting" the real world. When AR technology is applied, virtual information is simulated and then displayed on a display device together with information from the real world. For example, in a game applying AR technology, a virtual backboard is placed on a real wall, and a user can throw a virtual basketball at the virtual backboard through a touch or gesture operation, producing a collision between the virtual basketball and the virtual backboard.
However, this solution only enables interaction between virtual objects. In the above example, if the direction in which the user throws the virtual basketball deviates, the basketball misses the virtual backboard and instead hits the real wall. Because no provision is made for interaction between a virtual object and a real object, the accurate path of the virtual basketball after colliding with the real wall cannot be obtained.
Specifically, because the motion interaction between the virtual object and the real object is random, there is no virtual data distributed at the position of the real object in the AR scene that can interact with the virtual object, so the motion interaction between them is not controllable. Once the virtual object collides with the real object in the AR scene, the virtual object may pass through the real object, the picture may freeze, and so on, resulting in wrong display content and harming the user's experience of the AR scene.
For the technical problem that a high-quality AR effect cannot be achieved because the interaction between a virtual object and a real object cannot be displayed clearly and accurately, the corresponding virtual object is obtained by modeling the real object, so that the motion interaction between the virtual object and the real object is displayed by displaying motion interaction between virtual objects. The technical solution of the present application is further illustrated by several embodiments below.
Example one
Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present invention.
As shown in fig. 1, a flow of an information processing method according to an embodiment of the present invention includes:
Step 102: a point cloud of a real object in the AR scene is obtained.
A real object refers to an object that will be, or is predicted to be, the motion interaction object of a second virtual object, where motion interaction includes, but is not limited to, contact, collision, separation, and the like. The point cloud of the real object is a collection of point data of the real object. The AR scene may be a three-dimensional scene or a scene of any other dimensionality, and correspondingly the point cloud of the real object may consist of three-dimensional data or point data of any other dimensionality. The point cloud reflects the shape, size, color, depth, and other information of the real object as displayed in the AR scene, and thus reflects the real situation of the motion interaction object of the second virtual object.
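For concreteness, the following is a minimal sketch of one way such a point cloud might be represented. The array layout and field names are illustrative assumptions, not something the patent specifies.

```python
import numpy as np

# Illustrative only: a point cloud as a set of N points, each carrying a
# position, a color, and a depth value, matching the kinds of information
# the text says the point cloud reflects.
rng = np.random.default_rng(0)
num_points = 1000

positions = rng.uniform(-1.0, 1.0, size=(num_points, 3))  # x, y, z in scene space
colors = rng.uniform(0.0, 1.0, size=(num_points, 3))      # per-point RGB
depths = positions[:, 2].copy()                           # depth along one axis

point_cloud = {"positions": positions, "colors": colors, "depths": depths}
print(point_cloud["positions"].shape)  # (1000, 3)
```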
Step 104: a corresponding first virtual object is constructed for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
Based on the point cloud of the real object, a first virtual object with the same position is constructed for the real object in the AR scene. Because the first virtual object is derived from the point cloud of the real object, its shape, size, color, depth, and other information are the same as or similar to those of the real object.
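The patent does not fix a particular surface-reconstruction algorithm for turning the point cloud into the first virtual object. As one illustrative stand-in only, a convex hull can turn the point data into a set of faces at the real object's position; the sketch below assumes SciPy is available and is not the claimed method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def build_proxy_mesh(points: np.ndarray) -> ConvexHull:
    # A convex hull is used here purely as a simple stand-in that converts
    # the point cloud into a face set (hull.simplices) at the same position
    # as the real object's points; real systems may use finer reconstruction.
    return ConvexHull(points)

points = np.random.default_rng(1).uniform(0.0, 1.0, size=(200, 3))
hull = build_proxy_mesh(points)
print(len(hull.simplices))  # number of triangular faces of the initial proxy
```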
In the related art, because the motion interaction between the second virtual object and the real object is random, no virtual data capable of interacting with the second virtual object is distributed at the display position of the real object in the AR scene, so the motion interaction between the second virtual object and the real object is not controllable. Once the second virtual object collides with the real object in the AR scene, the second virtual object may pass through the real object, the picture may freeze, and so on, resulting in wrong display content and harming the user's experience of the AR scene.
Therefore, in the above technical scheme, a corresponding first virtual object is set for the real object, so that when the second virtual object collides with the real object, the first virtual object is actually distributed at the position of the real object. That is, the second virtual object in fact collides with the first virtual object, and information such as the motion trajectory and collision situation produced by motion interaction between virtual objects can be calculated. Thus, when the second virtual object collides with the real object, the motion interaction process and result with the first virtual object corresponding to the real object can be displayed, so that object motion in the AR scene obeys natural laws of motion. This effectively increases how well the AR scene matches the actual scene, makes the AR scene more authentic and reliable, and improves the user's experience of the AR scene.
Example two
On the basis of the first embodiment, fig. 2 shows a flowchart of an information processing method according to another embodiment of the present invention.
As shown in fig. 2, a flow of an information processing method according to another embodiment of the present invention includes:
Step 202: a point cloud of a real object in the AR scene is obtained.
Step 204: an initial first virtual object is constructed for the real object based on the point cloud of the real object, where the initial first virtual object is located at the position where the real object is displayed in the AR scene, and shape information of the initial first virtual object is consistent with shape information of the real object.
Based on the point cloud of the real object, an initial first virtual object with the same position is constructed for the real object in the AR scene. Because the initial first virtual object is derived from the point cloud of the real object, its shape information, such as shape, size, color, and depth, is the same as that of the real object.
However, actual real objects come in many shapes and are often irregular. For example, a bumpy surface formed by a number of tiled toe-pressure boards consists of thousands of small faces lying in different planes. Suppose the virtual object is a small ball and the AR scene simulates the ball colliding with this bumpy surface. To display the collision as faithfully as possible, a point cloud is selected on the bumpy surface, from which an initial virtual bumpy surface with more than a thousand faces is obtained. The ball may bounce on the surface repeatedly, and its motion path may involve a large number of faces of the initial virtual surface, so the path to display for each bounce would have to be calculated against different faces, making the amount of calculation excessive. In fact, in an AR scene there is no display requirement that precisely distinguishes a ball bouncing on tiled toe-pressure boards from a ball bouncing on flat ground. In other words, the calculation of the ball's bounce path does not need to be accurate to every face; a similar display effect can still be obtained. Based on this example, the shape of the initial first virtual object can be simplified, which reduces the amount of calculation caused by the motion of the virtual object in the AR scene while still obtaining a sufficiently good display effect.
Step 206: based on the face information of the initial first virtual object, the number of faces of the initial first virtual object is reduced to obtain the first virtual object with a simplified shape, where the first virtual object serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
Simplifying the shape of the initial first virtual object concretely means reducing its faces to obtain the first virtual object: the fewer faces the first virtual object has, the fewer faces the second virtual object touches during motion interaction, and correspondingly the smaller the amount of calculation for the second virtual object's motion path.
Specifically, the shape of the initial first virtual object is simplified based on its face information. In one possible design, the face information of any face of the initial first virtual object includes, but is not limited to, one or more of: the area of the face, the included angle between the plane of the face and the plane of an adjacent face, and the number of edges of the face.
On this basis, the step of simplifying the shape of the initial first virtual object includes: for any face of the initial first virtual object, deleting the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number; and, after a plurality of faces are deleted, performing extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
The specified area is the minimum area a face must have for it to affect the displayed motion trajectory of the second virtual object in the AR scene; the specified angle is the minimum included angle by which a face must differ from its adjacent face for the difference to affect the displayed motion trajectory of the second virtual object in the AR scene; and the specified number is the minimum number of edges a face must have for its edge count to affect the displayed motion trajectory of the second virtual object in the AR scene. These three simplification conditions can be used alone or in combination.
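The three deletion conditions can be pictured with the following sketch. The Face structure, threshold names, and angle computation are illustrative assumptions; only the three conditions themselves come from the text, and the subsequent extension/rotation of the remaining adjacent faces is not shown.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Face:
    vertices: np.ndarray   # (k, 3) ordered polygon vertices; k edges
    normal: np.ndarray     # unit normal of the face's plane
    neighbor_normals: list = field(default_factory=list)  # unit normals of adjacent faces

def face_area(face: Face) -> float:
    # Area of a planar polygon in 3D via the generalized shoelace formula.
    v = face.vertices
    cross_sum = np.zeros(3)
    for i in range(len(v)):
        cross_sum += np.cross(v[i], v[(i + 1) % len(v)])
    return 0.5 * float(np.linalg.norm(cross_sum))

def should_delete(face: Face, min_area: float,
                  min_angle_rad: float, max_edges: int) -> bool:
    # Condition 1: the face is too small to affect the displayed trajectory.
    if face_area(face) < min_area:
        return True
    # Condition 2: the face is nearly coplanar with an adjacent face.
    for n in face.neighbor_normals:
        cos_angle = min(abs(float(np.dot(face.normal, n))), 1.0)
        if np.arccos(cos_angle) < min_angle_rad:
            return True
    # Condition 3: the face has more edges than the specified number.
    return len(face.vertices) > max_edges
```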
Of course, the specified area, the specified angle, and the specified number can be set flexibly by the AR scene designer based on actual display requirements, or determined by the client based on the user's current requirements for the AR scene and/or the device information of the device on which the client runs.
Of course, methods for simplifying the shape of the initial first virtual object include, but are not limited to, the above. The face information of the initial first virtual object may also be processed by deep learning or similar techniques to obtain the face information of the first virtual object with faces deleted, thereby obtaining the first virtual object.
In conclusion, the above technical scheme obtains a sufficiently good display effect while reducing the amount of calculation caused by the motion of the virtual object in the AR scene.
Example three
On the basis of any of the above embodiments, fig. 3 shows a flowchart of an information processing method according to yet another embodiment of the present invention.
As shown in fig. 3, the flow of an information processing method according to still another embodiment of the present invention includes:
Step 302: a point cloud of a real object in the AR scene is obtained.
Step 304: a corresponding first virtual object is constructed for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
In one possible design, steps 302 and 304 are performed as initial setup: when the AR scene is established or first displayed, a first virtual object is established for each real object in the AR scene, to serve as the actual motion interaction object of a second virtual object when the second virtual object performs motion interaction with the real object.
In another possible design, to reduce the amount of calculation and the resource consumption of the AR device, or of the device on which the client implementing the AR scene runs, the first virtual object may be established for the real object only when the second virtual object is detected to be moving, that is, only when motion interaction with the real object becomes possible.
Of course, in the process of generating the first virtual object, the faces of the initial first virtual object generated from the point cloud of the real object may be simplified to reduce the amount of calculation; this is consistent with what is described in the second embodiment and is not repeated here.
Step 306, obtaining initial motion information of a second virtual object in the AR scene.
Step 308, determining a motion path of the second virtual object based on the obtained initial motion information of the second virtual object and the plurality of surfaces of the first virtual object related to the motion path.
The initial motion information of the second virtual object comprises parameters such as initial speed, acceleration, motion direction and gravity acceleration, and the initial motion path of the second virtual object moving to one surface of the first virtual object is calculated through simulation. At this time, of course, based on the relative position of the second virtual object and the surface of the first virtual object, the parameters of the second virtual object, such as the speed, the acceleration, the moving direction and the like after the collision of the first virtual object, can be determined, so as to calculate the subsequent moving path of the second virtual object again.
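As a rough illustration of this path calculation, the sketch below assumes simple Euler integration under gravity, one infinite plane per face, and a restitution coefficient for the rebound; none of these specifics are prescribed by the patent.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # assumed gravitational acceleration

def step_until_hit(pos, vel, plane_point, plane_normal,
                   dt=0.005, max_steps=100_000):
    # Advance the second virtual object under gravity with simple Euler
    # integration until it crosses the plane of the given face.
    n = plane_normal / np.linalg.norm(plane_normal)
    for _ in range(max_steps):
        vel = vel + GRAVITY * dt
        pos = pos + vel * dt
        if np.dot(pos - plane_point, n) <= 0.0:  # crossed behind the face
            return pos, vel
    return pos, vel

def bounce(vel, plane_normal, restitution=0.8):
    # Reflect the velocity at the face: reverse and damp the normal
    # component, keep the tangential component.
    n = plane_normal / np.linalg.norm(plane_normal)
    vn = np.dot(vel, n) * n
    return (vel - vn) - restitution * vn
```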
Step 310: the motion interaction process of the second virtual object and the first virtual object is rendered based on the motion path of the second virtual object and the faces of the first virtual object involved in the motion path.
The motion path of the second virtual object involves several faces of the first virtual object, that is, the planes and expected positions at which the second virtual object will contact the real object during its motion. It can therefore be determined that the second virtual object moves along the motion path and, on reaching each expected contact position with the real object, collides with the corresponding position on the first virtual object. On this basis, the motion interaction process of the second virtual object and the first virtual object is rendered, and the rendering result displayed in the AR scene is: the second virtual object collides with the real object during its motion and receives the corresponding feedback according to natural laws.
For example, consider a virtual ball colliding with a real automobile in an AR scene.
In the related art, because there is no virtual object at the position of the real automobile capable of interacting with the virtual ball, the picture can only freeze, or the virtual ball is shown passing through the real automobile or sliding along its surface; the result of the virtual ball colliding with the real automobile cannot be displayed.
Applying the technical scheme of the present application, when the AR scene containing the real automobile is established, or at the initial moment of the virtual ball's motion, virtual objects are generated for the real automobile and the real ground respectively, that is, a virtual automobile and a virtual ground are generated. Then, based on parameters such as the initial velocity, acceleration, direction of motion, and gravitational acceleration of the virtual ball, together with the position information and shape information of the virtual automobile and the virtual ground, the initial motion path of the virtual ball can be calculated: the ball collides with face a of the virtual automobile and bounces. Next, based on factors such as the relative position of face a of the virtual automobile and the virtual ball, the velocity, acceleration, direction of motion, and other parameters of the ball after the collision are determined and, combined with gravitational acceleration, the subsequent motion path of the ball is calculated anew. On this path, the virtual ball collides with face b of the virtual automobile and bounces. Then, based on factors such as the relative position of face b and the virtual ball, the ball's post-collision velocity, acceleration, and direction of motion are determined and, combined with gravitational acceleration, the next motion path is calculated again: after bouncing off face b of the virtual automobile, the virtual ball falls onto the virtual ground.
After the ball's motion is rendered along this path, what the user sees is: the virtual ball is thrown, collides with the real automobile and bounces according to natural laws, falls onto the automobile again, and finally bounces onto the real ground.
Of course, the number of bounces of the virtual ball is not limited to the two given in the above example, and the virtual objects are not limited to a virtual ball, a virtual automobile, and the like; they may be virtual objects obtained by AR simulation of any real object.
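Using the step_until_hit and bounce sketches above, the two-bounce automobile example might be driven as follows; the face positions, normals, and throw parameters are made-up values for illustration only.

```python
import numpy as np

# Hypothetical face planes: face a and face b of the virtual automobile,
# then the virtual ground. Each is (a point on the plane, its unit normal).
face_a = (np.array([0.0, 1.2, 2.0]), np.array([0.0, 1.0, 0.0]))
face_b = (np.array([0.5, 1.0, 2.5]), np.array([0.0, 1.0, 0.0]))
ground = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))

pos = np.array([0.0, 1.5, 0.0])  # where the virtual ball is thrown from
vel = np.array([0.0, 2.0, 4.0])  # initial velocity of the throw

# Each collision yields the rebound state used to recompute the next path,
# mirroring the face-a, face-b, ground sequence described in the text.
for plane_point, plane_normal in (face_a, face_b, ground):
    pos, vel = step_until_hit(pos, vel, plane_point, plane_normal)
    vel = bounce(vel, plane_normal)
    print("hit at", np.round(pos, 2), "rebound velocity", np.round(vel, 2))
```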
In addition, on the basis of any of the above embodiments, since the motion path of the second virtual object in the AR scene is always calculated against the first virtual object, the first virtual object may be displayed as a mask of the real object in the AR scene, that is, the first virtual object is displayed covering the real object, to obtain a more realistic display of the motion process. This improves the display effect of the motion process in the AR scene and improves the experience of users who care most about the effect of the motion process.
On the basis of any of the above embodiments, the real object may alternatively be displayed as a mask of the first virtual object in the AR scene, that is, the real object is displayed covering the first virtual object. On the basis of displaying the interaction between the virtual object and the real object clearly and accurately, this preserves the authenticity of the AR scene and improves the experience of users who care most about the scene's authenticity.
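Either masking choice can be pictured as a per-pixel composite. The sketch below assumes the renderer has already produced an RGB layer for the first virtual object and a boolean coverage mask; the patent does not prescribe this mechanism.

```python
import numpy as np

def composite(camera_frame: np.ndarray, virtual_layer: np.ndarray,
              virtual_mask: np.ndarray, virtual_on_top: bool) -> np.ndarray:
    # virtual_on_top=True:  the first virtual object is displayed as a mask of
    #                       the real object (virtual pixels cover real ones).
    # virtual_on_top=False: the real object is displayed as a mask of the
    #                       first virtual object (the proxy stays invisible
    #                       and only drives the physics).
    if virtual_on_top:
        return np.where(virtual_mask[..., None], virtual_layer, camera_frame)
    return camera_frame

# Toy 2x2 frame demonstrating the two display modes.
cam = np.zeros((2, 2, 3), dtype=np.uint8)       # camera image of the real scene
virt = np.full((2, 2, 3), 255, dtype=np.uint8)  # rendered first virtual object
mask = np.array([[True, False], [False, True]]) # pixels covered by the proxy
print(composite(cam, virt, mask, virtual_on_top=True)[..., 0])
```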
Fig. 4 shows a block diagram of an information processing apparatus according to an embodiment of the present invention.
As shown in fig. 4, an embodiment of the present invention provides an information processing apparatus 400, including: a point cloud obtaining unit 402, configured to obtain a point cloud of a real object in an AR scene; and an actual motion interaction object generating unit 404, configured to construct a corresponding first virtual object for the real object based on the point cloud of the real object, where the first virtual object is located at the position where the real object is displayed in the AR scene and serves as the actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
In the above embodiment of the present invention, optionally, the actual motion interaction object generating unit 404 includes: an initial first virtual object constructing unit, configured to construct an initial first virtual object for the real object based on the point cloud of the real object, where the initial first virtual object is located at the position where the real object is displayed in the AR scene, and shape information of the initial first virtual object is consistent with shape information of the real object; and a face simplification unit, configured to reduce the number of faces of the initial first virtual object based on the face information of the initial first virtual object, to obtain the first virtual object with a simplified shape.
In the above embodiment of the present invention, optionally, the face simplification unit is configured to: for any face of the initial first virtual object, delete the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number; and, after a plurality of faces are deleted, perform extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
In the above embodiment of the present invention, optionally, the apparatus further includes: a motion path obtaining unit, configured to determine a motion path of the second virtual object based on the obtained initial motion information of the second virtual object and the plurality of faces of the first virtual object involved in the motion path; and a motion rendering unit, configured to render the motion interaction process of the second virtual object and the first virtual object based on the motion path of the second virtual object and the faces of the first virtual object involved in the motion path.
In the above embodiment of the present invention, optionally, the apparatus further includes: a first display unit, configured to display the first virtual object as a mask of the real object in the AR scene; or a second display unit, configured to display the real object as a mask of the first virtual object in the AR scene.
The information processing apparatus 400 uses the solution of any of the above embodiments and therefore has all the technical effects described above, which are not repeated here.
FIG. 5 shows a block diagram of an electronic device of one embodiment of the invention.
As shown in FIG. 5, an electronic device 500 of one embodiment of the invention includes at least one memory 502, and a processor 504 communicatively connected to the at least one memory 502, where the memory 502 stores instructions executable by the processor 504, the instructions being configured to perform the solution of any of the embodiments described above. Therefore, the electronic device 500 has the same technical effects as the solution of any of the above embodiments, which are not repeated here.
The electronic device of embodiments of the present invention exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication functions, with voice and data communication as their main goal. Such terminals include smart phones (e.g., iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include audio and video players (e.g., iPod), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
(4) Servers: similar in architecture to general-purpose computers, but with higher requirements for processing capability, stability, reliability, security, scalability, manageability, and the like, because they must provide highly reliable services.
(5) Other electronic devices with data interaction functions.
In addition, an embodiment of the present invention provides a computer-readable storage medium, which stores computer-executable instructions for executing the method flow described in any of the above embodiments.
The technical scheme of the invention has been described in detail with reference to the accompanying drawings. The technical scheme can effectively increase how well the AR scene matches the actual scene, making the AR scene more authentic and reliable and improving the user's experience of the AR scene.
It should be understood that the term "and/or" used herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
It should be understood that although the terms first, second, etc. may be used to describe virtual objects in embodiments of the present invention, these virtual objects should not be limited to these terms. These terms are only used to distinguish virtual objects from each other. For example, a first virtual object may also be referred to as a second virtual object, and similarly, a second virtual object may also be referred to as a first virtual object, without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a Processor (Processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. An information processing method characterized by comprising:
acquiring a point cloud of a real object in an AR scene;
constructing a corresponding first virtual object for the real object based on the point cloud of the real object,
wherein the first virtual object is located at the position where the real object is displayed in the AR scene, and serves as an actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
2. The information processing method according to claim 1, wherein the step of constructing a corresponding first virtual object for the real object based on the point cloud of the real object comprises:
constructing an initial first virtual object for the real object based on the point cloud of the real object,
wherein the initial first virtual object is located at a position in the AR scene where the real object is displayed, and shape information of the initial first virtual object is consistent with shape information of the real object;
and reducing the number of faces of the initial first virtual object based on the face information of the initial first virtual object, to obtain the first virtual object with a simplified shape.
3. The information processing method according to claim 2, wherein the step of reducing the number of faces of the initial first virtual object based on the face information of the initial first virtual object comprises:
for any face of the initial first virtual object, deleting the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number;
after a plurality of faces are deleted, performing extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
4. The information processing method according to claim 2 or 3, characterized by further comprising:
determining a motion path of the second virtual object based on the acquired initial motion information of the second virtual object and the plurality of surfaces of the first virtual object related to the motion path;
and rendering the motion interaction process of the second virtual object and the first virtual object based on the motion path of the second virtual object and the surfaces of the first virtual object related to the motion path.
5. The information processing method according to claim 1, further comprising:
displaying the first virtual object as a mask of the real object in the AR scene; or
displaying, in the AR scene, the real object as a mask of the first virtual object.
6. An information processing apparatus characterized by comprising:
a point cloud obtaining unit, configured to obtain a point cloud of a real object in an AR scene;
and an actual motion interaction object generating unit, configured to construct a corresponding first virtual object for the real object based on the point cloud of the real object, wherein the first virtual object is located at the position where the real object is displayed in the AR scene, and serves as an actual motion interaction object of a second virtual object when the second virtual object in the AR scene performs motion interaction with the real object.
7. The information processing apparatus according to claim 6, wherein the actual moving interactive object generating unit includes:
an initial first virtual object constructing unit, configured to construct an initial first virtual object for the real object based on the point cloud of the real object, where the initial first virtual object is located at a position in the AR scene where the real object is displayed, and shape information of the initial first virtual object is consistent with shape information of the real object;
and a face simplification unit, configured to reduce the number of faces of the initial first virtual object based on the face information of the initial first virtual object, to obtain the first virtual object with a simplified shape.
8. The information processing apparatus according to claim 7, wherein the face simplification unit is configured to:
for any face of the initial first virtual object, delete the face if it meets one or more of the following conditions: the area of the face is smaller than a specified area; the included angle between the plane of the face and the plane of an adjacent face is smaller than a specified angle; and the number of edges of the face is larger than a specified number; and, after a plurality of faces are deleted, perform extension processing and/or rotation processing about a specified axis on the remaining faces adjacent to each deleted face, to obtain the first virtual object.
9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the instructions being configured to perform the method of any one of claims 1 to 5.
10. A computer-readable storage medium having stored thereon computer-executable instructions for performing the method flow of any of claims 1-5.
CN202010694217.2A 2020-07-17 2020-07-17 Information processing method and device, electronic equipment and computer readable storage medium Active CN111862345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694217.2A CN111862345B (en) 2020-07-17 2020-07-17 Information processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010694217.2A CN111862345B (en) 2020-07-17 2020-07-17 Information processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111862345A 2020-10-30
CN111862345B CN111862345B (en) 2024-03-26

Family

ID=73000569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694217.2A Active CN111862345B (en) 2020-07-17 2020-07-17 Information processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111862345B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379884A (en) * 2021-07-05 2021-09-10 北京百度网讯科技有限公司 Map rendering method and device, electronic equipment, storage medium and vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108579085A (en) * 2018-03-12 2018-09-28 腾讯科技(深圳)有限公司 Treating method and apparatus, storage medium and the electronic device of barrier collision
CN109200582A (en) * 2018-08-02 2019-01-15 腾讯科技(深圳)有限公司 The method, apparatus and storage medium that control virtual objects are interacted with ammunition
CN109857259A (en) * 2019-02-26 2019-06-07 网易(杭州)网络有限公司 Collision body interaction control method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108579085A (en) * 2018-03-12 2018-09-28 腾讯科技(深圳)有限公司 Treating method and apparatus, storage medium and the electronic device of barrier collision
CN109200582A (en) * 2018-08-02 2019-01-15 腾讯科技(深圳)有限公司 The method, apparatus and storage medium that control virtual objects are interacted with ammunition
CN109857259A (en) * 2019-02-26 2019-06-07 网易(杭州)网络有限公司 Collision body interaction control method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
董世明: "Research on Kinect-based Augmented Reality Interaction Technology", China Masters' Theses Full-text Database (Information Science and Technology) *
赵大伟: "Research on Parallel Unstructured Mesh Generation Methods for Large-scale Numerical Simulation", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379884A (en) * 2021-07-05 2021-09-10 北京百度网讯科技有限公司 Map rendering method and device, electronic equipment, storage medium and vehicle
CN113379884B (en) * 2021-07-05 2023-11-17 北京百度网讯科技有限公司 Map rendering method, map rendering device, electronic device, storage medium and vehicle

Also Published As

Publication number Publication date
CN111862345B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US10282882B2 (en) Augmented reality simulation continuum
CN110292771B (en) Method, device, equipment and medium for controlling tactile feedback in game
US8957858B2 (en) Multi-platform motion-based computer interactions
JP4412716B2 (en) GAME DEVICE, PROGRAM, AND INFORMATION STORAGE MEDIUM
CN110465087B (en) Virtual article control method, device, terminal and storage medium
CN111714886B (en) Virtual object control method, device, equipment and storage medium
CN112915542B (en) Collision data processing method and device, computer equipment and storage medium
CN113382790B (en) Toy system for augmented reality
US20100309197A1 (en) Interaction of stereoscopic objects with physical objects in viewing area
CN110891659A (en) Optimized delayed illumination and foveal adaptation of particle and simulation models in a point of gaze rendering system
CN110801629B (en) Method, device, terminal and medium for displaying virtual object life value prompt graph
CN113559518A (en) Interaction detection method and device of virtual model, electronic equipment and storage medium
CN111467804A (en) Hit processing method and device in game
CN112316429A (en) Virtual object control method, device, terminal and storage medium
CN111589114B (en) Virtual object selection method, device, terminal and storage medium
WO2023142354A1 (en) Target locking method and apparatus, and electronic device and storage medium
CN111714875B (en) System for testing command execution delay in video game
CN107807813B (en) Information processing method and terminal
CN111862345B (en) Information processing method and device, electronic equipment and computer readable storage medium
JP7346055B2 (en) game program
CN107930124B (en) Method and device for matching movement between doll models, terminal equipment and storage medium
JP2004303034A (en) Image generating system, program, and information storage medium
KR20210004479A (en) Augmented reality-based shooting game method and system for child
CN109597480A (en) Man-machine interaction method, device, electronic equipment and computer readable storage medium
CN112791418B (en) Determination method and device of shooting object, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant