CN108320333B - Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method - Google Patents

Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method

Info

Publication number
CN108320333B
CN108320333B (application CN201711478023.3A)
Authority
CN
China
Prior art keywords
scene
virtual
dimensional
virtual reality
virtual world
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711478023.3A
Other languages
Chinese (zh)
Other versions
CN108320333A (en)
Inventor
刘想
华锦芝
拓天甜
乐旭
张莉敏
王宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN201711478023.3A priority Critical patent/CN108320333B/en
Publication of CN108320333A publication Critical patent/CN108320333A/en
Application granted granted Critical
Publication of CN108320333B publication Critical patent/CN108320333B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides scene-adaptive virtual reality conversion equipment and a virtual reality scene adaptation method, and belongs to the technical field of virtual reality (VR). The scene-adaptive VR conversion apparatus of the invention includes a calculation processing section comprising a scene and object recognition module, a three-dimensional reconstruction module, and a fusion module. The scene and object recognition module identifies scenes and objects from scene and object information collected from a real space; the three-dimensional reconstruction module performs three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; and the fusion module modifies the virtual world at least based on the three-dimensional space model, so that the movable area under the modified virtual world is adapted to the movable area in the real space. The VR conversion equipment is low in cost, requires little space, and is easy for users to accept.

Description

Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
Technical Field
The invention belongs to the technical field of virtual reality (VR), and relates to scene-adaptive virtual reality conversion equipment, virtual reality conversion equipment, and a virtual reality scene adaptation method.
Background
Virtual Reality (VR) technology has developed very rapidly, and various VR products and devices are beginning to reach market applications. Because of the limitations of VR technology, however, such products are generally expensive. Most VR devices or products currently on the market do not sense the real world, so the user is easily cut off from it; when an obstacle is present in the real scene, accidents such as collision injuries to the user are likely, which greatly limits the user experience. In addition, it is difficult to bring payment verification devices of various kinds into the VR scene of a VR product, so new payment technologies or forms cannot be promoted in VR scenes, limiting the commercial prospects of VR products.
Currently, research into future payment scenarios (VR payment scenarios) encounters two problems: (1) whether virtual reality can reach the broad base of users, i.e., the problem of the scale of VR payment scenarios; (2) the form that VR payment should take.
As to problem (1), high-end VR products have not yet been widely adopted by the public: first, their cost is not affordable; second, ordinary users' homes lack the space that VR products require. Existing virtual reality (VR) products or devices need a fixed obstacle-free space, which the typical household or office lacks, and this limits the popularization of VR products to a certain extent. Furthermore, if an obstacle remains in the supposedly empty space, a VR user is easily bruised.
As to problem (2), there are currently two kinds of VR payment: in one, the user leaves the VR world to pay; in the other, verification is performed inside the VR world using, for example, a virtual keyboard. Both are inconvenient. Meanwhile, the payment market is gradually moving towards biometric features such as face recognition, fingerprint recognition, palm vein recognition, and iris recognition, and existing VR devices do not support payment verification by biometric recognition at all.
Disclosure of Invention
It is an object of the present invention to disclose a solution that eliminates or at least mitigates at least one of the drawbacks of the prior art solutions mentioned above. It is also an object of the invention to achieve one or more of the following advantages:
-providing a low-cost VR conversion device;
-solving the problem that existing VR equipment places high demands on obstacle-free space;
-improving the VR payment experience in a VR scenario;
-promoting large-scale adoption of VR devices among users;
-popularizing various novel payment modes for application in VR scenes.
To achieve the above and other objects, the present invention provides the following technical solutions.
According to a first aspect of the present invention, there is provided a scene-adaptive virtual reality converting apparatus including a calculation processing section including:
the scene and object identification module is used for identifying scenes and objects from scene and object information collected from a real space;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; and
the fusion module is used for modifying the virtual world at least based on the three-dimensional space model, so that the movable region under the modified virtual world is adapted to the movable region in the real space.
The scene adaptive virtual reality conversion device according to an embodiment of the present invention further includes a sensor for acquiring information of the scene and the object.
According to an embodiment of the invention, in the scene-adaptive virtual reality converting apparatus, the sensor is any one or a combination of the following components: an image sensor, an infrared sensor, a depth information sensor.
According to the scene adaptive virtual reality conversion device of the embodiment of the invention, the three-dimensional reconstruction module is further configured to perform three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model in a coordinate reference system known by the virtual world;
and the fusion module is also used for matching and fusing the virtual object at the corresponding position in the movable area under the modified virtual world at least based on the three-dimensional object model and the coordinate, direction and size information thereof.
According to an embodiment of the invention, the fusion module is further configured to perform matching fusion in terms of size, color and/or display manner on the virtual object at the corresponding position in the movable region under the modified virtual world based on at least the three-dimensional object model and the coordinate, direction and size information thereof.
According to an embodiment of the invention, the fusion module is further configured to modify a certain virtual object at a corresponding position in a movable region under the modified virtual world in accordance with consistency in size and color based on at least the three-dimensional object model of the certain object and coordinate, direction and size information thereof.
According to the scene adaptive virtual reality conversion apparatus of an embodiment of the present invention, the certain object is a payment verification apparatus.
According to an embodiment of the invention, the payment verification apparatus is any one or a combination of the following components: a face recognition component, a fingerprint recognition component, a palm vein recognition component, and a gait recognition component.
According to the scene adaptive virtual reality conversion device in one embodiment of the present invention, the three-dimensional reconstruction module is further configured to obtain coordinates, directions, and size information of the corresponding three-dimensional space model in a coordinate reference system known to the virtual world;
the fusion module is further configured to determine a movable region in the real space based on the three-dimensional space model and coordinate, direction, and size information thereof.
According to an embodiment of the invention, the fusion module is further configured to perform matching fusion on the size, color and/or display manner of the virtual scene and/or the virtual object in the virtual world based on at least the three-dimensional space model and the coordinate, direction and size information thereof.
According to an embodiment of the invention, the scene and object identification module of the scene-adaptive virtual reality conversion device is further configured to identify the scene and the objects by using a deep learning method.
According to an embodiment of the present invention, the scene and object identification module of the scene-adaptive virtual reality converting apparatus is further configured to identify the category of the object.
According to a second aspect of the present invention, there is provided a scene adaptive virtual reality system, comprising:
a virtual reality device for providing a virtual world comprising a virtual scene and a virtual object; and
the above-described scene-adaptive virtual reality transforming apparatus.
According to a third aspect of the present invention, there is provided a virtual reality converting apparatus including a calculation processing section including:
the scene and object identification module is used for identifying scenes and objects from scene and object information collected from a real space;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model under a coordinate reference system known by the virtual world; and
a fusion module for consistently modifying, in terms of size and color, the virtual object at the corresponding location in the movable area under the virtual world, based at least on the three-dimensional object model and its coordinate, direction and size information.
According to an embodiment of the present invention, the virtual reality conversion apparatus is a payment verification apparatus.
The virtual reality conversion device according to an embodiment of the present invention, wherein the payment verification device is any one or a combination of the following components: the device comprises a face recognition component, a fingerprint recognition component, a palm vein recognition component and a gait recognition component.
The virtual reality conversion device according to an embodiment of the present invention further includes a sensor for acquiring the scene and object information.
According to a fourth aspect of the present invention, there is provided a virtual reality system comprising:
a virtual reality device for providing a virtual world comprising a virtual scene and a virtual object; and
the virtual reality transforming apparatus of any of the above.
According to a fifth aspect of the present invention, there is provided a scene adaptation method of virtual reality, comprising the steps of:
identifying scenes and objects from scene and object information collected from a real space;
performing three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; and
and modifying the virtual world at least based on the three-dimensional space model so that the movable region under the modified virtual world is adapted to the movable region in the real space.
The scene adaptation method according to an embodiment of the present invention further includes the steps of:
carrying out three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model under a coordinate reference system known by the virtual world; and
and matching and fusing the virtual object at the corresponding position in the movable area under the modified virtual world at least based on the three-dimensional object model and the coordinate, direction and size information thereof.
According to the scene adaptation method provided by the embodiment of the invention, the virtual objects at the corresponding positions in the movable area under the modified virtual world are subjected to matching fusion in terms of size, color and/or display mode at least based on the three-dimensional object model and the coordinate, direction and size information of the three-dimensional object model.
According to an embodiment of the present invention, in the matching fusion step: a certain virtual object at a corresponding position in the movable area under the modified virtual world is consistently modified in terms of size and color, at least based on the three-dimensional object model of a certain object and the coordinate, direction and size information thereof.
According to a sixth aspect of the present invention, there is provided a scene adaptation method of virtual reality, including the steps of:
identifying scenes and objects from scene and object information collected from a real space;
carrying out three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model under a coordinate reference system known by the virtual world; and
and performing consistent modification on the size and the color of the virtual object at the corresponding position in the movable area under the virtual world at least based on the three-dimensional object model and the coordinate, direction and size information of the three-dimensional object model.
According to a seventh aspect of the present invention there is provided a computer apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the steps of any of the methods described above.
According to an eighth aspect of the invention there is provided a computer readable storage medium having stored thereon a computer program for execution by a processor to perform the steps of any of the methods described above.
The above features and operation of the present invention will become more apparent from the following description and the accompanying drawings.
Drawings
The above and other objects and advantages of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals.
Fig. 1 is a schematic view of an application scenario of a VR system according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a VR system according to an embodiment of the present invention, in which a VR conversion apparatus according to an embodiment of the present invention is used.
Fig. 3 is a flowchart of a scene adaptation method of virtual reality according to an embodiment of the present invention.
Fig. 4 is a flowchart of a scene adaptation method of virtual reality according to still another embodiment of the present invention.
Detailed Description
The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different processing devices and/or microcontroller devices.
Fig. 1 is a schematic view of an application scenario of a VR system according to an embodiment of the invention; fig. 2 is a schematic structural diagram of a VR system according to an embodiment of the present invention, in which a VR conversion apparatus according to an embodiment of the present invention is used. The VR conversion apparatus 200 and the VR system 20 are described below in conjunction with figs. 1 and 2.
Taking the space 90 of a real room as shown in fig. 1 as an example: in the room space 90, the walls, floor, ceiling and the like constitute a predetermined scene, which mainly determines the movable area in the real world. Various objects also exist in the scene of the space 90, such as a relatively large window 911 that is likely to affect the activity area, and large items such as a bed 912 and a table 913. The scene of the space 90 also contains relatively small objects or articles, such as a chair 921, an air purifier 922, and a book 923, and even a payment verification device 931. The payment verification device 931 may specifically be a face recognition component, a fingerprint recognition component, a palm vein recognition component, or a gait recognition component, or a combination thereof; its specific type is not limited, and any device that can be used to perform payment verification in the real world (for example, various biometric verification devices) may be applied to the present invention.
It will be appreciated that the scene primarily determines the movable area of the space 90, and that objects in the scene may obstruct or inconvenience the user's activity and may likewise affect the movable area of the space 90.
A VR device 100 is included in the VR system 20; a wearable portion of the VR device 100 (e.g., a VR headset) can be worn by the user 80 and presents a virtual world to the user 80. The VR device 100 may include a corresponding computing processing means, whose specific structure and type are not limited.
As shown in fig. 2, the VR conversion apparatus 200 is coupled to the VR device 100 (e.g., a computing processing device of the VR device 100), which may be implemented as an external product, for example, and may be conveniently connected to the VR device 100, thereby forming the VR system 20 according to an embodiment of the present invention.
The VR conversion device 200 may be a scene-adaptive VR conversion device. It uses a computing processing component 220, implemented for example by a processor or chip, and a sensor 210, and collects scene and object information in the space 90 through the sensor 210. The sensor 210 may be any one or a combination of the following: an image sensor, an infrared sensor, a depth information sensor. Illustratively, the sensor 210 is an image sensor that photographs the space 90 from multiple views so as to obtain scene and object information as comprehensively and accurately as possible. Specifically, cameras may be fixedly arranged near the walls at several higher positions of the room, and one mobile camera may photograph the space 90 from multiple angles to acquire pictures; the room size can be measured with the user's assistance to serve as a size standard, and a depth information sensor can additionally be employed to improve the accuracy of the subsequent three-dimensional reconstruction. It will be understood that the sensor 210 is not limited to one; a plurality of sensors may be installed at different locations.
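The user-assisted measurement mentioned above fixes the absolute scale of the reconstruction. A minimal sketch, assuming a reconstruction expressed in arbitrary model units and one wall whose true length has been measured (function and parameter names are illustrative, not from the patent):

```python
def rescale_reconstruction(points, wall_len_model, wall_len_measured_m):
    """Rescale reconstructed coordinates from arbitrary model units to metres,
    using one user-measured wall length as the size standard."""
    scale = wall_len_measured_m / wall_len_model
    return [(x * scale, y * scale, z * scale) for (x, y, z) in points]

# Example: a wall reconstructed as 2.0 model units was measured at 5.0 m,
# so every coordinate is scaled by a factor of 2.5.
scaled = rescale_reconstruction([(1.0, 0.4, 0.8)], 2.0, 5.0)
```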
As shown in fig. 2, the calculation processing unit 220 is provided with a scene and object recognition module 221, and the scene and object recognition module 221 performs scene and object recognition on scene and object information collected from the real space 90. For example, scenes such as walls, ceilings, floors, etc. in the space 90 may be recognized, large objects such as a window 911 and a bed 912 and a table 913 may be recognized, and objects such as a chair 921, an air purifier 922, a book 923, etc. may be recognized. Various graphics processing techniques may be used in the recognition process to improve the accuracy of the recognition.
In one embodiment, the scene and object identification module 221 uses a deep learning approach to identify scenes and objects in the space 90. For scene recognition, indoor scenes from the LSUN (Large-scale Scene Understanding) database are used to train an FCN (Fully Convolutional Network) segmentation model and a ResNet (Deep Residual Network) classification model; these models upsample and process an input picture (acquired by the sensor 210) to obtain a probability map of the same size as the input picture. By screening the probability values, the room layout (walls, ceiling, and the like) is identified, and a layout boundary contour is constructed together with the size information obtained by the assisted measurement. For object recognition, objects that commonly appear in a home room, such as the chair 921, table 913, bed 912, and window 911, are determined, and a Faster R-CNN (Faster Regions with CNN features) deep learning detection model is trained on parameters of these objects; the model can locate object contours in the image, compute the probability that each detection corresponds to each class in the training set, and finally determine the class of the object. Thus, the scene and object identification module is also used to identify the categories of various objects.
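A minimal sketch of this two-stage recognition, substituting off-the-shelf pretrained torchvision models for the patent's LSUN-trained FCN/ResNet and Faster R-CNN models; the model choices and the 0.5 score threshold are illustrative assumptions:

```python
import torch
from torchvision.models.segmentation import fcn_resnet50
from torchvision.models.detection import fasterrcnn_resnet50_fpn

seg_model = fcn_resnet50(weights="DEFAULT").eval()             # scene/layout segmentation
det_model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # object detection

def recognize(image: torch.Tensor, score_thresh: float = 0.5):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        # Per-pixel class probabilities, upsampled to the input size,
        # analogous to the probability map described above.
        probs = seg_model(image.unsqueeze(0))["out"].softmax(dim=1)[0]
        layout_mask = probs.argmax(dim=0)   # screened per-pixel classes
        # Object proposals with class labels and confidence scores.
        det = det_model([image])[0]
        keep = det["scores"] > score_thresh
        objects = list(zip(det["labels"][keep].tolist(),
                           det["boxes"][keep].tolist()))
    return layout_mask, objects
```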
As further shown in fig. 2, a three-dimensional reconstruction module 222 is disposed in the computing processing component 220. The three-dimensional reconstruction module 222 is configured to perform three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; that is, it models the scene in three dimensions, and from the three-dimensional space model the coordinate, direction and size information of the scene, the objects and the virtual world in a known coordinate reference system can be obtained. In one embodiment, the three-dimensional reconstruction module 222 is further configured to obtain the coordinate, direction and size information of the corresponding three-dimensional space model in a coordinate reference system known to the virtual world, which can be used to calculate or determine the movable region in real space.
The three-dimensional reconstruction also keeps track of the size, location and orientation of the items in the space 90, and a fixed sensor 210 (e.g., a camera) locates the positions of the various items relative to the space 90. Accordingly, when the deep learning model is trained, picture information of each article at different sizes, of different models and from different angles can be used for training.
It will be appreciated that the reconstructed three-dimensional model should agree closely with the real scene, and a more accurate object model will enhance the subsequent virtual scene fusion effect. In an embodiment, the three-dimensional reconstruction module 222 is further configured to perform three-dimensional reconstruction on the identified objects to obtain the corresponding three-dimensional object models and their coordinate, direction and size information in a coordinate reference system known to the virtual world. It will be appreciated that collecting data for more rooms and household objects (e.g., the air purifier 922, the book 923, etc.) will improve the accuracy of scene and object recognition and increase the fineness of the reconstructed three-dimensional object models.
When the three-dimensional reconstruction module 222 reconstructs a three-dimensional object model, first the parameters and rotation angle of the sensor 210 (for example, a camera) are obtained, and a visible corner of the space 90 is selected as the spatial origin; if no such corner is visible, an intersection point of a wall and the ground can be selected as the origin. Secondly, the identified object's position on the horizontal plane, i.e., its x and y coordinates, is obtained from the intersection with the horizontal plane of the ray cast from the camera towards the object. The object's height along the z axis can be calculated proportionally from the wall height seen by the camera (if only the vanishing line of the wall is visible, it is calculated from the camera height). Thus the x, y and z coordinates of an object in the room are obtained, which facilitates the three-dimensional reconstruction of the space 90.
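The x, y computation above is a ray-plane intersection. A minimal sketch, assuming the room coordinate frame has its origin at a wall-floor corner with z pointing up and that the ray through the object's ground-contact pixel has already been back-projected into that frame (names are illustrative):

```python
import numpy as np

def floor_intersection(cam_pos: np.ndarray, ray_dir: np.ndarray):
    """Intersect the camera ray with the floor plane z = 0 to obtain the
    (x, y) position of an object's ground contact point."""
    if abs(ray_dir[2]) < 1e-9:
        raise ValueError("ray is parallel to the floor")
    t = -cam_pos[2] / ray_dir[2]     # ray parameter where z reaches 0
    if t <= 0:
        raise ValueError("floor intersection lies behind the camera")
    hit = cam_pos + t * ray_dir
    return float(hit[0]), float(hit[1])

def object_height(top_px, base_px, wall_top_px, wall_base_px, wall_height_m):
    """Proportional z-height estimate: scale the object's pixel extent by a
    wall of known height imaged at comparable depth (an assumption)."""
    return wall_height_m * (base_px - top_px) / (wall_base_px - wall_top_px)

# Camera 1.5 m above the floor, ray sloping downwards into the room.
x, y = floor_intersection(np.array([0.0, 0.0, 1.5]),
                          np.array([0.3, 0.8, -0.5]))
```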
It should be noted that the three-dimensional reconstruction of the window 911 requires special handling: because the window 911 is attached to the wall surface rather than the ground, the rectangular boundary between the window 911 and the wall needs to be found during reconstruction.
As further shown in fig. 2, a fusion module 223 is disposed in the computing processing component 220. The fusion module 223 is mainly used for modifying the virtual world at least based on the three-dimensional space model, so that the movable region under the modified virtual world is adapted to the movable region in the real space 90. In this way, the movable area under the virtual world is determined by the movable area in the real space 90, and the room space 90 is fused with the virtual scene space. When the user 80 wears the VR device 100 and enters the virtual world, the user 80 can move essentially within the movable area of the real space 90; this avoids the complete detachment from the real-space scene that occurs when the VR device 100 is used alone, and solves the problem that existing VR devices place high demands on obstacle-free space.
In one embodiment, the fusion module 223 performs matching fusion on the virtual objects at the corresponding positions in the movable region under the modified virtual world based on at least the three-dimensional object model and the coordinate, direction and size information thereof. Therefore, the arrangement of the virtual scene can be adjusted according to the objects in the scene of the space 90, so that the movable area under the virtual world is not obstructed by the objects in the scene of the space 90, and the movable area under the virtual world is more accurately limited by the virtual objects under the virtual world.
Specifically, the fusion module 223 may modify the virtual world according to the three-dimensionally reconstructed scene and object models, matching and fusing the composition data of the virtual world with those models. The virtual scene includes a visual field region and a movable region: the visual field region is the region the user 80 can see in the virtual world, and the movable region is the region the user can reach in the virtual world. When the fusion module 223 performs scene fusion, the movable region can be reduced or enlarged, and the movable range is guaranteed to be less than or equal to the movable area of the space 90; the visual field region is not limited in this way. After fusion, the user sees neither the real scene nor its three-dimensional reconstruction. In one embodiment, an early-warning mechanism may be used in which a prompt appears when the user approaches the boundary of the movable area.
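A minimal sketch of clamping the virtual movable region to the real one and of the boundary warning, treating both regions as 2D floor footprints and using the shapely geometry library; the 0.5 m warning margin is an illustrative assumption:

```python
from shapely.geometry import Point, Polygon

def adapt_movable_region(virtual_region: Polygon, real_region: Polygon):
    """Keep the movable range less than or equal to the real movable area:
    the adapted region is the intersection of the two footprints."""
    return virtual_region.intersection(real_region)

def near_boundary(user_xy, region: Polygon, margin: float = 0.5) -> bool:
    """True when the user is inside the region but within `margin` metres of
    its boundary, triggering the approach prompt described above."""
    p = Point(user_xy)
    return region.contains(p) and region.exterior.distance(p) < margin

room = Polygon([(0, 0), (5, 0), (5, 4), (0, 4)])      # real floor footprint
game = Polygon([(-1, -1), (6, -1), (6, 5), (-1, 5)])  # virtual footprint
movable = adapt_movable_region(game, room)
print(near_boundary((4.8, 2.0), movable))             # True: warn the user
```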
In actual operation, information such as the type, size, position and direction of the objects existing in the space 90 is obtained. The size of the freely movable area can then be calculated: if it meets the requirement of the VR activity area, no virtual scene fusion is needed and the VR activity area is simply confined to this region; if the requirement cannot be met, the relative size of each object in the space is calculated, and fusion and correction are performed, in combination with the types and sizes of the objects, as follows.
For larger immovable items such as the bed 912 and table 913, there are two modes of operation: (a) if the virtual scene has objects of corresponding size, such as steps, boxes or big trees, matching fusion can be carried out directly; (b) the items are simply hidden and the movable area is reduced so that they fall outside it. As an example of mode (a), the bed 912 is merged with a few large boxes in the virtual scene, so that the area on top of those boxes is also a movable area under the virtual world, and the user 80 can experience climbing onto the boxes.
In an embodiment, the fusion module 223 further performs matching fusion in terms of size, color and/or display manner on the virtual object at the corresponding position in the movable region under the modified virtual world, based at least on the three-dimensional object model and its coordinate, direction and size information. The correspondence is established and fused by matching the size and category of each object against the virtual scene objects, and the real scene model is marked by color or transparency, so that the user 80 can readily identify, in the virtual world, objects that might obstruct his or her movement.
For example, a small movable object is matched and fused with a virtual world object according to category and size; a small-scale size transformation during fusion can make the virtual object fit the three-dimensional object model more closely. In the modified virtual scene, matched and unmatched virtual objects are distinguished from purely virtual objects by different colors or transparencies, or by an early warning such as a color change on approach, so that the user can identify them and avoid being hurt by the real objects. Optionally, the modified virtual object is made slightly larger than the actual object, so that the user need not touch the object in the real space 90 when interacting with the virtual object (e.g., picking up an item), as sketched below.
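A minimal sketch of this category-and-size matching; the candidate catalogue, the 20% size tolerance and the 5 cm safety inflation are illustrative assumptions rather than values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    w: float   # width in metres
    d: float   # depth in metres
    h: float   # height in metres

@dataclass
class VirtualObject:
    name: str
    category: str
    size: Box3D

def match_and_fuse(real_category: str, real_size: Box3D,
                   catalogue: list[VirtualObject],
                   tolerance: float = 0.2, inflate: float = 0.05):
    """Pick a virtual object of the same category whose dimensions lie within
    `tolerance` of the real object's, then rescale it to the real size plus a
    small safety margin so the user meets the virtual surface first."""
    for cand in catalogue:
        if cand.category != real_category:
            continue
        ratios = (cand.size.w / real_size.w,
                  cand.size.d / real_size.d,
                  cand.size.h / real_size.h)
        if all(abs(r - 1.0) <= tolerance for r in ratios):
            fused = Box3D(real_size.w + inflate,
                          real_size.d + inflate,
                          real_size.h + inflate)
            return cand.name, fused
    return None, None   # no match: hide the item and shrink the movable area
```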
In a further embodiment, the fusion module 223 further performs a consistent modification in size and color for a virtual object at a corresponding position in the movable region under the modified virtual world based on at least the three-dimensional object model of the object and its coordinate, orientation and size information.
Illustratively, when the certain object is the payment verification device 931, a virtual object at the corresponding position in the movable region under the modified virtual world (for example, a payment verification device in the virtual world) may be consistently modified in terms of size and color, and even according to information such as color pattern, based on the three-dimensional object model of the payment verification device 931 and its coordinate, direction and size information, so that the payment verification device in the virtual world faithfully embodies the real payment verification device 931. When the user makes a virtual payment in the virtual scene, a virtual object substantially consistent with the payment verification device 931 is presented in the virtual world, and, guided by the virtual-world scene, the user 80 can actually touch the payment verification device 931 and thereby complete, for example, palm vein verification (if the payment verification device 931 is a palm vein recognition component).
The virtual scene modified by the fusion module 223 can be presented to the user 80 through the VR device 100, and the user 80 can enter the virtual world for the experience.
The VR conversion apparatus 200 of the above embodiment is inexpensive to implement and can conveniently be used with the VR apparatus 100 to form the VR system 20. The VR system 20 does not completely isolate the user 80 from reality, yet still ensures immersion in the VR world. The VR conversion equipment 200 solves the problem that the existing VR equipment 100 places high demands on obstacle-free space, is easy for users to accept, and favors large-scale adoption of VR devices among users. Meanwhile, the VR conversion equipment 200 of the above embodiment can be applied to VR payment, improving the VR payment experience in a VR scene and making various novel payment modes easy to popularize and apply in VR scenes. Thus the commercial prospects of VR devices are greatly improved, VR payment more easily reaches users, and the two reinforce each other.
It is to be understood that although the VR conversion apparatus 200 exemplified above is explained as a scene-adaptive VR conversion apparatus, in yet another embodiment the VR conversion apparatus 200 need not have the scene adaptation function. Specifically, as shown in fig. 2, the scene and object recognition module 221 of the VR conversion device 200 is still used for recognizing scenes and objects from scene and object information collected from a real space; the three-dimensional reconstruction module 222 is mainly configured to perform three-dimensional reconstruction on the identified object to obtain the corresponding three-dimensional object model and its coordinate, direction and size information in a coordinate reference system known to the virtual world; and the fusion module 223 is mainly used for consistently modifying, in terms of size and color, the virtual object at the corresponding position in the movable region under the virtual world, based at least on the three-dimensional object model and its coordinate, direction and size information. Illustratively, when the certain object is the payment verification device 931, a virtual object at the corresponding position in the virtual world (for example, a payment verification device in the virtual world) may be consistently modified in terms of size and color, and even according to information such as color pattern, so that it faithfully embodies the real payment verification device 931: when the user makes a virtual payment in the virtual scene, the virtual world immediately presents a virtual object substantially consistent with the payment verification device 931, and, guided by the virtual-world scene, the user 80 can actually touch the payment verification device 931 and complete, for example, palm vein verification (if the payment verification device 931 is a palm vein recognition component).
Fig. 3 is a flowchart illustrating a scene adaptation method for virtual reality according to an embodiment of the present invention. The following is illustrated with reference to fig. 1 to 3.
First, in step S310, scene and object information in the real space 90 is acquired. This step is implemented by the sensor 210, and the acquisition process may be repeated multiple times.
Further, in step S320, scenes and objects are identified from the scene and object information collected from the real space.
Further, in step S330, a three-dimensional reconstruction is performed on the identified scene to obtain a three-dimensional space model. In this step S330, if necessary, the identified object may be three-dimensionally reconstructed to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model in a coordinate reference system known to the virtual world.
Further, in step S340, the virtual world is modified at least based on the three-dimensional space model, so that the movable region under the modified virtual world is adapted to the movable region in the real space. In a further embodiment, the method further comprises the step of matching and fusing the virtual objects at the corresponding positions in the movable area under the modified virtual world, at least based on the three-dimensional object model obtained in step S330 and its coordinate, direction and size information. For example, the virtual object at the corresponding position is matched and fused in terms of size, color and/or display manner; or a certain virtual object at a corresponding position is consistently modified in terms of size and color, at least based on the three-dimensional object model of a certain real object and its coordinate, direction and size information, so that the real object, in particular a payment verification device, can intervene and be displayed in the virtual world. This favors the use of various types of payment verification devices in VR payment scenes and improves the VR payment experience.
Further, in step S350, the modified virtual world is output.
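A high-level sketch tying steps S310 to S350 together; the stage functions are hypothetical stand-ins for the recognition, reconstruction and fusion modules described above, passed in as callables:

```python
def scene_adaptation_pipeline(capture, recognize, reconstruct, fuse, virtual_world):
    frames = capture()                    # S310: acquire sensor data, possibly over several passes
    scene, objects = recognize(frames)    # S320: identify the scene and the objects in it
    space_model, object_models = reconstruct(scene, objects)   # S330: 3D models with pose/size info
    adapted = fuse(virtual_world, space_model, object_models)  # S340: adapt the movable region
    return adapted                        # S350: output the modified virtual world
```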
Taking application to a VR payment process as a specific example: first, the VR conversion device 200 and the payment verification device 931 are connected to the host of the VR device 100; here, taking the payment verification device 931 to be an upright palm vein recognition component as an example, the component is placed within the space required by the VR device 100.
Further, a camera is used for data acquisition, for example, acquisition is performed at multiple angles, so as to obtain scene and object information.
Further, the scene and objects in the real space 90 are identified, and three-dimensional modeling and matching fusion are performed. When VR payment is not needed, the palm vein recognition component is covered by an object in the virtual world, so the user sees that virtual object at the position corresponding to the component. When VR payment is needed, the palm vein recognition component appears at the corresponding position in the virtual world (the colors in which it is rendered may still be kept consistent with the theme of the virtual world); the host of the VR device 100 communicates with the palm vein device to start collecting biometric data, and the user 80 places a palm on the palm vein recognition component to complete the authentication and payment process.
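A minimal sketch of this show/hide behaviour; `proxy` (the virtual object co-located with the real device) and `device` (the real palm vein component driven by the host) are hypothetical interfaces introduced for illustration:

```python
class PaymentSceneController:
    def __init__(self, proxy, device):
        self.proxy = proxy      # virtual object at the device's position
        self.device = device    # real palm vein recognition component

    def idle(self):
        # No payment in progress: keep the device covered by an ordinary
        # virtual object that blends into the virtual world.
        self.proxy.show_as("ordinary_object")

    def begin_payment(self):
        # Payment requested: reveal the device at its real position, rendered
        # in colours consistent with the virtual world's theme, and have the
        # host start biometric capture.
        self.proxy.show_as("payment_device")
        self.device.start_capture()

    def finish_payment(self) -> bool:
        ok = self.device.verify()   # user places a palm on the component
        self.idle()                 # cover the device again afterwards
        return ok
```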
Steps S310 to S330 can be carried out as a one-time setup performed by the user 80 before experiencing virtual reality; only the virtual scene fusion, i.e., step S340, needs to be performed again when the virtual scene subsequently changes. For example, after the user 80 has arranged the devices and the information of the room space 90 has been collected, the background executes steps S320 to S330; when the user 80 enters the game interface, the background performs step S340 to fuse and correct the several scenes of the virtual game, so that the user can experience them smoothly.
Fig. 4 is a flowchart of a scene adaptation method of virtual reality according to another embodiment of the present invention, illustrating by example the fusion process and its principles.
It should be noted that the computing processing component 220 of the above embodiments of the present invention may be implemented by computer program instructions, for example by a special-purpose program. These instructions may be provided to a processor of a general-purpose computer, special-purpose computer or other programmable data processing apparatus to constitute the computing processing component 220 of embodiments of the present invention, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/operations specified in the flowcharts and/or in one or more flowchart blocks.
Also, these computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable processor to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
It should also be noted that, in some alternative implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should also be noted that the elements disclosed and depicted herein (including the flowcharts and block diagrams in the figures) denote logical boundaries between elements. However, in accordance with software or hardware engineering practice, the depicted elements and their functions may be executed on a machine by a computer-executable medium having a processor capable of executing program instructions stored thereon, whether as a monolithic software structure, as stand-alone software modules, or as modules using external programs, code, services and the like, or any combination of these, and all such implementations may fall within the scope of the present disclosure.
While different non-limiting embodiments have components specifically illustrated, embodiments of the present invention are not limited to these specific combinations. It is possible to use some of the components or features from any non-limiting embodiment in combination with features or components from any other non-limiting embodiment.
Although particular step sequences are shown, disclosed, and claimed, it should be understood that steps may be performed in any order, separated or combined unless otherwise indicated and will still benefit from the present disclosure.
The foregoing description is exemplary rather than limiting. Various non-limiting embodiments are disclosed herein; however, one of ordinary skill in the art will recognize that, in light of the above teachings, various modifications and alterations fall within the scope of the appended claims. It is therefore to be understood that, within the scope of the appended claims, the invention may be practiced other than as specifically disclosed. For that reason the following claims should be studied to determine the true scope and content.

Claims (17)

1. A scene-adaptive virtual reality conversion apparatus comprising a calculation processing section including:
the scene and object identification module is used for identifying scenes and objects from scene and object information collected from a real space;
the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; and
the fusion module is used for modifying the virtual world at least based on the three-dimensional space model so that a movable region under the modified virtual world is adapted to a movable region in a real space, wherein the three-dimensional reconstruction module is also used for performing three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and obtaining the coordinate, direction and size information of the three-dimensional object model under a coordinate reference system known by the virtual world;
the fusion module is further used for matching and fusing the virtual objects at the corresponding positions in the movable area under the modified virtual world at least based on the three-dimensional object model and the coordinate, direction and size information of the three-dimensional object model, wherein when the virtual reality payment is not needed, the payment verification device is covered by the objects in the virtual world, and when the virtual reality payment is needed, the virtual objects corresponding to the payment verification device appear at the corresponding positions in the virtual world.
2. The scene-adaptive virtual reality transition apparatus of claim 1, further comprising a sensor for acquiring the scene and object information.
3. The scene-adaptive virtual reality transformation apparatus of claim 2, wherein the sensor is any one or a combination of the following components: image sensor, infrared sensor, depth information sensor.
4. The scene-adaptive virtual reality transformation device according to claim 1, wherein the fusion module is further configured to perform matching fusion in terms of size, color, and/or display manner for the virtual object at the corresponding position in the movable region under the modified virtual world based on at least the three-dimensional object model and its coordinate, orientation, and size information.
5. The scene-adaptive virtual reality transformation device of claim 1, wherein the fusion module is further configured to perform a consistent modification in size and color of a virtual object at a corresponding position in the movable region under the modified virtual world based on at least the three-dimensional object model of the object and its coordinate, orientation, and size information.
6. The scene-adaptive virtual reality transformation device of claim 5, wherein the certain object is a payment verification device.
7. The scene-adaptive virtual reality transformation device of claim 6, wherein the payment verification device is any one or a combination of the following components: the device comprises a face recognition component, a fingerprint recognition component, a palm vein recognition component and a gait recognition component.
8. The scene-adaptive virtual reality transformation apparatus of claim 1, wherein the three-dimensional reconstruction module is further configured to obtain coordinate, orientation and dimension information of the corresponding three-dimensional spatial model in a coordinate reference system known to the virtual world;
the fusion module is further configured to determine a movable region in the real space based on the three-dimensional space model and coordinate, direction, and size information thereof.
9. The scene-adaptive virtual reality transformation device according to claim 8, wherein the fusion module is further configured to perform matching fusion of the size, color and/or display manner of the virtual scene and/or the virtual object in the virtual world based on at least the three-dimensional space model and its coordinate, orientation and size information.
10. The scene-adaptive virtual reality transformation device of claim 1, wherein the scene and object recognition module is further configured to recognize scenes and objects using a deep learning method.
11. The scene-adaptive virtual reality transformation device of claim 1, wherein the scene and object identification module is further configured to identify a category of the object.
12. A scene adaptive virtual reality system, comprising:
a virtual reality device for providing a virtual world comprising a virtual scene and a virtual object; and
the scene-adaptive virtual reality transforming apparatus of any one of claims 1 to 11.
13. A scene adaptation method of virtual reality is characterized by comprising the following steps:
identifying scenes and objects from scene and object information collected from a real space;
performing three-dimensional reconstruction on the identified scene to obtain a three-dimensional space model; and
modifying the virtual world based on at least the three-dimensional space model such that the movable region under the modified virtual world is adapted to the movable region in the real space, wherein the method further comprises the steps of:
carrying out three-dimensional reconstruction on the identified object to obtain a corresponding three-dimensional object model and coordinate, direction and size information of the three-dimensional object model under a coordinate reference system known by the virtual world; and
matching and fusing the virtual objects at the corresponding positions in the movable area under the modified virtual world at least based on the three-dimensional object model and the coordinate, direction and size information thereof,
when the virtual reality payment is needed, the virtual object corresponding to the payment verification device appears at the corresponding position of the virtual world.
14. The scene adaptation method according to claim 13, characterized in that the virtual objects of the corresponding positions in the movable area under the modified virtual world are subjected to matching fusion in terms of size, color and/or display manner based on at least the three-dimensional object model and its coordinate, orientation and size information.
15. The scene adaptation method according to claim 13, wherein in the matching fusion step: and carrying out consistency modification on the size and the color of a certain virtual object at a corresponding position in a movable area under the modified virtual world at least based on the three-dimensional object model of the certain object and the coordinate, direction and size information of the certain virtual object.
16. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 13 to 15 are implemented when the program is executed by the processor.
17. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the steps of the method according to any of claims 13 to 15.
CN201711478023.3A 2017-12-29 2017-12-29 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method Active CN108320333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711478023.3A CN108320333B (en) 2017-12-29 2017-12-29 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711478023.3A CN108320333B (en) 2017-12-29 2017-12-29 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method

Publications (2)

Publication Number Publication Date
CN108320333A CN108320333A (en) 2018-07-24
CN108320333B 2022-01-11

Family

ID=62893502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711478023.3A Active CN108320333B (en) 2017-12-29 2017-12-29 Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method

Country Status (1)

Country Link
CN (1) CN108320333B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242958A (en) * 2018-08-29 2019-01-18 广景视睿科技(深圳)有限公司 A kind of method and device thereof of three-dimensional modeling
CN109409244B (en) * 2018-09-29 2021-03-09 维沃移动通信有限公司 Output method of object placement scheme and mobile terminal
CN110120090B (en) * 2019-04-01 2020-09-25 贝壳找房(北京)科技有限公司 Three-dimensional panoramic model construction method and device and readable storage medium
CN110147770A (en) * 2019-05-23 2019-08-20 北京七鑫易维信息技术有限公司 A kind of gaze data restoring method and system
CN110288650B (en) * 2019-05-27 2023-02-10 上海盎维信息技术有限公司 Data processing method and scanning terminal for VSLAM
CN110415359A (en) * 2019-07-29 2019-11-05 恒信东方文化股份有限公司 A kind of three-dimensional modeling method and system
CN110703916B (en) * 2019-09-30 2023-05-09 恒信东方文化股份有限公司 Three-dimensional modeling method and system thereof
CN110711382B (en) * 2019-10-21 2020-12-01 腾讯科技(深圳)有限公司 Control method and device of virtual operation object, storage medium and electronic device
CN111127029A (en) * 2019-12-27 2020-05-08 上海诺亚投资管理有限公司 VR video-based payment method and system
CN112650395A (en) * 2020-12-30 2021-04-13 上海建工集团股份有限公司 Real-time updating method for virtual reality scene of architectural engineering
CN113254915B (en) * 2021-05-06 2023-03-21 西安交通大学 Cross-scene and equipment keystroke behavior authentication method, system, equipment and medium
CN115082648B (en) * 2022-08-23 2023-03-24 海看网络科技(山东)股份有限公司 Marker model binding-based AR scene arrangement method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999160A (en) * 2011-10-14 2013-03-27 微软公司 User controlled real object disappearance in a mixed reality display
CN106354251A (en) * 2016-08-17 2017-01-25 深圳前海小橙网科技有限公司 Model system and method for fusion of virtual scene and real scene
CN107016730A (en) * 2017-04-14 2017-08-04 陈柳华 The device that a kind of virtual reality is merged with real scene
CN107077755A (en) * 2016-09-30 2017-08-18 深圳达闼科技控股有限公司 Virtually with real fusion method, system and virtual reality device
CN107122792A (en) * 2017-03-15 2017-09-01 山东大学 Indoor arrangement method of estimation and system based on study prediction
CN107251100A (en) * 2015-02-27 2017-10-13 微软技术许可有限责任公司 The virtual environment that physics is limited moulds and anchored to actual environment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9704298B2 (en) * 2015-06-23 2017-07-11 Paofit Holdings Pte Ltd. Systems and methods for generating 360 degree mixed reality environments

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999160A (en) * 2011-10-14 2013-03-27 微软公司 User controlled real object disappearance in a mixed reality display
CN107251100A (en) * 2015-02-27 2017-10-13 微软技术许可有限责任公司 The virtual environment that physics is limited moulds and anchored to actual environment
CN106354251A (en) * 2016-08-17 2017-01-25 深圳前海小橙网科技有限公司 Model system and method for fusion of virtual scene and real scene
CN107077755A (en) * 2016-09-30 2017-08-18 深圳达闼科技控股有限公司 Virtually with real fusion method, system and virtual reality device
CN107122792A (en) * 2017-03-15 2017-09-01 山东大学 Indoor arrangement method of estimation and system based on study prediction
CN107016730A (en) * 2017-04-14 2017-08-04 陈柳华 The device that a kind of virtual reality is merged with real scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
IM2CAD; Hamid Izadinia et al.; arXiv; 2017-04-24; pages 1-9 *

Also Published As

Publication number Publication date
CN108320333A (en) 2018-07-24

Similar Documents

Publication Publication Date Title
CN108320333B (en) Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
US10417775B2 (en) Method for implementing human skeleton tracking system based on depth data
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
CN103778635B (en) For the method and apparatus processing data
CN108257139B (en) RGB-D three-dimensional object detection method based on deep learning
JP7337104B2 (en) Model animation multi-plane interaction method, apparatus, device and storage medium by augmented reality
US11308347B2 (en) Method of determining a similarity transformation between first and second coordinates of 3D features
CN106705837B (en) Object measuring method and device based on gestures
CA3147320A1 (en) Artificial intelligence systems and methods for interior design
WO2020024569A1 (en) Method and device for dynamically generating three-dimensional face model, and electronic device
WO2020042970A1 (en) Three-dimensional modeling method and device therefor
CN107004275A (en) For determining that at least one of 3D in absolute space ratio of material object reconstructs the method and system of the space coordinate of part
JP2011095797A (en) Image processing device, image processing method and program
CN103970264A (en) Gesture recognition and control method and device
CN106527719A (en) House for sale investigation system based on AR (Augmented Reality) technology and real-time three-dimensional modeling
CN109359514A (en) A kind of gesture tracking identification federation policies method towards deskVR
Stommel et al. Model-free detection, encoding, retrieval, and visualization of human poses from kinect data
CN111882380A (en) Virtual fitting method, device, system and electronic equipment
JP6770208B2 (en) Information processing device
CN109426336A (en) A kind of virtual reality auxiliary type selecting equipment
Feng et al. Motion capture data retrieval using an artist’s doll
Weiss et al. Automated layout synthesis and visualization from images of interior or exterior spaces
Wang et al. Im2fit: Fast 3d model fitting and anthropometrics using single consumer depth camera and synthetic data
Boyali et al. 3D and 6 DOF user input platform for computer vision applications and virtual reality
Zou et al. Precise 3D reconstruction from a single image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant