CN114332419A - Processing method, device, equipment and medium for virtual display

Info

Publication number: CN114332419A
Authority: CN (China)
Prior art keywords: target, data, virtual object, spatial, preset virtual
Legal status: Pending (an assumption, not a legal conclusion; no legal analysis has been performed)
Application number: CN202111628390.3A
Other languages: Chinese (zh)
Inventor: 陈凯彬
Current Assignee: Shenzhen TetrasAI Technology Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Shenzhen TetrasAI Technology Co Ltd
Application filed by Shenzhen TetrasAI Technology Co Ltd
Priority application: CN202111628390.3A
Publication: CN114332419A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a processing method for virtual display, a virtual display method, and a related apparatus. The method includes: acquiring a spatial model of a target environment and spatial data corresponding to different positions in the spatial model; obtaining a target display position of a preset virtual object in the spatial model; and obtaining target spatial data corresponding to the preset virtual object based on the spatial data corresponding to the target display position. In this way, target spatial data can be configured for the preset virtual object, so that the object can subsequently be displayed at the position corresponding to that data and thus be reproduced at a fixed position.

Description

Processing method, device, equipment and medium for virtual display
Technical Field
The present application relates to the field of augmented reality technologies, and in particular, to a method, an apparatus, a device, and a medium for processing a virtual display.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world: computer-generated virtual information such as text, images, spatial feature-point models, music, and video is displayed on a user device together with the real scene, and the two kinds of information complement each other, thereby enhancing the real world. For example, a user may see on a device a real-time image of the real world with virtual AR objects placed in it.
During long-term research and development, the applicant found that in the prior art a user can place an AR object in a real-time image of the real world through an AR application, but the AR object cannot be positioned persistently: once the user exits the AR application and opens it again, the historically placed AR object is no longer visible at that position, i.e., the AR object cannot be reproduced.
Disclosure of Invention
The main technical problem addressed by the present application is to provide a processing method, apparatus, device, and medium for virtual display.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a processing method for virtual display, including: acquiring a spatial model of a target environment and spatial data corresponding to different positions in the spatial model; obtaining a target display position of a preset virtual object in the spatial model; and obtaining target spatial data corresponding to the preset virtual object based on the spatial data corresponding to the target display position, where the target spatial data is used to determine the display position of the preset virtual object on a user terminal when the user is located in the target environment.
In this way, the display position of the preset virtual object in the spatial model of the target environment is determined first, and from it the target spatial data corresponding to the preset virtual object in the target environment is derived. Target spatial data, that is, the placement position of the preset virtual object in the target environment, can thus be configured for the object, so that it is subsequently displayed at the position corresponding to that data and the virtual object is reproduced at a fixed position.
Acquiring the spatial model of the target environment and the spatial data corresponding to different positions in the spatial model includes: constructing the spatial model of the target environment in a first construction mode; constructing map data of the target environment in a second construction mode, where the map data includes the spatial data corresponding to different positions in the spatial model; and synchronously aligning the constructed spatial model and map data.
In this way, the spatial model and the map data of the target environment are obtained for the subsequent placement of the preset virtual object and the configuration of its target spatial data. Moreover, because the spatial model and the map data are synchronously aligned, the correspondence between the spatial model and the spatial data in the map data can be determined, which in turn allows the position of the preset virtual object in the target environment to be configured.
Here, the construction of the spatial model and the construction of the map data are performed synchronously.
Constructing the two synchronously is what keeps the spatial model and the map data aligned.
The first construction mode may be Mesh-based reconstruction, and the second construction mode may be based on ARWorldMap.
The spatial model and the map data can thus be obtained with Mesh reconstruction and ARWorldMap, respectively.
Constructing the spatial model of the target environment in the first construction mode includes: acquiring the target environment with a first acquisition device to obtain first acquired data; obtaining, based on the first acquired data, first spatial feature points in the target environment and the depths of those feature points; and constructing the spatial model from the first spatial feature points and their depths. And/or, constructing the map data of the target environment in the second construction mode includes: acquiring the target environment with a second acquisition device to obtain second acquired data; and determining, based on the second acquired data, second spatial feature points in the target environment and the spatial data corresponding to them.
Therefore, the spatial model and the map data can be constructed based on the collected data of the target environment, and the spatial model and the map data reflecting the target environment can be obtained.
The first acquisition device and the second acquisition device may be the same device or two separate devices; when they are two separate devices, both are mounted on the same equipment.
The target environment can therefore be acquired synchronously, either through one acquisition device or through two acquisition devices on the same equipment, so that the spatial model and the map data can be aligned.
The first collected data is at least one of image data and radar data, and the second collected data is image data.
Therefore, information of the target environment can be acquired by using the image data or the radar data, and a space model and map data reflecting the target environment are further constructed and obtained.
Obtaining the target display position of the preset virtual object in the spatial model includes: displaying the spatial model; in response to a placement operation by a user, displaying the preset virtual object at a specified position on the spatial model, where the placement operation indicates that the preset virtual object is to be placed at that position; and taking the specified position as the target display position.
The specified position of the preset virtual object in the spatial model, and hence its position in the target environment, is thus determined by the user's placement operation, so the position of the preset virtual object in the target environment can be configured flexibly according to user operations.
The spatial data is stored in the map data of the target environment, and after the target spatial data of the preset virtual object is obtained based on the spatial data corresponding to the target display position, the method further includes: adding the target spatial data of the preset virtual object to the map data.
By adding the target spatial data of the preset virtual object to the map data, the position of the virtual object is stored in the map data, so that the map data can be used to determine that position during subsequent virtual display and the virtual object can be reproduced at a fixed position.
After the target spatial data corresponding to the preset virtual object is obtained based on the spatial data corresponding to the target display position, the method includes: saving the target spatial data corresponding to the preset virtual object; in response to a preset trigger operation by the user, obtaining the saved target spatial data to determine the target spatial position of the preset virtual object in the target environment; and, in response to detecting that the target spatial position is currently being photographed, displaying the preset virtual object in the current shooting picture.
By saving the target spatial data of the preset virtual object, its target spatial position in the target environment can be determined, and when that position is currently being photographed, the preset virtual object is displayed in the current shooting picture. In other words, the target spatial position in the target environment is used to display the preset virtual object virtually and automatically, so that it can be reproduced at a fixed position.
Saving the target spatial data corresponding to the preset virtual object includes adding the target spatial data to the map data, and obtaining the saved target spatial data includes: obtaining the target spatial data from the map data, where the target spatial data represents the target spatial position of the preset virtual object in the target environment. Detecting that the target spatial position is currently being photographed includes: performing positioning with the map data and the current shooting picture to obtain current positioning data; and detecting, based on the current positioning data, whether the target spatial position is currently being photographed.
By performing positioning, the target spatial data of the preset virtual object can be obtained from the map data, the target spatial position determined, and the virtual display of the preset virtual object at that position realized.
Detecting, based on the current positioning data, that the target spatial position is currently being photographed includes: determining, based on the current positioning data, whether at least one matching point pair exists, where a matching point pair consists of a first feature point in the current shooting picture and a second feature point located at the target spatial position in the target environment; and, in response to at least one matching point pair existing, determining that the target spatial position is currently being photographed.
Whether the target spatial position is currently being photographed can therefore be determined from the matching of feature points, which in turn determines whether the preset virtual object needs to be displayed, so that it is displayed accurately.
The current positioning data includes a current pose. Displaying the preset virtual object in the current display picture includes: determining display parameters of the preset virtual object based on the current pose and the target spatial position, where the display parameters include at least one of the display position in the current display picture and the display form of the preset virtual object; and displaying the preset virtual object in the current display picture according to the display parameters.
How to display the preset virtual object can therefore be determined from the current pose and the object's target spatial position, so that it is displayed accurately.
The display form includes at least one of a size and an orientation.
Accordingly, the preset virtual object can be displayed accurately using at least one of its size and orientation.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a processing apparatus for virtual display, the apparatus comprising: the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a space model of a target environment and space data corresponding to different positions in the space model, and the second acquisition module is used for acquiring a target display position of a preset virtual object in the space model; the third obtaining module is used for obtaining target space data corresponding to the preset virtual object based on the space data corresponding to the target display position, wherein the target space data is used for determining the display position of the preset virtual object at the user terminal under the condition that the user is located in the target environment.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an electronic device comprising a processor and a memory, the memory being arranged to store program data, the processor being arranged to execute the program data to implement any of the methods of processing or methods of virtual display described above.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer-readable storage medium for storing program data that can be executed to implement any of the above-described methods of processing a virtual display or methods of virtual display.
In the above scheme, a target space position can be preconfigured for the preset virtual object, so that the preset virtual object is subsequently intelligently displayed at the target space position, and the virtual object can be reproduced at a fixed position.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for processing a virtual display according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating a processing method for virtual display according to another embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating another embodiment of step S210 of the present application;
FIG. 4 is a schematic flow chart diagram illustrating another embodiment of step S220;
FIG. 5 is a schematic flow chart illustrating another embodiment of step S240 of the present application;
FIG. 6 is a schematic flow chart diagram illustrating a method for processing a virtual display according to yet another embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of another embodiment of step S660;
FIG. 8 is a schematic flow chart diagram illustrating another embodiment of step S670 of the present application;
FIG. 9 is a block diagram of an embodiment of a processing device for virtual display according to the present application;
FIG. 10 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 11 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
In order to make the purpose, technical solutions, and effects of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, "plurality" herein means two or more.
It is understood that the methods of the present application can include any of the method embodiments described below as well as any non-conflicting combinations of the method embodiments described below.
It is understood that the processing method of the virtual display in the present application may be executed by a processing device, and the processing device may be any electronic device with processing capability, such as a mobile phone, a tablet computer, a computer, and the like.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a processing method for virtual display according to the present application, the method including:
step S110: and acquiring a space model of the target environment and space data corresponding to different positions in the space model.
The target environment is an environment in which a user needs to add a preset virtual object, and the preset virtual object is a virtual object which the user needs to view.
For example, some users may want to view a virtual object in a particular environment, such as a virtual pet in a room or a check-in virtual object at a landmark building. To meet this need, the preset virtual object that the user wants to view must first be placed at a fixed position in the target environment, which the processing device achieves by executing the virtual object positioning steps of the processing method for virtual display of the present application.
In the processing method for virtual display of the present application, the users may include developers of a virtual display application, end users of the application, and so on. Generally, the user of the steps related to virtual object positioning is a developer, while the user of the steps related to virtual display is an end user. Specifically, the processing device responds to a developer's operations to execute the virtual object positioning steps, fixing the preset virtual object to be viewed in the target environment; the virtual display steps are then executed to display the preset virtual object at its target spatial position in the target environment for the end user to view.
The spatial data is spatial position information, and spatial position information of different positions in the spatial model corresponding to the target environment can be determined by using the spatial data.
Step S120: and obtaining the target display position of the preset virtual object in the space model.
It should be noted that the spatial model is a visual model constructed from the target environment, whereas the spatial data corresponding to different positions in the spatial model is not visual. The spatial model lets the user determine the target display position of the preset virtual object within a visual model, and that target display position corresponds to a target spatial position of the preset virtual object in the target environment. The processing device obtains the target display position determined by the user in the spatial model so that the target spatial position can be determined subsequently.
Step S130: and obtaining target space data corresponding to the preset virtual object based on the space data corresponding to the target display position.
It should be noted that after the spatial data corresponding to different positions in the spatial model and the target display position have been obtained, the spatial data corresponding to the target display position can be taken as the target spatial data, which is then used during virtual display to determine the target spatial position of the preset virtual object in the target environment.
The processing method of the present application includes steps related to the processing for virtual display and steps related to the virtual display itself; the former are executed by the processing device, while the latter may be executed either by the processing device or by a user terminal connected to it.
In the above scheme, a target space position can be configured for the preset virtual object, so that the preset virtual object is displayed in the target space position in a follow-up intelligent manner, and the virtual object can be reproduced at a fixed position.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of a processing method for virtual display according to the present application, the method including:
it should be noted that the obtaining of the spatial model of the target environment and the spatial data corresponding to different positions in the spatial model may be obtaining of a spatial model and map data constructed by other devices, where the map data includes spatial data corresponding to different positions in the spatial model; or the processing device may be used to construct a spatial model and map data of the target environment, so as to obtain the spatial model of the target environment and spatial data corresponding to different positions in the spatial model. The latter is taken as an example in the present embodiment, and step S110 may be implemented by step S210 and step S220, and executed by the processing device. If the spatial model and the map data of the target environment are constructed by other devices, the other devices execute step S210 and step S210, and then the processing device acquires the spatial model and the map data constructed by the other devices.
The constructed spatial model and the map data are both obtained according to the target environment and are aligned synchronously, and the map data comprise spatial data corresponding to different positions in the spatial model.
Step S210: and constructing to obtain a spatial model of the target environment by using the first construction mode.
Further, the first construction mode may be Mesh reconstruction; the visual spatial model of the target environment obtained in this way can reflect the approximate outlines of the objects contained in the target environment.
Step S220: and constructing and obtaining the map data of the target environment by using a second construction mode.
Further, the second construction mode may use ARWorldMap, with which a map of the target environment, i.e., the map data, can be constructed. This map is invisible to the user and can be used to determine the spatial data at different positions in the target environment.
In order to align the above-described spatial model and map data, the construction of the spatial model and the construction of the map data are performed in synchronization, that is, step S210 and step S220 are performed simultaneously.
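As an illustrative sketch only (the patent supplies no code), the two construction modes map naturally onto Apple's ARKit on a LiDAR-equipped device: mesh reconstruction and world-map accumulation run in the same session, so the resulting spatial model and map data are aligned by construction. The class and method names below are ARKit's; the surrounding structure is an assumption.

```swift
import ARKit

// Sketch: one ARKit session scanning the target environment, building the Mesh
// spatial model (step S210) and the ARWorldMap map data (step S220) synchronously.
final class EnvironmentScanner: NSObject, ARSessionDelegate {
    let session = ARSession()

    func startScanning() {
        let config = ARWorldTrackingConfiguration()
        // The "first construction mode": Mesh reconstruction (requires LiDAR).
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        session.delegate = self
        session.run(config)
    }

    // Mesh anchors delivered here make up the visual spatial model.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        _ = meshAnchors // feed these into the spatial-model builder
    }

    // The "second construction mode": capture the map data from the same session.
    func captureMapData(completion: @escaping (ARWorldMap?) -> Void) {
        session.getCurrentWorldMap { worldMap, _ in
            completion(worldMap)
        }
    }
}
```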
Referring to fig. 2 and fig. 3 in combination, fig. 3 is a schematic flowchart illustrating another embodiment of step S210 in the present application, where step S210 includes:
the spatial model and the map data for constructing the target environment are acquired based on the target environment, and then constructed according to the acquired data. The processing equipment may comprise a collecting device for collecting the target environment, the collecting devices used in different construction modes may be the same or different, and if the collecting devices used in the two construction modes are different, the two collecting devices are arranged on the same processing equipment.
Step S311: and acquiring the target environment by using a first acquisition device to obtain first acquisition data.
The acquisition device used in step S210 is a first acquisition device, the first acquisition device may be a shooting component or a radar, the form of the first acquisition data corresponds to the type of the first acquisition device used, and the first acquisition data is at least one of image data and radar data.
Specifically, the user starts the first acquisition device in the target environment and scans the target environment, so that the target environment is acquired by the first acquisition device of the processing device. The processing device may acquire information of the outside world, i.e., the target environment, by using the first acquisition apparatus in the user scanning process, thereby obtaining first acquisition data.
Step S312: based on the first acquisition data, a first spatial feature point and a depth of the first spatial feature point in the target environment are acquired.
The first spatial feature points are a plurality of feature points selected in the target environment. The radar data can be used to extract the first spatial feature points and determine their depths, where depth means the distance between a first spatial feature point and the processing device; alternatively, the image data can be analyzed to extract the first spatial feature points and determine their depths.
Step S313: and constructing to obtain a space model by using the first space characteristic point and the depth of the first space characteristic point.
After obtaining the first spatial feature point and the corresponding depth in the target environment, the processing device may construct a spatial model of the target environment based on the above information, the spatial model being capable of substantially reflecting an object contour and the like in the target environment through the first spatial feature point and the corresponding depth.
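As a hypothetical illustration of this step (not from the patent), a feature point detected in an image can be lifted into 3D from its depth, assuming a pinhole camera model; all names and parameters here are assumptions. The spatial model is then built from the cloud of such unprojected points.

```swift
import simd

// Hypothetical sketch: back-projecting a 2D feature point plus its depth into a
// 3D point of the spatial model, assuming pinhole intrinsics (fx, fy, cx, cy).
struct PinholeIntrinsics {
    let fx: Float, fy: Float, cx: Float, cy: Float
}

func unproject(pixel: SIMD2<Float>, depth: Float, k: PinholeIntrinsics) -> SIMD3<Float> {
    // X = (u - cx) * d / fx, Y = (v - cy) * d / fy, Z = d
    SIMD3<Float>((pixel.x - k.cx) * depth / k.fx,
                 (pixel.y - k.cy) * depth / k.fy,
                 depth)
}
```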
Referring to fig. 2 and 4 in combination, fig. 4 is a schematic flowchart illustrating another embodiment of step S220 of the present application, wherein step S220 includes:
step S421: and acquiring the target environment by using a second acquisition device to obtain second acquisition data.
The second acquisition device is a shooting component (a camera). Specifically, the user may activate the second acquisition device in the target environment and scan the environment, so that the equipment's second acquisition device acquires the target environment. Acquisition by the first acquisition device and acquisition by the second acquisition device proceed synchronously, which ensures that the constructed spatial model and map data are aligned.
It should be noted that the user can move and rotate within the target environment so that the processing device acquires it from multiple positions and angles; collecting enough data in this way yields a more accurate spatial model and map data.
Step S422: and determining a second spatial feature point in the target environment and spatial data corresponding to the second spatial feature point based on the second acquired data.
The second acquired data is image data, and the second spatial feature points are a plurality of feature points selected in the target environment. The processing device can extract the second spatial feature points from the acquired image data; environment textures in the target environment may serve as second spatial feature points, for example the corner of a table or the wood grain on a table top.
It should be noted that the map data in effect stores the information of the second spatial feature points in the target environment, so the brightness, texture detail, and similar properties of the target environment affect its acquisition and hence the construction of the map data. Too little brightness or a lack of texture detail, for example when scanning a plain white wall, degrades the constructed map data.
The spatial data corresponding to a second spatial feature point reflects its spatial position in the target environment. Specifically, the second spatial feature points and their corresponding spatial data constitute the map data, which reflects the distribution of the second spatial feature points in the target environment and can be used to determine the position and orientation, that is, the pose, of the preset virtual object and of the processing device in the target environment. Since the map data and the spatial model are aligned, the map data also contains the spatial data corresponding to different positions in the spatial model.
Step S230: and synchronously aligning the constructed space model and the map data.
Because the spatial model and the map data are both constructed from the target environment and can be constructed synchronously, they can be aligned synchronously and put into correspondence, so that the spatial data, i.e., the spatial position information, corresponding to different positions in the spatial model can be determined from the map data.
Step S240: and obtaining the target display position of the preset virtual object in the space model.
It should be noted that the processing device may run a development tool in which the user operates, thereby implementing steps S240 to S260. The user may use the development tool to develop a virtual display application, i.e., an AR application, which is the program that subsequently executes the virtual display method. For example, the development tool may be the Unity real-time content development platform.
In some embodiments, several identical or different virtual objects may be placed in one spatial model, and may be determined according to the actual needs of the user, which is not limited herein.
Referring to fig. 2 and fig. 5 in combination, fig. 5 is a schematic flowchart of another embodiment of step S240 in the present application, and step S240 includes:
step S541: and displaying the space model.
Specifically, the acquired spatial model and the corresponding map data are imported into a development tool, and the processing device can display the spatial model in an interface of the development tool for a user to view and determine the position where the preset virtual object is placed.
Since the spatial model may reflect an approximate contour of an object in the target environment, by placing the preset virtual object in the spatial model, the processing device may determine a target spatial location of the preset virtual object in the target environment.
Step S542: and responding to the placement operation of the user, and displaying the preset virtual object at a specified position on the space model.
It should be noted that the processing device may store a number of previously constructed virtual objects and display them in the development tool interface, from which the user selects the preset virtual object and drags it to some position in the spatial model; that position is the specified position determined by the user.
The processing device may display the preset virtual object at the designated position on the spatial model in response to the above-mentioned placing operation, so that the user may view an effect of placing the preset virtual object at the designated position.
Step S543: the designated position is acquired as a target display position.
The target display position in the spatial model corresponds to the target spatial position of the preset virtual object in the target environment.
Step S250: and obtaining target space data corresponding to the preset virtual object based on the space data corresponding to the target display position.
After the target display position is determined, the spatial data corresponding to it can be obtained from the spatial model and the map data aligned with it, and this spatial data serves as the target spatial data corresponding to the preset virtual object. The target spatial position of the preset virtual object in the target environment can then be determined from this target spatial data.
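A minimal sketch of this lookup, assuming the alignment is represented as a fixed model-to-map transform (the types and the transform representation are assumptions, not from the patent):

```swift
import simd

// Hypothetical sketch of step S250: because the spatial model and the map data
// are aligned, a display position chosen in the model can be converted into
// target spatial data via a fixed model-to-map transform.
struct MapData {
    var modelToMap: simd_float4x4                        // alignment transform
    var objectSpaceData: [String: simd_float4x4] = [:]   // target spatial data per object
}

func targetSpaceData(forDisplayPosition modelTransform: simd_float4x4,
                     in map: MapData) -> simd_float4x4 {
    // Target spatial data = the display position expressed in map coordinates.
    map.modelToMap * modelTransform
}
```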
In some embodiments, the processing device may modify the target spatial data of the preset virtual object in response to the user moving the object within the spatial model, so that subsequent display of the object uses the modified data as its reference. The position of the preset virtual object in the target environment can thus be configured flexibly and adjusted conveniently according to user operations, improving development efficiency.
In some embodiments, the processing device may likewise modify the target spatial data in response to the user modifying the preset virtual object itself, for example adding, removing, or replacing objects, so that subsequent display uses the modified data as its reference; the preset virtual object can thus be modified conveniently, improving development efficiency.
Step S260: and adding target space data of the preset virtual object into the map data.
It should be noted that once the processing device holds the map data obtained through step S260, the user can scan the surroundings through the virtual display application installed on the device to determine whether the device is in the target environment; if it is, the map data can be used to display the preset virtual object at the target spatial position whenever that position is photographed.
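Using ARKit's ARWorldMap as the map data, as this embodiment does, step S260 amounts to adding a named anchor at the target pose before serializing the map. A sketch follows; the anchor name and file URL are illustrative assumptions, and the ARKit calls themselves are real.

```swift
import ARKit

// Sketch of step S260: store the preset virtual object's target spatial data in
// the map data by adding a named anchor, then serialize the world map.
func addVirtualObject(at targetTransform: simd_float4x4,
                      to session: ARSession,
                      savingTo url: URL) {
    let anchor = ARAnchor(name: "preset-virtual-object", transform: targetTransform)
    session.add(anchor: anchor)

    // In practice, capture the map once the anchor has been integrated.
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { return }
        // The serialized map now carries the virtual object's target spatial data.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}
```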
In some embodiments, the above steps may be repeatedly performed to obtain a plurality of map data, and the plurality of map data are preset in the virtual display application for the device to acquire and install the virtual display application, so that the user may see the corresponding preset virtual object in different environments through the processing device.
In the above scheme, a target space position can be configured for the preset virtual object, and the target space data of the preset virtual object is added to the map data, that is, the position of the virtual object is stored in the map data, so that the position of the displayed virtual object is determined by using the map data in the subsequent virtual display process, and the virtual object can be reproduced at a fixed position.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a processing method for virtual display according to another embodiment of the present application, the method including:
step S610: and acquiring a space model of the target environment and space data corresponding to different positions in the space model.
Step S620: and obtaining the target display position of the preset virtual object in the space model.
Step S630: and obtaining target space data corresponding to the preset virtual object based on the space data corresponding to the target display position.
It is understood that the related descriptions of step S610 to step S630 may refer to the related contents of step S110 to step S130, which are not described herein again.
Step S640: and storing target space data of the preset virtual object.
Specifically, the saving of the target space data of the preset virtual object may include adding the target space data of the preset virtual object to the map data.
It should be noted that, through the above steps S610 to S640, the preset virtual object may be placed in the target environment, and the steps S610 to S640 may also be repeatedly executed, so that the virtual objects are placed in a plurality of environments respectively.
Step S650: and responding to preset trigger operation of a user, and acquiring the stored target space data to determine the target space position of the preset virtual object in the target environment.
It is understood that a virtual display application may run on the processing device. The application is produced through steps S610 to S640, and its data may include the map data of the target environment, obtained by the processing device executing those steps. The map data may include the target spatial data of the preset virtual object, which represents the object's target spatial position in the target environment.
The preset triggering operation may be a preset operation for triggering a relevant step of executing the virtual display, for example, opening a virtual display application.
Specifically, the processing device may acquire and install the virtual display application, so as to acquire map data of the target environment, and in response to a preset trigger operation of a user, acquire target space data from the map data, so as to determine a target space position of a preset virtual object in the target environment by using the target space data.
Step S660: whether the target space position is shot at present is detected.
It should be noted that the processing device may not currently be photographing the target spatial position, in which case it should not display the preset virtual object. Since the preset virtual object is displayed only when the target spatial position is being photographed, the device first detects whether it is currently photographing that position.
Referring to fig. 6 and 7 in combination, fig. 7 is a schematic flowchart illustrating another embodiment of step S660 of the present application, where step S660 includes:
step S761: and positioning by using the map data and the current shooting picture to obtain current positioning data.
It should be noted that this positioning process may also be called relocalization. The user scans the surroundings with the virtual display application running on the processing device to obtain a current shooting picture, from which spatial feature point information in the current environment can be determined. Since the map data of the target environment is pre-stored on the processing device and contains all the spatial feature point information of the target environment, the device can match the spatial feature points determined from the current shooting picture against the spatial feature points of the target environment in the map data. If the spatial feature points of the current environment match a number of spatial feature points of the target environment, the two environments can be judged consistent, i.e., the device is in the target environment; the current positioning data can then be obtained directly from the map data, without re-acquiring the environment and building a new map.
If the spatial feature points of the current environment do not correspond to spatial feature points in the map data, relocalization is judged to have failed. Note that even when the user is in the target environment, the processing device may occupy a different location during relocalization than it did while the map was being built. For the same spatial feature point, the acquisition angle during map construction may differ from the acquisition angle during relocalization, and if the two angles differ too much, matching may fail even for identical feature points. Therefore, acquiring the feature point information from as many positions and angles as possible during map construction improves the subsequent relocalization success rate, as does keeping the acquisition pose during relocalization close to the acquisition pose used during map construction.
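With ARWorldMap, relocalization is driven by supplying the saved map as the session's initial world map and watching the tracking state. The following sketch uses real ARKit APIs; the class structure and file URL are assumptions.

```swift
import ARKit

// Sketch of step S761: relocalize against saved map data and observe progress.
final class Relocalizer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func relocalize(mapURL: URL) throws {
        let data = try Data(contentsOf: mapURL)
        guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                               from: data) else { return }
        let config = ARWorldTrackingConfiguration()
        config.initialWorldMap = map   // the pre-stored map data of the target environment
        session.delegate = self
        session.run(config, options: [.resetTracking, .removeExistingAnchors])
    }

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .limited(.relocalizing):
            break // still matching the current shooting picture against the map data
        case .normal:
            break // relocalized: the current positioning data (current pose) is valid
        default:
            break
        }
    }
}
```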
Step S762: whether the target space position is shot currently is detected based on the current positioning data.
It should be noted that after the positioning is completed, it can be determined that the target environment is currently located, that is, the map data of the target environment and the target space data included therein can be determined.
The current positioning data obtained by positioning includes the current pose of the processing device, that is, the position and posture currently located in the target environment. The processing device can determine whether the target space position is shot currently or not according to the relation between the pose and the target space position.
Alternatively, whether the target spatial position is currently being photographed may be determined from the feature points in the current shooting picture and the feature points at the target spatial position. Specifically, the processing device may determine, based on the current shooting picture and the current positioning data, whether at least one matching point pair exists, where a matching point pair consists of a first feature point in the current shooting picture and a second feature point located at the target spatial position in the target environment; in response to at least one matching point pair existing, the device determines that it is currently photographing the target spatial position.
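The matching-point-pair test is not exposed by any public AR framework API; the following is a purely hypothetical sketch of the idea, in which every type, name, and threshold is an assumption.

```swift
import simd

// Hypothetical sketch of the matching-point-pair test: the target spatial
// position counts as "currently photographed" if at least one feature point in
// the current picture matches a feature point stored at that position.
struct FeaturePoint {
    let descriptor: [Float]      // appearance descriptor
    let position: SIMD3<Float>   // spatial data of the feature point
}

func descriptorDistance(_ a: [Float], _ b: [Float]) -> Float {
    zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
}

func isTargetPositionInView(currentPoints: [FeaturePoint],
                            targetPoints: [FeaturePoint],
                            threshold: Float = 0.1) -> Bool {
    for p in currentPoints {
        for q in targetPoints
        where descriptorDistance(p.descriptor, q.descriptor) < threshold {
            return true  // at least one matching point pair exists
        }
    }
    return false
}
```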
Step S670: and in response to the detection of the current shooting target space position, displaying a preset virtual object in the current shooting picture.
It will be appreciated that the same preset virtual object looks different when captured by the processing device from different positions; for example, its size or orientation may differ, depending on the object's target spatial position and on the position and pose of the processing device.
Referring to fig. 6 and 8 in combination, fig. 8 is a schematic flowchart of another embodiment of step S670 of the present application, where step S670 includes:
step S871: and determining display parameters of the preset virtual object based on the current pose and the target space position.
It will be appreciated that during relocation, the current pose of the processing device may be determined as current positioning data.
Specifically, the display parameters of the preset virtual object may be determined from the positional relationship between the current pose of the processing device and the target spatial position of the preset virtual object in the current environment. The display parameters may include at least one of: the display position in the current display picture and the display form of the preset virtual object, where the display form may include at least one of a size and an orientation.
Step S872: and displaying the preset virtual object in the current display picture according to the display parameters.
Specifically, the preset virtual object is displayed at the determined display position in accordance with the determined size and orientation.
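As a sketch under ARKit (projectPoint is a real ARCamera method; the scale heuristic is an assumption for illustration), the display parameters can be derived from the current pose and the target spatial position like this:

```swift
import ARKit

// Sketch of steps S871-S872: derive display parameters from the current pose
// (the ARCamera) and the target spatial position (an anchor's transform).
func displayParameters(for targetAnchor: ARAnchor,
                       camera: ARCamera,
                       viewportSize: CGSize) -> (screenPosition: CGPoint, scale: Float) {
    let t = targetAnchor.transform.columns.3
    let targetPosition = SIMD3<Float>(t.x, t.y, t.z)

    // Display position: project the target spatial position into the current picture.
    let screenPosition = camera.projectPoint(targetPosition,
                                             orientation: .portrait,
                                             viewportSize: viewportSize)

    // Display form (size): shrink with distance between current pose and target.
    let c = camera.transform.columns.3
    let distance = simd_distance(SIMD3<Float>(c.x, c.y, c.z), targetPosition)
    let scale = 1.0 / max(distance, 0.1)  // assumed heuristic
    return (screenPosition, scale)
}
```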
Steps S650 to S670 may be considered as related steps of virtual display, and may be executed by the processing device or the user terminal.
In the above scheme, the target spatial position of the preset virtual object in the target environment is stored in advance, and when the processing device is currently photographing that position, the preset virtual object is displayed in the current shooting picture. In other words, the target spatial position in the target environment is used to display the preset virtual object virtually and automatically, so that it can be reproduced at a fixed position.
Referring to fig. 9, fig. 9 is a schematic diagram of a framework of an embodiment of a processing device for virtual display according to the present application.
In this embodiment, the processing device 90 for virtual display includes a first obtaining module 91, a second obtaining module 92, and a third obtaining module 93.
The first obtaining module 91 is configured to obtain a spatial model of a target environment and spatial data corresponding to different positions in the spatial model; the second obtaining module 92 is configured to obtain a target display position of a preset virtual object in the spatial model; and the third obtaining module 93 is configured to obtain target spatial data corresponding to the preset virtual object based on the spatial data corresponding to the target display position, where the target spatial data is used to determine the display position of the preset virtual object on a user terminal when the user is located in the target environment.
The above obtaining of the spatial model of the target environment and the spatial data corresponding to different positions in the spatial model specifically includes: constructing a space model of the target environment by using a first construction mode, constructing map data of the target environment by using a second construction mode, wherein the map data comprises space data corresponding to different positions in the space model, and the constructed space model and the map data are synchronously aligned; or acquiring the space model and the map data which are constructed by other equipment.
The construction of the space model and the construction of the map data are carried out synchronously, the first construction mode is constructed by adopting a Mesh construction mode, and the second construction mode is constructed by adopting an ARWorldMap mode.
The constructing of the spatial model of the target environment by using the first construction method specifically includes: acquiring a target environment by using a first acquisition device to obtain first acquisition data; acquiring a first spatial feature point and the depth of the first spatial feature point in the target environment based on the first acquisition data; and constructing to obtain a space model by using the first space characteristic point and the depth of the first space characteristic point.
The constructing and obtaining of the map data of the target environment by using the second construction method specifically includes: acquiring a target environment by using a second acquisition device to obtain second acquisition data; and determining a second spatial feature point in the target environment and spatial data corresponding to the second spatial feature point based on the second acquired data.
The first acquisition device and the second acquisition device are the same acquisition device or two acquisition devices, and under the condition that the first acquisition device and the second acquisition device are two acquisition devices, the first acquisition device and the second acquisition device are arranged on the same equipment; the first collected data is at least one of image data and radar data, and the second collected data is image data.
The obtaining of the target display position of the preset virtual object in the spatial model specifically includes: displaying the spatial model; responding to the placing operation of a user, and placing a preset virtual object on the displayed space model; and acquiring the current display position of the preset virtual object on the space model as a target display position.
The processing device 90 for virtual display further includes an adding module, configured to add target space data of a preset virtual object to the map data.
The processing apparatus 90 for virtual display further includes a storage module, a fourth obtaining module, and a display module, where the storage module is configured to store target space data corresponding to a preset virtual object, the fourth obtaining module is configured to obtain the stored target space data in response to a preset trigger operation of a user, so as to determine a target space position of the preset virtual object in a target environment, and the display module is configured to display the preset virtual object in a current shooting picture.
Storing the target spatial data corresponding to the preset virtual object specifically includes adding the target spatial data to the map data. Obtaining the stored target spatial data specifically includes obtaining it from the map data, where the target spatial data represents the target spatial position of the preset virtual object in the target environment. Detecting that the target spatial position is currently being photographed specifically includes: performing positioning with the map data and the current shooting picture to obtain current positioning data; and detecting, based on the current positioning data, whether the target spatial position is currently being photographed.
The detecting of the currently shot target spatial position based on the current positioning data specifically includes determining whether at least one group of matching point pairs exists based on the current positioning data, where the matching point pairs include a first feature point in a currently shot picture and a second feature point located at the target spatial position in the target environment; and determining the current shooting target space position in response to the existence of at least one group of matching point pairs.
The above-mentioned current positioning data includes a current pose, and the above-mentioned displaying of the preset virtual object on the current display screen specifically includes determining, based on the current pose and the target spatial position, a display parameter of the preset virtual object, where the display parameter includes at least one of: displaying positions in a current display picture and displaying forms of preset virtual objects; and displaying the preset virtual object in the current display picture according to the display parameters.
Referring to fig. 10, fig. 10 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application.
In this embodiment, the electronic device 100 may be a processing device or a user terminal in the above embodiments, and the electronic device 100 includes a memory 101 and a processor 102, where the memory 101 is coupled to the processor 102. Specifically, various components of the electronic device 100 may be coupled together by a bus, or the processor 102 of the electronic device 100 may be connected with other components one by one, respectively. The electronic device 100 may be any device with processing capabilities, such as a computer, a tablet, a cell phone, etc.
The memory 101 is used for storing program data executed by the processor 102, data of the processor 102 during processing, and the like. Such as map data, spatial models, preset virtual objects, etc. The memory 101 includes a nonvolatile storage portion for storing the program data.
The processor 102 controls the operation of the electronic device 100, and the processor 102 may also be referred to as a Central Processing Unit (CPU). The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 102 may be implemented collectively by a plurality of circuit-forming chips.
The processor 102 is configured to execute instructions to implement any of the processing methods of the virtual display or the virtual display methods described above by calling the program data stored in the memory 101.
Referring to fig. 11, fig. 11 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application.
In this embodiment, the computer readable storage medium 110 stores processor executable program data 111, which can be executed to implement any of the processing methods or virtual display methods of virtual display described above.
The computer-readable storage medium 110 may be any medium capable of storing the program data, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk; it may also be a server storing the program data, which can send the stored program data to another device for execution or run it itself.
In some embodiments, computer-readable storage medium 110 may also be a memory as shown in FIG. 10.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and applying various vision-related algorithms to detect or identify the target object's relevant features, states, and attributes, an AR effect combining the virtual and the real, matched to the specific application, is obtained. For example, the target object may be a face, limbs, gestures, or actions associated with a human body, or markers associated with objects, or sand tables, display areas, or display items associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, pose or depth detection of objects, and so on. Specific applications include not only interactive scenarios such as navigation, explanation, reconstruction, and superimposed display of virtual effects related to real scenes or articles, but also person-related special effects such as makeup beautification, body beautification, special-effect display, and virtual model display.
Detection or identification of the relevant features, states, and attributes of the target object may be implemented by a convolutional neural network, which is a network model obtained by training within a deep learning framework.
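Purely as an illustration (the disclosure does not prescribe any particular framework), such CNN-based detection could be run on iOS through Apple's Vision framework; the VNCoreMLModel below wraps a hypothetical trained detection model supplied by the application, and the function name is likewise an assumption of this sketch:

```swift
import Vision

// Runs a CNN-based detector over one captured image and returns the
// recognized target objects. The wrapped Core ML model is a hypothetical
// stand-in for whatever network the application actually trains.
func detectTargetObjects(in image: CGImage,
                         using model: VNCoreMLModel) throws -> [VNRecognizedObjectObservation] {
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results as? [VNRecognizedObjectObservation] ?? []
}
```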
In some embodiments, functions provided by, or modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the descriptions of those method embodiments, which are not repeated here for brevity.
The foregoing descriptions of the various embodiments emphasize the differences between them; for parts that are the same or similar, the embodiments may be referred to one another, and details are not repeated here for brevity.
The above description covers only embodiments of the present application and is not intended to limit its scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (15)

1. A method for processing a virtual display, the method comprising:
acquiring a spatial model of a target environment and spatial data corresponding to different positions in the spatial model;
obtaining a target display position of a preset virtual object in the spatial model;
and obtaining target spatial data corresponding to the preset virtual object based on the spatial data corresponding to the target display position, wherein the target spatial data is used for determining a display position of the preset virtual object on a user terminal when the user is located in the target environment.
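Purely as an illustrative sketch of the data flow in claim 1 (the types, names, and voxel keying scheme below are assumptions of this sketch, not part of the claim), the three steps can be read as: look up the spatial data stored at the chosen display position and record it as the object's target spatial data.

```swift
import simd

// Hypothetical containers for the spatial model and its per-position
// spatial data; the claim does not prescribe these representations.
struct SpatialData {
    var position: SIMD3<Float>   // position in the map's world coordinates
    var descriptor: [Float]      // feature data usable for later relocalization
}

struct SpatialModel {
    // Spatial data corresponding to different positions in the model,
    // keyed here by discretized (voxel) coordinates as an assumption.
    var dataAtPosition: [SIMD3<Int32>: SpatialData]
}

struct VirtualObjectRecord {
    var objectID: String
    var targetSpatialData: SpatialData
}

// Third step of claim 1: derive the object's target spatial data from the
// spatial data stored at the target display position.
func configure(object objectID: String,
               at displayPosition: SIMD3<Int32>,
               in model: SpatialModel) -> VirtualObjectRecord? {
    guard let data = model.dataAtPosition[displayPosition] else { return nil }
    return VirtualObjectRecord(objectID: objectID, targetSpatialData: data)
}
```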
2. The method of claim 1, wherein the acquiring a spatial model of a target environment and spatial data corresponding to different positions in the spatial model comprises:
constructing a spatial model of the target environment using a first construction mode, and constructing map data of the target environment using a second construction mode, wherein the map data comprises the spatial data corresponding to different positions in the spatial model;
and aligning the constructed spatial model and the map data with each other.
3. The method of claim 2, wherein the construction of the spatial model and the construction of the map data are performed synchronously;
and/or the first construction mode is a Mesh construction mode, and the second construction mode is an ARWorldMap construction mode.
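Since claim 3 names ARWorldMap, one concrete realization of claims 2 and 3 is ARKit on iOS: scene reconstruction supplies the Mesh-style spatial model while the session's ARWorldMap carries the spatial data for later relocalization. The sketch below shows this reading; the function names and the archive-to-URL persistence format are assumptions, and both structures are built in one session so that the model and the map data stay aligned in the same world coordinate system.

```swift
import ARKit

// First construction mode (Mesh): enable ARKit scene reconstruction,
// which requires LiDAR-capable hardware.
func configureSession(_ session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    session.run(config)
}

// Second construction mode (ARWorldMap): capture the map data built by
// the same session and persist it for later relocalization.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}
```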
4. The method according to claim 2 or 3, wherein the constructing a spatial model of the target environment using the first construction mode comprises:
collecting the target environment by using a first acquisition device to obtain first acquisition data;
obtaining a first spatial feature point in the target environment and a depth of the first spatial feature point based on the first acquisition data;
and constructing the spatial model using the first spatial feature point and the depth of the first spatial feature point;
and/or,
the constructing map data of the target environment using the second construction mode comprises:
collecting the target environment by using a second acquisition device to obtain second acquisition data;
and determining a second spatial feature point in the target environment and spatial data corresponding to the second spatial feature point based on the second acquisition data.
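By way of example only, if the first acquisition device is an ARKit-capable camera (optionally with LiDAR supplying radar-like depth, matching the options of claim 5), the first spatial feature points and their depths can be read off each captured frame. This is one possible realization under that assumption, not the only one:

```swift
import ARKit

// First construction mode, sketched on ARKit: rawFeaturePoints yields
// spatial feature points already expressed in world coordinates, so the
// point positions and their depths are recovered together. Where LiDAR is
// available, frame.sceneDepth could supply denser per-pixel depth.
func firstSpatialFeaturePoints(from frame: ARFrame) -> [SIMD3<Float>] {
    guard let cloud = frame.rawFeaturePoints else { return [] }
    return cloud.points
}
```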
5. The method according to claim 4, wherein the first and second acquisition devices are the same acquisition device or two acquisition devices, and in the case of two acquisition devices, the first and second acquisition devices are provided on the same apparatus;
and/or the first acquisition data is at least one of image data and radar data, and the second acquisition data is image data.
6. The method according to any one of claims 1 to 5, wherein the obtaining a target display position of a preset virtual object in the spatial model comprises:
displaying the spatial model;
in response to a placement operation of a user, displaying the preset virtual object at a specified position on the spatial model, wherein the placement operation is used for indicating that the preset virtual object is to be placed at the specified position;
and taking the specified position as the target display position.
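As an illustrative sketch of claim 6 on ARKit (an assumption; the anchor name and function name are invented for this example), the user's placement tap can be raycast against the displayed scene, and the hit becomes the specified position, i.e. the target display position:

```swift
import ARKit

// Raycast the tap location against the scene and anchor the preset
// virtual object at the hit; the anchor's transform records the
// specified position in the spatial model.
func placeVirtualObject(at screenPoint: CGPoint,
                        in sceneView: ARSCNView) -> ARAnchor? {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let hit = sceneView.session.raycast(query).first else { return nil }
    let anchor = ARAnchor(name: "presetVirtualObject", transform: hit.worldTransform)
    sceneView.session.add(anchor: anchor)
    return anchor
}
```

Under this reading, saving the world map after the anchor is added also embeds the placement in the map data, which is one way of realizing claim 7 below.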
7. The method according to any one of claims 1 to 6, wherein the spatial data is saved in the map data of the target environment, and after the target spatial data of the preset virtual object is obtained based on the spatial data corresponding to the target display position, the method further comprises:
adding the target spatial data of the preset virtual object to the map data.
8. The method according to any one of claims 1 to 6, wherein after the target spatial data corresponding to the preset virtual object is obtained based on the spatial data corresponding to the target display position, the method further comprises:
saving the target spatial data of the preset virtual object;
in response to a preset trigger operation of a user, acquiring the saved target spatial data to determine a target spatial position of the preset virtual object in the target environment;
and in response to detecting that the target spatial position is currently being shot, displaying the preset virtual object in the current shooting picture.
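A hedged sketch of claim 8's replay path, again assuming the ARWorldMap reading of claim 3: on the user's trigger, the saved map data (with the object's target spatial data embedded as an anchor) is reloaded and the session relocalizes against it; once tracking resumes, ARKit re-delivers the persisted anchor and the preset virtual object can be drawn at it.

```swift
import ARKit

// Reload the saved map data and relocalize. The target spatial data rides
// along inside the ARWorldMap as a named anchor, so after relocalization
// the session(_:didAdd:) delegate callback re-surfaces the placement.
func restoreVirtualObjects(in session: ARSession, mapURL: URL) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```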
9. The method of claim 8, wherein the saving the target spatial data of the preset virtual object comprises:
adding the target spatial data of the preset virtual object to the map data;
the acquiring the saved target spatial data comprises:
acquiring the target spatial data from the map data, wherein the target spatial data represents the target spatial position of the preset virtual object in the target environment;
and the detecting that the target spatial position is currently being shot comprises:
performing positioning using the map data and the current shooting picture to obtain current positioning data;
and detecting, based on the current positioning data, that the target spatial position is currently being shot.
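As one possible reading of claim 9's positioning step on ARKit (an assumption consistent with claim 3), localization against the map data happens inside the session once the saved map is loaded, and the current pose becomes available when relocalization succeeds:

```swift
import ARKit

// Returns the current positioning data (the camera pose expressed in the
// saved map's coordinate system) once relocalization against the map data
// has succeeded; nil while positioning is still limited or unavailable.
func currentPositioningData(of session: ARSession) -> simd_float4x4? {
    guard let frame = session.currentFrame,
          case .normal = frame.camera.trackingState else { return nil }
    return frame.camera.transform
}
```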
10. The method of claim 9, wherein the detecting, based on the current positioning data, that the target spatial position is currently being shot comprises:
determining, based on the current positioning data, whether at least one group of matching point pairs exists, wherein a matching point pair comprises a first feature point in the current shooting picture and a second feature point located at the target spatial position in the target environment;
and in response to the existence of the at least one group of matching point pairs, determining that the target spatial position is currently being shot.
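A toy illustration of claim 10's matching-point-pair test follows; the descriptor representation, the L2 metric, and the threshold value are all assumptions made for this sketch rather than anything the claim specifies:

```swift
import simd

typealias Descriptor = [Float]

// Euclidean distance between two feature descriptors.
func l2Distance(_ a: Descriptor, _ b: Descriptor) -> Float {
    zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +).squareRoot()
}

// Returns true when at least one matching point pair exists between the
// first feature points (from the current shooting picture) and the second
// feature points (stored at the target spatial position in the map data);
// per the claim, one such pair suffices to report the target as shot.
func targetSpatialPositionIsShot(firstFeatures: [Descriptor],
                                 secondFeatures: [Descriptor],
                                 threshold: Float = 0.5) -> Bool {
    for f in firstFeatures {
        if secondFeatures.contains(where: { l2Distance(f, $0) < threshold }) {
            return true
        }
    }
    return false
}
```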
11. The method of any one of claims 8 to 10, wherein the current positioning data comprises a current pose;
and the displaying the preset virtual object in the current shooting picture comprises:
determining display parameters of the preset virtual object based on the current pose and the target spatial position, wherein the display parameters comprise at least one of the following: a display position in the current shooting picture and a display form of the preset virtual object;
and displaying the preset virtual object in the current shooting picture according to the display parameters.
12. The method of claim 11, wherein the display form comprises at least one of a size and an orientation.
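An illustrative sketch for claims 11 and 12: display parameters derived from the current pose and the target spatial position. A pinhole projection, a camera frame with +z pointing forward, and a depth-based scale are assumptions of this sketch, not requirements of the claims:

```swift
import simd

// Display parameters of the preset virtual object: where it appears in
// the current shooting picture, and a display form (here: size).
struct DisplayParameters {
    var screenPosition: SIMD2<Float>  // display position in the picture
    var scale: Float                  // apparent size falls off with depth
}

// Project the target spatial position through the current pose. The
// worldToCamera matrix is the inverse of the current pose; intrinsics is
// the camera's 3x3 projection matrix.
func displayParameters(targetPosition: SIMD3<Float>,
                       worldToCamera: simd_float4x4,
                       intrinsics: simd_float3x3) -> DisplayParameters? {
    let p = SIMD4<Float>(targetPosition.x, targetPosition.y, targetPosition.z, 1)
    let cam = worldToCamera * p
    guard cam.z > 0 else { return nil }   // target behind the camera: not displayed
    let pixel = intrinsics * SIMD3<Float>(cam.x, cam.y, cam.z)
    return DisplayParameters(screenPosition: SIMD2<Float>(pixel.x / pixel.z,
                                                          pixel.y / pixel.z),
                             scale: 1.0 / cam.z)
}
```

An orientation parameter (claim 12) would follow the same pattern, rotating the object by the rotational part of the current pose so that it keeps a fixed attitude in the target environment.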
13. A processing apparatus for virtual display, comprising:
a first acquisition module, configured to acquire a spatial model of a target environment and spatial data corresponding to different positions in the spatial model;
a second acquisition module, configured to acquire a target display position of a preset virtual object in the spatial model;
and a third acquisition module, configured to obtain target spatial data corresponding to the preset virtual object based on the spatial data corresponding to the target display position, wherein the target spatial data is used for determining a display position of the preset virtual object on a user terminal when the user is located in the target environment.
14. An electronic device, characterized in that the device comprises a processor and a memory for storing program data, the processor being adapted to execute the program data to implement the method according to any of claims 1-12.
15. A computer-readable storage medium for storing program data executable to implement the method of any one of claims 1-12.
CN202111628390.3A 2021-12-28 2021-12-28 Processing method, device, equipment and medium for virtual display Pending CN114332419A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111628390.3A CN114332419A (en) 2021-12-28 2021-12-28 Processing method, device, equipment and medium for virtual display

Publications (1)

Publication Number Publication Date
CN114332419A true CN114332419A (en) 2022-04-12

Family

ID=81014198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111628390.3A Pending CN114332419A (en) 2021-12-28 2021-12-28 Processing method, device, equipment and medium for virtual display

Country Status (1)

Country Link
CN (1) CN114332419A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination