CN116048260A - Space display method and device, electronic equipment and storage medium

Info

Publication number
CN116048260A
CN116048260A (application CN202310015123.1A)
Authority
CN
China
Prior art keywords
space
description information
virtual
entity
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310015123.1A
Other languages
Chinese (zh)
Inventor
张释方 (Zhang Shifang)
韩伟 (Han Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202310015123.1A priority Critical patent/CN116048260A/en
Publication of CN116048260A publication Critical patent/CN116048260A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a space display method and device, electronic equipment and a storage medium. The method is applied to a real imaging device and comprises: under the condition that a target entity space is detected, acquiring space description information of the target entity space and pose description information of entity objects in the target entity space; and displaying a virtual space corresponding to the target entity space in a display area based on the space description information, and creating and placing virtual objects corresponding to the entity objects in the virtual space according to the pose description information.

Description

Space display method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of terminals, and in particular relates to a space display method and device, electronic equipment and a storage medium.
Background
A real imaging device is a popular type of mobile terminal that can be worn on the user's head or over the eyes for observing the environment.
However, in some special scenes it is difficult for a user to observe the environment through the real imaging device, so the user cannot learn the condition of the environment in time, which causes unnecessary trouble. For example, in darkness, because the light intensity is insufficient, it is difficult for a user to see through the real imaging device whether there is an obstruction on the travel route, and a collision or the like may occur.
Disclosure of Invention
The disclosure provides a space display method and device, an electronic device and a storage medium, which allow the environment to be observed even in an extreme scene, thereby avoiding the inconvenience brought by such a scene.
According to a first aspect of the present disclosure, there is provided a spatial display method applied to a real imaging apparatus; the method comprises the following steps:
under the condition that the target entity space is detected, acquiring space description information of the target entity space and pose description information of entity objects in the target entity space;
and displaying a virtual space corresponding to the target entity space in the display area based on the space description information, and creating and placing a virtual object corresponding to the entity object in the virtual space according to the pose description information.
According to a second aspect of the present disclosure, there is provided a spatial display apparatus applied to a real imaging device; the device comprises:
an acquisition unit, configured to acquire, under the condition that a target entity space is detected, space description information of the target entity space and pose description information of entity objects in the target entity space;
and a display unit, configured to display a virtual space corresponding to the target entity space in a display area based on the space description information, and to create and place virtual objects corresponding to the entity objects in the virtual space according to the pose description information.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the first aspect by executing the executable instructions.
According to a fourth aspect of the present disclosure there is provided a computer readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the method according to the first aspect.
In the technical solution of the disclosure, the real imaging device can acquire, when a target entity space is detected, the space description information of the target entity space and the pose description information of the entity objects in that space. On this basis, the real imaging device can display a virtual space corresponding to the target entity space in the display area based on the acquired space description information, and create and place, in the virtual space, virtual objects corresponding to the entity objects according to the acquired pose description information.
It should be appreciated that, since the present disclosure can display a virtual space corresponding to the target physical space in the display area of the real imaging device, and can create and place, in that virtual space, virtual objects corresponding to the physical objects in the target physical space, a user can learn the situation in the physical space where the user is currently located by observing the content displayed in the real imaging device. This avoids the problem in the related art that a user who cannot directly observe the environment in certain situations runs into unnecessary trouble.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a spatial display method shown in an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of another spatial display method shown in an exemplary embodiment of the present disclosure;
FIG. 3 is a block diagram of a spatial display apparatus shown in an exemplary embodiment of the present disclosure;
FIG. 4 is a block diagram of another spatial display apparatus shown in an exemplary embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
Augmented Reality (AR) technology fuses virtual information with the real world, and AR glasses are currently a popular terminal device adopting AR technology.
When using AR glasses, a user can wear them over the eyes like conventional glasses, observe a real-time picture of the environment through them, and see the description information the AR glasses add based on the environment.
A typical function of AR glasses is adding tags to items in the environment. For example, when a user is in a mall, store-type tags may be added to nearby stores, and tags for gender, height, etc. may be added to people the user meets, so that the user can better understand the people and things in the environment.
It will be appreciated that, in a conventional environment, an AR-glasses-like device can perform the above functions, and the user can observe the environment and react accordingly. But under certain specific conditions the user may be unable to observe the environment, so the AR glasses cannot provide the above functions and the user cannot react appropriately to the condition of the environment.
For example, in a dark scene, since the user cannot observe the condition of the environment through the AR glasses, problems such as colliding with articles placed in the environment or failing to retrieve a desired article from the environment may occur; the AR glasses likewise cannot provide the tagging function described above, because they cannot acquire information about the environment.
Therefore, the disclosure proposes a space display method to avoid the problem in the related art that the environment cannot be observed in a specific scene such as darkness.
Fig. 1 is a flowchart illustrating a spatial display method according to an exemplary embodiment of the present disclosure. The method is applied to a real imaging device. As shown in fig. 1, the method may include the steps of:
step 102, under the condition that the target entity space is detected, acquiring space description information of the target entity space and pose description information of entity objects in the target entity space.
In the present disclosure, a real imaging device refers to a device that can be used to observe the environment, generally similar to the AR glasses described above: it has a display area that, when the device is worn, covers the user's eyes, so that the user views the environment through the display area. In other words, the display area of the real imaging device in the present disclosure can present a real-time picture of the environment in which the user is located.
Note that how a user views the real-time picture of the environment through a real imaging device depends on the specific type of the device. For example, when the real imaging device is a device employing AR technology, like AR glasses, the user typically observes the environment directly through the display area; when the real imaging device employs VR (Virtual Reality) technology, such as a head-mounted VR display, the display area shows a real-time picture of the environment captured by the device. Of course, these examples are illustrative only; the specific imaging technology employed by the real imaging device in this disclosure can be determined by those skilled in the art based on actual circumstances, and this disclosure is not limited thereto.
In the present disclosure, when the target physical space is detected, the real imaging device may acquire the space description information of the target physical space and the pose description information of the physical objects in it. On this basis, a virtual space corresponding to the target entity space can be displayed in the display area of the real imaging device based on the acquired space description information, and virtual objects corresponding to the entity objects in the target entity space can be created and placed in the virtual space according to the acquired pose description information.
It should be understood that displaying a virtual space in the display area of the real imaging device based on the space description information of the target physical space and the pose description information of the physical objects it contains, and placing virtual objects corresponding to those physical objects in that virtual space, amounts to reproducing the environment in which the user is located inside the display area. On this premise, even if the user is in a specific environment such as darkness and cannot observe the environment directly, the user can still observe it through the virtual space reproduced in the real imaging device, avoiding problems such as colliding with objects in the environment or being unable to find needed objects.
It should be stated that, in this disclosure, a "physical object in a physical space" may refer either to a movable item placed in the physical space, such as furniture (for example, a table, chair, or bench placed in a room), or to an item that cannot be moved in the physical space, such as a closet embedded in a wall or a kitchen counter. Which articles the physical articles in the present disclosure specifically refer to can be determined by those skilled in the art according to actual conditions, and the present disclosure is not limited thereto.
In the present disclosure, the target entity space refers to an entity space in which a user is currently located. The present disclosure may obtain spatial description information of the target physical space, and pose description information of physical objects in the target physical space in various manners.
In an embodiment, the space description information of each entity space, acquired in advance, and the pose description information of the entity objects in the corresponding spaces may be uploaded to the cloud in advance. When the real imaging device detects the target entity space, it can initiate a description information acquisition request for the target entity space to the cloud, and the cloud can search, among the space description information of the entity spaces and the pose description information of the entity objects that it maintains, for the space description information of the target entity space and the pose description information of the entity objects in it. For example, the request initiated by the real imaging device may carry identification information of the target entity space, so that the cloud searches for the space description information, and the pose description information of the entity objects contained therein, based on that identification information.
In another embodiment, the real imaging device may maintain description information of a plurality of physical spaces locally, and pose description information of physical objects in each physical space. On the basis, under the condition that the target entity space is detected, the real imaging equipment can search the space description information of the target entity space from the space description information of the plurality of maintained entity spaces; and searching the pose description information of the physical object in the target physical space from the maintained pose description information of the physical object in each physical space.
In other words, the present disclosure may store the spatial description information and the pose description information both at the cloud end and locally at the real imaging device.
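To make the two acquisition paths concrete, the following Python sketch illustrates one possible lookup order: the device searches its locally maintained descriptions first and falls back to a cloud request carrying the space's identification information. All names here (DescriptionStore, cloud.fetch, the dataclass fields) are illustrative assumptions rather than interfaces defined by this disclosure.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple


    @dataclass
    class SpaceDescription:
        space_id: str
        vertices: List[Tuple[float, float, float]]  # e.g. room corner coordinates


    @dataclass
    class PoseDescription:
        item_id: str
        structure: dict                        # structural information of the item
        position: Tuple[float, float, float]   # position in the space's coordinates
        posture: Tuple[float, float, float]    # orientation of the item


    class DescriptionStore:
        """Looks up description information locally first, then from the cloud."""

        def __init__(self, cloud_client):
            self.cloud = cloud_client
            self.local: Dict[str, Tuple[SpaceDescription, List[PoseDescription]]] = {}

        def get(self, space_id: str) -> Tuple[SpaceDescription, List[PoseDescription]]:
            # Second path: search the locally maintained description information.
            if space_id in self.local:
                return self.local[space_id]
            # First path: initiate a description information acquisition request
            # to the cloud, carrying the identification of the target space.
            space_desc, pose_descs = self.cloud.fetch(space_id)
            self.local[space_id] = (space_desc, pose_descs)  # cache for next time
            return space_desc, pose_descs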
The spatial description information and the pose description information stored in the cloud can be uploaded by at least one of a real imaging device initiating the description information acquisition request, other real imaging devices different from the real imaging device, and other types of devices. The other type of device may be any type of device, for example, may be a server, a smart phone, a tablet computer, etc., and the spatial description information and the pose description information uploaded by the devices may be obtained by a user through a manner of logging after measurement in a corresponding entity space, etc., which is not limited in this disclosure.
The space description information and pose description information maintained locally by the real imaging device can likewise be acquired in various ways.
For example, the real imaging device may acquire from the cloud, and cache locally, the space description information of part of the physical spaces and the pose description information of the physical objects in them. In this case, the real imaging device may maintain only the space description information, and the pose description information of the physical objects therein, of the physical spaces it enters more frequently than a preset frequency, so that when the user is in an environment entered frequently, the virtual space corresponding to that physical space can be constructed quickly.
For another example, the real imaging device may scan each physical space in advance to obtain the space description information of that space and the pose description information of the physical objects in it. For example, when the real imaging device is located in any physical space, it may call its assembled camera to capture an image of the physical space; on one hand, spatial analysis may be performed on the captured image to obtain space description information representing the spatial structure of the physical space, and on the other hand, object recognition may be performed on the captured image to obtain pose description information representing the pose states of the physical objects in the space. On this basis, the space description information obtained by spatial analysis and the pose description information obtained by object recognition may be uploaded to the cloud or stored locally on the real imaging device. If uploaded to the cloud, they can be shared with other real imaging devices, so that those devices need not repeat the spatial analysis and object recognition for the same physical space.
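As a rough sketch of this pre-scanning flow, the snippet below captures an image, runs stubbed-out spatial analysis and object recognition (the disclosure does not name specific algorithms), and then uploads or locally stores the results; camera, cloud, and local_store are hypothetical interfaces.

    def spatial_analysis(image) -> dict:
        # Stub: would derive the spatial structure, e.g. corner coordinates
        # in a coordinate system with the camera position as the origin.
        return {"vertices": []}

    def object_recognition(image) -> list:
        # Stub: would detect items and estimate their structure/position/posture.
        return []

    def scan_physical_space(camera, space_id, cloud=None, local_store=None):
        image = camera.capture()                        # image of the space
        space_description = spatial_analysis(image)     # spatial structure
        pose_descriptions = object_recognition(image)   # pose states of items
        if cloud is not None:
            # Uploading lets other real imaging devices reuse the result
            # instead of repeating the analysis.
            cloud.upload(space_id, space_description, pose_descriptions)
        if local_store is not None:
            local_store[space_id] = (space_description, pose_descriptions)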
In the present disclosure, before acquiring the space description information and the pose description information, the real imaging device may first determine whether it is in the specific environment described above. For example, light intensity detection may first be performed on the environment where the real imaging device is located; if the detection result indicates that the light intensity in the target entity space is lower than a preset intensity, the operation of acquiring the space description information of the target entity space and the pose description information of the entity objects therein is performed. Otherwise, the target entity space is not a specific scene, the user can observe the environment directly, and no virtual space needs to be constructed.
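A minimal sketch of this gating check, assuming a hypothetical light sensor interface and an arbitrary threshold value (the disclosure does not specify a concrete "preset intensity"):

    PRESET_INTENSITY_LUX = 10.0  # hypothetical preset intensity

    def needs_virtual_space(light_sensor) -> bool:
        """Build the virtual space only when the user cannot see directly."""
        return light_sensor.read_lux() < PRESET_INTENSITY_LUX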
It should be noted that the dark scene described above is merely illustrative; the specific environment in the present disclosure may be any environment that a user cannot observe, and is not limited to a dark environment. Which environments count as specific can be determined by those skilled in the art based on actual needs, and is not limited by this disclosure.
And 104, displaying a virtual space corresponding to the target entity space in the display area based on the space description information, and creating and placing a virtual object corresponding to the entity object in the virtual space according to the pose description information.
In the present disclosure, the virtual space displayed in the display area and the virtual articles placed in it may be completely consistent with the physical space and the physical articles, or may only show their general state. In the fully consistent case, a virtual space consistent with the spatial structure of the target entity space can be displayed in the display area based on the acquired space description information, and virtual objects consistent with the structure and posture states of the entity objects can be created and placed in the virtual space according to the acquired pose description information. In the not fully consistent case, the virtual articles corresponding to the physical articles can all be represented by the same simple structure, such as a sphere, to roughly indicate the positions of the physical articles in the target physical space and still prevent the user from colliding with them. Of course, this is merely exemplary; how to display the virtual space corresponding to the target physical space and the virtual objects corresponding to the physical objects can be determined by those skilled in the art according to actual requirements, and this disclosure is not limited thereto.
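The sketch below illustrates the two display strategies just described, full-fidelity reproduction versus a uniform sphere placeholder; the renderer interface and the 0.3 m radius are assumptions for illustration.

    def place_virtual_objects(renderer, pose_descriptions, full_fidelity=True):
        for pose in pose_descriptions:
            if full_fidelity:
                # Reproduce the item's structure and posture exactly.
                renderer.add_mesh(pose.structure, pose.position, pose.posture)
            else:
                # Represent every item by the same simple structure (a sphere)
                # so its rough position is visible and collisions are avoided.
                renderer.add_sphere(center=pose.position, radius=0.3)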
In the present disclosure, on the basis of displaying the virtual space and placing virtual objects in it, certain additional processing may also be performed on the displayed virtual space. For example, a fluorescent special effect may be added to the virtual items placed in the virtual space to highlight the positions of the respective items; for another example, item tags may be added to the virtual items to identify attributes of the individual physical items. Of course, this example is merely illustrative; what additional processing to perform on the virtual articles can be determined by those skilled in the art based on actual needs, and is not limited by this disclosure.
In the present disclosure, when the target entity space contains a plurality of entity objects and at least one of them is an internet of things device, a user who has observed the environment through the displayed virtual space may initiate a control instruction for any internet of things device, and the real imaging device may respond to the control instruction by sending a control message to that device, instructing it to execute the operation corresponding to the instruction. For example, the user may preset in the real imaging device a correspondence between the executable operations of each internet of things device and various gestures (or various controls); on this basis, a control instruction for a specific operation of a specific internet of things device can be initiated by presenting a preset gesture to the real imaging device or triggering a preset control. Of course, this example is merely illustrative; how to initiate a control instruction for an internet of things device can be determined by those skilled in the art according to actual needs, and the present disclosure is not limited thereto.
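A sketch of such a preset correspondence, assuming hypothetical gesture names, device identifiers, and an iot_client messaging interface (none of these are defined by the disclosure):

    # Hypothetical mapping from preset gestures to IoT device operations.
    GESTURE_BINDINGS = {
        ("pinch", "lamp-01"): "turn_on",
        ("swipe_left", "tv-01"): "volume_down",
    }

    def on_gesture(gesture: str, target_device: str, iot_client) -> None:
        operation = GESTURE_BINDINGS.get((gesture, target_device))
        if operation is not None:
            # Control message instructing the device to execute the operation
            # corresponding to the control instruction.
            iot_client.send_control_message(target_device, operation)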
In the present disclosure, the real imaging device may also monitor the distance between itself and each physical object in the target physical space; if the distance between the real imaging device and any physical object is less than a preset distance threshold, a collision warning may be issued.
The collision warning may be issued in different ways depending on actual needs. For example, a prompt voice announcing the impending collision may be broadcast; a pop-up window warning of the impending collision may be presented in the display area; or a collision warning identifier may be displayed at the virtual article corresponding to that physical article in the virtual space.
In the present disclosure, the plurality of physical objects contained in the target physical space may include movable internet of things devices. In this case, the real imaging device may monitor the distance between itself and each movable internet of things device; if the distance between the real imaging device and any movable internet of things device is less than a preset distance threshold, a displacement instruction may be sent to that device, instructing it to move in a direction away from the real imaging device. In this way, collisions between the user and movable internet of things devices can be avoided.
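The following sketch combines the collision warning of the preceding paragraphs with the displacement instruction described here; the threshold value, the object records, and the iot_client/warn interfaces are assumptions for illustration.

    import math

    PRESET_DISTANCE_M = 0.5  # hypothetical preset distance threshold

    def check_distances(device_position, objects, iot_client, warn):
        """objects: dicts with 'id', 'position', and a 'movable_iot' flag."""
        for obj in objects:
            if math.dist(device_position, obj["position"]) >= PRESET_DISTANCE_M:
                continue
            if obj.get("movable_iot"):
                # Displacement instruction: ask the movable IoT device to move
                # in a direction away from the real imaging device.
                iot_client.send_displacement(obj["id"], away_from=device_position)
            else:
                # Collision warning: voice prompt, pop-up, or warning marker.
                warn(obj["id"])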
It should be stated that the pose description information acquired by the real imaging device may include at least one of the structural information of the physical object, the position information of the physical object, and the posture information of the physical object, so that the displayed virtual object reflects the corresponding physical object in at least one of the three dimensions of structure, position, and posture. If a virtual article completely consistent with the physical article is to be displayed in the virtual space, the acquired pose description information may include all three of structural information, position information, and posture information; and since posture information generally implies the structural information, it may also suffice to include only the posture information and the position information.
In the above technical solution, the real imaging device can acquire, when the target entity space is detected, the space description information of the target entity space and the pose description information of the entity objects in it. On this basis, a virtual space corresponding to the target entity space can be displayed in the display area based on the acquired space description information, and virtual objects corresponding to the entity objects can be created and placed in the virtual space according to the acquired pose description information.
It should be appreciated that, since the present disclosure can display a virtual space corresponding to the target physical space in the display area of the real imaging device, and can create and place in it virtual objects corresponding to the physical objects in the target physical space, the environment in which the user is located is, to some extent, reproduced in the real imaging device. Therefore, through this technical solution the user can learn the current situation in the target entity space by observing the content displayed in the real imaging device, avoiding the problem in the related art that unnecessary trouble is caused because the environment cannot be directly observed in certain situations.
Next, the technical solution of the present disclosure is introduced by taking as an example a scenario in which the description information is stored in the cloud, and AR glasses in a dark environment obtain the description information from the cloud and construct the virtual space.
Fig. 2 is a flow chart of another spatial display method according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the method comprises the steps of:
In step 201, light intensity detection is performed on the environment.
In this embodiment, when a user wearing AR glasses enters any entity space, light intensity detection may be performed on the environment to determine whether the user is in a dark environment. If so, the entity space is determined to be the target entity space, and the space description information of the target entity space and the pose description information of the entity objects it contains are then obtained from the cloud for constructing the corresponding virtual space.
Step 202, judging whether the intensity of the ambient light is lower than a light intensity threshold; if yes, jump to step 203; otherwise, jump to step 208.
In this embodiment, a light intensity threshold may be preset to determine whether the user is in darkness, and if the light intensity detection result indicates that the light intensity of the current environment is lower than the light intensity threshold, the user is determined to be in darkness, so as to perform an operation of building a virtual space; in contrast, if the light intensity detection result indicates that the light intensity of the current environment is not lower than the light intensity threshold, it is determined that the current environment is not in darkness, and the user can directly observe the environment, so that the process may jump to step 208 to operate the AR glasses in the normal mode.
Step 203, determining the target entity space.
In this embodiment, when it is determined that the user is in darkness, it can further be determined which entity space the user is currently in, i.e., the target entity space described above, so as to obtain the space description information and pose description information corresponding to that space.
Which manner is used to determine the target entity space can be chosen according to the actual situation. For example, the geographic position of each entity space can be maintained in the AR glasses, so that the AR glasses can determine the current target entity space by positioning; for another example, the AR glasses may determine the target physical space from the picture content recorded while entering it, together with the nearby picture content. Of course, this is merely illustrative, and this embodiment does not limit how the target entity space is determined.
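As an illustration of the first (positioning-based) approach, here is a sketch with a hypothetical registry of space positions; the identifiers, coordinates, and error tolerance are made-up values, not data from this disclosure.

    import math

    # Hypothetical registry of entity spaces and stored geographic positions,
    # as maintained in the AR glasses per the first example above.
    SPACE_POSITIONS = {
        "room-1": (39.9042, 116.4074),
        "room-2": (39.9050, 116.4080),
    }

    def locate_target_space(current_fix, max_error=1e-4):
        """Return the entity space whose stored position is nearest to the
        current positioning fix, or None if none is close enough."""
        best_id, best_d = None, float("inf")
        for space_id, pos in SPACE_POSITIONS.items():
            d = math.dist(current_fix, pos)
            if d < best_d:
                best_id, best_d = space_id, d
        return best_id if best_d <= max_error else None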
Step 204, space description information and pose description information corresponding to the target entity space are obtained from the cloud.
In this embodiment, the space description information and pose description information of each entity space stored in the cloud may be obtained by each pair of AR glasses scanning, in real time during actual use, the environment it is in.
For example, when a user wears AR glasses in any room, the AR glasses may call the camera to capture images of various locations of the room. On one hand, the AR glasses can perform spatial analysis on the captured images to obtain space description information such as the structure and orientation of the room; on the other hand, the AR glasses can perform recognition on the captured images to obtain pose description information such as the placement position and structural posture of the physical objects in the room. For example, when the user enters the room, a three-dimensional coordinate system may be established with the position of the AR glasses' camera as the origin; the coordinates of each corner of the room can then be obtained by spatial analysis, and information such as the structure, position, and posture of each item in the room can be obtained by item identification. On this basis, the correspondence between the room and the obtained space description information and pose description information can be constructed and uploaded to the cloud.
It should be noted that each pair of AR glasses may perform the above operation of scanning the environment where it is located and uploading the space description information and pose description information obtained by scanning to the cloud, so that all AR glasses share the description information of the entity spaces that have been entered. For example, the description information of each entity space stored in the cloud may be as shown in Table 1 below:
Entity space    Space description information      Entity objects: pose description information
Room 1          Space description information A    Sofa: pose description information a; Television: pose description information b; Tea table: pose description information c
Room 2          Space description information B    …
Room 3          Space description information C    …

TABLE 1
It should be noted that the "space description information A, B, C" shown in Table 1 merely stands for the space description information of the corresponding entity spaces; what is actually recorded is information describing the structure, orientation, etc. of the corresponding entity space, for example the coordinate information of each vertex of a room. Similarly, the "pose description information a, b, c …" shown in Table 1 merely stands for the pose description information of the corresponding physical objects; what is actually recorded is information describing the structure, posture, position, etc. of the corresponding physical object, for example its structure and specific coordinates. Of course, this is merely illustrative, and how the space description information and pose description information are recorded can be determined by those skilled in the art according to actual requirements; this embodiment is not limited in this respect.
Based on the above example, with the cloud storing the description information shown in Table 1 and the currently determined target entity space being room 1, a description information acquisition request can be initiated to the cloud based on the identification information of room 1, so that the cloud returns space description information A, the pose description information a of the sofa, the pose description information b of the television, and the pose description information c of the tea table.
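As a toy illustration of this request/response, assume a dict-backed cloud store holding the Table 1 entries; identifiers such as "room-1" and the handler function are hypothetical.

    # Hypothetical cloud-side store keyed by space identification information.
    CLOUD_STORE = {
        "room-1": ("space description information A",
                   {"sofa": "a", "television": "b", "tea table": "c"}),
        "room-2": ("space description information B", {}),
        "room-3": ("space description information C", {}),
    }

    def handle_description_request(space_id):
        """Cloud handler: look up and return the descriptions for a space."""
        space_desc, pose_descs = CLOUD_STORE[space_id]
        return space_desc, pose_descs

    print(handle_description_request("room-1"))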
Step 205, displaying the virtual space consistent with the target entity space structure based on the space description information.
For example, the space description information A may record the structural information of room 1; if room 1 is cuboid, the coordinate information of its 8 vertices may be returned, and the AR glasses can display a virtual room consistent with the structure of room 1 based on the coordinates of those 8 vertices.
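A small sketch of this reconstruction step: recovering the box of a cuboid room from 8 vertex coordinates. The dimensions are made up, and an actual renderer call would replace the print.

    def room_bounds(vertices):
        xs, ys, zs = zip(*vertices)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    # Example: a 5 m x 4 m x 3 m room with the camera position as the origin.
    corners = [(x, y, z) for x in (0.0, 5.0) for y in (0.0, 4.0) for z in (0.0, 3.0)]
    low, high = room_bounds(corners)
    print(low, high)  # (0.0, 0.0, 0.0) (5.0, 4.0, 3.0)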
At step 206, virtual items are constructed and placed in the virtual space based on the pose description information.
Continuing the example, the pose description information a may record information such as the structure, placement posture, and position of the sofa by recording the coordinates of its points, so that the AR glasses create and place a virtual sofa in the displayed virtual room based on that pose description information. Similarly, a virtual television can be created and placed in the virtual room according to pose description information b, and a virtual tea table according to pose description information c.
Step 207, adding a fluorescence effect to the virtual article.
In this embodiment, a fluorescence effect may be added to the virtual objects created and placed in the virtual space, so as to highlight the position of each object in the space and prevent the user from colliding with the corresponding physical objects while moving.
Step 208, operate in a normal mode.
In the technical solution of this embodiment, the space description information of each entity space and the pose description information of the entity objects therein can be uploaded to the cloud for maintenance by the cloud. When a user wearing AR glasses enters any entity space and that space is in darkness, the AR glasses can obtain the description information of the space from the cloud. The obtained description information includes the space description information of the entity space and the pose description information of the entity objects in it; the AR glasses can display a virtual space consistent with the structure of the entity space based on the space description information, and create and place, in the displayed virtual space, virtual objects consistent with the pose states of the entity objects according to the pose description information. Clearly, by observing the virtual space, the user can learn the positions of the physical objects in the physical space and thus avoid colliding with them when moving in the dark.
Fig. 3 is a block diagram of a spatial display apparatus according to an exemplary embodiment of the present disclosure. The apparatus is applied to a real imaging device. Referring to fig. 3, the apparatus includes an acquisition unit 301 and a display unit 302.
An acquiring unit 301, configured to acquire, when a target physical space is detected, space description information of the target physical space and pose description information of physical objects in the target physical space;
and a display unit 302 for displaying a virtual space corresponding to the target entity space in the display area based on the space description information, and creating and placing a virtual object corresponding to the entity object in the virtual space according to the pose description information.
Optionally, the acquiring unit 301 is further configured to:
initiating a description information acquisition request for the target entity space to a cloud, so as to instruct the cloud to search for and return the space description information of the target entity space and the pose description information of the entity objects in the target entity space; or
searching the space description information of the target entity space from the space description information of a plurality of entity spaces maintained by the real imaging device; and searching the pose description information of the entity objects in the target entity space from the pose description information of the entity objects in each entity space maintained by the real imaging device.
Optionally,
the space description information and pose description information stored in the cloud are uploaded by the real imaging device, by other real imaging devices different from the real imaging device, or by other types of devices;
the real imaging device maintains only the space description information, and the pose description information of the entity objects therein, of the entity spaces that the real imaging device enters more frequently than a preset frequency.
Optionally, the display unit 302 is further configured to:
displaying a virtual space consistent with the spatial structure of the target entity space in the display area based on the space description information;
and creating and placing, in the virtual space, a virtual object consistent with the structure and posture state of the physical object according to the pose description information.
Optionally, the pose description information includes at least one of: the structure information of the physical object, the position information of the physical object, and the posture information of the physical object.
As shown in fig. 4, fig. 4 is a block diagram of another spatial display apparatus according to an exemplary embodiment of the present disclosure, which further includes, on the basis of the foregoing embodiment shown in fig. 3: a photographing unit 303, an analysis unit 304, an identification unit 305, an uploading unit 306, a detection unit 307, an adding unit 308, a transmitting unit 309, a monitoring unit 310, and a warning unit 311.
Optionally, the apparatus further comprises:
a shooting unit 303, configured to invoke the camera assembled on the real imaging device to capture an image of the target entity space;
an analysis unit 304, configured to perform spatial analysis on the captured image to obtain space description information representing the spatial structure of the target entity space;
an identification unit 305, configured to perform object recognition on the captured image to obtain pose description information representing the pose states of the entity objects in the target entity space;
and an uploading unit 306, configured to upload the space description information obtained by spatial analysis and the pose description information obtained by object recognition to the cloud, or store them locally on the real imaging device.
Optionally, the apparatus further comprises:
a detection unit 307 for detecting the light intensity of the environment where the real imaging device is located;
the operation of acquiring the space description information and the pose description information is only performed when the light intensity detection result shows that the light intensity in the target entity space is lower than a preset intensity.
Optionally, the apparatus further comprises:
an adding unit 308 for adding a fluorescence effect to the virtual article placed in the virtual space; and/or adding an article tag to the virtual article placed in the virtual space.
Optionally, the target entity space contains a plurality of entity articles, at least one of which is an internet of things device; the apparatus further comprises:
and the sending unit 309 is configured to send, in response to a control instruction for any one of the devices of the internet of things, a control message to the any one of the devices of the internet of things, so as to instruct the any one of the devices of the internet of things to execute an operation corresponding to the control instruction.
Optionally, the apparatus further comprises:
a monitoring unit 310 for monitoring the distance between the real imaging device and each physical object;
and a warning unit 311 for giving a collision warning when the distance between the real imaging device and any physical object is smaller than a preset distance threshold.
Optionally, the warning unit 311 is further configured to:
broadcast a prompt voice for prompting an impending collision; or
display, within the display area, a pop-up window warning of an impending collision; or
display a collision warning identifier at the virtual article corresponding to the any physical article in the virtual space.
Optionally, the target entity space contains a plurality of entity articles, at least one of which is a movable internet of things device;
the monitoring unit 310 is further configured to monitor the distance between the real imaging device and each movable internet of things device;
the transmitting unit 309 is further configured to, when the distance between the real imaging device and any movable internet of things device is smaller than a preset distance threshold, send a displacement instruction to that movable internet of things device to instruct it to move in a direction away from the real imaging device.
For the device embodiments, since they substantially correspond to the method embodiments, reference is made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art can understand and implement this without creative effort.
Correspondingly, the disclosure also provides a space display device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement a spatial display method according to any of the above embodiments, for example the method may comprise: under the condition that the target entity space is detected, acquiring space description information of the target entity space and pose description information of entity objects in the target entity space; and displaying a virtual space corresponding to the target entity space in the display area based on the space description information, and creating and placing a virtual object corresponding to the entity object in the virtual space according to the pose description information.
Accordingly, the present disclosure also provides an electronic device including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by the one or more processors, the one or more programs including instructions for implementing the spatial display method according to any of the above embodiments, for example, the method may include: under the condition that the target entity space is detected, acquiring space description information of the target entity space and pose description information of entity objects in the target entity space; and displaying a virtual space corresponding to the target entity space in the display area based on the space description information, and creating and placing a virtual object corresponding to the entity object in the virtual space according to the pose description information.
Fig. 5 is a block diagram illustrating an apparatus 500 for implementing a spatial display method according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 5, an apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the apparatus 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 500 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the apparatus 500. For example, the sensor assembly 514 may detect the on/off state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; the sensor assembly 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the apparatus 500 and other devices. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR (New Radio), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 504, including instructions executable by processor 520 of apparatus 500 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (14)

1. A space display method, applied to a reality imaging device, the method comprising:
when a target physical space is detected, acquiring space description information of the target physical space and pose description information of physical objects in the target physical space; and
displaying, in a display area, a virtual space corresponding to the target physical space based on the space description information, and creating and placing, in the virtual space, virtual objects corresponding to the physical objects according to the pose description information.
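For illustration, a minimal Python sketch of the two steps recited in claim 1 follows. Every identifier in it (show_space, get_space_description, get_pose_descriptions, render_space, place_object) is a hypothetical stand-in, not an API defined by the disclosure:

    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple  # (x, y, z) in the physical space's coordinate system
        rotation: tuple  # (roll, pitch, yaw)

    def show_space(device, space_id):
        """Sketch of claim 1: mirror a detected physical space as a virtual space."""
        # Step 1: acquire the space description and the per-object pose descriptions.
        space_desc = device.get_space_description(space_id)  # spatial structure
        poses = device.get_pose_descriptions(space_id)       # {object_id: Pose}
        # Step 2: display the virtual space, then create and place virtual objects.
        virtual_space = device.display.render_space(space_desc)
        for object_id, pose in poses.items():
            virtual_space.place_object(object_id, pose)
        return virtual_space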
2. The method of claim 1, wherein the acquiring of the space description information of the target physical space and the pose description information of the physical objects in the target physical space comprises:
initiating a description information acquisition request for the target physical space to a cloud, to instruct the cloud to look up and return the space description information of the target physical space and the pose description information of the physical objects in the target physical space; or
looking up the space description information of the target physical space among the space description information of a plurality of physical spaces maintained by the reality imaging device, and looking up the pose description information of the physical objects in the target physical space among the pose description information of the physical objects in each physical space maintained by the reality imaging device.
3. The method of claim 2, wherein:
the space description information and pose description information stored at the cloud are uploaded by the reality imaging device, by other reality imaging devices different from the reality imaging device, or by other types of devices; and
the reality imaging device maintains only the space description information of physical spaces that the reality imaging device enters at a frequency higher than a preset frequency, together with the pose description information of the physical objects therein.
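A possible reading of the two acquisition paths in claims 2 and 3, as a hedged Python sketch; cloud and local_cache are assumed interfaces with the methods shown, not interfaces defined by the disclosure:

    def acquire_descriptions(space_id, cloud=None, local_cache=None):
        """Cloud lookup first, then fall back to descriptions maintained on-device."""
        if cloud is not None:
            reply = cloud.request_descriptions(space_id)  # description information acquisition request
            if reply is not None:
                return reply.space_description, reply.pose_descriptions
        # Per claim 3 the device caches only frequently entered spaces,
        # so a local miss is possible and must be handled by the caller.
        space_desc = local_cache.get_space_description(space_id)
        poses = local_cache.get_pose_descriptions(space_id)
        return space_desc, poses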
4. The method as recited in claim 1, further comprising:
invoking a camera mounted on the reality imaging device to capture an image of the target physical space;
performing spatial analysis on the captured image to obtain space description information representing the spatial structure of the target physical space;
performing object recognition on the captured image to obtain pose description information representing the pose states of the physical objects in the target physical space; and
uploading the space description information obtained by the spatial analysis and the pose description information obtained by the object recognition to a cloud, or storing them locally on the reality imaging device.
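The capture-analyze-store pipeline of claim 4 might look like the following sketch; spatial_analysis and recognize_objects are stand-ins for whatever scene-reconstruction and detection models an implementation chooses:

    def spatial_analysis(image):
        """Stand-in for a scene-reconstruction model (assumption)."""
        return {"walls": [], "floor": None}

    def recognize_objects(image):
        """Stand-in for a detector returning {object_id: pose} (assumption)."""
        return {}

    def build_descriptions(device, space_id, upload_to_cloud=True):
        image = device.camera.capture()       # image of the target physical space
        space_desc = spatial_analysis(image)  # spatial structure -> space description
        poses = recognize_objects(image)      # object poses -> pose descriptions
        if upload_to_cloud:
            device.cloud.upload(space_id, space_desc, poses)
        else:
            device.local_store.save(space_id, space_desc, poses)
        return space_desc, poses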
5. The method as recited in claim 1, further comprising:
detecting the light intensity of the environment in which the reality imaging device is located; and
performing the operation of acquiring the space description information and the pose description information only when the light intensity detection result indicates that the light intensity in the target physical space is lower than a preset intensity.
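Claim 5 gates the whole flow on ambient light, the idea being that the virtual mirror is useful mainly in the dark. A sketch, with an illustrative threshold (the claim leaves the preset intensity open) and reusing the hypothetical show_space above:

    PRESET_INTENSITY_LUX = 10.0  # illustrative value only; the claim does not fix one

    def maybe_show_space(device, space_id):
        lux = device.light_sensor.read_lux()
        if lux < PRESET_INTENSITY_LUX:
            return show_space(device, space_id)  # acquire + display only in low light
        return None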
6. The method of claim 1, wherein:
the displaying, in the display area, of the virtual space corresponding to the target physical space based on the space description information comprises: displaying, in the display area, a virtual space whose structure is consistent with that of the target physical space based on the space description information; and
the creating and placing, in the virtual space, of the virtual objects corresponding to the physical objects according to the pose description information comprises: creating and placing, in the virtual space, virtual objects consistent with the categories and pose states of the physical objects according to the pose description information.
7. The method as recited in claim 1, further comprising:
adding a fluorescent effect to the virtual objects placed in the virtual space; and/or
adding an object label to the virtual objects placed in the virtual space.
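A sketch of the optional decorations in claim 7; add_effect and set_label are hypothetical methods on the virtual objects:

    def decorate(virtual_space, glow=True, labels=True):
        for obj in virtual_space.objects():
            if glow:
                obj.add_effect("fluorescence")  # keep objects visible in a dark scene
            if labels:
                obj.set_label(obj.object_id)    # e.g. "sofa", "floor lamp"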
8. The method of claim 1, wherein the target physical space comprises a plurality of physical objects, at least one of which is an Internet of Things device; the method further comprising:
in response to a control instruction for any one of the Internet of Things devices, sending a control message to that Internet of Things device to instruct it to perform an operation corresponding to the control instruction.
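Claim 8 turns the virtual object into a remote control for the physical one. A sketch in which the message schema and device.network.send are assumptions, not a protocol defined by the disclosure:

    def control_iot_device(device, iot_device_id, action, params=None):
        message = {"target": iot_device_id, "action": action, "params": params or {}}
        # The Internet of Things device executes the operation matching the instruction.
        device.network.send(iot_device_id, message)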
9. The method as recited in claim 1, further comprising:
monitoring the distance between the reality imaging device and each physical object; and
issuing a collision warning when the distance between the reality imaging device and any physical object is smaller than a preset distance threshold.
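A sketch of the distance monitor behind claim 9 (the warning modalities are enumerated in claim 10 below); the 0.5 m threshold is illustrative, and distance_to and warn are hypothetical device methods:

    COLLISION_THRESHOLD_M = 0.5  # illustrative; the claim only requires a preset threshold

    def monitor_collisions(device, poses):
        for object_id, pose in poses.items():
            if device.distance_to(pose.position) < COLLISION_THRESHOLD_M:
                device.warn(object_id)  # voice prompt, pop-up, or in-space warning mark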
10. The method of claim 9, wherein the issuing of the collision warning comprises:
broadcasting a voice prompt warning of an impending collision; or
displaying, within the display area, a pop-up window warning of an impending collision; or
displaying a collision warning mark at the virtual object corresponding to the physical object in the virtual space.
11. The method of claim 1, wherein the target physical space comprises a plurality of physical objects, at least one of which is a movable Internet of Things device; the method further comprising:
monitoring the distance between the reality imaging device and the movable Internet of Things device; and
when the distance between the reality imaging device and any movable Internet of Things device is smaller than a preset distance threshold, sending a displacement instruction to that movable Internet of Things device to instruct it to move in a direction away from the reality imaging device.
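Claim 11 replaces the warning with an evasive move by the object itself. A sketch reusing the illustrative threshold above; the direction computation and send_displacement are assumptions:

    def request_retreat(device, robot):
        if device.distance_to(robot.position) < COLLISION_THRESHOLD_M:
            # Move along the vector pointing away from the reality imaging device.
            away = [r - d for r, d in zip(robot.position, device.position)]
            robot.send_displacement(direction=away)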
12. A space display apparatus, applied to a reality imaging device, the apparatus comprising:
an acquisition unit configured to acquire, when a target physical space is detected, space description information of the target physical space and pose description information of physical objects in the target physical space; and
a display unit configured to display, in a display area, a virtual space corresponding to the target physical space based on the space description information, and to create and place, in the virtual space, virtual objects corresponding to the physical objects according to the pose description information.
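One possible decomposition of the claim-12 apparatus into the two recited units, as a purely hypothetical sketch:

    class SpaceDisplayApparatus:
        def __init__(self, acquisition_unit, display_unit):
            self.acquisition_unit = acquisition_unit  # fetches space / pose descriptions
            self.display_unit = display_unit          # renders virtual space and objects

        def on_space_detected(self, space_id):
            space_desc, poses = self.acquisition_unit.acquire(space_id)
            self.display_unit.show(space_desc, poses)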
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any of claims 1-11 by executing the executable instructions.
14. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any of claims 1-11.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310015123.1A CN116048260A (en) 2023-01-05 2023-01-05 Space display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310015123.1A CN116048260A (en) 2023-01-05 2023-01-05 Space display method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116048260A true CN116048260A (en) 2023-05-02

Family

ID=86125006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310015123.1A Pending CN116048260A (en) 2023-01-05 2023-01-05 Space display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116048260A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination