CN115047976A - Multi-level AR display method and device based on user interaction and electronic equipment - Google Patents

Multi-level AR display method and device based on user interaction and electronic equipment

Info

Publication number
CN115047976A
CN115047976A (application CN202210730219.1A)
Authority
CN
China
Prior art keywords
user
model
models
interaction
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210730219.1A
Other languages
Chinese (zh)
Inventor
肖东晋
张立群
刘顺宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alva Systems
Original Assignee
Alva Systems
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alva Systems filed Critical Alva Systems
Priority to CN202210730219.1A priority Critical patent/CN115047976A/en
Publication of CN115047976A publication Critical patent/CN115047976A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure provide a multi-level AR display method and apparatus based on user interaction, and an electronic device, relating to the fields of artificial intelligence and computer vision. The method includes: acquiring a user interaction operation; determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states; and displaying the one or more AR models to the user according to the corresponding visual states. In this way, one or more AR models can be displayed to the user in the appropriate visual states in response to the user's operations, the user can interact with the environment through actions performed as on a real object, and the AR models can be controlled by the user to present content at different levels.

Description

Multi-level AR display method and device based on user interaction and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence and computer vision, and in particular, to a multi-level AR display method and apparatus based on user interaction, and an electronic device.
Background
In recent years, AR technology has been applied in a great many fields. Its display mode, which combines the virtual with the real, can expand people's field of view to the greatest extent, strengthen cognitive ability, and present knowledge and information of different levels and different forms within the same picture, achieving a user experience that flat display cannot reach.
Existing AR display systems generally suffer from insufficient interaction with the user: the AR system mostly displays content on its own, and it is difficult for the user to control the specific flow of the display. Especially in multi-level display, the display forms are not diverse enough, and the user experience is hard to satisfy.
Disclosure of Invention
The disclosure provides a multi-level AR display method and device based on user interaction and electronic equipment.
According to a first aspect of the present disclosure, a multi-level AR presentation method based on user interaction is provided. The method comprises the following steps: acquiring a user interaction operation;
determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states;
and displaying the one or more AR models to the user according to the corresponding visual states.
In some implementations of the first aspect, acquiring the user interaction operation comprises:
performing somatosensory tracking on the user, and acquiring the corresponding interaction operation according to the position information of the corresponding part of the user and the key part of the AR model.
In some implementations of the first aspect, performing somatosensory tracking on the user and acquiring the corresponding interaction operation according to the position information of the corresponding part of the user and the key part of the AR model comprises:
determining whether the position of the corresponding part of the user produces a predetermined first interaction with the key part of the AR model;
and after the first interaction occurs, continuously acquiring the position information of the corresponding part of the user until it is determined that the position of the corresponding part of the user produces a predetermined second interaction with the key part of the AR model.
In some implementations of the first aspect, determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states comprises:
determining, in real time, one or more AR models that currently need to be displayed and their corresponding visual states according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models.
In some implementations of the first aspect, the one or more AR models are organized according to a hierarchical relationship.
In some implementations of the first aspect, the hierarchical relationship includes:
setting an initial placement mode for the AR model of each level.
In some implementations of the first aspect, determining, in real time, one or more AR models that currently need to be displayed and their corresponding visual states comprises:
determining, according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models, the level to which the position information currently corresponds, adjusting the placement mode of the AR model of the current level, and correspondingly adjusting the placement modes of the AR models of the other levels.
In some implementations of the first aspect, presenting the one or more AR models to the user according to their corresponding visual states comprises:
displaying the visual state of the current-level AR model to the user according to the user's viewing angle and the placement mode of the AR model.
According to a second aspect of the present disclosure, a multi-level AR presentation apparatus based on user interaction is provided. The apparatus includes:
a user somatosensory tracking unit, configured to acquire a user interaction operation;
an AR model adjustment unit, configured to determine, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states;
and a display unit, configured to display the one or more AR models to the user according to the corresponding visual states.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory storing a computer program, and a processor that implements the method described above when executing the program.
According to the method and apparatus of the present disclosure, one or more AR models are displayed to the user according to their corresponding visual states, and the user can interact with the environment through actions performed as on a real object, so that the AR models can be controlled by the user and present content at different levels.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the present disclosure, and are not intended to limit the disclosure thereto, and the same or similar reference numerals will be used to indicate the same or similar elements, where:
fig. 1 is a flowchart of a multi-level AR presentation method based on user interaction according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of a multi-level AR presentation apparatus based on user interaction provided by an embodiment of the present disclosure;
fig. 3 is a block diagram of an electronic device for a multi-level AR presentation method based on user interaction according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In recent years, AR technology has been applied in a great many fields. Its display mode, which combines the virtual with the real, can expand people's field of view to the greatest extent, strengthen cognitive ability, and present knowledge and information of different levels and different forms within the same picture, achieving a user experience that ordinary display cannot reach.
However, existing AR display systems generally suffer from insufficient interaction with the user: the AR system mostly displays content on its own, and it is difficult for the user to control the specific flow of the display. The present disclosure therefore attempts to improve the realism and user experience of AR presentation, specifically by starting from the interaction between the user and the environment in the AR system. In particular, during multi-level presentation the user can perform somatosensory operations as on a real object and interact with the environment, so that the AR models can be controlled by the user and present content at different levels, enhancing the user experience.
Fig. 1 is a flowchart of a multi-level AR presentation method 100 based on user interaction according to an embodiment of the present disclosure.
As shown in fig. 1, the multi-level AR presentation method 100 based on user interaction includes:
s101, acquiring user interaction operation;
When a plurality of users perform interaction operations simultaneously, in S102 the AR models that currently need to be displayed and their corresponding visual states must be determined separately for each user according to that user's interaction operation. Likewise, when a plurality of users perform interaction operations simultaneously, in S103 the one or more AR models need to be displayed to the corresponding users according to their respective visual states.
An AR model is a simulated model of an object; from the simulated object, the user can infer the corresponding real object.
Acquiring the user interaction operation comprises:
performing somatosensory tracking on the user, and acquiring the interaction operation between the user and the AR model according to the position information of the corresponding part of the user and the key part of the AR model.
Specifically, the somatosensory tracking may be limited to tracking a certain part of the user, for example the user's hand. Hand tracking may employ computer vision techniques to search the region near the AR model in the captured picture and detect the presence of the user's hand. If necessary, the user can wear a prominent marker on the hand to facilitate detection by the system. After detecting the user's hand, the system determines whether the position of the user's hand produces a predetermined interaction with a key part of the AR model.
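As a rough sketch of this detection step, the following Python fragment checks whether a detected hand lies within a trigger radius of an AR model's key part. The `detect_hand_position` stub, the model attributes (`bounding_box`, `key_part_position`) and the threshold value are assumptions introduced for illustration only; the patent does not prescribe a particular detector or data layout.

```python
import math

def detect_hand_position(frame, search_region):
    """Hypothetical placeholder for any hand detector (marker-based or
    vision-based) that returns the hand's 3D position in scene coordinates,
    or None when no hand is found inside the search region."""
    raise NotImplementedError("plug in a concrete hand detector here")

def first_interaction_triggered(frame, model, trigger_radius=0.05):
    """True if the user's hand is close enough to the model's key part
    (e.g. a refrigerator-door handle) to count as the predefined interaction."""
    hand = detect_hand_position(frame, search_region=model.bounding_box)
    if hand is None:
        return False
    # Euclidean distance between the hand and the key part, in scene units.
    return math.dist(hand, model.key_part_position) <= trigger_radius
```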
According to the embodiments of the present disclosure, the interaction operation between the user and the AR model is acquired by somatosensory tracking of the user, so that the user can interact with the AR model through body movements, which further improves the interactive experience between the user and the AR model.
In some embodiments, performing somatosensory tracking on the user and acquiring the corresponding interaction operation according to the position information of the corresponding part of the user includes:
determining whether the position of the corresponding part of the user produces a predetermined first interaction with the key part of the AR model;
and after the first interaction occurs, continuously acquiring the position information of the corresponding part of the user until it is determined that the position of the corresponding part of the user produces a predetermined second interaction with the key part of the AR model.
The first interaction determines, by capturing and analyzing the motion of the corresponding part of the user, whether a specific interaction behavior between the user and the AR model has "started"; the second interaction tracks the position and posture of the corresponding part of the user and determines whether that specific interaction behavior has "terminated". Take the action of opening a refrigerator as an example: the first interaction is the user grasping the refrigerator handle, which is defined as "opening the door", i.e. the "start" of the interaction between the user and the AR model; the second interaction is the user releasing the handle after opening the refrigerator door, which is defined as "door opened", i.e. the "termination" of the interaction between the user and the AR model.
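The start/termination logic described above can be pictured as a small state machine. The sketch below is one possible reading under stated assumptions: the phase names, the `hand_is_grasping` flag and the `near` threshold are illustrative, not terminology from the patent.

```python
import math
from enum import Enum, auto

def near(p, q, radius=0.05):
    return math.dist(p, q) <= radius

class Phase(Enum):
    IDLE = auto()         # waiting for the first interaction ("grasp the handle")
    IN_PROGRESS = auto()  # between first and second interaction, hand is tracked
    FINISHED = auto()     # second interaction observed ("handle released")

class InteractionTracker:
    """Tracks one user/model interaction from its predefined start to its end."""

    def __init__(self, model):
        self.model = model
        self.phase = Phase.IDLE
        self.hand_trajectory = []   # positions gathered between the two interactions

    def update(self, hand_position, hand_is_grasping):
        if self.phase is Phase.IDLE:
            # First interaction: the hand grasps the model's key part.
            if hand_is_grasping and near(hand_position, self.model.key_part_position):
                self.phase = Phase.IN_PROGRESS
        elif self.phase is Phase.IN_PROGRESS:
            # Keep collecting positions so the model's pose can follow the hand.
            self.hand_trajectory.append(hand_position)
            # Second interaction: the hand lets go, terminating the behaviour.
            if not hand_is_grasping:
                self.phase = Phase.FINISHED
        return self.phase
```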
According to the embodiments of the present disclosure, the interaction operation between the user and the AR model is judged from the first interaction and the second interaction, so that the AR system provides a multi-level AR model display capability driven by human-computer interaction and solves the problems of monotonous display forms and insufficient levels of AR models.
In the present disclosure, an AR model has two states, a visible state and an invisible state. The visible state is a state in which the model can be seen by the user, and the invisible state is a state in which it cannot. The invisible state is further divided into a set invisible state and a natural invisible state: the set invisible state is configured by the system to hide the corresponding AR model from the user, while the natural invisible state arises from visual relationships such as nesting and occlusion. For example, if the AR models include a box on a desktop, the system may specify that the user cannot see the box, i.e. the set invisible state. The box can hold articles, and while the box is closed the user cannot see the articles inside it, i.e. they are in the natural invisible state. In addition, when multiple users are present, the same AR model may present different states to different users. In the box example, when multiple users interact with the box and one of them opens it, the occlusion of the lid means that, from their different viewing angles, different users may still find some of the objects invisible. In this way, the natural invisible state of the AR model realistically reproduces operation in the field and greatly improves the user's operating experience.
S102, determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states;
in some embodiments, one or more AR models that currently need to be displayed and their corresponding visual states are determined in real time according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models.
That is, after a predetermined second interaction occurs, the level of each AR model and the state of each AR model to be displayed, i.e. visible or invisible, are determined according to the second interaction. In the refrigerator example, the visual state of the refrigerator AR model does not change, but its placement mode changes, namely the door is opened. The articles placed inside the refrigerator, as AR models, change state (visible/invisible) as the refrigerator door opens: they pass from the invisible state when the door is closed to the visible state when the door is open.
It should be understood that, with multiple users, the AR models to be displayed may differ for each user. In the refrigerator example above, suppose user A "opens" the refrigerator "door" by hand and stands behind the door, while user B faces the door opening. For user A, the AR model to be displayed is the refrigerator, whose state is visible and whose placement mode is "door open", while the objects inside the refrigerator are invisible (the open door blocks the view of someone standing behind it). For user B, the AR models to be displayed are the various objects inside the refrigerator, whose state is visible; and when user B goes on to remove a certain object from the refrigerator, the AR models to be displayed become the objects behind it, at the next level. Those objects are invisible to user B until the objects of the previous level are removed; after the previous-level objects are removed, their state becomes visible.
According to the embodiments of the present disclosure, the AR models that need to be displayed are determined from the position information of the corresponding part of the user, and their states are adjusted accordingly, so that the user sees different AR scenes by interacting with different AR models, which increases the interactive experience.
In some embodiments, the one or more AR models are organized according to a hierarchical relationship.
The state of an AR model can be adjusted flexibly by setting the hierarchical relationship. In the box example above, the outside of the box and its contents can be assigned to different levels, and they will present different states (visible/invisible) according to the user's actions in the actual environment. If the box itself is the first level and the objects inside it are the second level, the second level is clearly nested in the first, and the objects of the second level are all invisible as long as the first level has not been opened.
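One way to organize such a hierarchy is as a small tree in which each node carries its level and visibility; the sketch below, using the box example, is an assumption about the data layout rather than the patent's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class Visibility(Enum):
    VISIBLE = auto()
    SET_INVISIBLE = auto()      # hidden by an explicit system setting
    NATURAL_INVISIBLE = auto()  # hidden by nesting/occlusion (e.g. inside a closed box)

@dataclass
class ARModel:
    name: str
    level: int
    visibility: Visibility = Visibility.NATURAL_INVISIBLE
    opened: bool = False                       # has the user "opened" this level?
    children: List["ARModel"] = field(default_factory=list)

def refresh_visibility(model: ARModel, enclosing_open: bool = True) -> None:
    """A model stays naturally invisible while any enclosing level is closed;
    explicitly hidden models stay hidden regardless."""
    if model.visibility is not Visibility.SET_INVISIBLE:
        model.visibility = (Visibility.VISIBLE if enclosing_open
                            else Visibility.NATURAL_INVISIBLE)
    for child in model.children:
        refresh_visibility(child, enclosing_open and model.opened)

# Box example: the box is the first level, its contents the second level.
box = ARModel("box", level=1,
              children=[ARModel("toy", level=2), ARModel("card", level=2)])
refresh_visibility(box)   # contents remain invisible until box.opened is True
```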
According to the embodiments of the present disclosure, setting multiple levels makes it convenient to organize AR models of different levels and to adjust their states, which improves the user's operating experience.
In some embodiments, the hierarchical relationship comprises:
setting an initial placement mode for the AR model of each level.
In the present disclosure, an AR model may be movable; the box described above can be moved or rotated, so an initial placement mode needs to be set for each AR model. A coordinate system can be introduced into the AR model environment, i.e. the coordinates of an AR model can be constrained in order to adjust its placement mode. It can be understood that the way an AR model is placed also affects its state to some extent: in the box example, if the box is placed with its opening facing downward, the user cannot see inside the box from above even if the lid underneath is opened. In some embodiments, the initial state of each level also needs to be set, and it is usually the invisible state.
Rules can be set between the state and the placement mode of the AR models to ensure that the visual state of each AR model matches the real situation. For example, on the premise that every AR model satisfies its set invisible state, the rules for the natural invisible state of each AR model are as follows: when the AR model of the previous level does not occlude the AR model of the current level, the AR model of the current level is fully visible; when the AR model of the previous level partially occludes the AR model of the current level, the occluded portion of the current-level model is invisible and the remainder is visible; and when the AR model of the previous level completely occludes the AR model of the current level, the current-level model is invisible. Whether occlusion exists can be judged from the user's viewing angle and the coordinates of each AR model.
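A crude screen-space check is enough to illustrate the occlusion rule above: project each model for the user's viewpoint, compare the projected rectangles, and classify the current level as fully visible, partially visible, or invisible. The rectangle representation and the `outer_is_closer` flag are simplifying assumptions; a real system would test occlusion against the actual geometry.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Screen-space bounding rectangle of a model, projected for one user's viewpoint."""
    x0: float
    y0: float
    x1: float
    y1: float

    def area(self) -> float:
        return max(0.0, self.x1 - self.x0) * max(0.0, self.y1 - self.y0)

def overlap_area(a: Rect, b: Rect) -> float:
    w = max(0.0, min(a.x1, b.x1) - max(a.x0, b.x0))
    h = max(0.0, min(a.y1, b.y1) - max(a.y0, b.y0))
    return w * h

def natural_visibility(current: Rect, outer: Rect, outer_is_closer: bool) -> str:
    """Apply the rule from the text: no occlusion -> visible, partial occlusion ->
    partially visible, complete occlusion by the nearer outer level -> invisible."""
    if not outer_is_closer or current.area() == 0.0:
        return "visible"
    covered = overlap_area(current, outer) / current.area()
    if covered == 0.0:
        return "visible"
    if covered < 1.0:
        return "partially visible"
    return "invisible"
```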
According to the embodiments of the present disclosure, setting the initial placement mode of the AR model of each level makes it convenient, in the subsequent interaction process, to compute the state and placement mode of each level's AR model on the basis of that initial placement.
In some embodiments, determining in real time one or more AR models that currently need to be displayed and their corresponding visual states includes:
determining, according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models, the level to which the position information currently corresponds, adjusting the placement mode of the AR model of the current level, and correspondingly adjusting the placement modes of the AR models of the other levels.
It can be understood that when a user interacts with an AR model, the placement of the current level's AR model and the placement of the AR models of other levels affect each other, so they should be adjusted together. In the refrigerator example above, the refrigerator door is the first level and the drinks hung on the inside of the door are the second level; when the door moves, the drinks move with it, so the second level should be adjusted accordingly.
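In scene-graph terms this amounts to propagating a pose change down the hierarchy. The sketch below assumes each node stores a 4x4 local pose relative to its parent (an assumption; the patent only speaks of placement modes and coordinates), so that rotating the door node automatically carries the drinks attached to it.

```python
import numpy as np

def update_hierarchy(node, parent_world=np.eye(4)):
    """Recompute world poses down the hierarchy after the user moves one level,
    so that e.g. drinks hung on the refrigerator door swing along with the door.
    Each node is assumed to have local_pose (4x4), world_pose and children."""
    node.world_pose = parent_world @ node.local_pose
    for child in node.children:
        update_hierarchy(child, node.world_pose)

# When hand tracking yields a new door angle, only the door's local pose changes;
# one pass over the hierarchy then repositions everything attached to the door.
#   door.local_pose = rotation_about_hinge(new_angle)   # hypothetical helper
#   update_hierarchy(refrigerator)
```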
According to the embodiments of the present disclosure, the placement modes of the AR models of different levels are adjusted according to the user's operations, so that the user obtains a more realistic experience.
S103, displaying the one or more AR models to a user according to the corresponding visual states.
In some embodiments, the visual state of the current level AR model is presented to the user according to the user's perspective and the manner in which the AR model is placed.
The AR model involved in the present disclosure has multiple levels from the inside out, each level presenting different content. An inner level is hidden by the outer levels and cannot be seen directly until the outer levels are opened. Each level of the AR model can be "opened" by a person. Since the AR model is a virtual object, no real object exists in real space; the "opening" of the model therefore has to be achieved by the AR system capturing the motion of the person's hand with a camera and analyzing it. That is, human interaction with the AR model is achieved through scene capture and video analysis.
The present disclosure provides a video analysis system comprising: a hand positioning and recognition system, a hand tracking system, and an AR model adjustment system.
The hand positioning and recognition system uses computer vision techniques to search the region near the AR model in the captured picture and detect the presence of the user's hand. After detecting the user's hand, the system determines whether the position of the user's hand produces a predetermined interaction with a key part of the AR model, and passes the result to the hand tracking system.
The hand tracking system starts tracking the hand after receiving the first-interaction judgment from the hand positioning system, continuously acquires the hand's position information and passes it to the AR model adjustment system in real time, until it receives the second-interaction judgment from the hand positioning system.
The AR model adjustment system has two tasks. The first is to adjust the initial placement modes of the several AR models; according to their size and position relationships, the AR models then naturally form occlusion and the corresponding visual effects at different viewing angles. The second is to solve, in real time, the position and posture of the AR model involved in the user's hand interaction from the hand position information passed in by the hand tracking system, and to reposition that AR model. For example, in an AR display in which a refrigerator door is opened, the door can be treated as a separate AR model whose placement changes continuously with the movement of the user's hand, producing the visual effect of "the door being opened". Meanwhile, the AR models placed inside the refrigerator in advance change their visual state as the position of the door changes, so that users at different viewing angles see what is "inside the refrigerator".
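Read as a per-frame pipeline, the three subsystems could be wired together roughly as below. Every object and method name here (`camera`, `detector.find_first_interaction`, `tracker`, `adjuster`, `renderer`) is a hypothetical interface used only to show the data flow the text describes.

```python
def run_presentation_loop(camera, detector, tracker, adjuster, renderer):
    """One possible frame-by-frame wiring of the hand positioning and recognition
    system, the hand tracking system, and the AR model adjustment system."""
    for frame in camera:
        if not tracker.active:
            # Hand positioning & recognition: watch for the first interaction.
            hit = detector.find_first_interaction(frame)
            if hit is not None:
                tracker.start(hit.model, hit.hand_position)
        else:
            # Hand tracking: follow the hand until the second interaction ends it.
            hand = tracker.update(frame)
            # AR model adjustment: re-solve the interacted model's pose and
            # re-place the dependent levels accordingly.
            adjuster.reposition(tracker.model, hand)
        # Render the models in their current visual states for each viewpoint.
        renderer.draw(adjuster.models)
```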
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 2 is a block diagram of a multi-level AR display apparatus 200 based on user interaction according to an embodiment of the present disclosure.
As shown in FIG. 2, the multi-level AR presentation apparatus 200 based on user interaction comprises:
a user somatosensory tracking unit 201, configured to acquire a user interaction operation;
an AR model adjusting unit 202, configured to determine, according to the user interaction operation, one or more AR models that need to be displayed currently and a visual state corresponding to the one or more AR models;
a presentation unit 203, configured to present the one or more AR models to a user according to their corresponding visual states.
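A minimal composition of the three units in Fig. 2 might look as follows; the unit interfaces (`acquire`, `determine`, `present`) are assumptions introduced for illustration, not the patent's actual API.

```python
class MultiLevelARPresentationDevice:
    """Mirrors apparatus 200: tracking unit 201, model adjustment unit 202,
    presentation unit 203, each injected as a collaborator."""

    def __init__(self, tracking_unit, model_adjustment_unit, presentation_unit):
        self.tracking_unit = tracking_unit                  # unit 201
        self.model_adjustment_unit = model_adjustment_unit  # unit 202
        self.presentation_unit = presentation_unit          # unit 203

    def step(self, frame):
        # Unit 201: acquire the user interaction operation from the frame.
        interaction = self.tracking_unit.acquire(frame)
        # Unit 202: decide which AR models to show and in which visual states.
        models, states = self.model_adjustment_unit.determine(interaction)
        # Unit 203: display the models to the user in those states.
        self.presentation_unit.present(models, states)
```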
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
In the technical solutions of the present disclosure, the acquisition, storage and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 3 shows a schematic block diagram of an electronic device 300 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
The device 300 comprises a computing unit 301, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 302 or a computer program loaded from a storage unit 308 into a random access memory (RAM) 303. The RAM 303 can also store various programs and data required for the operation of the device 300. The computing unit 301, the ROM 302 and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Various components in device 300 are connected to I/O interface 305, including: an input unit 306 such as a keyboard, a mouse, or the like; an output unit 307 such as various types of displays, speakers, and the like; a storage unit 308 such as a magnetic disk, optical disk, or the like; and a communication unit 309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 309 allows the device 300 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 301 performs the various methods and processes described above, such as the method 100. For example, in some embodiments, the method 100 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 308. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 300 via ROM 302 and/or communication unit 309. When the computer program is loaded into RAM 303 and executed by the computing unit 301, one or more steps of the method 100 described above may be performed. Alternatively, in other embodiments, the computing unit 301 may be configured to perform the method 100 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (10)

1. A multi-level AR display method based on user interaction, characterized by comprising the following steps:
acquiring a user interaction operation;
determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states;
and displaying the one or more AR models to the user according to the corresponding visual states.
2. The method of claim 1, wherein acquiring the user interaction operation comprises:
performing somatosensory tracking on the user, and acquiring the interaction operation between the user and the AR model according to the position information of the corresponding part of the user and the key part of the AR model.
3. The method of claim 2, wherein performing somatosensory tracking on the user and acquiring the corresponding interaction operation according to the position information of the corresponding part of the user comprises:
determining whether the position of the corresponding part of the user produces a predetermined first interaction with the key part of the AR model;
and after the first interaction occurs, continuously acquiring the position information of the corresponding part of the user until it is determined that the position of the corresponding part of the user produces a predetermined second interaction with the key part of the AR model.
4. The method of claim 3, wherein determining, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states comprises:
determining, in real time, one or more AR models that currently need to be displayed and their corresponding visual states according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models.
5. The method of claim 4, wherein
the one or more AR models are organized according to a hierarchical relationship.
6. The method of claim 5, wherein the hierarchical relationship comprises:
setting an initial placement mode for the AR model of each level.
7. The method of claim 6, wherein determining in real time one or more AR models that currently need to be displayed and their corresponding visual states comprises:
determining, according to the acquired position information of the corresponding part of the user and the interaction state between that position information and the AR models, the level to which the position information currently corresponds, adjusting the placement mode of the AR model of the current level, and correspondingly adjusting the placement modes of the AR models of the other levels.
8. The method of claim 5, wherein displaying the one or more AR models to the user according to their corresponding visual states comprises:
displaying the visual state of the current-level AR model to the user according to the user's viewing angle and the placement mode of the AR model.
9. A multi-level AR presentation apparatus based on user interaction, characterized by comprising:
a user somatosensory tracking unit, configured to acquire a user interaction operation;
an AR model adjustment unit, configured to determine, according to the user interaction operation, one or more AR models that currently need to be displayed and their corresponding visual states;
and a display unit, configured to display the one or more AR models to the user according to the corresponding visual states.
10. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
CN202210730219.1A 2022-06-24 2022-06-24 Multi-level AR display method and device based on user interaction and electronic equipment Pending CN115047976A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210730219.1A CN115047976A (en) 2022-06-24 2022-06-24 Multi-level AR display method and device based on user interaction and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210730219.1A CN115047976A (en) 2022-06-24 2022-06-24 Multi-level AR display method and device based on user interaction and electronic equipment

Publications (1)

Publication Number Publication Date
CN115047976A (en) 2022-09-13

Family

ID=83163433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210730219.1A Pending CN115047976A (en) 2022-06-24 2022-06-24 Multi-level AR display method and device based on user interaction and electronic equipment

Country Status (1)

Country Link
CN (1) CN115047976A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230177775A1 (en) * 2021-12-07 2023-06-08 Snap Inc. Augmented reality unboxing experience
US11748958B2 (en) * 2021-12-07 2023-09-05 Snap Inc. Augmented reality unboxing experience
US11960784B2 (en) 2021-12-07 2024-04-16 Snap Inc. Shared augmented reality unboxing experience

Similar Documents

Publication Publication Date Title
US11170210B2 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
US10636197B2 (en) Dynamic display of hidden information
CN107977141B (en) Interaction control method and device, electronic equipment and storage medium
CN115047976A (en) Multi-level AR display method and device based on user interaction and electronic equipment
US20180173404A1 (en) Providing a user experience with virtual reality content and user-selected, real world objects
AU2018220141B2 (en) Mapping virtual and physical interactions
US11893702B2 (en) Virtual object processing method and apparatus, and storage medium and electronic device
EP3427125A1 (en) Intelligent object sizing and placement in augmented / virtual reality environment
US20140002443A1 (en) Augmented reality interface
KR20150048881A (en) Augmented reality surface displaying
CN111586459B (en) Method and device for controlling video playing, electronic equipment and storage medium
KR101854613B1 (en) System for simulating 3d interior based on web page and method for providing virtual reality-based interior experience using the same
CN111739169A (en) Product display method, system, medium and electronic device based on augmented reality
RU2667720C1 (en) Method of imitation modeling and controlling virtual sphere in mobile device
CN106951069A (en) The control method and virtual reality device of a kind of virtual reality interface
CN113359995B (en) Man-machine interaction method, device, equipment and storage medium
Jiang et al. A SLAM-based 6DoF controller with smooth auto-calibration for virtual reality
US20230147561A1 (en) Metaverse Content Modality Mapping
CN207380667U (en) Augmented reality interactive system based on radar eye
CN110460833A (en) A kind of AR glasses and smart phone interconnected method and system
CN113486415B (en) Model perspective method, intelligent terminal and storage device
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN110399059A (en) Method and apparatus for showing information
CN109011591A (en) A kind of safety protection method and device in reality-virtualizing game
GB2568355A (en) Dynamic mapping of virtual and physical interactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination