CN110825280A - Method, apparatus and computer-readable storage medium for controlling position movement of virtual object - Google Patents

Method, apparatus and computer-readable storage medium for controlling position movement of virtual object

Info

Publication number
CN110825280A
CN110825280A (application CN201810907342.XA)
Authority
CN
China
Prior art keywords
virtual object
target
plane
terminal screen
controlling
Prior art date
Legal status
Pending
Application number
CN201810907342.XA
Other languages
Chinese (zh)
Inventor
刘昂
陈怡
游东
Current Assignee
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority to CN201810907342.XA
Publication of CN110825280A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed are a method of controlling the position movement of a virtual object, an apparatus for controlling the position movement of a virtual object, a hardware apparatus for controlling the position movement of a virtual object, and a computer-readable storage medium. The method comprises: determining a target plane and a target position of the virtual object in a real scene; and displaying the virtual object on a corresponding terminal screen according to the target position, such that the displayed virtual object is located on the surface of the target plane. Because the target plane and the target position in the real scene are determined first and the virtual object is then displayed on the surface of the target plane, the virtual object is prevented from appearing suspended above the plane or embedded in it while moving, which improves the display effect of the terminal.

Description

Method, apparatus and computer-readable storage medium for controlling position movement of virtual object
Technical Field
The present disclosure relates to the field of information technology, and in particular to a method, an apparatus, and a computer-readable storage medium for controlling the position movement of a virtual object.
Background
Augmented Reality (AR) is a technology that computes the position and orientation of the camera image in real time and overlays corresponding images, videos and virtual objects on it; its goal is to superimpose the virtual world onto the real world on a screen and let the two interact.
Augmented reality is realized by placing a virtual object in a real scene, that is, by superimposing the real environment and the virtual object on the same picture or in the same space in real time. After being superimposed, the virtual object either moves along a preset motion trajectory or is controlled to perform preset actions through controls.
At present, in existing augmented reality scenes, the moving position of a virtual object is generally controlled based on three-dimensional space coordinates: a target position of the virtual object is first determined by three-dimensional coordinates, and the virtual object is then moved to that target position.
When the virtual object moves on a plane or from one plane to another, controlling its position based on three-dimensional space coordinates may cause the displayed virtual object to appear embedded in the plane or suspended above it, which degrades the display effect of the terminal.
Disclosure of Invention
The technical problem addressed by the present disclosure is how to improve the display effect of a virtual object on a terminal; to at least partially solve it, a method of controlling the position movement of a virtual object is provided. In addition, an apparatus for controlling the position movement of a virtual object, a hardware apparatus for controlling the position movement of a virtual object, a computer-readable storage medium, and a terminal for controlling the position movement of a virtual object are also provided.
In order to achieve the above object, according to one aspect of the present disclosure, the following technical solutions are provided:
a method of controlling virtual object position movement, comprising:
determining a target plane and a target position of a virtual object in a real scene;
and displaying the virtual object on a corresponding terminal screen according to the target position, wherein the displayed virtual object is positioned on the surface of the target plane.
Further, the step of determining a target plane of the virtual object in the real scene includes:
identifying a plane contained in the real scene;
selecting one of the identified planes as the target plane.
Further, the step of selecting one of the identified planes as the target plane includes:
displaying the identified plane on the terminal screen, and enabling the identified plane to be in a selectable state;
and taking the selected plane as the target plane.
Further, the step of determining the target position of the virtual object in the real scene includes:
determining a target display position of the virtual object on the terminal screen;
and determining the target position according to the target display position and the target plane.
Further, the step of determining the target position according to the target display position and the target plane includes:
acquiring a line passing through a point where the target display position is located;
and taking the intersection point of the line and the target plane as the target position.
Further, the line is perpendicular to a plane where the terminal screen is located.
Further, the step of determining the target display position of the virtual object on the terminal screen includes:
detecting a first trigger response generated on the terminal screen, and taking the generation position of the first trigger response as the target display position;
or, receiving an input target display position.
Further, before detecting the first trigger response generated on the terminal screen, the method further includes:
detecting a second trigger response aiming at the virtual object generated on the terminal screen, wherein the second trigger response is used for enabling the virtual object to be in a selected state;
correspondingly, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further includes:
and controlling the virtual object to move to the target position according to the second trigger response and the first trigger response.
Further, the step of controlling the virtual object to move to the target position according to the second trigger response and the first trigger response includes:
after the first trigger response is detected, the virtual object is directly moved to the target position, or the virtual object is dragged to the target position along a motion track.
Further, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further includes:
when a plurality of virtual objects are detected, if one of the virtual objects is detected to be selected, controlling the other virtual objects to move to the target position along with the selected virtual object in sequence;
or, when a plurality of virtual objects are detected, if all the virtual objects are simultaneously selected, simultaneously controlling the plurality of virtual objects to move to the target position.
Further, the method further comprises:
determining the initial position of the virtual object in the real scene according to the initial position of the virtual object on the terminal screen;
correspondingly, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further includes:
controlling the virtual object to move from the initial position to the target position.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
an apparatus for controlling the movement of the position of a virtual object, comprising:
the plane and position determining module is used for determining a target plane and a target position of the virtual object in the real scene;
and the display module is used for displaying the virtual object on a corresponding terminal screen according to the target position, and the displayed virtual object is positioned on the surface of the target plane.
Further, the plane and position determining module comprises:
a plane recognition unit for recognizing a plane included in the real scene;
a plane determination unit for selecting one plane from the identified planes as the target plane.
Further, the plane determining unit is specifically configured to: displaying the identified plane on the terminal screen, and enabling the identified plane to be in a selectable state; and taking the selected plane as the target plane.
Further, the plane and position determining module comprises:
the display position determining unit is used for determining the target display position of the virtual object on the terminal screen;
and the position determining unit is used for determining the target position according to the target display position and the target plane.
Further, the position determining unit is specifically configured to: acquiring a line passing through a point where the target display position is located; and taking the intersection point of the line and the target plane as the target position.
Further, the line is perpendicular to a plane where the terminal screen is located.
Further, the display position determination unit is specifically configured to: detecting a first trigger response generated on the terminal screen, and taking the generation position of the first trigger response as the target display position; or, receiving an input target display position.
Further, the display position determination unit is further configured to: detecting a second trigger response aiming at the virtual object generated on the terminal screen before detecting the first trigger response generated on the terminal screen, wherein the second trigger response is used for enabling the virtual object to be in a selected state;
correspondingly, the display module is further configured to: and before the virtual object is displayed on a corresponding terminal screen according to the target position and the displayed virtual object is positioned on the surface of the target plane, controlling the virtual object to move to the target position according to the second trigger response and the first trigger response.
Further, the display module is specifically configured to: after the first trigger response is detected, the virtual object is directly moved to the target position, or the virtual object is dragged to the target position along a motion track.
Further, the display module is further configured to: before the plurality of virtual objects are displayed on the corresponding terminal screen according to the target position with the displayed virtual objects located on the surface of the target plane, if one virtual object is detected to be selected, control the other virtual objects to follow the selected virtual object and move to the target position in sequence; or, when a plurality of virtual objects are detected and all of them are selected simultaneously, control the plurality of virtual objects to move to the target position simultaneously.
Further, the plane and position determining module is further configured to: determining the initial position of the virtual object in the real scene according to the initial position of the virtual object on the terminal screen;
correspondingly, the display module is further configured to: and before the virtual object is displayed on a corresponding terminal screen according to the target position and the displayed virtual object is positioned on the surface of the target plane, controlling the virtual object to move from the initial position to the target position.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a hardware apparatus for controlling position movement of a virtual object, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer-readable instructions, such that the processor, when executing them, implements the steps in any one of the above technical solutions of the method for controlling the position movement of the virtual object.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a computer readable storage medium for storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the above-described method aspects of controlling the position movement of a virtual object.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a terminal for controlling the position movement of a virtual object comprises any one of the devices for controlling the position movement of the virtual object.
Embodiments of the present disclosure provide a method of controlling the position movement of a virtual object, an apparatus for controlling the position movement of a virtual object, a hardware apparatus for controlling the position movement of a virtual object, a computer-readable storage medium, and a terminal for controlling the position movement of a virtual object. The method comprises determining a target plane and a target position of the virtual object in a real scene, and displaying the virtual object on a corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane. By first determining the target plane and target position in the real scene and then displaying the virtual object on the surface of the target plane, the embodiments avoid the virtual object appearing suspended above the plane or embedded in it while moving, thereby improving the display effect of the terminal.
The foregoing is a summary of the present disclosure, intended to promote a clear understanding of its technical means; the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
FIG. 1a is a schematic flow chart diagram of a method of controlling virtual object position movement according to one embodiment of the present disclosure;
FIG. 1b is a schematic diagram of a display effect in the method of controlling the position movement of the virtual object according to the embodiment shown in FIG. 1a;
FIG. 1c is a schematic flow chart diagram of a method of controlling the position movement of a virtual object according to another embodiment of the present disclosure;
FIG. 1d is a schematic diagram of a selectable-plane state in the method of controlling the position movement of the virtual object according to the embodiment shown in FIG. 1c;
FIG. 1e is a schematic diagram of a selected-plane state in the method of controlling the position movement of the virtual object according to the embodiment shown in FIG. 1c;
FIG. 1f is a schematic flow chart diagram of a method of controlling the position movement of a virtual object according to another embodiment of the present disclosure;
FIG. 1g is a schematic flow chart diagram of a method of controlling the position movement of a virtual object according to another embodiment of the present disclosure;
FIG. 2a is a schematic structural diagram of an apparatus for controlling the position movement of a virtual object according to an embodiment of the present disclosure;
FIG. 2b is a schematic structural diagram of an apparatus for controlling the position movement of a virtual object according to another embodiment of the present disclosure;
FIG. 2c is a schematic structural diagram of an apparatus for controlling the position movement of a virtual object according to another embodiment of the present disclosure;
FIG. 3 is a block diagram of a hardware device for controlling the movement of the position of a virtual object according to one embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a computer-readable storage medium according to one embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a terminal for controlling the position movement of a virtual object according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in this specification. The described embodiments are merely some, rather than all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details of this description without departing from the spirit of the disclosure. It should be noted that the features in the following embodiments and examples may be combined with each other where there is no conflict. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In order to solve the technical problem of how to improve the user experience, the embodiments of the present disclosure provide a method for controlling the position movement of a virtual object. As shown in fig. 1a, the method mainly includes the following steps S1 to S2:
step S1: and determining a target plane and a target position of the virtual object in the real scene.
The execution subject of this embodiment may be the apparatus for controlling the position movement of a virtual object provided by the present disclosure, the hardware apparatus for controlling the position movement of a virtual object provided by the present disclosure, or the terminal for controlling the position movement of a virtual object provided by the present disclosure.
The terminal may be, but is not limited to, a mobile terminal (e.g., iPhone, smartphone, tablet, etc.), or a fixed terminal (e.g., desktop computer).
The virtual object may be, for example, a three-dimensional model of a real object in the scene.
The target plane is a plane to which a virtual object is to move in the real scene, and the plane is a surface of an entity located in the real scene, such as, but not limited to, a desktop or a wall surface. The target position is a position to which the virtual object is to move in the real scene, and the target position is on the target plane.
Step S2: and displaying the virtual object on the corresponding terminal screen according to the target position, wherein the displayed virtual object is positioned on the surface of the target plane. The target position can be determined according to a target plane and/or a preset target display position of the virtual object on the terminal screen.
Specifically, after the target plane and the target position of the virtual object in the real scene are determined, the virtual object is placed at the target position, or is controlled to move to the target position, and is then displayed at the determined target position so that it appears on the corresponding terminal screen, as shown in fig. 1b. The displayed virtual object lies on the surface of the target plane, so the virtual object viewed through the terminal screen sits exactly on that surface, neither suspended above the plane nor embedded in it.
By adopting the technical scheme, the target plane and the target position of the virtual object in the real scene are determined, the virtual object is displayed on the corresponding terminal screen according to the target position, and the displayed virtual object is located on the surface of the target plane, so that the situation that the virtual object is suspended on the plane or embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, as shown in fig. 1c, the step of determining the target plane of the virtual object in the real scene in step S1 includes:
s11: planes contained in a real scene are identified.
The real scene may contain one or more planes. The planes contained in the real scene can be identified with a corresponding algorithm from the prior art, for example a simultaneous localization and mapping (SLAM) algorithm, which is not described in detail here.
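The disclosure leaves plane recognition to existing algorithms such as SLAM. Purely as an illustration of one classical way a dominant plane can be extracted from the 3D point cloud such a system produces, a RANSAC-style fit is sketched below; the function name, parameters, and thresholds are assumptions for this example and are not part of the disclosed method.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, threshold=0.01, rng=None):
    """Fit a dominant plane n·p + d = 0 to an (N, 3) point cloud."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        # Sample three distinct points and form the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:        # degenerate (collinear) sample, try again
            continue
        n /= norm
        d = -n.dot(p0)
        # Count points lying within `threshold` of the candidate plane.
        inliers = np.sum(np.abs(points @ n + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    return best_plane
```

In practice a mobile AR framework would return recognized planes directly; the sketch only shows the geometric idea behind identifying a plane in the scene.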
S12: one of the identified planes is selected as a target plane.
Further, step S12 can be implemented in two ways:
In the first way, a plane is selected automatically, i.e., one of the identified planes is chosen as the target plane according to a preset rule.
In the second way, the user selects the target plane: the identified planes are displayed on the terminal screen in a selectable state, and the plane the user selects, by clicking, double-clicking, or another preset action, is taken as the target plane.
For example, as shown in fig. 1d, recognized planes 1-3 of the real scene are displayed on the terminal screen in turn. If a plane is selected automatically, the system picks one according to a preset rule, for example the first recognized plane. If the user wants the virtual object displayed on plane 1, a click or double click on plane 1 on the terminal screen completes the selection. Once plane 1 is selected, it is displayed on the terminal screen according to its placement in the scene, as shown in fig. 1e.
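When the user selects a plane by tapping, the tap can be resolved against the recognized planes by casting a ray from the tap point and keeping the nearest plane it hits. The sketch below shows one possible implementation of that resolution step; it is an assumption for illustration (the disclosure does not specify how the tap is mapped to a plane), and all names are hypothetical. Inputs are numpy arrays.

```python
import numpy as np

def pick_plane(ray_origin, ray_dir, planes):
    """Return the recognized plane that the user's tap ray hits first.

    planes: list of (normal, d) with plane equation n·p + d = 0;
    ray_origin/ray_dir come from unprojecting the touch point.
    """
    best_t, best_plane = np.inf, None
    for normal, d in planes:
        denom = normal @ ray_dir
        if abs(denom) < 1e-9:
            continue                      # ray parallel to this plane
        t = -(normal @ ray_origin + d) / denom
        if 0 < t < best_t:                # nearest hit in front of the user
            best_t, best_plane = t, (normal, d)
    return best_plane
```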
In an alternative embodiment, as shown in fig. 1f, the step of determining the target position of the virtual object in the real scene in step S1 includes:
s13: and determining the target display position of the virtual object on the terminal screen.
The target display position is the position at which the virtual object is to be displayed on the terminal screen.
S14: and determining the target position according to the target display position and the target plane.
Further, step S13 may obtain the target display position in either of two ways. In the first way, a first trigger response generated on the terminal screen is detected, and the generation position of the first trigger response is taken as the target display position.
The first trigger response is a response generated by a trigger operation acting on the terminal screen, and may be, but is not limited to, a click response, a double-click response, or a detected preset gesture. To distinguish it from another trigger response with a different function that appears later, the trigger response here is referred to as the first trigger response, and the later one as the second trigger response.
The generation position of the first trigger response is a point on the corresponding plane of the terminal screen, and can be determined by a sensor arranged on the terminal screen.
Specifically, if the user wants to change the display position of the virtual object on the terminal screen, an operation must be performed on the screen, for example a click, a double click, or a preset gesture, to indicate the next display position of the virtual object. On receiving the operation, the terminal screen generates a trigger response whose generation position is the display position to which the user wants to move the virtual object. Since this display position is not itself the target position of the virtual object in the real scene, the target position must be determined from the trigger response so that the display position of the virtual object on the terminal screen can be located accurately.
In the second mode, an input target display position is received.
Specifically, the user may input the target display position through the terminal. Because a trigger operation on the terminal screen usually covers an area and is hard to localize to a single point, whereas an input target position pinpoints a point exactly, this way locates the position of the virtual object more accurately than a trigger operation on the screen does, further improving the terminal display effect.
Further, step S14 includes:
acquiring a line passing through a point where the target display position is located;
the intersection of the line and the target plane is taken as the target position.
Wherein the line may be a straight line, a ray or a line segment.
Further, the line is perpendicular to the plane of the terminal screen.
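Geometrically, steps S13-S14 amount to a standard line-plane intersection: a line is cast through the world-space point of the target display position, perpendicular to the screen plane, and intersected with the target plane. A minimal sketch follows, assuming the screen is described by a world-space origin and two unit axes and the target plane by n·p + d = 0; the function and parameter names are illustrative, not from the disclosure.

```python
import numpy as np

def target_position(screen_point, screen_origin, screen_x, screen_y,
                    plane_normal, plane_d):
    """Intersect the line through the target display position,
    perpendicular to the terminal screen, with the target plane."""
    u, v = screen_point
    # World-space point where the target display position lies on the screen.
    p = screen_origin + u * screen_x + v * screen_y
    # Line direction: normal of the screen plane (perpendicular to the screen).
    direction = np.cross(screen_x, screen_y)
    direction /= np.linalg.norm(direction)
    denom = plane_normal @ direction
    if abs(denom) < 1e-9:
        return None  # line parallel to the target plane: no intersection
    t = -(plane_normal @ p + plane_d) / denom
    return p + t * direction
```

The choice of a line perpendicular to the screen matches the optional embodiment above; other line definitions (for example, a ray from the camera through the tap point) would only change how `direction` is computed.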
In an optional embodiment, before detecting the first trigger response generated on the terminal screen, the method of the present disclosure further includes:
detecting a second trigger response aiming at the virtual object generated on the terminal screen, wherein the second trigger response is used for enabling the virtual object to be in a selected state;
accordingly, before step S2, the method further includes:
and controlling the virtual object to move to the target position according to the second trigger response and the first trigger response.
The second trigger response is a response generated by a trigger operation acting on the virtual object on the terminal screen, and may be, but is not limited to, a click response, a double-click response, or a detected preset gesture.
Further, the step of controlling the virtual object to move to the target position according to the second trigger response and the first trigger response, so that the virtual object is located on the surface of the target plane, includes:
after the first trigger response is detected, moving the virtual object directly to the target position, or dragging the virtual object to the target position along a motion track.
Specifically, from the user's perspective, this step may be implemented in two ways: in the first, the user clicks the virtual object on the terminal screen and then clicks the target position on the terminal screen, and the virtual object moves directly to the target position under the control of the method; in the second, the user clicks the virtual object on the terminal screen and then drags it to the target position on the screen.
In the first implementation, after the new position is clicked on the terminal screen, a line (for example, a ray) passing through the point of that position is acquired; the intersection of this line with the plane to which the virtual object is to move in the real scene is the new target position of the virtual object, and the virtual object is placed at that position. In the second implementation, a new target position is calculated continuously at a certain time interval, and the virtual object moves along the trajectory of these positions, forming a movement-trajectory animation.
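For the drag implementation, the same intersection is simply recomputed for each touch sample taken at the given time interval, so the virtual object follows the finger and traces a movement-trajectory animation. A sketch, reusing the hypothetical target_position helper above; touch_events and move_object are illustrative stand-ins for the platform's touch stream and renderer callback:

```python
def drag_virtual_object(touch_events, screen_frame, plane, move_object):
    """Recompute the target position for every touch-move sample so the
    virtual object follows the finger along the target plane."""
    screen_origin, screen_x, screen_y = screen_frame
    plane_normal, plane_d = plane
    for screen_point in touch_events:   # samples at a fixed time interval
        pos = target_position(screen_point, screen_origin, screen_x,
                              screen_y, plane_normal, plane_d)
        if pos is not None:
            move_object(pos)            # render the object at the new spot
```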
By adopting the above technical solution, i.e., identifying the planes contained in the real scene, selecting one of them as the target plane, determining the target position of the virtual object in the real scene, and displaying the virtual object on the corresponding terminal screen at that position so that it lies on the surface of the target plane, this embodiment avoids the virtual object appearing suspended above the plane or embedded in it while moving, thereby improving the display effect of the terminal.
In an alternative embodiment, as shown in fig. 1g, the method of the present disclosure further comprises:
s0: determining the initial position of the virtual object in the real scene according to the initial position of the virtual object on the terminal screen;
accordingly, before step S2, the method further includes:
and controlling the virtual object to move from the initial position to the target position.
The initial position on the terminal screen may be chosen as, for example, the center of the terminal screen; the specific position may be determined by the user. In this embodiment, the methods for determining the initial position on the terminal screen and the corresponding initial position in the real scene are similar to the methods for determining the target display position and the target position, and are not repeated here.
In an optional embodiment, before step S2, the method further includes:
when a plurality of virtual objects are detected, if one of the virtual objects is detected to be selected, the other virtual objects are controlled to move to the target position along with the selected virtual object in sequence;
or, when a plurality of virtual objects are detected, if all the virtual objects are simultaneously selected, the plurality of virtual objects are simultaneously controlled to move to the target positions.
Specifically, one or more virtual objects can be moved in the plane, either one at a time or several simultaneously. When there are multiple virtual objects, either one is selected and moved, and the others move in the same way as the selected one; or several virtual objects are selected and moved simultaneously, in which case each virtual object corresponds to its own line, and the intersection of each line with the plane gives that virtual object's target position.
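When several virtual objects are moved simultaneously, each object contributes its own line, and each line is intersected with the plane independently to give that object's target position. A sketch under the same illustrative assumptions as above, again reusing the hypothetical target_position helper:

```python
def move_selected_objects(object_points, screen_frame, plane):
    """Map each selected object's screen point to its own target position.

    object_points: {object_name: (u, v) screen point}; returns a dict of
    per-object intersection points on the shared target plane."""
    screen_origin, screen_x, screen_y = screen_frame
    plane_normal, plane_d = plane
    return {name: target_position(pt, screen_origin, screen_x, screen_y,
                                  plane_normal, plane_d)
            for name, pt in object_points.items()}
```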
It will be appreciated by those skilled in the art that obvious modifications (e.g., combinations of the enumerated modes) or equivalents may be made to the above-described embodiments.
Although the steps in the above embodiment of the method for controlling the position movement of the virtual object are described in the order given, those skilled in the art will appreciate that the steps of the embodiments of the present disclosure are not necessarily performed in that order and may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add other steps; these obvious modifications or equivalents also fall within the protection scope of the present disclosure and are not described in detail here.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, please refer to the method embodiments of the present disclosure.
In order to solve the technical problem of how to improve the user experience, the embodiments of the present disclosure provide an apparatus for controlling the position movement of a virtual object. The apparatus may perform the steps of the above embodiment of the method for controlling the position movement of the virtual object. As shown in fig. 2a, the apparatus mainly comprises a plane and position determining module 21 and a display module 22. The plane and position determining module 21 is configured to determine a target plane and a target position of a virtual object in a real scene; the display module 22 is configured to display the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane.
The terminal may be, but is not limited to, a mobile terminal (e.g., iPhone, smartphone, tablet, etc.), or a fixed terminal (e.g., desktop computer).
The virtual object may be, for example, a three-dimensional model of a real object in the scene.
The target plane is a plane to which a virtual object is to move in the real scene, and the plane is a surface of an entity located in the real scene, such as, but not limited to, a desktop or a wall surface. The target position is a position to which the virtual object is to move in the real scene, and the target position is on the target plane.
The target position can be determined according to the target plane and/or a preset target display position on the terminal screen. Specifically, after the plane and position determining module 21 determines the target plane and the target position of the virtual object in the real scene, the display module 22 displays the virtual object at the determined target position, so that the virtual object appears on the corresponding terminal screen, as shown in fig. 1b. The displayed virtual object lies on the surface of the target plane, so the virtual object viewed through the terminal screen sits exactly on that surface, neither suspended above the plane nor embedded in it.
In this embodiment, the plane and position determining module 21 determines a target plane and a target position of the virtual object in the real scene, and then the display module 22 displays the virtual object on the corresponding terminal screen according to the target position, and the displayed virtual object is located on the surface of the target plane, so that the situation that the virtual object floats on the plane or is embedded in the plane when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, as shown in fig. 2b, the plane and position determining module 21 comprises: a plane recognition unit 211 and a plane determination unit 212; the plane recognition unit 211 is configured to recognize a plane included in a real scene; the plane determination unit 212 is configured to select one of the identified planes as a target plane.
The real scene may contain one or more planes. The planes contained in the real scene can be identified with a corresponding algorithm from the prior art, for example a simultaneous localization and mapping (SLAM) algorithm, which is not described in detail here.
Further, the plane determining unit 212 is specifically configured to automatically select a plane as the target plane, that is, automatically select a plane from the identified planes as the target plane; or selecting a target plane by a user, namely displaying the identified plane on a terminal screen and enabling the identified plane to be in a selectable state; and taking the selected plane as a target plane. That is, the user may select a plane by clicking or double-clicking or other preset actions, and take the plane selected by the user as the target plane.
For example, as shown in fig. 1d, the plane determining unit 212 displays recognized planes 1-3 of the real scene on the terminal screen in turn. If a plane is selected automatically, the system picks one according to a preset rule, for example the first recognized plane. If the user wants the virtual object displayed on plane 1, a click or double click on plane 1 on the terminal screen completes the selection. Once plane 1 is selected, it is displayed on the terminal screen according to its placement in the scene, as shown in fig. 1e.
In an alternative embodiment, as shown in fig. 2c, the plane and position determining module 21 comprises: a display position determination unit 213 and a position determination unit 214; the display position determining unit 213 is configured to determine a target display position of the virtual object on the terminal screen; the position determining unit 214 is configured to determine a target position according to the target display position and the target plane.
And the target display position is the display position of the virtual object on the terminal screen.
The display position determining unit 213 is specifically configured to detect a first trigger response generated on the terminal screen, and use a generation position of the first trigger response as a target display position, or receive an input of the target display position.
The first trigger response is a response generated by a trigger operation acting on the terminal screen, and may be, but is not limited to, a click response, a double-click response, or a detected preset gesture. To distinguish it from another trigger response with a different function that appears later, the trigger response here is referred to as the first trigger response, and the later one as the second trigger response.
The generation position of the first trigger response is a point on the corresponding plane of the terminal screen, and can be determined by a sensor arranged on the terminal screen.
Specifically, if the user wants to change the display position of the virtual object on the terminal screen, an operation must be performed on the screen, for example a click, a double click, or a preset gesture, to indicate the next display position of the virtual object. On receiving the operation, the terminal screen generates a trigger response whose generation position is the display position to which the user wants to move the virtual object. Since this display position is not itself the target position of the virtual object in the real scene, the target position must be determined from the trigger response so that the display position of the virtual object on the terminal screen can be located accurately.
Alternatively, the user may input the target display position through the terminal. Because a trigger operation on the terminal screen usually covers an area and is hard to localize to a single point, whereas an input target position pinpoints a point exactly, this embodiment locates the position of the virtual object more accurately than a trigger operation on the screen does, further improving the terminal display effect.
Further, the position determining unit 214 is specifically configured to: acquire a line passing through the point where the target display position is located, and take the intersection of the line and the target plane as the target position.
Wherein the line may be a straight line, a ray or a line segment.
Further, the line is perpendicular to the plane of the terminal screen.
In an alternative embodiment, the display position determination unit 213 is further configured to: before detecting the first trigger response generated on the terminal screen, detecting a second trigger response generated on the terminal screen and aiming at the virtual object, wherein the second trigger response is used for enabling the virtual object to be in a selected state;
correspondingly, the display module 22 is further configured to: and controlling the virtual object to move to the target position according to the second trigger response and the first trigger response before the virtual object is displayed on the corresponding terminal screen according to the target position and the displayed virtual object is positioned on the surface of the target plane.
The second trigger response is a response generated by a trigger operation acting on the virtual object on the terminal screen, and may be, but is not limited to, a click response, a double-click response, or a detected preset gesture.
Further, the display module 22 is specifically configured to: after the first trigger response is detected, move the virtual object directly to the target position, or drag the virtual object to the target position along a motion track.
Specifically, from the user's perspective, the display module 22 is used in the following two scenarios: in the first, the user clicks the virtual object on the terminal screen and then clicks the target position on the terminal screen, and the virtual object moves directly to the target position; in the second, the user clicks the virtual object on the terminal screen and then drags it to the target position on the screen.
In the first scenario, after the new position is clicked on the terminal screen, a line (for example, a ray) passing through the point of that position is acquired; the intersection of this line with the plane to which the virtual object is to move in the real scene is the new target position of the virtual object, and the virtual object is placed at that position. In the second scenario, a new target position is calculated continuously at a certain time interval, and the virtual object moves along the trajectory of these positions, forming a movement-trajectory animation.
Further, the display module 22 is further configured to: before the plurality of virtual objects are displayed on the corresponding terminal screen according to the target position with the displayed virtual objects located on the surface of the target plane, if one virtual object is detected to be selected, control the other virtual objects to follow the selected virtual object and move to the target position in sequence; or, when a plurality of virtual objects are detected and all of them are selected simultaneously, control the plurality of virtual objects to move to the target position simultaneously.
In an alternative embodiment, as shown in fig. 2a, the plane and position determining module 21 is further configured to: determining the initial position of the virtual object in the real scene according to the initial position of the virtual object on the terminal screen; correspondingly, the display module 22 is further configured to: and controlling the virtual object to move from the initial position to the target position before the virtual object is displayed on the corresponding terminal screen according to the target position and the displayed virtual object is positioned on the surface of the target plane.
For detailed descriptions of the working principle, the technical effect, and the like of the embodiment of the apparatus for controlling the position of the virtual object, reference may be made to the related descriptions in the foregoing embodiment of the method for controlling the position of the virtual object, and further description is omitted here.
Fig. 3 is a hardware block diagram illustrating a hardware apparatus for controlling the position movement of a virtual object according to an embodiment of the present disclosure. As shown in fig. 3, a hardware device 30 for controlling the position movement of a virtual object according to an embodiment of the present disclosure includes a memory 31 and a processor 32.
The memory 31 is used to store non-transitory computer readable instructions. In particular, memory 31 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 32 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the hardware device 30 that control the virtual object position movement to perform desired functions. In an embodiment of the present disclosure, the processor 32 is configured to execute the computer readable instructions stored in the memory 31, so that the hardware device 30 for controlling virtual object position movement executes all or part of the aforementioned steps of the method for controlling virtual object position movement according to the embodiments of the present disclosure.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, the present embodiment may also include well-known structures such as a communication bus, an interface, and the like, and these well-known structures should also be included in the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 4 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 4, a computer-readable storage medium 40 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 41 stored thereon. When executed by a processor, the non-transitory computer-readable instructions 41 perform all or part of the steps of the aforementioned method for controlling the position movement of a virtual object according to the embodiments of the present disclosure.
The computer-readable storage medium 40 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM cartridges).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 5 is a diagram illustrating a hardware structure of a terminal according to an embodiment of the present disclosure. As shown in fig. 5, the terminal 50 for controlling the position movement of the virtual object includes the above-mentioned embodiment of the apparatus for controlling the position movement of the virtual object.
The terminal may be implemented in various forms, and the terminal in the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, a vehicle-mounted terminal, a vehicle-mounted display terminal, a vehicle-mounted electronic rear view mirror, etc., and fixed terminals such as a digital TV, a desktop computer, etc.
The terminal may also include other components as equivalent alternative embodiments. As shown in fig. 5, the terminal 50 for controlling the position movement of the virtual object may include a power supply unit 51, a wireless communication unit 52, an a/V (audio/video) input unit 53, a user input unit 54, a sensing unit 55, an interface unit 56, a controller 57, an output unit 58, a memory 59, and the like. Fig. 5 shows a terminal having various components, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may alternatively be implemented.
The wireless communication unit 52 allows radio communication between the terminal 50 and a wireless communication system or network. The A/V input unit 53 is for receiving audio or video signals. The user input unit 54 may generate key input data according to commands input by the user to control various operations of the terminal. The sensing unit 55 detects the current state of the terminal 50, the position of the terminal 50, the presence or absence of a touch input to the terminal 50 by the user, the orientation of the terminal 50, acceleration or deceleration movement and direction of the terminal 50, and the like, and generates commands or signals for controlling the operation of the terminal 50. The interface unit 56 serves as an interface through which at least one external device can connect to the terminal 50. The output unit 58 is configured to provide output signals in a visual, audio, and/or tactile manner. The memory 59 may store software programs for processing and control operations performed by the controller 57, and may temporarily store data that has been output or is to be output. The memory 59 may include at least one type of storage medium. The terminal 50 may also cooperate with a network storage device that performs the storage function of the memory 59 through a network connection. The controller 57 generally controls the overall operation of the terminal. In addition, the controller 57 may include a multimedia module for reproducing or playing back multimedia data. The controller 57 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images. The power supply unit 51 receives external power or internal power and supplies the appropriate power required to operate the respective elements and components under the control of the controller 57.
Various embodiments of the method for controlling the position movement of a virtual object presented in the present disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the embodiments may be implemented in the controller 57. For a software implementation, the embodiments may be implemented with a separate software module that allows at least one function or operation to be performed. The software codes may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 59 and executed by the controller 57.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, the advantages, effects, and the like mentioned in this disclosure are merely examples, not limitations, and should not be considered essential to the various embodiments of the disclosure. Furthermore, the foregoing specific details are disclosed for purposes of illustration and description only and are not intended to be limiting, as the disclosure is not restricted to those details.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown; as those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," and "having" are open-ended terms that mean "including, but not limited to," and may be used interchangeably with that phrase. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, "such as but not limited to."
Also, as used herein, an "or" in a list of items prefaced by "at least one of" indicates a disjunctive list, such that, for example, a list of "at least one of A, B, or C" means A, or B, or C, or AB, or AC, or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent aspects of the present disclosure.
Various changes, substitutions, and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the processes, machines, manufacture, compositions of matter, means, methods, and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts presently existing or later developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (14)

1. A method of controlling the position movement of a virtual object, comprising:
determining a target plane and a target position of a virtual object in a real scene; and
displaying the virtual object on a corresponding terminal screen according to the target position, wherein the displayed virtual object is positioned on the surface of the target plane.
2. The method of claim 1, wherein the step of determining the target plane of the virtual object in the real scene comprises:
identifying planes contained in the real scene; and
selecting one of the identified planes as the target plane.
3. The method of claim 2, wherein the step of selecting one of the identified planes as the target plane comprises:
displaying the identified planes on the terminal screen in a selectable state; and
taking the selected plane as the target plane.
4. The method of claim 1, wherein the step of determining the target position of the virtual object in the real scene comprises:
determining a target display position of the virtual object on the terminal screen; and
determining the target position according to the target display position and the target plane.
5. The method of claim 4, wherein the step of determining the target position based on the target display position and the target plane comprises:
acquiring a line passing through the point at which the target display position is located; and
taking the intersection point of the line and the target plane as the target position.
6. The method according to claim 5, wherein the line is perpendicular to the plane of the terminal screen.
7. The method according to claim 4, wherein the step of determining the target display position of the virtual object on the terminal screen comprises:
detecting a first trigger response generated on the terminal screen, and taking the position at which the first trigger response is generated as the target display position; or
receiving an input target display position.
8. The method according to claim 7, wherein, before detecting the first trigger response generated on the terminal screen, the method further comprises:
detecting a second trigger response aiming at the virtual object generated on the terminal screen, wherein the second trigger response is used for enabling the virtual object to be in a selected state;
correspondingly, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further comprises:
and controlling the virtual object to move to the target position according to the second trigger response and the first trigger response.
9. The method of claim 8, wherein the step of controlling the virtual object to move to the target position based on the second trigger response and the first trigger response comprises:
after the first trigger response is detected, moving the virtual object directly to the target position, or dragging the virtual object to the target position along a motion trajectory.
10. The method according to any one of claims 1 to 7, wherein, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further comprises:
when a plurality of virtual objects are detected, if one of the virtual objects is detected as being selected, controlling the other virtual objects to follow the selected virtual object to the target position in sequence; or
when a plurality of virtual objects are detected, if all of the virtual objects are selected simultaneously, controlling the plurality of virtual objects to move to the target position simultaneously.
11. The method according to any one of claims 1-7, further comprising:
determining the initial position of the virtual object in the real scene according to the initial position of the virtual object on the terminal screen;
correspondingly, before the step of displaying the virtual object on the corresponding terminal screen according to the target position, with the displayed virtual object located on the surface of the target plane, the method further comprises:
controlling the virtual object to move from the initial position to the target position.
12. An apparatus for controlling the position movement of a virtual object, comprising:
a plane and position determining module, configured to determine a target plane and a target position of the virtual object in a real scene; and
a display module, configured to display the virtual object on a corresponding terminal screen according to the target position, wherein the displayed virtual object is positioned on the surface of the target plane.
13. A hardware apparatus for controlling position movement of a virtual object, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer-readable instructions such that, when executing them, the processor performs the method of controlling the position movement of a virtual object according to any one of claims 1-11.
14. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the method of controlling the position movement of a virtual object according to any one of claims 1-11.
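
As an editorial illustration of the geometry recited in claims 4-6, the following is a minimal sketch, not the disclosed implementation: it assumes the target display position has already been unprojected from screen coordinates to a point on the screen plane in world space, and every name in it (Vec3, intersect_ray_plane) is an assumption made for illustration. A line is cast through that point, perpendicular to the plane of the terminal screen per claim 6, and its intersection with the target plane is taken as the target position per claim 5.

```python
# Minimal sketch of the ray-plane intersection behind claims 4-6.
# All names here are illustrative assumptions, not the disclosure's API.

from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

    def sub(self, o: "Vec3") -> "Vec3":
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

    def dot(self, o: "Vec3") -> float:
        return self.x * o.x + self.y * o.y + self.z * o.z

    def add_scaled(self, o: "Vec3", t: float) -> "Vec3":
        return Vec3(self.x + t * o.x, self.y + t * o.y, self.z + t * o.z)


def intersect_ray_plane(origin: Vec3, direction: Vec3,
                        plane_point: Vec3, plane_normal: Vec3,
                        eps: float = 1e-6) -> Optional[Vec3]:
    """Intersect the line through `origin` along `direction` with the plane
    given by `plane_point` and `plane_normal`; return None when the line is
    (nearly) parallel to the plane or the hit lies behind the screen."""
    denom = plane_normal.dot(direction)
    if abs(denom) < eps:
        return None  # line parallel to the target plane: no target position
    t = plane_normal.dot(plane_point.sub(origin)) / denom
    if t < 0.0:
        return None  # intersection lies behind the terminal screen
    return origin.add_scaled(direction, t)


# Example: the touch point unprojected onto the screen plane is the origin,
# and the line runs perpendicular to the screen (claim 6). The detected
# target plane tilts toward the camera.
touch_world = Vec3(0.1, -0.2, 0.0)
screen_normal = Vec3(0.0, 0.0, 1.0)
plane_point = Vec3(0.0, 0.0, 2.0)
plane_normal = Vec3(0.0, 0.6, -0.8)  # unit normal
target = intersect_ray_plane(touch_world, screen_normal, plane_point, plane_normal)
print(target)  # ≈ Vec3(x=0.1, y=-0.2, z=1.85): target position on the plane surface
```

In practice, an AR framework's plane hit testing (for example, the hit-test facilities of ARCore or ARKit) wraps this same ray-plane computation, so an application would normally delegate to the platform rather than hand-roll the geometry.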
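
Claim 10's handling of multiple virtual objects can likewise be read as a scheduling choice: the remaining objects either follow the selected one to the target position in sequence, or all simultaneously selected objects move at once. The sketch below illustrates that choice under the same caveat: the names (plan_moves, MoveBatch) and the batch representation are assumptions for illustration, not the disclosure's API.

```python
# Illustrative sketch of the two multi-object movement modes in claim 10.
# Each inner list is a batch moved at the same time; successive batches
# move one after another. All names are assumptions made for illustration.

from typing import Dict, List, Sequence, Tuple

Point = Tuple[float, float, float]
MoveBatch = List[Tuple[str, Point]]


def plan_moves(objects: Sequence[str], selected: Sequence[str],
               target: Point) -> List[MoveBatch]:
    if set(selected) == set(objects):
        # All virtual objects selected simultaneously: one batch,
        # moved to the target position at the same time.
        return [[(obj, target) for obj in objects]]
    # One object selected: it moves first, and the remaining objects
    # follow it to the target position in sequence.
    leader = selected[0]
    followers = [obj for obj in objects if obj != leader]
    return [[(leader, target)]] + [[(f, target)] for f in followers]


positions: Dict[str, Point] = {"cube": (0, 0, 0), "sphere": (1, 0, 0), "cone": (2, 0, 0)}
for batch in plan_moves(list(positions), ["cube"], (0.1, -0.2, 1.85)):
    for name, pos in batch:
        positions[name] = pos  # one animation step per batch in a real renderer
print(positions)  # every object ends at the target position
```

Each inner batch would correspond to one animation step in a real renderer; representing the plan as ordered batches is what distinguishes the sequential follow mode from the simultaneous move.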
CN201810907342.XA 2018-08-09 2018-08-09 Method, apparatus and computer-readable storage medium for controlling position movement of virtual object Pending CN110825280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810907342.XA CN110825280A (en) 2018-08-09 2018-08-09 Method, apparatus and computer-readable storage medium for controlling position movement of virtual object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810907342.XA CN110825280A (en) 2018-08-09 2018-08-09 Method, apparatus and computer-readable storage medium for controlling position movement of virtual object

Publications (1)

Publication Number Publication Date
CN110825280A true CN110825280A (en) 2020-02-21

Family

ID=69541149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810907342.XA Pending CN110825280A (en) 2018-08-09 2018-08-09 Method, apparatus and computer-readable storage medium for controlling position movement of virtual object

Country Status (1)

Country Link
CN (1) CN110825280A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104081317A (en) * 2012-02-10 2014-10-01 索尼公司 Image processing device, and computer program product
CN105264461A (en) * 2013-05-13 2016-01-20 微软技术许可有限责任公司 Interactions of virtual objects with surfaces
US20150243103A1 (en) * 2013-11-27 2015-08-27 Magic Leap, Inc. Rendering dark virtual objects as blue to facilitate viewing augmented or virtual reality
WO2017139509A1 (en) * 2016-02-12 2017-08-17 Purdue Research Foundation Manipulating 3d virtual objects using hand-held controllers
CN105843480A (en) * 2016-03-29 2016-08-10 乐视控股(北京)有限公司 Desktop icon adjustment method and apparatus
CN105808071A (en) * 2016-03-31 2016-07-27 联想(北京)有限公司 Display control method and device and electronic equipment
CN107358609A (en) * 2016-04-29 2017-11-17 成都理想境界科技有限公司 A kind of image superimposing method and device for augmented reality
US20180053338A1 (en) * 2016-08-19 2018-02-22 Gribbing Oy Method for a user interface
CN107978019A (en) * 2016-10-21 2018-05-01 财团法人资讯工业策进会 Augmented reality system and method
CN107688426A (en) * 2017-08-07 2018-02-13 网易(杭州)网络有限公司 The method and apparatus for choosing target object
CN107610269A (en) * 2017-09-12 2018-01-19 国网上海市电力公司 A kind of power network big data intelligent inspection system and its intelligent polling method based on AR
CN107678652A (en) * 2017-09-30 2018-02-09 网易(杭州)网络有限公司 To the method for controlling operation thereof and device of target object

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
1024工场: "ARCore: New Concepts Brought by ARCore", CSDN Blog, https://blog.csdn.net/p106786860/article/details/78533538 *
AR科技君: "Google Official AR Design Guide", http://www.woshipm.com/pd/1195054.html *
AR科技君: "Google Official AR Design Guide", Woshipm (Everyone Is a Product Manager), http://www.woshipm.com/pd/1195054.html *
Wu Xiaojun (吴筱军): "unity3d: Moving an Object to the Clicked Position", Cnblogs, https://www.cnblogs.com/wrbxdj/p/5683195.html *
帅帅家的人工智障: "[ARCore] Google Releases a Super-Cool ARCore Introduction Video", Bilibili, https://www.bilibili.com/video/BV1BX411T7A4?from=search&seid=11829546379415779316 *
Zhang Jiafu (张嘉夫): "ARKit from Zero to One: Writing an AR Cube, Plane Detection and Visual Effects, Placing Geometry and Applying Physics", https://juejin.im/entry/6844903623000850439 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022088523A1 (en) * 2020-11-02 2022-05-05 网易(杭州)网络有限公司 Object moving method and apparatus, and storage medium and electronic apparatus

Similar Documents

Publication Publication Date Title
US8839136B2 (en) Method of controlling virtual object or view point on two dimensional interactive display
JP6133972B2 (en) 3D graphic user interface
US9612736B2 (en) User interface method and apparatus using successive touches
US10796445B2 (en) Method and device for detecting planes and/or quadtrees for use as a virtual substrate
US11941181B2 (en) Mechanism to provide visual feedback regarding computing system command gestures
US9633412B2 (en) Method of adjusting screen magnification of electronic device, machine-readable storage medium, and electronic device
US11604580B2 (en) Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
EP3540586A1 (en) Method and apparatus for providing a changed shortcut icon corresponding to a status thereof
AU2016200885B2 (en) Three-dimensional virtualization
WO2015149375A1 (en) Device, method, and graphical user interface for managing multiple display windows
EP2610840A2 (en) Device, method, and graphical user interface for manipulating a three-dimensional map view based on a device orientation
US20150063785A1 (en) Method of overlappingly displaying visual object on video, storage medium, and electronic device
KR102627191B1 (en) Portable apparatus and method for controlling a screen
US9792183B2 (en) Method, apparatus, and recording medium for interworking with external terminal
US20150067615A1 (en) Method, apparatus, and recording medium for scrapping content
KR101949493B1 (en) Method and system for controlling play of multimeida content
US10073612B1 (en) Fixed cursor input interface for a computer aided design application executing on a touch screen device
US10346033B2 (en) Electronic device for processing multi-touch input and operating method thereof
CN110827412A (en) Method, apparatus and computer-readable storage medium for adapting a plane
CN110825280A (en) Method, apparatus and computer-readable storage medium for controlling position movement of virtual object
US11340776B2 (en) Electronic device and method for providing virtual input tool
CN110827413A (en) Method, apparatus and computer-readable storage medium for controlling a change in a virtual object form
CN110825279A (en) Method, apparatus and computer readable storage medium for inter-plane seamless handover
JP2015007844A (en) User interface device, user interface method, and program
US20220342525A1 (en) Pushing device and method of media resource, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination