CN110825279A - Method, apparatus and computer readable storage medium for inter-plane seamless handover - Google Patents

Method, apparatus and computer readable storage medium for inter-plane seamless handover Download PDF

Info

Publication number
CN110825279A
CN110825279A CN201810900513.6A CN201810900513A CN110825279A CN 110825279 A CN110825279 A CN 110825279A CN 201810900513 A CN201810900513 A CN 201810900513A CN 110825279 A CN110825279 A CN 110825279A
Authority
CN
China
Prior art keywords
plane
virtual object
target
terminal screen
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810900513.6A
Other languages
Chinese (zh)
Inventor
刘昂
陈怡�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201810900513.6A priority Critical patent/CN110825279A/en
Priority to PCT/CN2019/073079 priority patent/WO2020029555A1/en
Publication of CN110825279A publication Critical patent/CN110825279A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure discloses a method for inter-plane seamless handover, an apparatus for inter-plane seamless handover, a hardware apparatus for inter-plane seamless handover, and a computer readable storage medium. The method for seamless switching between the planes comprises the steps of determining a target plane and a target position of a virtual object in a real scene; and adjusting the placing posture of the virtual object at the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane. According to the embodiment of the method and the device, the target plane and the target position of the virtual object in the real scene are determined, then the placing posture of the virtual object is adjusted on the target position, the virtual object is displayed on the terminal screen, and the displayed virtual object is adaptive to the target plane, so that the situation that the virtual object is suspended on the plane or embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.

Description

Method, apparatus and computer readable storage medium for inter-plane seamless handover
Technical Field
The present disclosure relates to the field of information technology, and in particular, to a method and an apparatus for inter-plane seamless handover, and a computer-readable storage medium.
Background
Augmented Reality (AR) is a technology for calculating the position and angle of a camera image in real time and adding corresponding images, videos and virtual objects, and aims to sleeve a virtual world on a screen in the real world and interact with the virtual world.
The method for realizing the augmented reality technology is to put a virtual object in a real scene, namely, a real environment and the virtual object are superposed on the same picture or space in real time. After the virtual object is overlaid, the virtual object moves according to a preset motion track, or the virtual object is controlled to perform a preset action through the control.
Currently, in an existing augmented reality scene, a virtual object is usually placed on a plane in a real scene, for example, on a desktop, and the placed virtual object can be controlled to move among multiple planes. However, when moving, the virtual object is not always located on the plane surface or is out of scale due to different planes and different positions.
For example, as shown in fig. 1, the black ball is originally on a plane and then moves out of the plane, but after moving out of the plane, the black ball still lies on the extended plane of the plane, and if another plane has a height difference with the plane, the black ball may be suspended, embedded or blocked, thereby affecting the display effect.
Disclosure of Invention
The technical problem solved by the present disclosure is to provide a method for seamless switching between planes, so as to at least partially solve the technical problem of how to improve the display effect of a virtual object on a terminal. In addition, an inter-plane seamless handover apparatus, an inter-plane seamless handover hardware apparatus, a computer readable storage medium, and an inter-plane seamless handover terminal are provided.
In order to achieve the above object, according to one aspect of the present disclosure, the following technical solutions are provided:
a method of inter-plane seamless handover, comprising:
determining a target plane and a target position of a virtual object in a real scene;
and adjusting the placing posture of the virtual object at the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
Further, the method further comprises:
controlling the virtual object to move on an initial plane;
and if the position of the virtual object is judged to exceed the initial plane, triggering and executing the operation of determining the target plane and the target position of the virtual object in the real scene.
Further, the step of determining the target position of the virtual object in the real scene includes:
determining a target display position of the virtual object on a terminal screen;
and determining the target position according to the target display position and the target plane.
Further, the step of determining the target position according to the target display position and the target plane includes:
acquiring a line passing through a point where the target display position is located;
and taking the intersection point of the line and the target plane as the target position.
Further, the step of adjusting the placing posture of the virtual object, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane includes:
determining the length of a line segment formed between the intersection point of the target display position and the target plane;
determining the zoom degree of the virtual object according to the length and the length of the line segment recorded at the previous time;
and controlling the virtual object to zoom according to the zooming degree, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
Further, the line is perpendicular to a plane where the terminal screen is located.
Further, the step of determining the target display position of the virtual object on the terminal screen includes:
detecting a trigger response generated on the terminal screen, and taking the generation position of the trigger response as the target display position;
or, receiving an input target display position.
Further, the step of determining a target plane of the virtual object in the real scene includes:
identifying a plane contained in the real scene;
selecting one of the identified planes as the target plane.
Further, the step of selecting one of the identified planes as the target plane includes:
displaying the identified plane on the terminal screen, and enabling the identified plane to be in a selectable state;
and taking the selected plane as the target plane.
In order to achieve the above object, according to another aspect of the present disclosure, there is provided a method for displaying a virtual object on a terminal screen, wherein the method comprises:
an apparatus for inter-plane seamless handover, comprising:
the plane and position determining module is used for determining a target plane and a target position of the virtual object in the real scene;
and the plane switching module is used for adjusting the placing posture of the virtual object on the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
Further, the apparatus further comprises:
the control movement module is used for controlling the virtual object to move on the initial plane;
and the position determination module is used for triggering and executing the operation of determining the target plane and the target position of the virtual object in the real scene if the position of the virtual object is determined to exceed the initial plane.
Further, the plane and position determining module comprises:
the display position determining unit is used for determining the target display position of the virtual object on the terminal screen;
and the target position determining unit is used for determining the target position according to the target display position and the target plane.
Further, the target position determination unit is specifically configured to:
acquiring a line passing through a point where the target display position is located; and taking the intersection point of the line and the target plane as the target position.
Further, the plane switching module is specifically configured to:
determining the length of a line segment formed between the intersection point of the target display position and the target plane; determining the zoom degree of the virtual object according to the length and the length of the line segment recorded at the previous time; and controlling the virtual object to zoom according to the zooming degree, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
Further, the line is perpendicular to a plane where the terminal screen is located.
Further, the display position determination unit is specifically configured to:
detecting a trigger response generated on the terminal screen, and taking the generation position of the trigger response as the target display position; or, receiving an input target display position.
Further, the plane and position determining module comprises:
a plane recognition unit for recognizing a plane included in the real scene;
a plane determination unit for selecting one plane from the identified planes as the target plane.
Further, the plane determining unit is specifically configured to:
displaying the identified plane on the terminal screen, and enabling the identified plane to be in a selectable state; and taking the selected plane as the target plane.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
an inter-plane seamless handover hardware apparatus, comprising:
a memory for storing non-transitory computer readable instructions; and
and the processor is used for executing the computer readable instructions, so that the processor realizes the steps in any one of the above technical solutions of the inter-plane seamless handover method when executing.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a computer readable storage medium for storing non-transitory computer readable instructions, which when executed by a computer, cause the computer to perform the steps of any of the above-described inter-plane seamless handover method aspects.
In order to achieve the above object, according to still another aspect of the present disclosure, the following technical solutions are also provided:
a terminal for seamless switching between planes comprises any one of the above devices for seamless switching between planes.
The embodiment of the disclosure provides a method for inter-plane seamless switching, an inter-plane seamless switching device, an inter-plane seamless switching hardware device, a computer readable storage medium and an inter-plane seamless switching terminal. The method for seamless switching between the planes comprises the steps of determining a target plane and a target position of a virtual object in a real scene; and adjusting the placing posture of the virtual object at the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane. According to the embodiment of the method and the device, the target plane and the target position of the virtual object in the real scene are determined, then the placing posture of the virtual object is adjusted on the target position, the virtual object is displayed on the terminal screen, and the displayed virtual object is adaptive to the target plane, so that the situation that the virtual object is suspended on the plane or embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.
The foregoing is a summary of the present disclosure, and for the purposes of promoting a clear understanding of the technical means of the present disclosure, the present disclosure may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
Drawings
FIG. 1 is a schematic view of a pose for controlling a virtual object to reach a target position according to the prior art;
fig. 2a is a flow chart illustrating a method of inter-plane seamless handover according to an embodiment of the present disclosure;
fig. 2b is a schematic diagram illustrating a pose of a virtual object after the virtual object reaches a target plane in the inter-plane seamless handover method according to an embodiment of the disclosure;
fig. 2c is a schematic diagram illustrating a pose of a virtual object after the virtual object reaches a target plane in a method of inter-plane seamless handover according to another embodiment of the present disclosure;
FIG. 2d is a schematic diagram of an alternative inter-plane state in the method for inter-plane seamless handover according to the embodiment shown in FIG. 2 a;
FIG. 2e is a diagram illustrating selected planar states in the method for inter-plane seamless handover according to the embodiment shown in FIG. 2 a;
FIG. 3a is a schematic structural diagram of an apparatus for inter-plane seamless handover according to an embodiment of the present disclosure;
FIG. 3b is a schematic structural diagram of an inter-plane seamless handover apparatus according to another embodiment of the present disclosure;
FIG. 4 is a block diagram of a hardware device for inter-plane seamless handover according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium according to one embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a terminal for inter-plane seamless handover according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
In order to solve the technical problem of how to improve the user experience effect, the embodiments of the present disclosure provide a method for seamless handover between planes. As shown in fig. 2a, the method for inter-plane seamless handover mainly includes the following steps S1 to S2. Wherein:
step S1: and determining a target plane and a target position of the virtual object in the real scene.
The execution body of this embodiment may be selected as a device for inter-plane seamless handover provided by the embodiment of the present disclosure, or a hardware device for inter-plane seamless handover provided by the embodiment of the present disclosure, or a terminal for inter-plane seamless handover provided by the embodiment of the present disclosure.
The virtual object can be selected as a three-dimensional model of a real object in a scene.
The target plane is a plane to which a virtual object is to move in the real scene, and the plane is a surface of an entity located in the real scene, such as, but not limited to, a desktop or a wall surface. The target position is a position to which the virtual object is to move in the real scene, and the target position is on the target plane.
Step S2: and adjusting the placing posture of the virtual object at the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to a target plane.
The terminal may be, but is not limited to, a mobile terminal (e.g., iPhone, smartphone, tablet, etc.), or a fixed terminal (e.g., desktop computer).
Specifically, after a target plane and a target position of a virtual object in a real scene are determined, the virtual object is controlled to move to the target position, the placing posture of the virtual object is adjusted, the virtual object is displayed on a terminal screen, and the displayed virtual object is adapted to the target plane. Specifically, the virtual object displayed on the terminal screen can be ensured to be adapted to the target plane by controlling the virtual object to rotate and/or zoom.
Also taking fig. 1 as an example, it can be seen from fig. 1 that the initial pose of the black ball is the surface located on the plane, and after moving to the target plane by the method of this embodiment, the pose thereof is as shown in fig. 2b, that is, the surface still located on the target plane after moving to the target plane.
By adopting the technical scheme, the target plane and the target position of the virtual object in the real scene are determined, the placing posture of the virtual object is adjusted at the target position, the virtual object is displayed on the terminal screen, and the displayed virtual object is adaptive to the target plane, so that the situation that the virtual object is suspended on the plane or embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, as shown in fig. 2c, the method of this embodiment further includes:
s01: and controlling the virtual object to move on the initial plane.
Wherein, the initial plane can be selected by the user and is the plane where the virtual object is initially placed.
Specifically, the virtual object can be controlled to move on the initial plane by controlling the terminal screen or the terminal. For example, movement of the virtual object may be controlled using a movement control, such as a virtual manipulation button to effect movement of the virtual object; the movement of the virtual object can also be controlled by sliding the finger on the terminal screen; the movement of the virtual object can also be controlled by directly moving the terminal, and the virtual object is always positioned in the center of the terminal screen, and the mobile terminal is equivalent to a mobile virtual model.
In the process of moving the virtual object, the new position may not be calculated, and the virtual object is only moved on the plane, where the movement may be a movement on the plane in proportion to the movement control, the finger movement, or the terminal movement; the new position can also be directly calculated, and the virtual object can be moved to the new position.
S02: and if the position of the virtual object exceeds the initial plane, triggering and executing the operation of determining the target plane and the target position of the virtual object in the real scene.
Specifically, the edge contour position of the initial plane is recorded in advance, and whether the position of the virtual object completely exceeds the initial plane is further judged. The method can be implemented by using the identification method of the extended plane edge contour in the prior art, for example, identification by using feature points or texture, and the like. Once the position of the virtual object is determined to exceed the initial plane, the target plane and the target position of the virtual object in the real scene are determined.
By adopting the above technical scheme, the position of the virtual object is judged whether to exceed the initial plane, if the position exceeds the initial plane, the execution is triggered to determine the target plane and the target position of the virtual object in the real scene, then the placing posture of the virtual object is adjusted on the target position, the virtual object is displayed on the terminal screen, and the displayed virtual object is adapted to the target plane, so that the situation that the virtual object is suspended on the plane or embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, the step of determining the target plane of the virtual object in the real scene in step S1 includes:
s11: planes contained in a real scene are identified.
Among them, the real scene may include one or more planes. The plane included in the real scene can be identified by using a corresponding algorithm, which can be implemented by using the prior art, for example, a simultaneous localization and mapping (SLAM) algorithm, which is not described herein again.
S12: one of the identified planes is selected as a target plane.
Further, step S12 can be implemented in two ways:
in the first way, one plane is automatically selected as the target plane, i.e., one plane is automatically selected as the target plane from among the identified planes.
In the second mode, a user selects a target plane, namely, the identified plane is displayed on a terminal screen and is in a selectable state; and taking the selected plane as a target plane. That is, the user may select a plane by clicking or double-clicking or other preset actions, and take the plane selected by the user as the target plane.
Illustratively, as shown in fig. 2d, the planes 1-3 in the identified real scene are sequentially displayed on the terminal screen, and if the user wants to display the virtual object on the plane 1, the user only needs to click or double click the plane 1 on the terminal screen to complete the selection operation. When the plane 1 is selected, it is displayed on the terminal screen according to its placement in the display scene, as shown in fig. 2 e.
In an alternative embodiment, the step of determining the target position of the virtual object in the real scene in step S1 includes:
s13: and determining the target display position of the virtual object on the terminal screen.
And the target display position is the display position of the virtual object on the terminal screen.
S14: and determining the target position according to the target display position and the target plane.
Further, the step S13 may obtain the target display position in two ways: the first mode is as follows: and detecting a trigger response generated on a terminal screen, and taking the generation position of the trigger response as a target display position.
The trigger response is a response generated by a trigger operation acting on the terminal screen, and may be, but is not limited to, a click response generated for the terminal screen, or a double-click response, or a detected preset gesture action.
The generation position of the trigger response is a point on the plane corresponding to the terminal screen, and can be determined by a sensor arranged on the terminal screen.
Specifically, if the user wants to change the display position of the virtual object on the terminal screen, an operation needs to be performed on the terminal screen, for example, by clicking, or double clicking, or making a preset gesture motion on the terminal screen to determine the next display position of the virtual object. The terminal screen generates a trigger response after receiving the operation, wherein the generation position of the trigger response is the display position where the user wants to move the virtual object, and the display position is not the target position of the virtual object in the real scene, so that the target position of the virtual object in the real scene needs to be determined according to the trigger response, and the display position of the virtual object on the terminal screen can be accurately positioned.
In the second mode, an input target display position is received.
Specifically, the user can input the target display position through the terminal, and because the trigger operation of the user on the terminal screen is often a trigger area and is difficult to locate to a point, the point can be accurately located by inputting the target position, therefore, compared with the trigger operation of the user on the terminal screen, the position of the virtual object can be more accurately located, and the terminal display effect is further improved.
Further, step S14 includes:
acquiring a line passing through a point where the target display position is located;
the intersection of the line and the target plane is taken as the target position.
Wherein the line may be a straight line, a ray or a line segment.
Further, the line is perpendicular to the plane of the terminal screen.
Further, step S2 may include:
s21: the length of a line segment formed between the intersection of the target display position and the target plane is determined.
S22: the degree of scaling of the virtual object is determined from the length and the length of the segment previously recorded.
Specifically, the length of the previous line segment may be recorded in advance, for example, a buffer may be provided for recording the length of the line segment, and when the plane of the virtual object changes, the new line segment length and the previous line segment length recorded in the buffer are used to calculate the scaling.
S23: and controlling the virtual object to zoom according to the zooming degree, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
It will be appreciated by those skilled in the art that obvious modifications (e.g., combinations of the enumerated modes) or equivalents may be made to the above-described embodiments.
In the above, although the steps in the embodiment of the method for inter-plane seamless handover are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiment of the present disclosure are not necessarily performed in the above sequence, and may also be performed in other sequences such as reverse, parallel, and cross, and further, on the basis of the above steps, those skilled in the art may also add other steps, and these obvious modifications or equivalents should also be included in the protection scope of the present disclosure, and are not described herein again.
For convenience of description, only the relevant parts of the embodiments of the present disclosure are shown, and details of the specific techniques are not disclosed, please refer to the embodiments of the method of the present disclosure.
In order to solve the technical problem of how to improve the user experience effect, the embodiments of the present disclosure provide an inter-plane seamless handover apparatus. The apparatus may perform the steps in the above-described method embodiments of inter-plane seamless handover. As shown in fig. 3a, the apparatus mainly comprises: a plane and position determination module 21 and a plane switching module 22; the plane and position determining module 21 is configured to determine a target plane and a target position of a virtual object in a real scene; the plane switching module 22 is configured to adjust the placing posture of the virtual object at the target position, and display the virtual object on the terminal screen, where the displayed virtual object is adapted to the target plane.
The virtual object can be selected as a three-dimensional model of a real object in a scene.
The target plane is a plane to which a virtual object is to move in the real scene, and the plane is a surface of an entity located in the real scene, such as, but not limited to, a desktop or a wall surface. The target position is a position to which the virtual object is to move in the real scene, and the target position is on the target plane.
Specifically, after the plane and position determining module 21 determines the target plane and the target position of the virtual object in the real scene, the plane switching module 22 adjusts the placing posture of the virtual object at the target position, displays the virtual object on the terminal screen, and the displayed virtual object is adapted to the target plane. Specifically, the virtual object displayed on the terminal screen can be ensured to be adapted to the target plane by controlling the virtual object to rotate and/or zoom.
Also taking fig. 1 as an example, it can be seen from fig. 1 that the initial pose of the black ball is the surface located on the plane, and after moving to the target plane by the method of this embodiment, the pose thereof is as shown in fig. 2b, that is, the surface still located on the target plane after moving to the target plane.
By adopting the above technical scheme, the target plane and the target position of the virtual object in the real scene are determined by the plane and position determining module 21, then the placing posture of the virtual object is adjusted on the target position by the plane switching module 22, the virtual object is displayed on the terminal screen, and the displayed virtual object is adapted to the target plane, so that the situation that the virtual object floats on the plane or is embedded into the plane when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, as shown in fig. 3b, the apparatus of this embodiment further includes: a control movement module 23 and a position determination module 24; the control moving module 23 is configured to control the virtual object to move on the initial plane; the position determination module 24 is configured to trigger execution of an operation of determining a target plane and a target position of the virtual object in the real scene if it is determined that the position of the virtual object exceeds the initial plane.
Wherein, the initial plane can be selected by the user and is the plane where the virtual object is initially placed.
Specifically, the control moving module 23 may control the virtual object to move on the initial plane by controlling the terminal screen or the terminal. For example, movement of the virtual object may be controlled using a movement control, such as a virtual manipulation button to effect movement of the virtual object; the movement of the virtual object can also be controlled by sliding the finger on the terminal screen; the movement of the virtual object can also be controlled by directly moving the terminal, and the virtual object is always positioned in the center of the terminal screen, and the mobile terminal is equivalent to a mobile virtual model.
In the process of moving the virtual object, the new position may not be calculated, and the virtual object is only moved on the plane, where the movement may be a movement on the plane in proportion to the movement control, the finger movement, or the terminal movement; the new position can also be directly calculated, and the virtual object can be moved to the new position.
Specifically, the position determining module 24 records the edge contour position of the initial plane in advance, and then determines whether the position of the virtual object completely exceeds the initial plane. The method can be implemented by using the identification method of the extended plane edge contour in the prior art, for example, identification by using feature points or texture, and the like. Once the position of the virtual object is determined to exceed the initial plane, the target plane and the target position of the virtual object in the real scene are determined.
By adopting the above technical scheme, the position determination module 24 determines whether the position of the virtual object exceeds the initial plane, if the position exceeds the initial plane, the execution plane and position determination module 21 is triggered to determine the target plane and the target position of the virtual object in the real scene, then the placement posture of the virtual object is adjusted at the target position through the plane switching module 22, the virtual object is displayed on the terminal screen, and the displayed virtual object adapts to the target plane, so that the situation that the virtual object floats on the plane or is incorrect in the plane posture when moving can be avoided, and the display effect of the terminal is improved.
In an alternative embodiment, the plane and position determining module 21 includes: a plane recognition unit 211 and a plane determination unit 212; the plane recognition unit 211 is configured to recognize a plane included in a real scene; the plane determination unit 212 is configured to select one of the identified planes as a target plane.
Among them, the real scene may include one or more planes. The plane included in the real scene can be identified by using a corresponding algorithm, which can be implemented by using the prior art, for example, a simultaneous localization and mapping (SLAM) algorithm, which is not described herein again.
Further, the plane determining unit 212 is specifically configured to: automatically selecting a plane as a target plane, namely automatically selecting a plane from the identified planes as the target plane; or, the user selects the target plane, namely, the identified plane is displayed on the terminal screen and is in a selectable state; and taking the selected plane as a target plane. That is, the user may select a plane by clicking or double-clicking or other preset actions, and take the plane selected by the user as the target plane.
For example, as shown in fig. 2d, the plane determining unit 212 sequentially displays the planes 1-3 in the identified real scene on the terminal screen, and if the user wants to display the virtual object on the plane 1, the user only needs to click or double click on the plane 1 on the terminal screen to complete the selecting operation. When the plane 1 is selected, it is displayed on the terminal screen according to its placement in the display scene, as shown in fig. 2 e.
In an alternative embodiment, the plane and position determining module 21 includes: a display position determination unit 213 and a target position determination unit 214; the display position determining unit 213 is configured to determine a target display position of the virtual object on the terminal screen; the target position determining unit 214 is configured to determine a target position according to the target display position and the target plane.
And the target display position is the display position of the virtual object on the terminal screen.
Further, the target position determining unit 214 is specifically configured to: and detecting a trigger response generated on a terminal screen, and taking the generation position of the trigger response as a target display position.
The trigger response is a response generated by a trigger operation acting on the terminal screen, and may be, but is not limited to, a click response generated for the terminal screen, or a double-click response, or a detected preset gesture action.
The generation position of the trigger response is a point on the plane corresponding to the terminal screen, and can be determined by a sensor arranged on the terminal screen.
Specifically, if the user wants to change the display position of the virtual object on the terminal screen, an operation needs to be performed on the terminal screen, for example, by clicking, or double clicking, or making a preset gesture motion on the terminal screen to determine the next display position of the virtual object. The terminal screen generates a trigger response after receiving the operation, wherein the generation position of the trigger response is the display position where the user wants to move the virtual object, and the display position is not the target position of the virtual object in the real scene, so that the target position of the virtual object in the real scene needs to be determined according to the trigger response, and the display position of the virtual object on the terminal screen can be accurately positioned.
Or, the target position determining unit 214 is specifically configured to: an input target display position is received.
Specifically, the user can input the target display position through the terminal, and because the trigger operation of the user on the terminal screen is often a trigger area and is difficult to locate to a point, the point can be accurately located by inputting the target position, therefore, compared with the trigger operation of the user on the terminal screen, the position of the virtual object can be more accurately located, and the terminal display effect is further improved.
Further, the target position determining unit 214 is specifically configured to:
acquiring a line passing through a point where the target display position is located; the intersection of the line and the target plane is taken as the target position.
Wherein the line may be a straight line, a ray or a line segment.
Further, the line is perpendicular to the plane of the terminal screen.
Further, the plane switching module 22 is specifically configured to:
determining the length of a line segment formed between the intersection point of the target display position and the target plane; determining the zoom degree of the virtual object according to the length and the length of the line segment recorded at the previous time; and controlling the virtual object to zoom according to the zooming degree, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to a target plane.
Specifically, the length of the previous line segment may be recorded in advance, for example, a buffer may be provided for recording the length of the line segment, and when the plane of the virtual object changes, the new line segment length and the previous line segment length recorded in the buffer are used to calculate the scaling.
For detailed descriptions of the working principle, the technical effect of the implementation, and the like of the embodiment of the apparatus for inter-plane seamless handover, reference may be made to the related descriptions in the foregoing embodiment of the inter-plane seamless handover method, and further description is omitted here.
Fig. 4 is a hardware block diagram illustrating a hardware apparatus for inter-plane seamless handover according to an embodiment of the present disclosure. As shown in fig. 4, the hardware apparatus 30 for inter-plane seamless handover according to the embodiment of the present disclosure includes a memory 31 and a processor 32.
The memory 31 is used to store non-transitory computer readable instructions. In particular, memory 31 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 32 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the hardware device 30 for inter-plane seamless switching to perform desired functions. In an embodiment of the present disclosure, the processor 32 is configured to execute the computer readable instructions stored in the memory 31, so that the hardware device 30 for inter-plane seamless handover performs all or part of the aforementioned steps of the method for inter-plane seamless handover according to the embodiments of the present disclosure.
Those skilled in the art should understand that, in order to solve the technical problem of how to obtain a good user experience, the present embodiment may also include well-known structures such as a communication bus, an interface, and the like, and these well-known structures should also be included in the protection scope of the present disclosure.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 5 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in fig. 5, a computer-readable storage medium 40, having non-transitory computer-readable instructions 41 stored thereon, in accordance with an embodiment of the present disclosure. When executed by a processor, the non-transitory computer readable instructions 41 perform all or part of the steps of the aforementioned method for matching video features according to the embodiments of the present disclosure.
The computer-readable storage medium 40 includes, but is not limited to: optical storage media (e.g., CD-ROMs and DVDs), magneto-optical storage media (e.g., MOs), magnetic storage media (e.g., magnetic tapes or removable disks), media with built-in rewritable non-volatile memory (e.g., memory cards), and media with built-in ROMs (e.g., ROM cartridges).
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
Fig. 6 is a diagram illustrating a hardware structure of a terminal according to an embodiment of the present disclosure. As shown in fig. 6, the inter-plane seamless handover terminal 50 includes the above-mentioned inter-plane seamless handover apparatus embodiment.
The terminal may be implemented in various forms, and the terminal in the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, a vehicle-mounted terminal, a vehicle-mounted display terminal, a vehicle-mounted electronic rear view mirror, etc., and fixed terminals such as a digital TV, a desktop computer, etc.
The terminal may also include other components as equivalent alternative embodiments. As shown in fig. 5, the inter-plane seamless handover terminal 50 may include a power supply unit 51, a wireless communication unit 52, an a/V (audio/video) input unit 53, a user input unit 54, a sensing unit 55, an interface unit 56, a controller 57, an output unit 58, a memory 59, and the like. Fig. 5 shows a terminal having various components, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may alternatively be implemented.
The wireless communication unit 52 allows, among other things, radio communication between the terminal 50 and a wireless communication system or network. The a/V input unit 53 is for receiving an audio or video signal. The user input unit 54 may generate key input data according to a command input by a user to control various operations of the terminal. The sensing unit 55 detects a current state of the terminal 50, a position of the terminal 50, presence or absence of a touch input of the terminal 50 by a user, an orientation of the terminal 50, acceleration or deceleration movement and direction of the terminal 50, and the like, and generates a command or signal for controlling an operation of the terminal 50. The interface unit 56 serves as an interface through which at least one external device is connected to the terminal 50. The output unit 58 is configured to provide output signals in a visual, audio, and/or tactile manner. The memory 59 may store software programs or the like for processing and controlling operations performed by the controller 55, or may temporarily store data that has been output or is to be output. The memory 59 may include at least one type of storage medium. Also, the terminal 50 may cooperate with a network storage device that performs a storage function of the memory 59 through a network connection. The controller 57 generally controls the overall operation of the terminal. In addition, the controller 57 may include a multimedia module for reproducing or playing back multimedia data. The controller 57 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image. The power supply unit 51 receives external power or internal power and supplies appropriate power required to operate the respective elements and components under the control of the controller 57.
Various embodiments of the video feature comparison method presented in the present disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For a hardware implementation, various embodiments of the comparison method of video features proposed by the present disclosure may be implemented by using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, various embodiments of the comparison method of video features proposed by the present disclosure may be implemented in the controller 57. For software implementation, various embodiments of the video feature comparison method presented in the present disclosure may be implemented with a separate software module that allows at least one function or operation to be performed. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in memory 59 and executed by controller 57.
For the detailed description of the present embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated herein.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, systems referred to in this disclosure are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, devices, systems may be connected, arranged, configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "as used herein mean, and are used interchangeably with, the word" and/or, "unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
Also, as used herein, "or" as used in a list of items beginning with "at least one" indicates a separate list, such that, for example, a list of "A, B or at least one of C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A method for inter-plane seamless handover, comprising:
determining a target plane and a target position of a virtual object in a real scene;
and adjusting the placing posture of the virtual object at the target position, displaying the virtual object on a terminal screen, and enabling the displayed virtual object to adapt to the target plane.
2. The method of claim 1, further comprising:
controlling the virtual object to move on an initial plane;
and if the position of the virtual object is judged to exceed the initial plane, triggering and executing the operation of determining the target plane and the target position of the virtual object in the real scene.
3. The method of claim 1, wherein the step of determining the target position of the virtual object in the real scene comprises:
determining a target display position of the virtual object on a terminal screen;
and determining the target position according to the target display position and the target plane.
4. The method of claim 3, wherein the step of determining a target position based on the target display position and the target plane comprises:
acquiring a line passing through a point where the target display position is located;
and taking the intersection point of the line and the target plane as the target position.
5. The method according to claim 4, wherein the step of adjusting the placement pose of the virtual object and displaying the virtual object on the terminal screen so that the displayed virtual object adapts to the target plane comprises:
determining the length of the line segment formed between the target display position and the intersection point on the target plane;
determining a zoom degree of the virtual object according to the determined length and the segment length recorded at the previous time; and
controlling the virtual object to zoom according to the zoom degree and displaying the virtual object on the terminal screen, so that the displayed virtual object adapts to the target plane.
6. The method according to claim 4, wherein the line is perpendicular to the plane in which the terminal screen lies.
7. The method according to claim 3, wherein the step of determining the target display position of the virtual object on the terminal screen comprises:
detecting a trigger response generated on the terminal screen and taking the position at which the trigger response is generated as the target display position; or
receiving an input target display position.
8. The method according to any one of claims 1-7, wherein the step of determining the target plane of the virtual object in the real scene comprises:
identifying the planes contained in the real scene; and
selecting one of the identified planes as the target plane.
9. The method of claim 8, wherein the step of selecting one of the identified planes as the target plane comprises:
displaying the identified planes on the terminal screen in a selectable state; and
taking the selected plane as the target plane.
10. An apparatus for inter-plane seamless handover, comprising:
a plane and position determining module, configured to determine a target plane and a target position of a virtual object in a real scene; and
a plane switching module, configured to adjust a placement pose of the virtual object at the target position and display the virtual object on a terminal screen, so that the displayed virtual object adapts to the target plane.
11. A hardware apparatus for inter-plane seamless handover, comprising:
a memory for storing non-transitory computer-readable instructions; and
a processor for executing the computer-readable instructions such that, when executing them, the processor performs the method for inter-plane seamless handover of any one of claims 1-9.
12. A computer-readable storage medium storing non-transitory computer-readable instructions that, when executed by a computer, cause the computer to perform the method for inter-plane seamless handover of any one of claims 1-9.
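
The sketches that follow are editorial illustrations of the geometry described in claims 2 and 4-6; they are not part of the claims, and every name in them is an assumption rather than anything the specification prescribes. First, a minimal check for the trigger condition of claim 2, assuming the initial plane is available as an axis-aligned extent in its own 2D coordinates:

def beyond_initial_plane(pos_2d, extent_min, extent_max):
    """True when the object's position, expressed in the initial plane's
    2D coordinate system, leaves the plane's extent -- the condition
    that triggers re-determination of the target plane in claim 2."""
    x, y = pos_2d
    return not (extent_min[0] <= x <= extent_max[0]
                and extent_min[1] <= y <= extent_max[1])

# e.g. a 1 m x 1 m tabletop plane centred on its own origin
beyond_initial_plane((0.7, 0.1), (-0.5, -0.5), (0.5, 0.5))   # True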
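
Claims 4 and 6 describe casting a line from the point under the target display position, perpendicular to the terminal screen, and intersecting it with the target plane. A sketch of that ray-plane intersection, assuming NumPy and world-space inputs (the claims do not fix a representation):

import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal,
                           eps=1e-6):
    """Intersection of the claim 4 line with the target plane.
    Per claim 6 the ray runs along the screen normal.  Returns the
    intersection point, or None when the ray is parallel to the plane
    or the plane lies behind the ray origin."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < eps:        # ray parallel to the target plane
        return None
    t = float(np.dot(plane_normal, plane_point - ray_origin)) / denom
    if t < 0:                   # target plane is behind the screen
        return None
    return ray_origin + t * ray_dir

# A floor plane 2 m below a screen point, ray cast straight down:
ray_plane_intersection((0, 0, 0), (0, 0, -1), (0, 0, -2), (0, 0, 1))
# -> array([ 0.,  0., -2.])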
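
Claim 5 derives a zoom degree from the ratio between the current screen-to-plane segment length and the previously recorded one. The claim leaves the mapping from that ratio to the applied scale open; the sketch below assumes the natural choice of scaling the object in proportion to its distance so that its apparent on-screen size stays roughly constant:

import numpy as np

def zoom_degree(display_pos_world, intersection, prev_length, eps=1e-6):
    """Zoom degree per claim 5: the ratio of the current segment length
    (target display position to the intersection with the target plane)
    to the segment length recorded at the previous time.  Also returns
    the current length so the caller can record it for the next frame."""
    current = float(np.linalg.norm(np.asarray(intersection, dtype=float)
                                   - np.asarray(display_pos_world, dtype=float)))
    if prev_length < eps:       # nothing recorded yet: leave the scale alone
        return 1.0, current
    return current / prev_length, current

# The object lands on a plane twice as far from the screen as before,
# so it is scaled up by 2 to keep its apparent size:
scale, recorded = zoom_degree((0, 0, 0), (0, 0, -2.0), prev_length=1.0)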

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810900513.6A CN110825279A (en) 2018-08-09 2018-08-09 Method, apparatus and computer readable storage medium for inter-plane seamless handover
PCT/CN2019/073079 WO2020029555A1 (en) 2018-08-09 2019-01-25 Method and device for seamlessly switching among planes, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810900513.6A CN110825279A (en) 2018-08-09 2018-08-09 Method, apparatus and computer readable storage medium for inter-plane seamless handover

Publications (1)

Publication Number Publication Date
CN110825279A (en) 2020-02-21

Family

ID=69413964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810900513.6A Pending CN110825279A (en) 2018-08-09 2018-08-09 Method, apparatus and computer readable storage medium for inter-plane seamless handover

Country Status (2)

Country Link
CN (1) CN110825279A (en)
WO (1) WO2020029555A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113093971B (en) * 2021-04-15 2022-11-01 网易(杭州)网络有限公司 Object display control method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8970690B2 (en) * 2009-02-13 2015-03-03 Metaio Gmbh Methods and systems for determining the pose of a camera with respect to at least one object of a real environment
CN107680164B (en) * 2016-08-01 2023-01-10 中兴通讯股份有限公司 Virtual object size adjusting method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509342A (en) * 2011-09-22 2012-06-20 北京航空航天大学 Collaborative virtual and actual sheltering treatment method in shared enhanced real scene
CN105493155A (en) * 2013-08-30 2016-04-13 高通股份有限公司 Method and apparatus for representing physical scene
CN106910249A (en) * 2015-12-23 2017-06-30 财团法人工业技术研究院 Augmented reality method and system
CN108139805A (en) * 2016-02-08 2018-06-08 谷歌有限责任公司 For the control system of the navigation in reality environment
CN107358609A (en) * 2016-04-29 2017-11-17 成都理想境界科技有限公司 A kind of image superimposing method and device for augmented reality
CN107678652A (en) * 2017-09-30 2018-02-09 网易(杭州)网络有限公司 To the method for controlling operation thereof and device of target object

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
1024工场: "ARCore:ARCore带来的新概念" ["ARCore: new concepts brought by ARCore"], CSDN blog, https://blog.csdn.net/p106786860/article/details/78533538, 23 November 2017 *
AR科技君: http://www.woshipm.com/pd/1195054.html, 4 August 2018 *
吴筱军: "unity3d 让物体移动到点击位置" ["Unity3D: moving an object to the clicked position"], 博客园 (cnblogs), https://www.cnblogs.com/wrbxdj/p/5683195.html, 18 July 2016 *

Also Published As

Publication number Publication date
WO2020029555A1 (en) 2020-02-13

Similar Documents

Publication Publication Date Title
US11640235B2 (en) Additional object display method and apparatus, computer device, and storage medium
US10101873B2 (en) Portable terminal having user interface function, display method, and computer program
US11042294B2 (en) Display device and method of displaying screen on said display device
US11588961B2 (en) Focusing lighting module
US9323446B2 (en) Apparatus including a touch screen and screen change method thereof
US11048373B2 (en) User interface display method and apparatus therefor
US11749020B2 (en) Method and apparatus for multi-face tracking of a face effect, and electronic device
US9323351B2 (en) Information processing apparatus, information processing method and program
US9678574B2 (en) Computing system utilizing three-dimensional manipulation command gestures
US11941181B2 (en) Mechanism to provide visual feedback regarding computing system command gestures
AU2016200885B2 (en) Three-dimensional virtualization
JP2017513106A (en) Generate screenshot
US20150063785A1 (en) Method of overlappingly displaying visual object on video, storage medium, and electronic device
KR20180018561A (en) Apparatus and method for scaling video by selecting and tracking image regions
US9535493B2 (en) Apparatus, method, computer program and user interface
JP2019057291A (en) Control device, control method, and program
CN110827412A (en) Method, apparatus and computer-readable storage medium for adapting a plane
CN110825279A (en) Method, apparatus and computer readable storage medium for inter-plane seamless handover
US11755119B2 (en) Scene controlling method, device and electronic equipment
CN110827413A (en) Method, apparatus and computer-readable storage medium for controlling a change in a virtual object form
CN110825280A (en) Method, apparatus and computer-readable storage medium for controlling position movement of virtual object
KR102138233B1 (en) Display control apparatus and method for controlling the same
CN110827411A (en) Self-adaptive environment augmented reality model display method, device, equipment and storage medium
KR102482630B1 (en) Method and apparatus for displaying user interface
KR20190135958A (en) User interface controlling device and method for selecting object in image and image input device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination