CN111583418A - Control method of virtual scene and electronic equipment - Google Patents
- Publication number
- CN111583418A (application CN202010421188.2A)
- Authority
- CN
- China
- Prior art keywords
- virtual scene
- information
- user
- electronic equipment
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a control method of a virtual scene and an electronic device, wherein the method comprises: acquiring feature information of a target object through the electronic device, wherein the feature information is used for distinguishing different users of the electronic device; identifying a first user of the electronic device based on the feature information; matching at least one target virtual scene from at least one virtual scene according to the feature information, wherein the at least one virtual scene is created based on the electronic device; acquiring environment information of the current environment of the electronic device; determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as a first virtual scene; and presenting, by the electronic device, the first virtual scene. The method and the electronic device provided by the invention solve the problem of low security in virtual scene presentation in the prior art.
Description
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a method for controlling a virtual scene and an electronic device.
Background
With the development of display technology, wearable display devices have appeared in more and more aspects of daily life, and are widely popular because they are convenient to wear and highly practical.
Virtual Reality (VR) is an emerging technology of recent years, also referred to as artificial-environment technology. It uses computer simulation to generate a three-dimensional virtual world, providing the user with simulated visual, auditory, tactile and other sensory experiences, so that the user can observe objects in the three-dimensional space in real time and without restriction, as if personally on the scene. Augmented Reality (AR), also called mixed reality, applies virtual information to the real world through computer technology, so that the real environment and virtual objects are superimposed in the same picture or space in real time and coexist.
At present, virtual scenes built through AR/VR devices basically do not support multi-user matching. For example, after user A creates virtual scene 1 through an AR/VR device and then hands the device to user B, user B still sees virtual scene 1, which may leak user A's information and thus pose a security risk.
In addition, every user has personal preferences and personalized settings. When users are switched, the new user still sees the virtual scene set by the previous user; to view content of interest, the new user must perform additional operations to switch the settings. The whole process is cumbersome and leads to a poor user experience.
Disclosure of Invention
The invention provides a control method of a virtual scene and an electronic device, to solve the problems of security risks and poor user experience in the prior art: when the user of an AR/VR device is switched, the device still displays the virtual scene established by the user before the switch.
In a first aspect, the present application provides a method for controlling a virtual scene, including:
acquiring feature information of a target object through an electronic device, wherein the feature information is used for distinguishing different users of the electronic device;
identifying a first user of the electronic device based on the feature information;
matching at least one target virtual scene from at least one virtual scene according to the feature information, wherein the at least one virtual scene is created based on the electronic device;
acquiring environment information of the current environment of the electronic device;
determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as a first virtual scene; and
presenting, by the electronic device, the first virtual scene.
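The first-aspect steps above can be sketched as a small pipeline. This is a minimal illustration, not the claimed implementation: the function names, the dictionary-based feature store, and the tag-free environment matching are all hypothetical assumptions.

```python
# Hypothetical sketch of the claimed control flow: feature acquisition,
# user identification, scene matching, environment-based selection, presentation.

PRE_STORED = {"alice": "pupil-A", "bob": "pupil-B"}  # user -> pre-stored feature info
USER_SCENES = {                                      # scenes created on the device
    "alice": [{"name": "office", "env": "indoor"},
              {"name": "park", "env": "outdoor"}],
    "bob":   [{"name": "garage", "env": "indoor"}],
}

def identify_user(feature_info):
    """Identify the first user by comparing feature info with pre-stored info."""
    for user, stored in PRE_STORED.items():
        if stored == feature_info:
            return user
    return None

def match_target_scenes(user):
    """Match the target virtual scenes associated with the identified user."""
    return USER_SCENES.get(user, [])

def select_by_environment(target_scenes, environment):
    """Pick the target scene matching the device's current environment."""
    for scene in target_scenes:
        if scene["env"] == environment:
            return scene
    return None

def present_first_scene(feature_info, environment):
    """Run the whole pipeline and return (user, first virtual scene)."""
    user = identify_user(feature_info)
    scene = select_by_environment(match_target_scenes(user), environment)
    return user, scene

user, scene = present_first_scene("pupil-A", "outdoor")
print(user, scene["name"])  # alice park
```

With this sketch, handing the device to a user whose features match "pupil-B" would immediately switch presentation to that user's own scenes, which is the security property the method claims.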
Preferably, the steps of acquiring the environment information of the current environment of the electronic device and determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene may be replaced by:
detecting whether the first user has preset a default virtual scene;
and if so, acquiring the default virtual scene from the at least one target virtual scene as the first virtual scene.
Preferably, the determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene includes:
determining, according to the environment information, a target virtual scene whose matching degree with the current environment is greater than a set threshold from the at least one target virtual scene as the first virtual scene.
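The threshold-based selection can be sketched as follows. The patent does not define the matching-degree function, so the tag-overlap score and the 0.5 threshold below are illustrative assumptions only.

```python
# Hypothetical sketch: choose, among the user's target scenes, one whose
# matching degree with the current environment exceeds a set threshold.

def environment_match_degree(scene_tags, environment_tags):
    """Assumed score: fraction of the scene's tags that describe the environment."""
    if not scene_tags:
        return 0.0
    return len(set(scene_tags) & set(environment_tags)) / len(scene_tags)

def select_scene(target_scenes, environment_tags, threshold=0.5):
    """Return the first target scene whose match degree exceeds the threshold."""
    for scene in target_scenes:
        if environment_match_degree(scene["tags"], environment_tags) > threshold:
            return scene
    return None

scenes = [{"name": "office", "tags": ["indoor", "desk", "quiet"]},
          {"name": "beach", "tags": ["outdoor", "sand", "sun"]}]
env = ["indoor", "quiet", "evening"]
print(select_scene(scenes, env)["name"])  # office (match degree 2/3 > 0.5)
```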
Preferably, the feature information includes: biometric information of a user of the electronic device, and/or feature information of a preset component of the electronic device.
Preferably, the identifying the first user of the electronic device based on the feature information includes:
comparing the detected feature information with at least one piece of pre-stored information;
and if the feature information matches first information in the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
Preferably, in the case where the feature information includes feature information of a preset component of the electronic device,
the identifying the first user of the electronic device based on the feature information includes:
acquiring a deformation value of the preset component of the electronic device when the electronic device is worn by a user;
determining biometric information of the user wearing the electronic device according to the deformation value;
comparing the biometric information with at least one piece of pre-stored information;
and if the biometric information matches first information in the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
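The deformation-based identification above can be sketched as follows. The linear stretch-to-circumference calibration, the tolerance, and the stored values are all hypothetical assumptions, not part of the claimed method.

```python
# Hypothetical sketch: read the deformation value of a preset component
# (e.g. an elastic band), convert it to a biometric estimate, and compare
# the estimate against pre-stored information to identify the user.

PRE_STORED_HEAD_CM = {"alice": 54.0, "bob": 58.5}  # user -> head circumference

def deformation_to_head_circumference(deformation_mm, rest_cm=50.0):
    """Assumed calibration: each 10 mm of band stretch adds 1 cm of circumference."""
    return rest_cm + deformation_mm / 10.0

def identify_by_deformation(deformation_mm, tolerance_cm=0.5):
    """Match the estimated circumference against pre-stored values."""
    circumference = deformation_to_head_circumference(deformation_mm)
    for user, stored in PRE_STORED_HEAD_CM.items():
        if abs(stored - circumference) <= tolerance_cm:
            return user
    return None

print(identify_by_deformation(40.0))  # 50 + 4.0 = 54.0 cm, matches alice
```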
Preferably, the biometric information includes one or a combination of several of pupil information, head circumference information, and head type information of the user.
In a second aspect, the present invention also provides an electronic device, including:
the sensor is used for acquiring characteristic information of a target object, and the characteristic information is used for distinguishing different users of the electronic equipment;
a processor configured to: identify a first user of the electronic device based on the feature information; match at least one target virtual scene from at least one virtual scene according to the feature information, wherein the at least one virtual scene is created based on the electronic device; acquire environment information of the current environment of the electronic device; determine, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as a first virtual scene; and control a display output component of the electronic device to present the first virtual scene.
Preferably, the processor acquires environment information of the current environment of the electronic device, and determines, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene;
alternatively, the processor detects whether the first user has preset a default virtual scene, and if so, acquires the default virtual scene from the at least one target virtual scene as the first virtual scene.
Preferably, the processor determines, as the first virtual scene, a target virtual scene, from the at least one target virtual scene, whose matching degree with the current environment is greater than a set threshold, according to the environment information.
The embodiment of the invention at least has the following technical effects or advantages:
In the technical solution of the embodiment of the invention, if the user of the electronic device changes, feature information of the target object can be acquired, the feature information being used for distinguishing different users of the electronic device; a first user of the electronic device is identified based on the feature information; a first virtual scene matching the feature information is determined from at least one virtual scene, the at least one virtual scene being created based on the electronic device; and the first virtual scene is presented by the electronic device. In this way, when the electronic device is used by different users, the virtual scene corresponding to each user is displayed, which prevents the virtual scene established by one user from being viewed by other users, ensures the information security of each user, and simplifies user operation.
In addition, based on the characteristics of head-mounted devices, the embodiment of the invention acquires pupil information, head circumference information, or head type information as the feature information, so that the original structure of the electronic device can be reused to the greatest extent, making the solution easy to implement.
A user who has established multiple virtual scenes can have the currently required virtual scene obtained in multiple ways, so that the displayed virtual scene meets the user's needs and the user experience is optimized.
Drawings
Fig. 1 is a schematic flowchart of a method for controlling a virtual scene according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another electronic device according to an embodiment of the present invention.
Detailed Description
The application provides a control method of a virtual scene and an electronic device, to solve the security risk in the prior art that, when the user of an AR/VR device is switched, the device still displays the virtual scene established by the user before the switch.
In order to solve the technical problems, the technical scheme in the embodiment of the invention has the following general idea:
in the technical scheme of the embodiment of the invention, if the user corresponding to the electronic equipment sends a change, the characteristic information of the target object can be obtained, and the characteristic information is used for distinguishing different users of the electronic equipment; identifying a first user of the electronic device based on the feature information; determining a first virtual scene matched with the characteristic information from at least one virtual scene, wherein the at least one virtual scene is created based on the electronic equipment; presenting, by the electronic device, the first virtual scene. Therefore, when the electronic equipment corresponds to different users, the virtual scenes corresponding to the different users can be displayed, so that the virtual scene established by each user is not stolen by other users, and the information safety of each user is ensured.
The technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments. It should be understood that the embodiments and their specific features are detailed descriptions, not limitations, of the technical solutions of the present application, and the technical features of the embodiments may be combined with each other without conflict.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship.
Example one
As shown in fig. 1, an embodiment of the present invention provides a method for controlling a virtual scene, where the method specifically includes the following implementation steps:
Step 101: acquiring feature information of a target object through the electronic device. In this embodiment, the feature information is used to distinguish different users, so any feature information that can distinguish different users is applicable to the solution provided by the embodiment of the present invention. Specifically, the feature information of the target object is acquired in the following ways: detecting biometric information of a user of the electronic device; or detecting feature information of a preset component of the electronic device.
Because the electronic device provided by the embodiment of the present invention may be an AR/VR device, and AR/VR devices in the prior art are generally head-mounted, the feature information that can be conveniently acquired in the embodiment of the present invention may be one or a combination of the user's pupil information, head circumference information, and head type information. These parameters are easy for a head-mounted device to obtain, but the feature information of the embodiment of the present invention is not limited to them: the electronic device may acquire any other parameter that can distinguish users as the feature information, and if the AR/VR device is not head-mounted, fingerprints and the like can also be acquired as feature information.
Step 102: identifying a first user of the electronic device based on the feature information. In this embodiment, the corresponding feature information may be stored in advance for every user who has established a virtual scene. When the electronic device switches users, or a new user uses the electronic device, the electronic device can match the acquired feature information of the user against the pre-stored feature information, so as to identify the user currently using the electronic device.
Step 103: determining a first virtual scene matching the feature information from at least one virtual scene. After the user currently using the electronic device is identified, there are two cases: if this user has established a virtual scene in the electronic device, a first virtual scene corresponding to the user can be determined from the at least one virtual scene stored in the electronic device; if this user has not used the electronic device to establish a virtual scene, a blank virtual scene corresponding to the user can be created as the first virtual scene.
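The two cases above can be sketched as a single lookup-or-create helper; the dictionary store and the scene naming are hypothetical illustrations.

```python
# Hypothetical sketch of the two cases after identification: reuse a stored
# scene for a user who already created one, or create a blank scene otherwise.

scenes_by_user = {"alice": [{"name": "office"}]}  # existing per-user scenes

def first_scene_for(user):
    """Return the user's first virtual scene, creating a blank one if needed."""
    existing = scenes_by_user.get(user)
    if existing:                        # case 1: user already created scenes
        return existing[0]
    blank = {"name": f"blank-{user}"}   # case 2: create a blank scene for the user
    scenes_by_user[user] = [blank]
    return blank

print(first_scene_for("alice")["name"])  # office
print(first_scene_for("carol")["name"])  # blank-carol
```

Note that the blank scene is stored, so a returning new user gets the same (now personal) scene rather than a fresh one each time.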
Step 104: presenting the first virtual scene through the electronic device.
In the embodiment of the present invention, the feature information is used to distinguish different users, so any feature information that can distinguish different users is applicable to the provided solution. When different users use the electronic device, besides the users' own specific distinguishing features, the components of the electronic device that contact the user may also change correspondingly with the user; therefore, the feature information detected in this embodiment includes biometric information of the user and/or feature information of a preset component of the electronic device.
An electronic device capable of creating and presenting a virtual scene may take various forms, and most devices in the prior art are head-mounted. Based on the characteristics of head-mounted devices, the biometric information detected in the embodiment of the present invention may be one or a combination of the user's pupil information, head circumference information, and head type information.
Further, the first user of the electronic device may be identified in different ways depending on the type of feature information; two preferred modes are provided below.
Mode one:
comparing the detected biometric information with at least one piece of pre-stored information;
and if the biometric information matches first information in the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
In this mode, the user's biometric information is collected directly, so the identity of the user using the electronic device can be determined directly from the collected information.
Mode two:
An electronic device capable of creating and presenting a virtual scene may take various forms, and most devices in the prior art are wearable. Based on the characteristics of the wearing user, a wearable device may include components that adapt to ergonomic factors, and the method provided by the embodiment of the present invention can determine the identity of the user currently using the electronic device by detecting these ergonomic factors. A specific implementation may be:
when the electronic equipment is in a user wearing state, acquiring a deformation value of a preset component of the electronic equipment;
determining the biological characteristic information of a user wearing the electronic equipment according to the deformation value;
comparing the biometric information with at least one piece of pre-stored information;
and if the biometric information matches first information in the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
Based on the different types of feature information and the ways of determining the user, the specific manner of acquiring certain feature information and determining the user identity from it may be:
1. If the feature information is the user's pupil information, a pupil scanner may be provided in the electronic device and started to scan the user's pupils to acquire the pupil information; if no pupil scanner is available, a pupil image of the user's eye region can be captured by an image acquisition device provided on the electronic device.
2. If the feature information is the head circumference: because a head-mounted electronic device must fit different users, a deformable component (such as an elastic band) is generally provided on the device to meet the tightness requirements of users with different head circumferences. Based on this characteristic of the electronic device, changes in the deformable component can be detected to determine the user currently using the device. When the feature information is the user's head circumference information, acquiring the feature information of the target object through the electronic device includes:
when a user wears the electronic device, the electronic device detects a deformation parameter of its deformable structure;
and the electronic device determines the head circumference information of the target object according to the deformation parameter.
For example, suppose the electronic device is provided with an elastic band. When a user with a large head circumference wears the head-mounted device, the band stretches longer than when a user with a small head circumference wears it, so the electronic device can detect the band length in use and determine the corresponding user accordingly. However, this method suits cases where there are few users and their head circumferences differ. To determine more accurately which user is currently using the electronic device, the head circumference information may be further combined with the user's pupil information and head type information, making the identification more accurate.
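The band-length example, including the ambiguity it mentions, can be sketched as follows; all profile values, the tolerance, and the pupil tiebreaker wiring are illustrative assumptions.

```python
# Hypothetical sketch: band length alone can be ambiguous when users have
# similar head circumferences, so pupil information is used as a tiebreaker.

PROFILES = {
    "alice": {"band_mm": 120, "pupil": "pupil-A"},
    "bob":   {"band_mm": 122, "pupil": "pupil-B"},  # nearly the same band length
    "carol": {"band_mm": 160, "pupil": "pupil-C"},
}

def candidates_by_band(band_mm, tolerance_mm=5):
    """All users whose stored band length is within the tolerance."""
    return [u for u, p in PROFILES.items() if abs(p["band_mm"] - band_mm) <= tolerance_mm]

def identify(band_mm, pupil=None):
    """Identify by band length; fall back to pupil info when ambiguous."""
    users = candidates_by_band(band_mm)
    if len(users) == 1:            # band length alone is enough
        return users[0]
    if pupil is not None:          # otherwise disambiguate with pupil information
        for u in users:
            if PROFILES[u]["pupil"] == pupil:
                return u
    return None

print(identify(160))                   # carol: her band length is unique
print(identify(121, pupil="pupil-B"))  # alice and bob are both close; pupil decides
```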
In addition, one user may establish multiple virtual scenes in the electronic device. In that case, the specific implementation of determining the first virtual scene matching the feature information from the at least one virtual scene may be:
based on identifying the user as the first user, determining the virtual scene associated with the first user from the at least one virtual scene. Alternative implementations may be:
determining at least one of the virtual scenes created by the first user as the first virtual scene; or
determining the virtual scene most recently created by the first user as the first virtual scene; or
determining the virtual scene most recently invoked by the first user as the first virtual scene.
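The three alternatives above can be sketched over a timestamped scene list; the scene records and epoch-second timestamps are illustrative assumptions.

```python
# Hypothetical sketch of the three alternatives: all of the first user's
# scenes, the most recently created one, or the most recently invoked one.

scenes = [
    {"name": "office", "owner": "alice", "created": 100, "last_used": 500},
    {"name": "park",   "owner": "alice", "created": 300, "last_used": 400},
    {"name": "garage", "owner": "bob",   "created": 200, "last_used": 600},
]

def scenes_of(user):
    """Alternative 1: all virtual scenes created by the user."""
    return [s for s in scenes if s["owner"] == user]

def newest_created(user):
    """Alternative 2: the scene the user created most recently."""
    return max(scenes_of(user), key=lambda s: s["created"])

def most_recently_used(user):
    """Alternative 3: the scene the user invoked most recently."""
    return max(scenes_of(user), key=lambda s: s["last_used"])

print([s["name"] for s in scenes_of("alice")])  # ['office', 'park']
print(newest_created("alice")["name"])          # park
print(most_recently_used("alice")["name"])      # office
```

Selecting by smallest difference from the current time, as in use case 1 below the alternatives, reduces to the `max`-by-timestamp form shown here, since all stored times lie in the past.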
Based on the above ways of determining the first virtual scene, the following describes them in detail with specific use cases:
1. The virtual scene that currently needs to be displayed can be selected for the user as the first virtual scene based on time information, specifically:
if at least one target virtual scene is matched from the at least one virtual scene according to the feature information, the target virtual scene whose time information differs least from the current time of the electronic device is determined as the first virtual scene (where the time information may be the time the virtual scene was created or the time it was last invoked by the user).
2. A virtual scene may be determined as the first virtual scene according to the current environment of the electronic device, specifically:
matching at least one target virtual scene from the at least one virtual scene according to the feature information;
and acquiring environment information of the current environment of the electronic device, and determining, according to the environment information, a target virtual scene whose matching degree with the current environment is greater than a set threshold from the at least one target virtual scene as the first virtual scene.
3. The first virtual scene may be determined by a preset default, and the specific implementation may be:
matching at least one target virtual scene from the at least one virtual scene according to the feature information;
and detecting whether the user has preset a default virtual scene; if so, acquiring the default virtual scene from the at least one target virtual scene as the first virtual scene.
4. In this embodiment, the virtual scene associated with the first user is determined from at least one virtual scene, where the association includes any virtual scene determined from the at least one virtual scene according to a preset correspondence. The correspondence may be that information associated with the first user exists in the related information of the virtual scene, for example, the first user has participated in the scene as a character (that is, image information of the first user exists in the virtual scene), or the frequency with which the first user uses a certain virtual scene is greater than a threshold.
In addition, to improve the information security of virtual scene presentation, the best implementation is to isolate the virtual scenes established by each user; that is, the virtual scene associated with the first user is a virtual scene established by the first user. Further, when the first user has created multiple virtual scenes, a specific one may be determined as the first virtual scene in the manner provided in case 1, 2, or 3 above.
In the technical solution of the embodiment of the invention, if the user of the electronic device changes, feature information of the target object can be acquired, the feature information being used for distinguishing different users of the electronic device; a first user of the electronic device is identified based on the feature information; a first virtual scene matching the feature information is determined from at least one virtual scene, the at least one virtual scene being created based on the electronic device; and the first virtual scene is presented by the electronic device. In this way, when the electronic device is used by different users, the virtual scene corresponding to each user is displayed, so that the virtual scene established by one user is not viewed by other users, and the information security of each user is ensured.
In addition, based on the characteristics of head-mounted devices, the embodiment of the invention acquires pupil information, head circumference information, or head type information as the feature information, so that the original structure of the electronic device can be reused to the greatest extent, making the solution easy to implement.
A user who has established multiple virtual scenes can have the currently required virtual scene obtained in multiple ways, so that the displayed virtual scene meets the user's needs and the user experience is optimized.
Example two
As shown in fig. 2, an embodiment of the present invention further provides an electronic device, including:
the sensor 201 is used for acquiring characteristic information of a target object, and the characteristic information is used for distinguishing different users of the electronic equipment;
a processor 202 configured to identify a first user of the electronic device based on the characteristic information; determining a first virtual scene matched with the characteristic information from at least one virtual scene, wherein the at least one virtual scene is created based on the electronic equipment; and controlling a display output component of the electronic equipment to present the first virtual scene.
Specifically, the processor 202 may be a general-purpose Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits for controlling program execution.
Further, the electronic device may further include a memory, and the number of the memories may be one or more. The Memory may include a Read Only Memory (ROM), a Random Access Memory (RAM), and a disk Memory.
Optionally, the sensor 201 is configured to detect biometric information of a user of the electronic device, or to detect feature information of a preset component of the electronic device.
Optionally, the processor 202 is specifically configured to compare the detected biometric information with at least one piece of pre-stored information; and if the biometric information matches first information in the at least one piece of pre-stored information, identify the user as the first user corresponding to the first information.
Optionally, the processor 202 is specifically configured to acquire a deformation value of a preset component of the electronic device when the electronic device is worn by a user; determine biometric information of the user wearing the electronic device according to the deformation value; compare the biometric information with at least one piece of pre-stored information; and if the biometric information matches first information in the at least one piece of pre-stored information, identify the user as the first user corresponding to the first information.
Optionally, the sensor 201 is specifically configured to detect one or a combination of several of pupil information, head circumference information, and head type information of the user as the biometric information.
Optionally, the processor 202 is specifically configured to determine a virtual scene associated with the first user from the at least one virtual scene based on identifying the user as the first user.
Optionally, the processor 202 is specifically configured to determine at least one of the virtual scenes created by the first user as the first virtual scene; or determine the virtual scene most recently created by the first user as the first virtual scene; or determine the virtual scene most recently invoked by the first user as the first virtual scene.
The variations and specific examples of the control method for a virtual scene in the foregoing embodiment of Fig. 1 are also applicable to the electronic device of this embodiment. Through the foregoing detailed description of the control method, those skilled in the art can clearly understand how the electronic device of this embodiment is implemented, so the details are not repeated here for brevity.
EXAMPLE III
As shown in fig. 3, an embodiment of the present invention further provides another electronic device, including:
an obtaining unit 301, configured to obtain feature information of a target object, where the feature information is used to distinguish different users of the electronic device;
optionally, the obtaining unit 301 is configured to detect biometric information of a user of the electronic device, or to detect feature information of a preset component of the electronic device.
Optionally, the obtaining unit 301 is specifically configured to detect one or a combination of the user's pupil information, head circumference information, and head shape information as the biometric information.
An identifying unit 302 configured to identify a first user of the electronic device based on the feature information;
optionally, the identifying unit 302 is specifically configured to compare the detected biometric information with at least one piece of pre-stored information and, if the biometric information matches first information among the at least one piece of pre-stored information, identify the user as the first user corresponding to the first information.
Optionally, the identifying unit 302 is specifically configured to: obtain a deformation value of a preset component of the electronic device while the electronic device is being worn by a user; determine, from the deformation value, biometric information of the user wearing the electronic device; compare the biometric information with at least one piece of pre-stored information; and, if the biometric information matches first information among the at least one piece of pre-stored information, identify the user as the first user corresponding to the first information.
A matching unit 303, configured to determine a first virtual scene matching the feature information from at least one virtual scene, where the at least one virtual scene is created based on the electronic device;
optionally, the matching unit 303 is specifically configured to determine, based on the user having been identified as the first user, a virtual scene associated with the first user from the at least one virtual scene.
Optionally, the matching unit 303 is specifically configured to determine at least one of the virtual scenes created by the first user as the first virtual scene; or determine the virtual scene most recently created by the first user as the first virtual scene; or determine the virtual scene most recently invoked by the first user as the first virtual scene.
An output unit 304 for presenting the first virtual scene.
The variations and specific examples described for the virtual scene control method in the embodiment of Fig. 1 also apply to the electronic device of this embodiment. From the foregoing detailed description of the control method, those skilled in the art can clearly understand how the electronic device of this embodiment is implemented, so the details are not repeated here for brevity.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Specifically, the computer program instructions corresponding to the display method in the embodiment of the present invention may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive. When the computer program instructions corresponding to the display method are read from the storage medium and executed by an electronic device, the method includes the following steps:
acquiring, by the electronic device, feature information of a target object, wherein the feature information is used to distinguish different users of the electronic device;
identifying a first user of the electronic device based on the feature information;
determining a first virtual scene matching the feature information from at least one virtual scene, wherein the at least one virtual scene is created based on the electronic device;
presenting, by the electronic device, the first virtual scene.
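The steps just listed can be sketched as one control flow. This is a minimal illustration under stated assumptions: `Device`, `identify`, and `match_scene` are hypothetical stand-ins for device-specific operations that the patent leaves abstract (the identification here is a trivial exact match, not a real biometric comparison).

```python
class Device:
    """Hypothetical stand-in for the electronic device; real hardware would
    supply the sensor reading and the display output component."""
    def __init__(self, feature, scenes):
        self._feature = feature       # what the sensor would report
        self.scenes = scenes          # scenes created based on this device
        self.presented = None
    def acquire_feature_info(self):
        return self._feature          # step 1: feature info of the target object
    def present(self, scene):
        self.presented = scene        # step 4: render via the display output

def identify(feature, pre_stored):
    """Trivial exact-match stand-in for identifying the first user."""
    return pre_stored.get(feature)

def match_scene(user, scenes):
    """Stand-in scene matching: last scene owned by the identified user."""
    owned = [s for s in scenes if s["owner"] == user]
    return owned[-1] if owned else None

def control_virtual_scene(device, pre_stored):
    """The stored method as a pipeline: acquire -> identify -> match -> present."""
    feature = device.acquire_feature_info()
    user = identify(feature, pre_stored)
    if user is None:
        return None                   # unrecognized user: present nothing
    scene = match_scene(user, device.scenes)
    device.present(scene)
    return scene
```

Because the scene is looked up only after the user is identified, an unrecognized wearer never sees another user's scenes, which is the security property the abstract claims.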
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
acquiring the feature information of the target object comprises: detecting biometric information of a user of the electronic device, or detecting feature information of a preset component of the electronic device.
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
comparing the detected biometric information with at least one piece of pre-stored information; and
if the biometric information matches first information among the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
obtaining a deformation value of a preset component of the electronic device while the electronic device is being worn by a user;
determining, from the deformation value, biometric information of the user wearing the electronic device;
comparing the biometric information with at least one piece of pre-stored information; and
if the biometric information matches first information among the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
acquiring one or a combination of the user's pupil information, head circumference information, and head shape information as the biometric information.
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
determining, based on the user having been identified as the first user, a virtual scene associated with the first user from the at least one virtual scene.
Optionally, the storage medium further stores computer program instructions that, when executed, perform the following steps:
determining at least one of the virtual scenes created by the first user as the first virtual scene; or
determining the virtual scene most recently created by the first user as the first virtual scene; or
determining the virtual scene most recently invoked by the first user as the first virtual scene.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A control method for a virtual scene, comprising the following steps:
acquiring, by an electronic device, feature information of a target object, wherein the feature information is used to distinguish different users of the electronic device;
identifying a first user of the electronic device based on the feature information;
matching at least one target virtual scene from at least one virtual scene according to the feature information, wherein the at least one virtual scene is created based on the electronic device;
acquiring environment information of the current environment of the electronic device;
determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as a first virtual scene;
presenting, by the electronic device, the first virtual scene.
2. The method of claim 1, wherein the steps of acquiring the environment information of the current environment of the electronic device and determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene are replaced by:
detecting whether the first user has preset a default virtual scene; and
if so, acquiring the default virtual scene from the at least one target virtual scene as the first virtual scene.
3. The method of claim 1, wherein determining, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene comprises:
determining, according to the environment information, a target virtual scene whose degree of match with the current environment is greater than a set threshold from the at least one target virtual scene as the first virtual scene.
4. The method of claim 1, wherein the feature information comprises biometric information of a user of the electronic device and/or feature information of a preset component of the electronic device.
5. The method of claim 4, wherein identifying the first user of the electronic device based on the feature information comprises:
comparing the detected feature information with at least one piece of pre-stored information; and
if the feature information matches first information among the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
6. The method of claim 4, wherein, in a case where the feature information comprises feature information of a preset component of the electronic device,
identifying the first user of the electronic device based on the feature information comprises:
obtaining a deformation value of the preset component of the electronic device while the electronic device is being worn by a user;
determining, from the deformation value, biometric information of the user wearing the electronic device;
comparing the biometric information with at least one piece of pre-stored information; and
if the biometric information matches first information among the at least one piece of pre-stored information, identifying the user as the first user corresponding to the first information.
7. The method of claim 4, wherein the biometric information comprises one or more of the user's pupil information, head circumference information, and head shape information.
8. An electronic device, comprising:
a sensor configured to acquire feature information of a target object, wherein the feature information is used to distinguish different users of the electronic device; and
a processor configured to: identify a first user of the electronic device based on the feature information; match at least one target virtual scene from at least one virtual scene according to the feature information, wherein the at least one virtual scene is created based on the electronic device; acquire environment information of the current environment of the electronic device; determine, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as a first virtual scene; and control a display output component of the electronic device to present the first virtual scene.
9. The electronic device of claim 8, wherein the processor is configured to:
acquire environment information of the current environment of the electronic device, and determine, according to the environment information, a target virtual scene matching the current environment from the at least one target virtual scene as the first virtual scene;
or detect whether the first user has preset a default virtual scene, and if so, acquire the default virtual scene from the at least one target virtual scene as the first virtual scene.
10. The electronic device of claim 8, wherein the processor is specifically configured to determine, according to the environment information, a target virtual scene whose degree of match with the current environment is greater than a set threshold from the at least one target virtual scene as the first virtual scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010421188.2A CN111583418A (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610509625.XA CN106200941B (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
CN202010421188.2A CN111583418A (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610509625.XA Division CN106200941B (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111583418A true CN111583418A (en) | 2020-08-25 |
Family
ID=57464432
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010421188.2A Pending CN111583418A (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
CN201610509625.XA Active CN106200941B (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610509625.XA Active CN106200941B (en) | 2016-06-30 | 2016-06-30 | Control method of virtual scene and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN111583418A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807867A (en) * | 2021-09-10 | 2021-12-17 | 支付宝(杭州)信息技术有限公司 | Test processing method, device, equipment and system |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106845196A (en) * | 2017-01-16 | 2017-06-13 | 宇龙计算机通信科技(深圳)有限公司 | A kind of method for authenticating for wearable device, device and wearable device |
CN106980520A (en) * | 2017-03-30 | 2017-07-25 | 努比亚技术有限公司 | A kind of virtual reality device limbs identification sets device and method |
CN107272884A (en) * | 2017-05-09 | 2017-10-20 | 聂懋远 | A kind of control method and its control system based on virtual reality technology |
CN111381575B (en) * | 2018-12-28 | 2021-08-31 | 成都鼎桥通信技术有限公司 | Automatic test method, device, server, electronic equipment and storage medium |
CN114237401B (en) * | 2021-12-28 | 2024-06-07 | 广州卓远虚拟现实科技有限公司 | Seamless linking method and system for multiple virtual scenes |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103929479A (en) * | 2014-04-10 | 2014-07-16 | 惠州Tcl移动通信有限公司 | Method and system for simulating real scene through mobile terminal to achieve user interaction |
CN104216520A (en) * | 2014-09-09 | 2014-12-17 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104423938A (en) * | 2013-08-26 | 2015-03-18 | 联想(北京)有限公司 | Information processing method and electronic device |
US20150371447A1 (en) * | 2014-06-20 | 2015-12-24 | Datangle, Inc. | Method and Apparatus for Providing Hybrid Reality Environment |
US20160070343A1 (en) * | 2014-09-09 | 2016-03-10 | Beijing Lenovo Software Ltd. | Information processing method and electronic device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102930196A (en) * | 2012-09-27 | 2013-02-13 | 东莞宇龙通信科技有限公司 | Icon displaying method and communication terminal |
CN103218557A (en) * | 2013-04-16 | 2013-07-24 | 深圳市中兴移动通信有限公司 | Biological-recognition-based system theme recognition method and device |
CN104021330A (en) * | 2014-05-28 | 2014-09-03 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for user switching |
CN105245683A (en) * | 2014-06-13 | 2016-01-13 | 中兴通讯股份有限公司 | Method and device for adaptively displaying applications of terminal |
CN105574392B (en) * | 2015-06-30 | 2019-03-08 | 宇龙计算机通信科技(深圳)有限公司 | A kind of display mode switching method and mobile terminal |
- 2016-06-30: CN 202010421188.2 filed in China (published as CN111583418A, pending)
- 2016-06-30: CN 201610509625.X filed in China (published as CN106200941B, active)
Also Published As
Publication number | Publication date |
---|---|
CN106200941B (en) | 2020-05-26 |
CN106200941A (en) | 2016-12-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||