CN109260706A - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN109260706A
Authority
CN
China
Prior art keywords: user, pattern, virtual, specific position, virtual object
Prior art date
Legal status: Granted (assumed status; not a legal conclusion)
Application number
CN201811138601.3A
Other languages
Chinese (zh)
Other versions
CN109260706B (en)
Inventor
尹殿永
焦阳
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority to CN201811138601.3A
Publication of CN109260706A
Application granted; publication of CN109260706B
Legal status: Active

Links

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/20: Input arrangements for video game devices
    • A63F13/90: Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10: ... characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/20: ... characterised by details of the game platform
    • A63F2300/30: ... characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the present application discloses an information processing method and an electronic device. First virtual state information of a first virtual object is converted into a pattern that is mapped to a specific position on the user corresponding to the first virtual object, allowing the user to physically experience the virtual state of the virtual object during online interaction and adding a new way for users to participate in multi-user online interaction.

Description

Information processing method and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing method and an electronic device.
Background
In multi-user online interaction scenes, such as indoor online games or live-broadcast online confrontation games, different users log in to a network server through their respective electronic devices, and online interaction is realized through services provided by the network server. During the interaction, different users usually control different virtual objects.
In current multi-user online interaction scenes, a user can only see the virtual state of a virtual object (for example, the title of a virtual character, the character's name, and the like) on the display screen of the electronic device, so the ways in which a user can participate in multi-user online interaction are limited.
Disclosure of Invention
The present application provides an information processing method and an electronic device that at least partially overcome the above technical problems in the prior art.
To this end, the application provides the following technical solutions:
an information processing method comprising:
reading first virtual state information of a first virtual object;
determining a pattern corresponding to the first virtual state information;
mapping the pattern to a particular location on the user corresponding to the first virtual object.
In the above method, preferably, mapping the pattern to a specific position on the user corresponding to the first virtual object includes:
illuminating the pattern onto the specific position on the user through a lighting device or a projection device arranged around the user.
Preferably, illuminating the pattern onto the specific position on the user's body through the lighting device includes:
adjusting the angle of the lighting device so that its light reaches the specific position on the user;
and adjusting the lighting effect of the lighting device so that the pattern is displayed at the specific position.
Preferably, illuminating the pattern onto the specific position on the user through the projection device includes:
adjusting the angle of the projection device so that the projection device projects the pattern onto the specific position on the user.
In the above method, preferably, mapping the pattern to a specific position on the user corresponding to the first virtual object includes:
controlling a portion of the smart garment worn by the user located at the specific location to display the pattern.
Preferably, controlling the portion of the smart garment worn by the user that is located at the specific position to display the pattern includes:
controlling the light effect of full-color LED lamps on the smart garment that are located at the specific position on the user, so that the pattern is displayed at that position.
In the above method, preferably, mapping the pattern to a specific position on the user corresponding to the first virtual object includes:
determining a preset position on the user corresponding to the first virtual state information as the specific position, or determining the first part of the first virtual object on which the virtual state represented by the first virtual state information is displayed and determining the corresponding first part on the user as the specific position;
mapping the pattern to the specific location.
In the above method, preferably, mapping the pattern to a specific position on the user corresponding to the first virtual object includes:
acquiring an image of the user;
establishing a three-dimensional model of the user according to the image;
identifying the specific position and a physical space position of the specific position according to the three-dimensional model;
mapping the pattern to the particular location according to the physical space location.
The above method, preferably, further comprises:
reading second virtual state information of a virtual environment where the first virtual object is located;
determining an environmental parameter corresponding to the second virtual state information;
and controlling devices around the user according to the environment parameters to simulate the virtual environment.
An electronic device, comprising:
a display unit for displaying information;
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
reading first virtual state information of a first virtual object;
determining a pattern corresponding to the first virtual state information;
mapping the pattern to a particular location on the user corresponding to the first virtual object.
According to the above scheme, the information processing method and the electronic device convert the first virtual state information of the first virtual object into a pattern that is mapped to a specific position on the body of the user corresponding to the first virtual object, so that the user can physically experience the virtual state of the virtual object during online interaction, which adds a new way for the user to participate in multi-user online interaction.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an implementation of an information processing method according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of mapping a pattern to a specific location on a user corresponding to a first virtual object according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be practiced otherwise than as specifically illustrated.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention fall within the scope of the present invention.
As shown in fig. 1, an implementation flowchart of the information processing method provided by the present application may include:
step S11: first virtual state information of the first virtual object is read.
The first virtual object may be a virtual character in a virtual environment presented by a first electronic device. The first virtual state information of the first virtual object may refer to attribute information of the virtual object, for example: the name of the virtual object, a title, a wound state, attachments on the body, and so on.
Step S12: a pattern corresponding to the first virtual state information is determined.
In the virtual environment, some virtual state information of the first virtual object is directly displayed as a pattern, such as a wound state or an attachment on the body; in this case, the pattern corresponding to the first virtual state information is the pattern in which that information is displayed in the virtual environment. Other virtual state information, such as a name or a title, may not be displayed; in that case, the pattern corresponding to the first virtual state information may be a preset pattern.
Step S13: the pattern is mapped to a particular location on the user corresponding to the first virtual object.
The user corresponding to the first virtual object refers to a user of the first electronic device. The mapping positions of different virtual state information on the user body may be the same or different.
According to the above information processing method, the first virtual state information of the first virtual object is converted into a pattern that is mapped to a specific position on the body of the user corresponding to the first virtual object, so that the user can physically experience the virtual state of the virtual object during online interaction, which adds a new way for the user to participate in multi-user online interaction and improves the user experience.
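As an illustrative outline only, steps S11 to S13 can be sketched as follows; the state record, the preset pattern table, and the `map_pattern_to_user` hook are assumptions made for this example, not part of the disclosed method itself.

```python
from dataclasses import dataclass

# Hypothetical virtual-state record; step S11 reads such state from the game.
@dataclass
class VirtualState:
    kind: str   # e.g. "name", "title", "wound", "attachment"
    value: str  # e.g. "Zhang San", "captain", "cut"

# Assumed preset patterns for states that are not displayed in the virtual scene.
PRESET_PATTERNS = {"name": "text_banner", "title": "forehead_badge"}

def determine_pattern(state: VirtualState) -> str:
    """Step S12: directly displayed states reuse their in-game pattern;
    non-displayed states (name, title) fall back to a preset pattern."""
    if state.kind in ("wound", "attachment"):
        return f"ingame:{state.value}"
    return PRESET_PATTERNS.get(state.kind, "default_pattern")

def map_pattern_to_user(pattern: str, location: str) -> dict:
    """Step S13: in a real system this would drive lighting, projection,
    or smart-garment hardware; here it just records the mapping."""
    return {"pattern": pattern, "location": location}

result = map_pattern_to_user(determine_pattern(VirtualState("wound", "cut")),
                             "right_shoulder")
```

The same three calls apply regardless of which mapping hardware (lighting, projection, or smart garment) is used in the embodiments below.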
In an alternative embodiment, the pattern may be mapped to a particular location on the user's body corresponding to the first virtual object by:
the pattern is illuminated to a specific position on the user body by lighting devices or projection devices around the user.
A plurality of lighting devices (or groups of lighting devices) can be arranged around the user, for example on the ceiling above the user, and the brightness, color, and so on of each device or group can be adjusted. Patterns are formed by controlling the brightness and/or color of the devices, the size of a pattern is controlled through the number of devices used, and the position illuminated on the user's body is changed by controlling the illumination angle. Specifically:
The angle of the lighting devices is adjusted first, so that their light reaches the specific position on the user; the lighting effect of a certain number of devices is then adjusted, so that the pattern corresponding to the first virtual state information is displayed at that position. The number of devices may be determined according to the size of the virtual state represented by the first virtual state information: the larger the virtual state, the more devices are used, and the smaller the virtual state, the fewer devices are used, for example according to a preset correspondence between virtual-state size and device count. Because different lighting devices have different illumination ranges, only the devices whose illumination range covers the specific position are selected for control.
Alternatively, a plurality of projection devices can be arranged around the user, and after the pattern corresponding to the first virtual state information is determined, it is projected onto the user. Specifically: the angle of the projection device is adjusted so that it projects the pattern onto the specific position on the user. Since different projection devices have different projection ranges, a projection device whose projection range includes the specific position is selected for control.
In another alternative embodiment, the pattern may be mapped to a particular location on the user's body corresponding to the first virtual object by:
and controlling a part, located at a specific position, of the intelligent clothes worn by the user to display a pattern corresponding to the first virtual state information.
The intelligent clothes worn by the user are provided with the light-emitting devices, so that the light-emitting effect of the light-emitting devices on the intelligent clothes can be directly controlled to present the patterns corresponding to the first virtual state information. The light emitting device may be a number of small liquid crystal screens, or a number of LED lamps.
Specifically, can arrange on the intelligent clothes and be provided with a plurality of full-color LED lamps, every full-color LED lamp all can adjust colour and luminance, when needs the specific position mapping pattern on the user body, can confirm the full-color LED lamp that is located above-mentioned specific position earlier, then, the luminance and/or the colour of the full-color LED lamp of this specific position of control make this specific position present above-mentioned pattern.
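The LED control can be sketched as below, assuming purely for illustration that the garment's full-color LEDs form a small addressable grid and that body positions map to rectangular index ranges:

```python
# Each LED is addressed by (row, col) and holds a mutable (r, g, b) color.
GRID_ROWS, GRID_COLS = 8, 8
leds = {(r, c): (0, 0, 0) for r in range(GRID_ROWS) for c in range(GRID_COLS)}

# Hypothetical body-zone map: zone -> (row_min, row_max, col_min, col_max).
ZONES = {"chest": (1, 3, 2, 5), "right_shoulder": (0, 1, 5, 7)}

def display_pattern(zone: str, color: tuple) -> int:
    """Light every LED inside the zone with `color`; return how many were set."""
    r0, r1, c0, c1 = ZONES[zone]
    count = 0
    for (r, c) in leds:
        if r0 <= r <= r1 and c0 <= c <= c1:
            leds[(r, c)] = color
            count += 1
    return count

lit = display_pattern("chest", (255, 0, 0))  # a red pattern over the chest zone
```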
In an alternative embodiment, one way to map the pattern to a specific location on the user's body corresponding to the first virtual object may be:
and determining a preset position corresponding to the first virtual state information on the user body as a specific position.
Or,
and determining a first part of the virtual state represented by the first virtual state information in the first virtual object, and determining the first part on the user as a specific position.
The pattern is mapped to the specific location.
In this embodiment, if the virtual state represented by the first virtual state information is directly displayed on the first virtual object, the first part of the first virtual object carrying that virtual state is determined, and the corresponding part on the user is directly taken as the specific position. For example, if the first virtual state information is a wound and the represented virtual state is a knife wound on the right shoulder, the user's right shoulder is directly determined as the specific position and the knife-wound pattern is mapped onto it.
If the virtual state represented by the first virtual state information is not displayed on the first virtual object, a preset position is determined as the specific position. For example, if the first virtual state information is a name and the corresponding virtual state is "Zhang San", then "Zhang San" is displayed on the chest; similarly, if the first virtual state information is a title and the corresponding virtual state is "captain", then "captain" is displayed on the forehead, and so on.
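The two branches above can be sketched as follows; the preset-position table (name on the chest, title on the forehead) mirrors the example, while the function itself is an assumption for illustration:

```python
from typing import Optional

# Assumed preset positions for virtual states not displayed on the object.
PRESET_POSITIONS = {"name": "chest", "title": "forehead"}

def specific_position(state_kind: str, displayed_part: Optional[str]) -> str:
    # Branch 1: the state is visible on the virtual object (e.g. a wound),
    # so the same body part on the user becomes the specific position.
    if displayed_part is not None:
        return displayed_part
    # Branch 2: a non-displayed state falls back to its preset position.
    return PRESET_POSITIONS[state_kind]
```

For example, `specific_position("wound", "right_shoulder")` returns the user's right shoulder, while `specific_position("name", None)` returns the chest.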
In an alternative embodiment, an implementation flow diagram for mapping a pattern to a specific location on a user corresponding to a first virtual object is shown in fig. 2 and may include:
step S21: an image of a user is acquired.
Images of the user can be acquired in real time by a plurality of image acquisition units arranged around the user.
Step S22: and establishing a three-dimensional model of the user according to the acquired image.
And establishing a three-dimensional model of the user through images acquired by different image acquisition units.
Step S23: and identifying the specific position and the physical space position of the specific position according to the three-dimensional model.
The three-dimensional model of the user may be compared to a preset three-dimensional model of the human body to identify the specific location in the three-dimensional model of the user. For example, different parts (such as the face, the legs, the arms, the hands, the feet, the chest, the belly, the waist, etc., the face can be further subdivided into the left face, the right face, the forehead, etc., the arms can be further subdivided into the upper arm, the lower arm, etc.) of the human body three-dimensional model can carry different labels to mark different human body parts.
The distance between the specific position and the first image acquisition unit and the distance between the specific position and the second image acquisition unit can be calculated by utilizing a binocular ranging principle, and the physical space position of the specific position can be determined by combining the physical space position of the first image acquisition unit and the physical space position of the second image acquisition unit. The first image acquisition unit and the second image acquisition unit refer to image acquisition units capable of acquiring images of the characteristic positions.
Step S24: the pattern is mapped to a specific location according to the physical spatial location.
In this embodiment of the application, the specific position is tracked in real time, so the pattern can be projected onto the user accurately. For example, if the face of the first virtual object has a cut, the cut is always mapped to the same position on the user's face by the above method, no matter how the user turns their head.
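The binocular-ranging part of step S23 can be illustrated with the classic stereo relation Z = f * B / d (depth from focal length, baseline, and pixel disparity); the pinhole-camera model and the numbers below are illustrative assumptions, not values from the disclosure:

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    # Classic binocular-ranging relation: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

def physical_position(camera_pos: tuple, view_dir: tuple, depth: float) -> tuple:
    # Offset the camera's known physical position along the viewing ray.
    return tuple(p + d * depth for p, d in zip(camera_pos, view_dir))

# Two cameras 0.5 m apart, 800 px focal length, 160 px disparity -> 2.5 m depth.
z = depth_from_disparity(baseline_m=0.5, focal_px=800.0, disparity_px=160.0)
pos = physical_position((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), z)
```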
In an optional embodiment, in addition to the first virtual state information of the first virtual object, second virtual state information of the virtual environment in which the first virtual object is located may also be read; the environment parameter corresponding to the second virtual state information is determined, and devices around the user are controlled according to the environment parameter to simulate the virtual environment.
In this embodiment, the second virtual state information refers to information about the virtual environment in which the first virtual object is located. For example, if that virtual environment features thunder and lightning, the lighting devices around the user can be controlled to flash to simulate lightning, and the loudspeaker devices around the user can be controlled to emit a sound similar to thunder.
Specifically, the environmental parameters may include: temperature, humidity, light brightness, sound, etc. Accordingly, the devices around the user may include: temperature control equipment, humidity control equipment, lighting equipment, sound equipment and the like.
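The environment simulation can be sketched as a dispatch from environment parameters to the surrounding devices, following the thunder-and-lightning example above; the parameter keys, device names, and the wind-level-to-fan-speed mapping are illustrative assumptions:

```python
def simulate_environment(params: dict) -> dict:
    """Turn environment parameters into per-device commands."""
    commands = {}
    if "temperature_c" in params:
        commands["temperature_control"] = {"set_temperature": params["temperature_c"]}
    if "humidity_pct" in params:
        commands["humidity_control"] = {"set_humidity": params["humidity_pct"]}
    if "wind_level" in params:
        # Assumed mapping from wind level to fan speed.
        commands["fan"] = {"speed_rpm": params["wind_level"] * 100}
    if params.get("lightning"):
        commands["lighting"] = {"effect": "flash"}   # flash to simulate lightning
        commands["speaker"] = {"play": "thunder"}    # thunder-like sound
    return commands

cmds = simulate_environment({"temperature_c": 22, "humidity_pct": 45, "wind_level": 5})
```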
Corresponding to the method embodiment, the present application further provides an electronic device, which is denoted as a second electronic device for convenience of description, and a schematic structural diagram of the electronic device is shown in fig. 3, where the electronic device may include:
a display unit 31 for displaying information, such as displaying all displayable content of images, text, symbols, etc.
A memory 32 for storing at least one set of instructions.
A processor 33 for invoking and executing a set of instructions stored in memory 32, by executing the set of instructions to:
first virtual state information of the first virtual object is read.
The first virtual object may be a virtual character or a virtual monster, etc. in a virtual environment presented by the first electronic device. The first virtual state information of the first virtual object may refer to attribute information of the virtual object, for example: name of virtual object, title, wound status, attachment to body, etc.
The first electronic device and the second electronic device may be the same electronic device or different electronic devices.
A pattern corresponding to the first virtual state information is determined.
The pattern is mapped to a particular location on the user corresponding to the first virtual object. The second electronic device may map the pattern to a specific location on the user by controlling the devices around the user.
According to the electronic device provided by this embodiment of the application, the first virtual state information of the first virtual object is converted into a pattern that is mapped to a specific position on the body of the user corresponding to the first virtual object, so that the user can physically experience the virtual state of the virtual object during online interaction, which adds a new way for the user to participate in multi-user online interaction.
In an optional embodiment, when mapping the pattern to a specific position on the user corresponding to the first virtual object, the processor 33 may specifically be configured to:
and irradiating the pattern to a specific position on the user through lighting equipment or projection equipment around the user.
In an alternative embodiment, when the processor 33 irradiates the pattern to a specific position on the user through the lighting device, the processor may specifically be configured to:
adjusting the angle of the lighting device so that its light reaches the specific position on the user;
and adjusting the lighting effect of the lighting device so that the pattern is displayed at the specific position.
In an optional embodiment, when the processor 33 irradiates the pattern to a specific position on the user through the projection device, the processor may specifically be configured to:
adjusting an angle of the projection device such that the projection device projects the pattern to a particular location on the user.
In an optional embodiment, when mapping the pattern to a specific position on the user corresponding to the first virtual object, the processor 33 may specifically be configured to:
controlling a portion of the smart garment worn by the user located at the specific location to display the pattern.
In an alternative embodiment, when the processor 33 controls the portion of the smart garment worn by the user and located at the specific position to display the pattern, the processor may be specifically configured to:
and controlling the light effect of the full-color LED lamp positioned at the specific position of the user on the intelligent clothes, so that the pattern is displayed at the specific position of the user.
In an optional embodiment, when mapping the pattern to a specific position on the user corresponding to the first virtual object, the processor 33 may specifically be configured to:
determine a preset position on the user corresponding to the first virtual state information as the specific position, or determine the first part of the first virtual object on which the virtual state represented by the first virtual state information is displayed and determine the corresponding first part on the user as the specific position;
mapping the pattern to the specific location.
In an optional embodiment, when mapping the pattern to a specific position on the user corresponding to the first virtual object, the processor 33 may specifically be configured to:
acquiring an image of the user;
establishing a three-dimensional model of the user according to the image;
identifying the specific position and a physical space position of the specific position according to the three-dimensional model;
mapping the pattern to the particular location according to the physical space location.
In an alternative embodiment, the processor 33 is further configured to:
reading second virtual state information of a virtual environment where the first virtual object is located;
determining an environmental parameter corresponding to the second virtual state information;
and controlling devices around the user according to the environment parameters to simulate the virtual environment.
The following describes embodiments of the present application, taking a first virtual object as a virtual object of a user in a game as an example.
When a user plays a network confrontation game or a live online game, the user's virtual character in the game is usually associated with the account used to log in to the game application, so the virtual character corresponding to the user can be determined from that account.
Suppose a user plays an online game indoors with a game console. A plurality of lighting devices and a plurality of cameras are arranged in an area of the ceiling above the user, and the illumination angle, color, and brightness of each lighting device can be adjusted.
During the game, images of the user are collected through the plurality of cameras, and a three-dimensional model of the user is constructed according to the collected images. And identifying different parts of the user and the physical space position of each part according to the three-dimensional model.
The control device (an electronic device separate from the game console) can communicate with the game console through a wireless communication module and read the state information of the virtual character associated with the login account, including: name, title, injury status, and environment information.
A pattern of the name is generated in a predetermined font;
a preset pattern corresponding to the title is acquired;
in this embodiment, the name is projected onto the user's chest and the title onto the user's forehead.
A target lighting device is determined according to the injured part (for example, the left side of the face) and the size of the wound, and the color and brightness of the target lighting device are adjusted according to the color of the wound, so that a wound pattern is formed on the corresponding part of the user.
Assume the environment information includes: a temperature of 22 °C, a humidity of 45%, and force-5 wind. The control device controls the air-conditioning equipment around the user to adjust the ambient temperature to 22 °C and the ambient humidity to 45%, and controls an electric fan to rotate at the speed corresponding to force-5 wind in order to simulate it.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the features of the embodiments and of the claims may be combined with one another to solve the technical problems described herein.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method characterized by comprising:
reading first virtual state information of a first virtual object;
determining a pattern corresponding to the first virtual state information;
mapping the pattern to a specific location on a user corresponding to the first virtual object.
2. The method of claim 1, wherein said mapping the pattern to a specific location on the user corresponding to the first virtual object comprises:
irradiating the pattern onto the specific location on the user through a lighting device or a projection device around the user.
3. The method of claim 2, wherein irradiating the pattern onto the specific location on the user through the lighting device comprises:
adjusting an angle of the lighting device so that light from the lighting device irradiates the specific location on the user; and
adjusting a lighting effect of the lighting device so that the pattern is displayed at the specific location.
4. The method of claim 2, wherein irradiating the pattern onto the specific location on the user through the projection device comprises:
adjusting an angle of the projection device so that the projection device projects the pattern onto the specific location on the user.
5. The method of claim 1, wherein said mapping the pattern to a specific location on the user corresponding to the first virtual object comprises:
controlling a portion, located at the specific location, of a smart garment worn by the user to display the pattern.
6. The method of claim 5, wherein the controlling the portion of the smart garment worn by the user located at the particular location to display the pattern comprises:
controlling a light effect of full-color LED lamps on the smart garment at the specific location, so that the pattern is displayed at the specific location on the user.
7. The method of claim 1, wherein said mapping the pattern to a specific location on the user corresponding to the first virtual object comprises:
determining a preset position on the user corresponding to the first virtual state information as the specific location; or determining a first part, on the first virtual object, of a virtual state represented by the first virtual state information, and determining the corresponding first part on the user as the specific location; and
mapping the pattern to the specific location.
8. The method of claim 1, wherein said mapping the pattern to a specific location on the user corresponding to the first virtual object comprises:
acquiring an image of the user;
establishing a three-dimensional model of the user according to the image;
identifying the specific location and a physical-space position of the specific location according to the three-dimensional model; and
mapping the pattern to the specific location according to the physical-space position.
9. The method of claim 1, further comprising:
reading second virtual state information of a virtual environment in which the first virtual object is located;
determining an environmental parameter corresponding to the second virtual state information; and
controlling devices around the user according to the environmental parameter to simulate the virtual environment.
10. An electronic device, comprising:
a display unit for displaying information;
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
reading first virtual state information of a first virtual object;
determining a pattern corresponding to the first virtual state information;
mapping the pattern to a specific location on a user corresponding to the first virtual object.
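The three steps of claim 1 (read state, determine pattern, map to a location on the user) can be sketched end to end as below. This is a minimal illustration under assumed interfaces; the function names, the state schema, and the list standing in for a projector are all hypothetical.

```python
# Hypothetical end-to-end sketch of the claimed method: read virtual state,
# determine a pattern, and map it to a location on the user.

def read_virtual_state(game_host):
    # Assumed interface: the host exposes the first virtual object's state.
    return game_host["first_virtual_object"]

def determine_pattern(state):
    # Assumption: an injured state maps to a wound pattern at the injured part.
    if state.get("injury"):
        return {"pattern": "wound", "location": state["injury"]["part"]}
    return {"pattern": "none", "location": None}

def map_pattern_to_user(job, projector):
    # Stand-in for aiming a real projector or LED array at the body part.
    projector.append(job)
    return projector

host = {"first_virtual_object": {"injury": {"part": "left_face"}}}
projector = []
map_pattern_to_user(determine_pattern(read_virtual_state(host)), projector)
```

In a real system, `projector` would be replaced by the lighting, projection, or smart-garment control paths of claims 2 through 6.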
CN201811138601.3A 2018-09-28 2018-09-28 Information processing method and electronic equipment Active CN109260706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811138601.3A CN109260706B (en) 2018-09-28 2018-09-28 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN109260706A true CN109260706A (en) 2019-01-25
CN109260706B CN109260706B (en) 2021-02-19

Family

ID=65198229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811138601.3A Active CN109260706B (en) 2018-09-28 2018-09-28 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN109260706B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1797474A (en) * 2004-12-30 2006-07-05 中国科学院自动化研究所 Fast method for posting players to electronic game
JP3859361B2 (en) * 1998-07-10 2006-12-20 株式会社東芝 Information display device
EP2910014A1 (en) * 2012-10-20 2015-08-26 Intel Corporation System for dynamic projection of media
CN106213609A (en) * 2016-08-31 2016-12-14 金轶超 A kind of can be mutual garment system
CN206021359U (en) * 2016-07-26 2017-03-15 金德奎 Augmented reality equipment that can be directly interactive between user and its system
CN107469355A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Game image creation method and device, terminal device
CN108134964A (en) * 2017-11-22 2018-06-08 上海掌门科技有限公司 Net cast stage property stacking method, computer equipment and storage medium
CN108492154A (en) * 2018-04-24 2018-09-04 仝相宝 A kind of intelligent projection mapping method and its system based on the storage of block chain



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant