CN116785694A - Region determination method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN116785694A (application number CN202210266667.0A)
- Authority
- CN
- China
- Prior art keywords
- area
- game
- game area
- information
- early warning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/803—Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8017—Driving on land or water; Flying
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/80—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
- A63F2300/8082—Virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure relates to the field of computer technology, and in particular to a region determining method and device, an electronic device, and a storage medium. The region determining method includes: determining a vehicle interior area in an acquisition interface of a camera device, where the vehicle interior area is the space of the vehicle other than the vehicle devices; determining, in the vehicle interior area and based on boundary setting information, a first game area corresponding to a virtual reality (VR) game; and, if a start instruction for the VR game is received, displaying the first game area on a display interface. With the present method and device, the convenience of determining the game area can be improved, and collisions while the user experiences the VR game can be reduced.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and device for determining a region, an electronic device and a storage medium.
Background
Virtual reality (VR) is a computer simulation technology for creating and experiencing a virtual world: a computer generates a simulated environment that fuses multi-source information into interactive three-dimensional dynamic visuals and physical behaviors, immersing the user in that environment. However, when a user plays a VR game in a vehicle, the user is immersed in the simulated environment corresponding to the VR game and may therefore collide with a vehicle device inside the vehicle while moving around in the simulated environment.
Disclosure of Invention
The present disclosure provides a region determining method and apparatus, an electronic device, and a storage medium, mainly aimed at reducing collisions with vehicle devices when a user plays a VR game in a vehicle.
According to an aspect of the present disclosure, there is provided a region determining method including:
determining a vehicle interior area in an acquisition interface of a camera device, wherein the vehicle interior area is the space of the vehicle other than the vehicle devices;
determining a first game area corresponding to a Virtual Reality (VR) game in the vehicle interior area based on boundary setting information;
and if a starting instruction aiming at the VR game is received, displaying the first game area on a display interface.
Optionally, the displaying the first game area on the display interface includes:
acquiring a first display mode corresponding to the first game area, wherein the first display mode comprises one of an invisible display mode and a visible display mode;
and displaying the first game area based on the first display mode.
Optionally, after the first game area is displayed on the display interface, the method further includes:
acquiring human body image information corresponding to VR equipment;
and if the human body image information meets the first early warning condition corresponding to the first game area, displaying early warning prompt information.
Optionally, after the first game area is displayed on the display interface, the method further includes:
obtaining position information of a VR controller in VR equipment;
and if the position information meets the second early warning condition corresponding to the first game area, displaying early warning prompt information.
Optionally, the displaying the early warning prompt information includes:
acquiring a second display mode corresponding to the early warning prompt information;
and displaying the early warning prompt information by adopting the second display mode.
Optionally, the second display mode is an area processing display mode, and the displaying the early warning prompt information by adopting the second display mode includes:
performing fuzzy processing on the boundary area corresponding to the first game area to obtain a processed first game area;
and displaying the processed first game area.
According to another aspect of the present disclosure, there is provided a region determining apparatus including:
an area acquisition unit configured to determine a vehicle interior area in an image pickup device acquisition interface, the vehicle interior area being a space of the vehicle other than a vehicle device;
an area determination unit configured to determine a first game area corresponding to a virtual reality VR game in the vehicle interior area based on boundary setting information;
and the region display unit is used for displaying the first game region on a display interface if a starting instruction for the VR game is received.
Optionally, the region display unit includes a mode acquisition subunit and a region display subunit; when the first game area is displayed on the display interface:
the mode acquisition subunit is used for acquiring a first display mode corresponding to the first game area, wherein the first display mode comprises one of an invisible display mode and a visible display mode;
the region display subunit is configured to display the first game region based on the first display manner.
Optionally, the device further includes an image information acquiring unit and an information displaying unit, configured to, after the first game area is displayed on the display interface:
the image information acquisition unit is used for acquiring human body image information corresponding to the VR equipment;
the information display unit is used for displaying early warning prompt information if the human body image information meets first early warning conditions corresponding to the first game area.
Optionally, the device further includes a location information acquiring unit and an information displaying unit, configured to, after the first game area is displayed on the display interface:
the position information acquisition unit is used for acquiring position information of a VR controller in VR equipment;
the information display unit is used for displaying early warning prompt information if the position information meets the second early warning condition corresponding to the first game area.
Optionally, the information display unit includes a mode acquisition subunit and an information display subunit; when the early warning prompt information is displayed:
the mode acquisition subunit is used for acquiring a second display mode corresponding to the early warning prompt information;
the information display subunit is configured to display the early warning prompt information in the second display mode.
Optionally, the second display mode is an area processing display mode, and when the second display mode is adopted to display the early warning prompt information, the information display unit is specifically configured to:
performing fuzzy processing on the boundary area corresponding to the first game area to obtain a processed first game area;
and displaying the processed first game area.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding aspects.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of any one of the preceding aspects.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of any one of the preceding aspects.
In one or more embodiments of the present disclosure, a vehicle interior area is determined in the acquisition interface of an image pickup device, the vehicle interior area being the space of the vehicle other than the vehicle devices; a first game area corresponding to a virtual reality (VR) game is determined in the vehicle interior area based on boundary setting information; and, if a start instruction for the VR game is received, the first game area is displayed on a display interface. Because the image pickup device is controlled to acquire the vehicle interior area and the game area is then displayed directly based on the boundary setting information, the user does not need to select the area manually; this improves the convenience of area determination, reduces the probability that the human body touches a vehicle device while playing the VR game, and improves the convenience of the VR game.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 illustrates a background schematic diagram of a region determining method provided in an embodiment of the present disclosure;
FIG. 2 illustrates a system architecture diagram of a region determination method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a first area determining method according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a second area determining method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural view showing a first area determining apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic structural view showing a second area determining apparatus provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural view showing a third area determining apparatus provided in an embodiment of the present disclosure;
fig. 8 is a schematic structural view showing a fourth area determining apparatus provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural view showing a fifth area determining apparatus provided in an embodiment of the present disclosure;
Fig. 10 is a block diagram of an electronic device for implementing a region determination method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of science and technology, computer technology has matured and made users' work and daily life more convenient. A user can, for example, experience a virtual world in a simulated environment constructed with virtual reality technology. However, to reduce collisions with nearby physical objects while the user is active in the simulated environment (for example, collisions with vehicle devices when the user plays a VR game in a vehicle), the user currently has to construct, manually and before experiencing the virtual world, a safe area within which activity is allowed.
Fig. 1 illustrates a background schematic diagram of a region determining method provided by an embodiment of the present disclosure, according to some embodiments. As shown in fig. 1, when the terminal displays a simulated environment constructed by using a virtual reality technology, the terminal may acquire region coordinate data input by a user through a moving handle, and further the terminal may construct and display a corresponding security region according to the region coordinate data.
In some embodiments, fig. 2 illustrates a system architecture diagram of a region determination method provided by an embodiment of the present disclosure. As shown in fig. 2, when the handle 13 acquires the region coordinate data selected by the user, the handle 13 may send the region coordinate data to the terminal 11 through the network 12. The terminal 11 can then construct and display the corresponding security area according to the area coordinate data input through the handle 13.
It is easy to understand that, in the related art, the terminal must construct the corresponding security area from region coordinate data entered by the user, and whenever the physical environment of the user changes, the terminal has to rebuild the security area from coordinate data entered again, which affects the user's game experience. Moreover, the region coordinate data entered by the user contain only horizontal coordinates, which suits a large open physical space but not the complex physical environment inside a vehicle. In addition, after experiencing the currently constructed simulated environment and switching to the next one, the user's position has shifted because of the activity in the current environment, yet the user still assumes the position held when the security area was built; as a result, the user may step out of the security area during activity and collide with adjacent physical objects outside it.
The present disclosure is described in detail below with reference to specific examples.
In a first embodiment, as shown in fig. 3, fig. 3 shows a schematic flow chart of a first area determining method according to an embodiment of the disclosure. The method may be implemented by a computer program and executed on an apparatus for determining an area. The computer program may be integrated in an application or run as a stand-alone tool-type application.
The area determining device may be a terminal having an area determining function, including but not limited to: Virtual Reality (VR) devices, wearable devices, handheld devices, personal computers, tablet computers, vehicle devices, smartphones, computing devices, or other processing devices connected to a wireless modem, etc. Terminals may be called different names in different networks, for example: a user equipment, an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent or user equipment, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a terminal in a fifth generation mobile communication technology (5G) network, a fourth generation mobile communication technology (4G) network, a third generation mobile communication technology (3G) network, or a future evolution network, etc.
Specifically, the area determining method includes:
s101, determining an internal area of a vehicle in an acquisition interface of a camera device;
according to some embodiments, the camera means refers to means for acquiring an area inside the vehicle. The camera device is not particularly limited to a fixed camera device. The camera device includes, but is not limited to, a camera installed in the VR device, a vehicle camera, a cell phone camera, and the like.
In some embodiments, the camera acquisition interface refers to an interface in which an image acquired by the camera is located. The camera acquisition interface is not particularly limited to a certain fixed interface. For example, the camera acquisition interface may change when the location of the camera changes. When the vehicle changes, the acquisition interface of the camera device can also change.
In some embodiments, the vehicle interior region refers to the space of the vehicle interior, other than the vehicle devices, that is determined in the acquisition interface of the image pickup device. The vehicle interior region is not particularly limited to a certain fixed region. For example, when the vehicle changes, the vehicle interior region may also change. The vehicle interior area may also change when the vehicle interior components change. The vehicle interior area may also change when the position of the camera device changes.
It is readily understood that the terminal may determine the vehicle interior area in the camera acquisition interface when the terminal is exhibiting a simulated environment constructed using virtual reality technology.
S102, determining a first game area corresponding to a Virtual Reality (VR) game in an internal area of a vehicle based on boundary setting information;
according to some embodiments, a virtual reality VR game refers to a simulated environment that a terminal builds using virtual reality technology for a user to play an experienced game. The VR game is not specific to a particular fixed VR game. The VR games include, but are not limited to, shooting games, action simulation games, role playing games, local interactive games, real scene experience games, and the like.
In some embodiments, the first game area refers to an area within which the user does not collide with anything while active in the simulated environment. The first game area is not particularly limited to a fixed area. For example, when the vehicle interior region changes, the first game area may also change. When the terminal receives an area modification instruction for the first game area, the first game area may also change.
In some embodiments, the boundary setting information refers to the information used by the terminal when determining the first game area from the vehicle interior area. The boundary setting information is not particularly limited to certain fixed information. For example, the terminal may set the bottom boundary of the vehicle interior area as the bottom boundary of the first game area, and set every other boundary of the first game area 10 cm inward from the corresponding boundary of the vehicle interior area. A sketch of this kind of derivation is given after the next paragraph.
It is easy to understand that when the terminal determines that the image pickup device picks up the vehicle interior area in the interface, the terminal may determine the first game area corresponding to the virtual reality VR game in the vehicle interior area based on the boundary setting information.
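As a rough illustration of the boundary-setting example above (bottom boundary kept, every other boundary pulled 10 cm inward), the following sketch derives a first game area from a vehicle interior area. It is not part of the disclosure; the axis-aligned Box representation, the 0.10 m default inset, and the example cabin dimensions are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x_min: float; x_max: float   # left / right, metres
    y_min: float; y_max: float   # floor / top, metres
    z_min: float; z_max: float   # rear / front, metres

def first_game_area(interior: Box, inset: float = 0.10) -> Box:
    """Keep the bottom boundary, pull every other boundary `inset` metres inward."""
    return Box(
        x_min=interior.x_min + inset, x_max=interior.x_max - inset,
        y_min=interior.y_min,         y_max=interior.y_max - inset,   # bottom unchanged
        z_min=interior.z_min + inset, z_max=interior.z_max - inset,
    )

cabin = Box(0.0, 1.4, 0.0, 1.2, 0.0, 1.0)   # example vehicle interior area, metres
game_area = first_game_area(cabin)
```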
S103, if a starting instruction for the VR game is received, the first game area is displayed on the display interface.
According to some embodiments, the start instruction refers to an instruction that the terminal receives to start a game, and the start instruction may be, for example, an instruction input by a user when selecting any VR game in the terminal interface. The start instruction is not specific to a fixed instruction. The initiation instructions include, but are not limited to, click initiation instructions, voice initiation instructions, gesture initiation instructions, and the like. For example, when a user clicks a control corresponding to any VR game, the terminal may obtain a start instruction corresponding to the VR game. When the user speaks the voice information corresponding to any VR game, the terminal can acquire the voice information, and the terminal can determine to acquire the starting instruction corresponding to the VR game based on the voice information. When a user makes gesture information corresponding to any VR game, the terminal can acquire the gesture information. When the gesture information is used for starting the VR game, the terminal can acquire a starting instruction corresponding to the VR game.
In some embodiments, the presentation interface refers to an interface presented by the terminal after a simulated environment constructed using virtual reality technology. The presentation interface is not specific to a particular fixation interface. For example, the presentation interface may also change as the simulated environment being constructed changes. The presentation interface may also change when the first play area changes.
It is easy to understand that when the terminal determines the first game area corresponding to the virtual reality VR game, if the terminal receives the start instruction for the VR game, the terminal may display the first game area on the display interface.
In the embodiment of the disclosure, the vehicle interior area in the acquisition interface of the image pickup device is determined, the vehicle interior area being the space of the vehicle other than the vehicle devices; a first game area corresponding to the virtual reality VR game is determined in the vehicle interior area based on the boundary setting information; and if a start instruction for the VR game is received, the first game area is displayed on a display interface. Because the image pickup device is controlled to acquire the vehicle interior area and the game area is then displayed directly based on the boundary setting information, the user does not need to select the area manually; this improves the convenience of area determination, reduces the probability that the human body touches a vehicle device, and improves the convenience of the VR game.
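For orientation only, a hypothetical end-to-end flow of S101-S103 might look like the sketch below. The function names determine_interior_area, boundary_inset_settings, derive_game_area and show_area are placeholders standing in for device-specific functionality that the disclosure does not name.

```python
def prepare_first_game_area(camera):
    interior = determine_interior_area(camera)      # S101: scan the cabin space
    inset = boundary_inset_settings()               # boundary setting information
    return derive_game_area(interior, inset)        # S102: derive the first game area

def handle_start_instruction(display, game_area):
    show_area(display, game_area)                   # S103: show on the display interface
```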
Referring to fig. 4, fig. 4 is a flowchart illustrating a second area determining method according to an embodiment of the disclosure. Specifically, the area determining method includes:
s201, determining a vehicle interior region in a camera acquisition interface;
the specific process is as above, and will not be described here again.
According to some embodiments, the execution body of an embodiment of the present disclosure may be, for example, a VR device.
According to some embodiments, the manner in which the terminal determines the vehicle interior region in the camera acquisition interface includes, but is not limited to, acquisition by an interior camera, acquisition by an exterior camera, acquisition by an interior and exterior camera, and so forth.
In some embodiments, when the terminal determines, via the internal camera, that the camera device is capturing an area of the vehicle interior in the interface, the terminal may scan the area of the vehicle interior via a camera installed in the VR device. For example, when at least two cameras are internally installed in the VR device, the terminal may scan a panoramic image of an area inside the vehicle based on the at least two cameras. The terminal may also scan a panoramic image of the interior area of the vehicle by controlling the VR device to rotate around the interior area of the vehicle.
In some embodiments, when the terminal determines that the camera device collects the vehicle interior region in the interface through the external camera, the terminal may obtain the scan information of the external camera through the remote connection, thereby obtaining the vehicle interior region. The external camera is not particularly limited to a fixed camera. The external camera refers to a camera that is not provided on the terminal. The external cameras include, but are not limited to, vehicle cameras, cell phone cameras, and the like.
In some embodiments, when the terminal determines, through the internal and external cameras, that the camera device collects an area inside the vehicle in the interface, the terminal may obtain a panoramic image of the area inside the vehicle based on an image scanned by the camera installed in the VR device and an image scanned by the external camera, and further obtain the area inside the vehicle.
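As an illustrative (and intentionally conservative) way of combining interior-area estimates from the headset cameras and an external camera, one could intersect the estimated free-space boxes, so that only space every camera agrees is free is kept. This is a sketch under the assumption that the estimates are axis-aligned boxes already registered to a shared coordinate frame; real scan fusion would be considerably more involved.

```python
# Each estimate: (x_min, x_max, y_min, y_max, z_min, z_max) in metres, shared frame.
def fuse_interior_estimates(estimates):
    # Intersection errs toward a smaller, safer vehicle interior area.
    xs0, xs1, ys0, ys1, zs0, zs1 = zip(*estimates)
    return (max(xs0), min(xs1), max(ys0), min(ys1), max(zs0), min(zs1))

interior = fuse_interior_estimates([
    (0.00, 1.45, 0.0, 1.20, 0.0, 1.05),   # from the VR headset cameras
    (0.02, 1.40, 0.0, 1.18, 0.0, 1.00),   # from an external vehicle camera
])
```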
It is readily understood that the terminal may determine the vehicle interior area in the camera acquisition interface when the terminal is exhibiting a simulated environment constructed using virtual reality technology.
S202, determining a first game area corresponding to a Virtual Reality (VR) game in an internal area of a vehicle based on boundary setting information;
the specific process is as above, and will not be described here again.
According to some embodiments, the manner in which the terminal obtains the boundary setting information includes, but is not limited to, a manner based on preset information, a manner based on user settings, and so on. Therefore, the terminal can improve the accuracy of the establishment of the first game area by determining the first game area through the boundary setting information.
In some embodiments, when the terminal acquires the boundary information corresponding to any sub-region in the vehicle interior region, the terminal may, for example, look up the corresponding boundary setting information in the data stored inside the terminal. The terminal may determine a game sub-area corresponding to that sub-region based on the boundary setting information corresponding to it. For example, when the terminal acquires the window boundary information corresponding to a window region in the vehicle interior region, the terminal may find, in the internally stored data, boundary setting information for the window region indicating that the game area boundary is to be established 10 cm from the window boundary.
In some embodiments, when the terminal obtains the boundary information corresponding to any sub-region in the vehicle interior region, but the terminal cannot find the corresponding boundary setting information in the data stored in the interior, the terminal may send out the reminding information, where the reminding information is used to remind the user to input the boundary setting information corresponding to the sub-region. The terminal may determine a game sub-area corresponding to the sub-area based on the boundary setting information input by the user, and store the boundary setting information.
For example, when the terminal acquires the ornament boundary information corresponding to any ornament area in the vehicle interior area, but the terminal cannot find the corresponding boundary setting information in the internally stored data, the terminal may acquire the boundary setting information input by the user, which may be, for example, information to establish a game area boundary at 5cm from the ornament boundary, and store the boundary setting information.
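The lookup-then-prompt behaviour described in the last two paragraphs could be sketched as follows. The stored settings dictionary, the specific sub-region names, and prompt_user_for_inset are illustrative assumptions rather than elements of the disclosure.

```python
STORED_SETTINGS = {"window": 0.10}   # metres from the window boundary (example)

def boundary_inset_for(sub_region: str) -> float:
    if sub_region not in STORED_SETTINGS:
        inset = prompt_user_for_inset(sub_region)   # reminder information: ask the user
        STORED_SETTINGS[sub_region] = inset         # store the new boundary setting
    return STORED_SETTINGS[sub_region]
```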
It is easy to understand that when the terminal acquires the vehicle interior area, the terminal may determine a first game area corresponding to the virtual reality VR game in the vehicle interior area based on the boundary setting information.
S203, if a starting instruction for the VR game is received, displaying a first game area on a display interface;
The specific process is as above, and will not be described here again.
According to some embodiments, when the terminal receives a start instruction for the VR game and displays the first game area on the display interface, the terminal may obtain a first display manner corresponding to the first game area and display the first game area based on the first display manner. In this way, the user can be prompted about the position of the human body within the first game area and its surroundings, which reduces collisions between the human body and the vehicle devices while the user experiences the VR game, and further improves the game experience of the user.
In some embodiments, the first presentation means refers to the presentation means corresponding to the first game area. The first display mode is not particularly limited to a fixed display mode. The first display mode includes, but is not limited to, an invisible display mode, a visible display mode, and the like.
For example, the first display mode may specify a display duration. The terminal may display the boundary corresponding to the first game area against the physical environment in which it is located, hide the boundary after displaying it for 5 s, and then begin to display the simulated environment corresponding to the VR game. The terminal may also display the boundary corresponding to the first game area within the simulated environment corresponding to the VR game, and hide the boundary after displaying it for 10 s so as to display the simulated environment.
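A minimal sketch of a visible first display mode of this kind, showing the boundary for a configurable duration (e.g. the 5 s or 10 s in the examples above) before hiding it, is given below. show_boundary, hide_boundary and show_vr_scene are placeholders for the rendering layer of whatever device is used.

```python
import time

def present_game_area(game_area, visible: bool = True, duration_s: float = 5.0):
    if visible:
        show_boundary(game_area)   # overlay the boundary (e.g. over the physical view)
        time.sleep(duration_s)     # keep it visible for the configured duration
        hide_boundary(game_area)
    show_vr_scene()                # then continue with the simulated environment
```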
According to some embodiments, when the terminal receives a start instruction for the VR game and the first game area is displayed on the display interface, the terminal may obtain displacement information of the VR device, re-determine a second game area corresponding to the VR game in the vehicle interior area based on the displacement information and the boundary setting information, and display the second game area on the display interface.
For example, when the terminal receives a start instruction for the VR game and displays the first game area on the display interface, if the terminal detects that the user is at the edge position of the first game area or is located outside the first game area, the terminal may re-control the image capturing device to obtain the vehicle interior area based on the displacement information of the VR device, and re-determine the second game area corresponding to the VR game in the vehicle interior area based on the boundary setting information.
According to some embodiments, when the terminal receives a start instruction for the VR game and the first game area is displayed on the display interface, the terminal may further control the camera device to acquire the vehicle interior area once every preset period, re-determine a third game area corresponding to the VR game in the vehicle interior area based on the boundary setting information, and display the third game area on the display interface.
For example, when the terminal controls the image pickup device to determine the vehicle interior area in the acquisition interface once every preset period of time, the terminal may determine whether the vehicle interior area has changed. If the terminal determines that the vehicle interior area in the acquisition interface has changed, the terminal can acquire the change information corresponding to the vehicle interior area. The change information includes, but is not limited to, vehicle interior region change information caused by seat adjustment, vehicle interior region change information caused by passenger change, and the like. The terminal may then re-determine a third game area corresponding to the VR game in the vehicle interior area based on the boundary setting information and display the third game area on the display interface.
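The periodic re-determination just described could be sketched as a simple polling loop; all helper names (determine_interior_area, derive_game_area, boundary_inset_settings, show_area, vr_game_is_running) are assumed placeholders, and the 2 s period is an arbitrary illustrative value.

```python
import time

def monitor_interior_area(camera, display, period_s: float = 2.0):
    previous = determine_interior_area(camera)
    while vr_game_is_running():
        time.sleep(period_s)
        current = determine_interior_area(camera)
        if current != previous:                     # seat adjusted, passenger moved, ...
            new_area = derive_game_area(current, boundary_inset_settings())
            show_area(display, new_area)            # the "third game area" in the text
            previous = current
```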
It is easy to understand that when the terminal determines the first game area corresponding to the virtual reality VR game, if the terminal receives the start instruction for the VR game, the terminal may display the first game area on the display interface.
S204, acquiring human body image information corresponding to VR equipment;
according to some embodiments, the human body image information refers to image information corresponding to a user using the VR device. The human body image information does not particularly refer to certain fixed information. For example, when a user using a VR device changes, the body image information may also change.
In some embodiments, the terminal may control the camera to capture an image of the user and determine size data of the user based on the image of the user. The size data includes, but is not limited to, body size data such as height, arm length, etc. Further, the terminal may establish a mirror model of the user based on the size data at a location in the first game area where the user is located.
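Loosely sketching the paragraph above, size data could be estimated from keypoints in the user image and used to instantiate a mirror model at the user's position. estimate_keypoints, MirrorModel and the metres-per-pixel scale are hypothetical names introduced only for this sketch and are not part of the disclosure.

```python
def build_mirror_model(user_image, user_position, metres_per_pixel: float):
    kp = estimate_keypoints(user_image)   # e.g. head, feet, left/right wrist pixel coords
    height = abs(kp["head"][1] - kp["feet"][1]) * metres_per_pixel
    arm_span = abs(kp["left_wrist"][0] - kp["right_wrist"][0]) * metres_per_pixel
    return MirrorModel(position=user_position, height=height, arm_span=arm_span)
```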
It is easy to understand that when the terminal is starting the VR game, the terminal may acquire the human body image information corresponding to the VR device.
S205, if the human body image information meets a first early warning condition corresponding to the first game area, displaying early warning prompt information;
according to some embodiments, the first pre-warning condition refers to a condition adopted when the terminal prompts the human body image information to approach a boundary where there is a risk of collision in the first game area. The first pre-warning condition is not specific to a certain fixed condition. When the terminal acquires a condition modification instruction aiming at the first early warning condition, the first early warning condition can also be changed.
In some embodiments, the boundary at which there is a risk of collision refers to a boundary at which the distance from the human body to the physical object at which the collision occurs is lower than a preset distance. For example, if the terminal detects that the shortest distance between any human body image boundary in the first game area and the external physical object is 1m, the terminal judges that the human body image boundary is not the boundary with collision risk.
In some embodiments, the early warning prompt information refers to information sent when the terminal detects that the human body image information meets a first early warning condition corresponding to the first game area. The early warning prompt information does not particularly refer to certain fixed information. The early warning prompt information includes, but is not limited to, boundary prompt information, voice prompt information, vibration prompt information, and the like. When the terminal receives an information modification instruction aiming at the early warning prompt information, the early warning prompt information can also be changed.
For example, when the terminal detects that the shortest distance of the human body image information from the boundary of the first game area corresponding to the window area is less than 10cm, the terminal may display the boundary information corresponding to the first game area. The terminal may also send out preset voice information to prompt the user.
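A sketch of this first early-warning check, following the 10 cm window example above, might look like the following. distance_to_boundary, show_boundary and play_voice_prompt are placeholders, and the threshold and prompt text are illustrative assumptions.

```python
def check_body_warning(mirror_model, game_area, threshold_m: float = 0.10) -> bool:
    d = distance_to_boundary(mirror_model, game_area, side="window")
    if d < threshold_m:                              # first early warning condition met
        show_boundary(game_area)                     # boundary prompt information
        play_voice_prompt("Approaching the edge of the game area")
        return True
    return False
```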
It is easy to understand that, when the terminal obtains the human body image information corresponding to the VR device, if the terminal determines that the human body image information meets the first early warning condition corresponding to the first game area, the terminal may display early warning prompt information.
S206, obtaining position information of a VR controller in VR equipment;
according to some embodiments, VR controllers refer to controllers that in combination with VR devices capture user gestures to simulate objects in a scene, thereby enabling a user to interact with the simulated scene. The VR controller is not specifically limited to a fixed controller. The VR controller includes, but is not limited to, a handle, a wristband, and the like.
In some embodiments, the location information refers to minimum location information of the VR controller from a boundary corresponding to the first game area. The position information is not particularly limited to a certain fixed position information. For example, when the first game area is changed, the position information may also be changed. The location information may also change when the location of the VR device changes.
It is readily understood that the terminal may obtain location information of the VR controller in the VR device when the terminal is starting the VR game.
S207, if the position information meets the second early warning condition corresponding to the first game area, displaying early warning prompt information.
According to some embodiments, the second pre-warning condition refers to the condition the terminal uses to prompt that the VR controller is too close to a boundary of the first game area where there is a risk of collision. The second pre-warning condition is not specific to a certain fixed condition. When the terminal acquires a condition modification instruction for the second early warning condition, the second early warning condition may also change.
According to some embodiments, when the terminal displays the early warning prompt information, the terminal may acquire a second display mode corresponding to the early warning prompt information, and display the early warning prompt information in the second display mode.
In some embodiments, the second display mode refers to the display mode the terminal adopts to present the early warning prompt information when it detects that the position information of the VR controller meets the second early warning condition corresponding to the first game area, or that the human body image information meets the first early warning condition corresponding to the first game area. The second display mode is not particularly limited to a certain fixed mode. For example, the second display mode may be an area processing display mode or a text display mode.
In some embodiments, when the terminal displays the early warning prompt information in an area processing display mode, the terminal can perform fuzzy processing on the boundary area corresponding to the first game area to obtain a processed first game area, and display the processed first game area, so that the accuracy of displaying the early warning prompt information can be improved.
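As one illustrative realisation of such an "area processing" display mode, a pixel band along the game-area boundary in the rendered frame could be blurred so the warning is noticeable without leaving the game. The sketch uses OpenCV's GaussianBlur; the frame is assumed to be a NumPy BGR image and the band rectangle a hypothetical output of a boundary lookup, neither of which is specified by the disclosure.

```python
import cv2

def blur_boundary_band(frame, band):
    y0, y1, x0, x1 = band   # pixel rectangle covering the boundary area of the game area
    frame[y0:y1, x0:x1] = cv2.GaussianBlur(frame[y0:y1, x0:x1], (21, 21), 0)
    return frame
```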
It is easy to understand that, when the terminal obtains the position information of the VR controller in the VR device, if the terminal determines that the position information meets the second early warning condition corresponding to the first game area, the terminal may display early warning prompt information.
In the embodiment of the disclosure, the vehicle interior area is determined in the acquisition interface of the camera device, and a first game area corresponding to the virtual reality VR game is determined in the vehicle interior area based on the boundary setting information. In this way, the area corresponding to the VR game is acquired by controlling the image pickup device to acquire the vehicle interior area, and the user does not need to manually select the area, which improves the convenience of area determination. Secondly, if a start instruction for the VR game is received, the first game area is displayed on the display interface, so that the game area can be displayed directly when the start instruction is received, which reduces the probability that the human body touches a vehicle device and further improves the game experience of the user. In addition, human body image information corresponding to the VR equipment is acquired, and early warning prompt information is displayed if the human body image information meets the first early warning condition corresponding to the first game area; position information of the VR controller in the VR device can also be obtained, and early warning prompt information is displayed if the position information meets the second early warning condition corresponding to the first game area. By giving an early warning prompt when the user is at risk of collision, the probability that the human body touches a vehicle device is reduced, and the game experience of the user is further improved.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the personal information of the user comply with the relevant laws and regulations and do not violate public order and good customs.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Referring to fig. 5, a schematic structural diagram of a first area determining apparatus according to an exemplary embodiment of the present disclosure is shown. The region determining means may be implemented as all or part of the apparatus by software, hardware or a combination of both. The area determination apparatus 500 includes an area acquisition unit 501, an area determination unit 502, and an area display unit 503, wherein:
a region acquisition unit 501 configured to determine a vehicle interior region in the image pickup device acquisition interface, the vehicle interior region being a space of the vehicle other than the vehicle device;
an area determination unit 502 configured to determine a first game area corresponding to the virtual reality VR game in the vehicle interior area based on the boundary setting information;
the area displaying unit 503 is configured to display the first game area on the display interface if a start instruction for the VR game is received.
Fig. 6 illustrates a schematic structural diagram of a second area determining apparatus provided in an embodiment of the present disclosure, according to some embodiments. As shown in fig. 6, the region presentation unit 503 includes a pattern acquisition subunit 513 and a region presentation subunit 523; the area displaying unit 503 is configured to, when the first game area is displayed on the display interface:
the mode obtaining subunit 513 is configured to obtain a first display mode corresponding to the first game area, where the first display mode includes one of an invisible display mode and a visible display mode;
the region presentation subunit 523 is configured to present the first game region based on the first presentation manner.
Fig. 7 illustrates a schematic structural diagram of a third area determining apparatus provided in an embodiment of the present disclosure, according to some embodiments. As shown in fig. 7, the area determination apparatus 500 further includes an image information acquisition unit 504 and an information display unit 505 for, after displaying the first game area on the display interface:
an image information obtaining unit 504, configured to obtain human body image information corresponding to the VR device;
the information display unit 505 is configured to display early warning prompt information if the human body image information meets a first early warning condition corresponding to the first game area.
Fig. 8 illustrates a schematic structural diagram of a fourth area determining apparatus provided in an embodiment of the present disclosure, according to some embodiments. As shown in fig. 8, the area determining apparatus 500 further includes a position information acquiring unit 506 and an information displaying unit 505 for, after displaying the first game area on the display interface:
A location information obtaining unit 506, configured to obtain location information of a VR controller in the VR device;
the information display unit 505 is configured to display the early warning prompt information if the location information meets the second early warning condition corresponding to the first game area.
Fig. 9 illustrates a schematic structural diagram of a fifth area determining apparatus provided in an embodiment of the present disclosure, according to some embodiments. As shown in fig. 9, the information display unit 505 includes a mode acquisition subunit 515 and an information display subunit 525; when the early warning prompt information is displayed:
a mode obtaining subunit 515, configured to obtain a second display mode corresponding to the early warning prompt information;
the information display subunit 525 is configured to display the early warning prompt information in the second display mode.
According to some embodiments, the second display mode is an area processing display mode, and when the second display mode is adopted to display the early warning prompt information, the information display subunit 525 is specifically configured to:
performing fuzzy processing on the boundary area corresponding to the first game area to obtain a processed first game area;
and displaying the processed first game area.
It should be noted that, when the area determining apparatus provided in the foregoing embodiments executes the area determining method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the area determining apparatus provided in the foregoing embodiments and the area determining method embodiments belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated here.
The foregoing embodiment numbers of the present disclosure are merely for description and do not represent advantages or disadvantages of the embodiments.
In the embodiment of the disclosure, determining, by an area acquisition unit, an area inside the vehicle in an acquisition interface of the image pickup device, the area inside the vehicle being a space of the vehicle except for a vehicle device; the area determination unit determines a first game area corresponding to the virtual reality VR game in the vehicle interior area based on the boundary setting information; and if the region display unit receives a starting instruction aiming at the VR game, displaying the first game region on a display interface. Therefore, when the region corresponding to the VR game is acquired by controlling the image pickup device to acquire the region inside the vehicle, the game region can be directly displayed without manually selecting the region by a user, the convenience of region determination can be improved, the touch probability of a human body and a vehicle device is reduced, and the convenience of the VR game is improved.
In the technical scheme of the disclosure, the acquisition, storage, application and other handling of the personal information of the user comply with the relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 10 shows a schematic block diagram of an example electronic device 1000 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the apparatus 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the device 1000 can also be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
Various components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and communication unit 1009 such as a network card, modem, wireless communication transceiver, etc. Communication unit 1009 allows device 1000 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1001 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1001 performs the respective methods and processes described above, for example, the area determination method. For example, in some embodiments, the area determination method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1000 via ROM 1002 and/or communication unit 1009. When the computer program is loaded into RAM 1003 and executed by computing unit 1001, one or more steps of the area determination method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the area determination method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (10)
1. A method of determining a region, comprising:
determining a vehicle interior area in an acquisition interface of a camera device, wherein the vehicle interior area is a space inside the vehicle other than vehicle equipment;
determining a first game area corresponding to a Virtual Reality (VR) game in the vehicle interior area based on boundary setting information;
and if a start instruction for the VR game is received, displaying the first game area on a display interface.
2. The method of claim 1, wherein the displaying the first game area on a display interface comprises:
acquiring a first display mode corresponding to the first game area, wherein the first display mode comprises one of an invisible display mode and a visible display mode;
and displaying the first game area based on the first display mode.
3. The method of claim 1, wherein after the displaying the first game area on the display interface, the method further comprises:
acquiring human body image information corresponding to VR equipment;
and if the human body image information meets a first early warning condition corresponding to the first game area, displaying early warning prompt information.
4. The method of claim 1, wherein after the displaying the first game area on the display interface, the method further comprises:
obtaining position information of a VR controller in VR equipment;
and if the position information meets a second early warning condition corresponding to the first game area, displaying early warning prompt information.
5. The method according to claim 3 or 4, wherein the displaying the early warning prompt information comprises:
acquiring a second display mode corresponding to the early warning prompt information;
and displaying the early warning prompt information in the second display mode.
6. The method of claim 5, wherein the second display mode is an area processing display mode, and the displaying the early warning prompt information in the second display mode comprises:
performing blur processing on a boundary area corresponding to the first game area to obtain a processed first game area;
and displaying the processed first game area.
7. An area determining apparatus, comprising:
an area acquisition unit configured to determine a vehicle interior area in an acquisition interface of a camera device, the vehicle interior area being a space inside the vehicle other than vehicle equipment;
an area determination unit configured to determine a first game area corresponding to a virtual reality VR game in the vehicle interior area based on boundary setting information;
and an area display unit configured to display the first game area on a display interface if a start instruction for the VR game is received.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
9. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-6.
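For readability only, the following minimal Python sketch illustrates one possible reading of the flow recited in claims 1-6: determining the vehicle interior area from the camera's acquisition interface, deriving the first game area from boundary setting information, displaying it when a start instruction is received, and raising an early-warning prompt (with boundary blurring) when tracked body points or the VR controller leave the area. All names, data structures, and numeric values are illustrative assumptions and do not reflect the actual implementation disclosed in this application.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max), metres
Point = Tuple[float, float]


@dataclass
class GameArea:
    bounds: Box
    display_mode: str = "visible"          # "visible" or "invisible"


def determine_interior_area(cabin_box: Box, equipment_boxes: List[Box]) -> Box:
    """Approximate the vehicle interior area: the cabin space seen in the
    camera's acquisition interface minus the space occupied by vehicle
    equipment (assumed here to sit along the front edge of the cabin)."""
    x0, y0, x1, y1 = cabin_box
    front = max((b[3] for b in equipment_boxes), default=y0)
    return (x0, max(y0, front), x1, y1)


def determine_first_game_area(interior: Box, margin: float) -> GameArea:
    """Apply boundary setting information (here a safety margin in metres)
    to the interior area to obtain the first game area."""
    x0, y0, x1, y1 = interior
    return GameArea((x0 + margin, y0 + margin, x1 - margin, y1 - margin))


def inside(p: Point, box: Box) -> bool:
    """True if point p lies within the axis-aligned box."""
    return box[0] <= p[0] <= box[2] and box[1] <= p[1] <= box[3]


def on_start_instruction(area: GameArea) -> None:
    """Display the first game area when a start instruction for the VR game
    is received, honouring the configured display mode."""
    if area.display_mode == "visible":
        print(f"Drawing game-area boundary at {area.bounds}")
    else:
        print("Game area active, boundary hidden (invisible display mode)")


def check_early_warning(area: GameArea,
                        body_points: List[Point],
                        controller_pos: Point) -> None:
    """Raise the early-warning prompt if tracked body points (first condition)
    or the VR controller position (second condition) leave the game area."""
    out_of_area = any(not inside(p, area.bounds) for p in body_points) \
        or not inside(controller_pos, area.bounds)
    if out_of_area:
        # Area processing display mode: blur the boundary region as the prompt.
        print("Early warning: blurring the game-area boundary to alert the player")


if __name__ == "__main__":
    cabin = (0.0, 0.0, 1.6, 1.4)            # cabin box detected in the camera frame
    equipment = [(0.0, 0.0, 1.6, 0.5)]      # e.g. a seat row occupying the front
    interior = determine_interior_area(cabin, equipment)
    game_area = determine_first_game_area(interior, margin=0.1)
    on_start_instruction(game_area)
    check_early_warning(game_area,
                        body_points=[(0.8, 1.0)],
                        controller_pos=(1.7, 1.0))   # controller outside the area
```

In this sketch the display is simulated with print statements; an actual implementation would render the boundary and the blurred warning region on the VR device's display interface.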
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210266667.0A CN116785694A (en) | 2022-03-16 | 2022-03-16 | Region determination method and device, electronic equipment and storage medium |
PCT/CN2023/080162 WO2023174111A1 (en) | 2022-03-16 | 2023-03-07 | Region determination method and apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116785694A (en) | 2023-09-22 |
Family
ID=88022188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210266667.0A Pending CN116785694A (en) | 2022-03-16 | 2022-03-16 | Region determination method and device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116785694A (en) |
WO (1) | WO2023174111A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10656704B2 (en) * | 2017-05-10 | 2020-05-19 | Universal City Studios Llc | Virtual reality mobile pod |
US20190033989A1 (en) * | 2017-07-31 | 2019-01-31 | Google Inc. | Virtual reality environment boundaries using depth sensors |
US10832477B2 (en) * | 2017-11-30 | 2020-11-10 | International Business Machines Corporation | Modifying virtual reality boundaries based on usage |
US10901081B2 (en) * | 2018-10-02 | 2021-01-26 | International Business Machines Corporation | Virtual reality safety |
CN113171614A (en) * | 2021-05-28 | 2021-07-27 | 努比亚技术有限公司 | Auxiliary control method and device for game carrier and computer readable storage medium |
- 2022-03-16: Application CN202210266667.0A filed in China (publication CN116785694A, status: pending)
- 2023-03-07: PCT application PCT/CN2023/080162 filed (publication WO2023174111A1, status: unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2023174111A1 (en) | 2023-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3628381A1 (en) | Game picture display method and apparatus, storage medium and electronic device | |
JP2021526698A (en) | Image generation methods and devices, electronic devices, and storage media | |
CN112527115A (en) | User image generation method, related device and computer program product | |
CN113359995B (en) | Man-machine interaction method, device, equipment and storage medium | |
CN113033346B (en) | Text detection method and device and electronic equipment | |
CN108665510B (en) | Rendering method and device of continuous shooting image, storage medium and terminal | |
CN112181141A (en) | AR positioning method, AR positioning device, electronic equipment and storage medium | |
US10872455B2 (en) | Method and portable electronic device for changing graphics processing resolution according to scenario | |
CN113419865A (en) | Cloud resource processing method, related device and computer program product | |
CN113160270A (en) | Visual map generation method, device, terminal and storage medium | |
CN116785694A (en) | Region determination method and device, electronic equipment and storage medium | |
CN115643445A (en) | Interaction processing method and device, electronic equipment and storage medium | |
CN113625878A (en) | Gesture information processing method, device, equipment, storage medium and program product | |
CN114065783A (en) | Text translation method, device, electronic equipment and medium | |
CN113327311A (en) | Virtual character based display method, device, equipment and storage medium | |
CN108540726B (en) | Method and device for processing continuous shooting image, storage medium and terminal | |
CN116071422B (en) | Method and device for adjusting brightness of virtual equipment facing meta-universe scene | |
CN116382475B (en) | Sight line direction control, sight line communication method, device, equipment and medium | |
CN114327059B (en) | Gesture processing method, device, equipment and storage medium | |
CN116824014B (en) | Data generation method and device for avatar, electronic equipment and medium | |
US20220083766A1 (en) | Computer program, server, terminal device, system, and method | |
CN114398131B (en) | Information display method, device, equipment, medium and program product | |
CN114549697B (en) | Image processing method, device, equipment and storage medium | |
CN109636898B (en) | 3D model generation method and terminal | |
CN116612755A (en) | Voice interaction method, device and equipment of vehicle-mounted terminal and storage medium |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination