CN115177954A - Game interaction method, device and equipment based on view container and storage medium - Google Patents


Publication number
CN115177954A
CN115177954A (application number CN202210651093.9A)
Authority
CN
China
Prior art keywords
game
view container
scene
target
map
Prior art date
Legal status
Pending (assumption, not a legal conclusion)
Application number
CN202210651093.9A
Other languages
Chinese (zh)
Inventor
张天昊
黎艳秋
唐寅
盛晓彤
王海山
李劭
黄敏
陈仕军
Current Assignee
Yunnan Tengyun Information Industry Co., Ltd.
Original Assignee
Yunnan Tengyun Information Industry Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Yunnan Tengyun Information Industry Co., Ltd.
Priority to CN202210651093.9A
Publication of CN115177954A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • A63F13/822: Strategy games; Role-playing games
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/6009: Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions specially adapted for executing a specific type of game
    • A63F2300/807: Role playing or strategy games

Abstract

The view-container-based game interaction method, apparatus, device, and storage medium respond to a trigger operation by a user on a scene opening element of a game and acquire the scene map information of that element; acquire a target view container and a target point location layer corresponding to the scene opening element; and display a static map corresponding to the target view container in the game screen of the game through the view container component, with the target point location layer corresponding to the position parameter of the target position displayed superimposed at that target position of the static map. In this way, each view container in the view container component corresponds to the static map of a different indoor scene, and the point location layers associated with the labeled positions in the static map correspond to games: the view container carries the data of the offline indoor scene, while the point location layers carry the data of the online game. Combining the view container with the point location layers thus combines the offline indoor scene with the online game, which facilitates game interaction and improves the user experience.

Description

Game interaction method, device and equipment based on view container and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for game interaction based on a view container.
Background
In some offline interactive games set in an indoor scene, the real positions of the indoor scene generally correspond one-to-one to the virtual positions of a virtual scene in the game screen. Holding an electronic device, the user moves through and explores the indoor scene according to clues provided by the virtual scene displayed on the device, arriving at different real positions to complete the scenario of the offline interactive game.
In the prior art, an offline interactive game in an indoor scene is usually implemented with a navigation scheme. After the user starts the game on an electronic device, the game program accesses a navigation application, and each real position is mapped to a virtual position in the navigation interface according to its longitude and latitude coordinates; the user plans a route in the navigation interface and follows the displayed route to reach the different real positions and complete the scenario of the offline interactive game. Because this approach centers on the navigation route-planning function, the combination of online game and offline indoor scene is limited to the user planning routes in the navigation interface to find a number of real positions in the indoor scene, and the navigation interface, as the online part, depends on a separate navigation application. The online game and the offline indoor scene are therefore poorly integrated, which hinders game interaction and degrades the user experience.
Disclosure of Invention
The application aims to provide a view-container-based game interaction method, apparatus, device, and storage medium, so as to solve the technical problem that, when an offline immersive interactive game is played in an indoor scene, the prior art combines the online game and the offline indoor scene only in a limited way, which hinders game interaction.
The technical scheme of the application is as follows: a game interaction method based on a view container is provided, which comprises the following steps:
in response to a trigger operation by a user on a scene opening element of a game, acquiring the scene map information of the scene opening element;
acquiring a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relation between the view container and each point location layer in the view container component;
displaying a static map corresponding to the target view container through the view container component in a game screen of the game, and displaying, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of the target position.
In some embodiments, the acquiring of a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relation between the view container and each point location layer in the view container component includes:
acquiring the target view container corresponding to the scene opening element from the view containers of the view container component according to the scene map information;
and acquiring at least one target point location layer corresponding to the scene opening element, according to the identification information of the game, from the point location layers having an association relation with the target view container.
In some embodiments, before the acquiring the scene map information of the scene opening element in response to the user's trigger operation on the scene opening element of the game, the method further includes:
acquiring a corresponding static map for each different indoor scene, wherein the static map comprises a plurality of labeled positions and the position parameters of the labeled positions;
establishing the view container component, establishing a corresponding view container for each static map in the view container component, and loading the corresponding static map with the view container, wherein the view container component is a parent container of the view containers;
and, for each view container, establishing at least one point location layer for each labeled position in the corresponding static map, and associating the view container with the established point location layers, wherein a point location layer comprises display information, the position parameter of its labeled position, and the identification information of its game.
In some embodiments, the point location layer further includes a service trigger rule;
after the displaying, in a game screen of the game, of the static map corresponding to the target view container through the view container component, with the target point location layer displayed superimposed at the target position of the static map, the method further includes:
responding to a trigger operation by the user on the target point location layer in the game screen, and acquiring the service trigger rule of the target point location layer;
and displaying a corresponding trigger result in the game screen according to the service trigger rule, wherein the trigger result comprises a jump button for jumping to a corresponding display interface, prompt information indicating the next target position, or a scene opening element for switching to a new scene.
In some embodiments, before the acquiring the scene map information of the scene opening element in response to the user's trigger operation on the scene opening element of the game, the method further includes:
the method comprises the steps of detecting triggering operation of a user on a scene opening element of a game, wherein the triggering operation comprises the triggering operation on a display element in a game program interface or the scanning operation on a two-dimensional code for opening a corresponding game scene in the game program interface.
In some embodiments, after the displaying, by the view container component, of the static map corresponding to the target view container in the game screen of the game, with the target point location layer displayed superimposed at the target position of the static map, the method further includes:
receiving the user's scanning operation on a two-dimensional code for displaying corresponding prompt information in the game screen;
and responding to the scanning operation by displaying the corresponding prompt information in the game screen.
In some embodiments, the scene map information includes a scene map identifier associated with the view container and display data characterizing the user's virtual object;
correspondingly, the displaying of the static map corresponding to the target view container through the view container component in the game screen, with the target point location layer displayed superimposed at the target position of the static map, includes:
loading the static map corresponding to the target view container through the view container component;
acquiring a rotation angle of the static map and a display position of the virtual object in the static map according to the display data of the virtual object;
rotating the static map by the rotation angle, and displaying the rotated static map through the view container component in the game screen;
displaying, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of the target position;
and displaying, superimposed at the display position in the static map, an icon corresponding to the virtual object.
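As an illustrative aid (not part of the claimed method), rotating a position parameter by the map's rotation angle is an ordinary plane rotation. The helper below assumes angles in degrees and plane coordinates measured from the virtual coordinate system's origin; the function name is invented for this sketch.

```javascript
// Rotate a labeled position's plane coordinates (x, y) by angleDeg degrees
// about the origin, so overlaid layers stay aligned with the rotated map.
function rotatePoint(x, y, angleDeg) {
  const rad = (angleDeg * Math.PI) / 180;
  return {
    x: x * Math.cos(rad) - y * Math.sin(rad),
    y: x * Math.sin(rad) + y * Math.cos(rad),
  };
}
```

The same transform would be applied to every point location layer's position parameter before superimposing it on the rotated static map.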
Another technical scheme of the application is as follows: there is provided a view container based game interaction apparatus, comprising:
the scene determining module, configured to respond to a trigger operation by a user on a scene opening element of a game and acquire the scene map information of the scene opening element;
the scene data acquiring module, configured to acquire a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relation between the view container and each point location layer in the view container component;
and the display loading module, configured to display a static map corresponding to the target view container in a game screen of the game through the view container component, and to display, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of the target position.
Another technical scheme of the application is as follows: an electronic device is provided that includes a processor, and a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored by the memory, implements the view container based game interaction method described above.
Another technical scheme of the application is as follows: there is provided a storage medium having stored therein program instructions that, when executed by a processor, implement a method of game interaction based on a view container as described above.
The view-container-based game interaction method, apparatus, device, and storage medium respond to a trigger operation by a user on a scene opening element of a game and acquire the scene map information of the scene opening element; acquire a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relation between the view container and each point location layer in the view container component; and display a static map corresponding to the target view container through the view container component in a game screen of the game, with the target point location layer corresponding to the position parameter of the target position displayed superimposed at that target position of the static map. In this way, each view container in the view container component corresponds to the static map of a different indoor scene, and the point location layers associated with the labeled positions in the static map correspond to games: the view container carries the data of the offline indoor scene, while the point location layers carry the data of the online game. Combining the view container with the point location layers thus combines the offline indoor scene with the online game, which facilitates game interaction and improves the user experience.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for game interaction based on a view container according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a view container-based game interaction device according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise. In the embodiment of the present application, all the directional indicators (such as upper, lower, left, right, front, and rear … …) are used only to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indicator is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
An embodiment of the application provides a game interaction method based on a view container. The execution subject of the game interaction method based on the view container includes, but is not limited to, an electronic device capable of being configured to execute the game interaction method based on the view container provided by the embodiment of the present application. In other words, the view container-based game interaction method may be performed by software or hardware installed in the electronic device.
In this embodiment, the electronic device serving as the execution subject may be a terminal device, for example a mobile phone, a tablet computer, a game console, or a PDA. The terminal device may run an operating system such as Android, iOS, Windows Phone, or Windows, which can generally support the running of various games. A game application, or a game applet inside another application, runs on the terminal device and is rendered on its display screen to obtain a graphical user interface (GUI). The content displayed by the GUI at least partially comprises part or all of a game scene, and the specific form of the game scene may be square or another shape (such as round).
Please refer to fig. 1, which is a flowchart illustrating a game interaction method based on a view container according to an embodiment of the present application. It should be noted that the method of the present application is not limited to the flow sequence shown in fig. 1 if the results are substantially the same. In this embodiment, the method for game interaction based on view containers includes the following steps:
and S10, responding to the triggering operation of the user on the scene opening element of the game, and acquiring the scene map information of the scene opening element.
In this embodiment, the scene opening element corresponds to a certain virtual scene of the game, the virtual scene corresponds to an offline indoor scene, and the user needs to complete a scenario related to the offline indoor scene through interaction with the virtual scene, for example, the scenario may be an exploration task.
As an embodiment, the program of the game may be stored directly, in the form of an application (app), on the electronic device held by the user.
As another embodiment, the program of the game may be cached on the user's electronic device in the form of a WeChat applet: when the user first loads the game through the applet's link or two-dimensional code, the program is sent from the WeChat server to the user's device and cached there. Each time the user quits the game applet, archive data containing the user's progress is sent to the WeChat server; when the user next enters the applet, the archive data is fetched from the WeChat server, so the user can continue the game from the previously completed progress.
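The save/restore round trip described above can be sketched as follows. Everything here (the function names, the archive shape, and the in-memory transport standing in for the WeChat server) is a hypothetical illustration, not WeChat's actual API; a real applet would wire the transport to network or cloud-storage calls in its lifecycle hooks.

```javascript
// Serialize the archive sent to the server when the user quits the applet.
function saveProgress(transport, userId, progress) {
  transport.put(userId, JSON.stringify(progress));
}

// Fetch and deserialize the archive when the user re-enters the applet.
function loadProgress(transport, userId) {
  const raw = transport.get(userId);
  // First launch: no archive yet, so start from the beginning.
  return raw ? JSON.parse(raw) : { scene: null, completed: [] };
}

// In-memory stand-in for the WeChat server, for illustration only.
function memoryTransport() {
  const store = new Map();
  return {
    put: (key, value) => { store.set(key, value); },
    get: (key) => store.get(key),
  };
}
```

The transport is injected so the pure save/load logic stays independent of where the archive actually lives.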
In some embodiments, the user performs a trigger operation on a scene opening element, a corresponding scene opening instruction is generated from the trigger operation, and the corresponding scene map information is acquired according to that instruction, so that the loading data for loading the scene into the game screen can be obtained through the scene map information.
In this embodiment, the scene map information is used to identify the corresponding view container. For example, it may be the ID code of the corresponding view container, or it may be a mapping relation comprising a first identifier characterizing the scene opening element and a second identifier characterizing the view container.
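A minimal sketch of the second, mapping form of scene map information, with the first identifier (scene opening element) keyed to the second identifier (view container); all IDs and the function name are hypothetical.

```javascript
// Scene map information as a mapping relation: first identifier -> second identifier.
const sceneMapInfo = new Map([
  ["scene-open:lobby-qr", "view-container:floor1"],
  ["scene-open:level2-btn", "view-container:floor2"],
]);

// Resolve the view container for a triggered scene opening element.
function resolveViewContainerId(sceneOpenElementId) {
  const containerId = sceneMapInfo.get(sceneOpenElementId);
  if (containerId === undefined) {
    throw new Error(`no view container mapped for ${sceneOpenElementId}`);
  }
  return containerId;
}
```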
As an embodiment, before step S10, the method further includes the following step:
S21, detecting the user's trigger operation on a scene opening element of the game, wherein the trigger operation comprises a trigger operation on a display element in the game program interface or a scanning operation on a two-dimensional code, in the game program interface, for opening the corresponding game scene.
In this embodiment, the game program interface is a display interface of the game, which may be the game screen, the opening interface of the game program, or the opening interface of the next level. A display element in the game program interface may be a button, an operation box, text, a picture, a playing animation, a playing video, or the like; in that case the trigger operation is a click, slide, press, double-click, or similar operation. The user may also tap a scan button in the game program interface to scan a two-dimensional code placed in the offline indoor scene and thereby trigger the opening of the corresponding scene; in that case the scene opening element is the two-dimensional code and the trigger operation is the scanning operation.
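The scanning path can be sketched by separating the pure parsing of the code's payload from the platform call. In a real applet the payload would arrive via the success callback of WeChat's wx.scanCode API; the payload format assumed below is invented purely for illustration.

```javascript
// Hypothetical parser for a scene-opening two-dimensional code's payload.
// Expected (assumed) format: "game://open-scene?scene=<id>".
function parseSceneQrPayload(payload) {
  const match = /^game:\/\/open-scene\?scene=([\w-]+)$/.exec(payload);
  if (!match) return null; // not a scene-opening code
  return { sceneId: match[1] };
}
```

Keeping the parser pure makes the scene-opening logic testable without the scanning hardware.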
S20, acquiring a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relation between the view container and each point location layer in the view container component.
In this embodiment, different view containers correspond to different static maps; a static map is obtained from an indoor scene and comprises a plurality of labeled positions and the position parameters of those labeled positions. As an embodiment, a static map may be obtained as follows. First, collect the plane size data and the layout of the indoor scene, the layout including the positions of all key points. Second, draw a basic static map according to the layout of the indoor scene, with labeled positions marked on it representing the key points. Third, establish a virtual coordinate system on the basic static map according to the plane size data; the virtual coordinate system includes an origin, which may for example be set at the position corresponding to the entrance of the indoor scene, at the point corresponding to the upper-left corner of the basic static map, at the position corresponding to the center of the indoor scene, or according to business requirements. Finally, acquire the position parameter of each labeled position from the virtual coordinate system to obtain the static map, where the position parameter may be the plane coordinates of the labeled position.
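The steps above can be sketched as a small builder that records the plane size, the chosen origin, and a position parameter for each key point; all field and function names here are assumptions for illustration, not from the patent.

```javascript
// Build a static-map record for one indoor scene.
// keyPoints: [{ name, x, y }] with plane coordinates measured in the
// virtual coordinate system relative to the chosen origin (e.g. the entrance).
function buildStaticMap(widthM, heightM, origin, keyPoints) {
  return {
    size: { width: widthM, height: heightM }, // plane size data
    origin,                                   // e.g. "entrance" or "top-left"
    labeledPositions: keyPoints.map((p, i) => ({
      id: `pos-${i}`,
      name: p.name,
      positionParameter: { x: p.x, y: p.y },  // plane coordinates
    })),
  };
}
```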
In this embodiment, the view container carries the corresponding static map, so that the view container displays that static map. In some embodiments, the view container component is a parent container of the view containers; that is, each view container is a direct child node of the view container component.
In this embodiment, the view container component is a functional component for configuring and displaying a static map; it may be a functional component installed in the operating system of the electronic device, or a functional component of an application installed on the electronic device.
As an implementation, the game is a WeChat game applet, the view container component is the WeChat movable-area component, and the view container is the movable-view component. Specifically, a movable-area component is created in a WeChat applet page, a movable-view component is then created inside the movable-area component, and the movable-view component carries the corresponding static map; the static map can then be dragged and zoomed.
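As a sketch of the nesting this paragraph describes, the WXML fragment below places a movable-view (the view container, carrying one static map image) inside a movable-area (the view container component). movable-area and movable-view are real WeChat applet components, and scale/direction are real attributes of movable-view; the style values and the image path are illustrative placeholders only.

```xml
<!-- movable-area: the parent view container component -->
<movable-area style="width: 100%; height: 100vh;">
  <!-- movable-view: one view container carrying one static map;
       direction="all" allows dragging, scale="true" allows zooming -->
  <movable-view direction="all" scale="true" scale-min="1" scale-max="4"
                style="width: 750rpx; height: 1200rpx;">
    <image src="/maps/floor1.png" mode="widthFix" />
  </movable-view>
</movable-area>
```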
In this embodiment, a point location layer carries the display information of the corresponding game at a labeled position. Several different point location layers may be associated with the same labeled position in the same static map, with different point location layers corresponding to different games. For example, point location layer A associated with the labeled position corresponds to game A; in game A, a mountain with a monster on it is displayed at the labeled position through point location layer A, so the display information of point location layer A is the data and/or code for displaying the mountain and the monster. Point location layer B associated with the same labeled position corresponds to game B; in game B, a tree with prompt information mounted on it is displayed at the labeled position through point location layer B, so the display information of point location layer B is the data and/or code for displaying the tree and its mounted prompt information. Point location layer C associated with the labeled position corresponds to game C; in game C, a non-player character (NPC) and the interaction between the user and that character are displayed at the labeled position through point location layer C, so the display information of point location layer C is the data for displaying the non-player character and the code for realizing the interaction between the user and the non-player character.
In this embodiment, besides the display information, a point location layer also includes the identification information of the game and the position parameter of the labeled position; for example, the identification information of the game may be a game ID. The point location layer is associated with the view container through an established association relation, with the game through the identification information of the game, and with a labeled position in the static map carried by the corresponding view container through the position parameter of that labeled position.
As an implementation, the same game need not be associated with all the labeled positions in the static map carried by the view container; several labeled positions may be selected according to the game's scene, and the game's point location layers associated with the selected labeled positions.
In this embodiment, the target view container is the view container corresponding to the scene opening element, and the target point location layers are the point location layers corresponding to both the target view container and the game.
As an embodiment, before step S10, the method further includes the following steps:
s31, acquiring a corresponding static map according to different indoor scenes, wherein the static map comprises a plurality of marked positions and position parameters of the marked positions;
s32, establishing the view container assembly, respectively establishing the corresponding view container for each static map in the view container assembly, and loading the corresponding static map by using the view container, wherein the view container assembly is a parent container of the view container;
s33, aiming at each view container, establishing at least one point location layer for each marking position in the corresponding static map, and associating the view container with the established point location layer, wherein the point location layer comprises display information, position parameters of the marking position and identification information of the game.
For specific descriptions of the obtaining manner of the static map, the view container assembly, the view container, and the point map layer, reference may be made to the foregoing contents of this embodiment, which are not described in detail herein.
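The associations built in steps S31 to S33 can be sketched as follows. All type and method names (`ViewContainerComponent`, `addContainer`, and so on) are illustrative assumptions: the patent describes the relations between component, containers, maps, and layers, not an API.

```typescript
// Illustrative data model for S31-S33 (names are assumptions, not from the patent).
interface PointLayer {
  gameId: string;                     // identification information of the game
  position: { x: number; y: number }; // position parameter of the labeled position
  display: string;                    // display information
}

interface ViewContainer {
  mapId: string;        // identifies the static map carried by this container
  layers: PointLayer[]; // point location layers associated with this container
}

// The view container component is the parent container holding one
// view container per static map.
class ViewContainerComponent {
  private containers = new Map<string, ViewContainer>();

  // S32: create a view container for one static map.
  addContainer(mapId: string): ViewContainer {
    const c: ViewContainer = { mapId, layers: [] };
    this.containers.set(mapId, c);
    return c;
  }

  getContainer(mapId: string): ViewContainer | undefined {
    return this.containers.get(mapId);
  }
}

// S33: associate a point location layer with a labeled position of one map.
const component = new ViewContainerComponent();
const hall = component.addContainer("indoor-hall");
hall.layers.push({ gameId: "g1", position: { x: 10, y: 20 }, display: "clue A" });
```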
In some embodiments, step S20 specifically includes the following steps:
S41, acquiring, according to the scene map information, the target view container corresponding to the scene opening element from the view containers of the view container component;
where the target view container may be obtained from the coded ID of the view container recorded in the scene map information, or through the association relationship between the scene map information and the view container;
S42, acquiring, according to the identification information of the game, at least one target point location layer corresponding to the scene opening element from the point location layers associated with the target view container.
For example, suppose the static map corresponding to a view container has n labeled positions, associated with M1, M2, ..., Mi, ..., Mn point location layers respectively, where n and each Mi are natural numbers, i is an integer, and 1 ≤ i ≤ n. The view container is then associated with M1 + M2 + ... + Mn point location layers in total. In this embodiment, the target point location layers are therefore obtained, by means of the identification information of the game, as those among all point location layers associated with the view container that record this identification information.
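The lookup in S41 and S42 amounts to filtering the target container's associated layers by the game's identification information; a minimal sketch, with illustrative names:

```typescript
// Illustrative sketch of S42: select the target point location layers
// (those recording the game's identification information) among all
// layers associated with the target view container.
interface PointLayer {
  gameId: string;  // identification information of the game
  display: string; // display information
}

interface ViewContainer {
  mapId: string;
  layers: PointLayer[];
}

function targetLayers(container: ViewContainer, gameId: string): PointLayer[] {
  return container.layers.filter((l) => l.gameId === gameId);
}

const container: ViewContainer = {
  mapId: "indoor-hall",
  layers: [
    { gameId: "g1", display: "clue A" },
    { gameId: "g2", display: "clue B" },
    { gameId: "g1", display: "clue C" },
  ],
};
console.log(targetLayers(container, "g1").length); // 2
```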
S30, displaying, in a game picture of the game through the view container component, a static map corresponding to the target view container, and displaying, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of that target position.
In this embodiment, the static map is displayed by the view container component, and the corresponding target point location layer is displayed superimposed at the target position of the static map by a display component for displaying point location layers, so as to load, in the game picture, the scene corresponding to the scene opening element. The display component may be at least one functional component installed in the operating system of the electronic device, or at least one functional component in an application installed on the electronic device.
In this embodiment, the displayed scene includes the static map and the display information of the target point location layers on it. The scene serves as a strategy guide and source of clues for the user's offline activities: the user can determine the next activity directly from the scene, without selecting the next real-world position through navigation. Meanwhile, because the point location layers of different games are superimposed on the same static map, offline indoor scenes are combined with online games, facilitating game interaction. In addition, superimposing the point location layers of different games on one static map allows the same indoor scene to be reused, so that a separate static map of the indoor scene need not be configured in every game.
As an implementation, the game is a WeChat game applet, the view container component is the WeChat movable-area component, and the view container is the movable-view component. Specifically, a movable-area component is created in the WeChat applet page, a movable-view component is created inside the movable-area, and the movable-view carries the corresponding static map, which can then be dragged and zoomed. When the view container corresponding to the scene opening element needs to be loaded in the game picture, the game applet calls the movable-area component to display the static map, and calls the display component for point location layers to display the corresponding target point location layers superimposed on the map. In this embodiment, the same applet code can substitute different information for the tasks of different games, so that different task types and game backgrounds can be configured for the same scene without repeatedly replacing the applet code of the point locations.
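The idea of reusing the same point code for different games can be sketched as a simple overlay of game-specific task information on a shared point template. `PointTemplate`, `GameTask`, and `mergeTask` are illustrative names, not from the patent:

```typescript
// Hedged sketch: the same rendering code consumes the merged result,
// so only the task data changes between games, never the applet code.
interface PointTemplate {
  position: { x: number; y: number };  // labeled position in the static map
  display: Record<string, string>;     // shared display fields
}

interface GameTask {
  gameId: string;
  taskInfo: Record<string, string>;    // game-specific text, rewards, hints
}

// Produce the point location layer data for one game by overlaying its
// task information on the shared template.
function mergeTask(tpl: PointTemplate, task: GameTask) {
  return {
    gameId: task.gameId,
    position: tpl.position,
    display: { ...tpl.display, ...task.taskInfo },
  };
}

const tpl: PointTemplate = {
  position: { x: 120, y: 80 },
  display: { title: "Checkpoint", hint: "" },
};
const layerA = mergeTask(tpl, { gameId: "g1", taskInfo: { hint: "Find the red door" } });
console.log(layerA.display.hint); // "Find the red door"
```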
As an embodiment, the scene map information includes a scene map identifier associated with the view container and display data characterizing a virtual object of the user. The virtual object is the user's virtual character in the game, and its display data include the spawn position and the orientation of the character in the scene. The display angle of the static map in the game picture is adapted according to this display data: by rotating the static map, the spawn position of the virtual object is placed at the lower left corner of the corresponding scene.
Correspondingly, step S30 specifically includes the following steps:
S51, loading, through the view container component, the static map corresponding to the target view container;
where loading the static map in the view container component facilitates the subsequent processing;
S52, acquiring the rotation angle of the static map and the display position of the virtual object in the static map according to the display data of the virtual object;
where the display position of the virtual object in the static map is determined from its spawn position, and the rotation angle of the static map is determined from the display position and the orientation of the virtual object;
S53, rotating the static map by the rotation angle, and displaying the rotated static map, through the view container component, in the game picture of the game;
in the rotated static map, the display position of the virtual object may be located at the lower left corner of the corresponding scene.
S54, displaying, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of that target position;
where the display component for displaying point location layers displays the corresponding target point location layer superimposed at the target position of the static map, so as to load, in the game picture, the scene corresponding to the scene opening element; the display component may be at least one functional component installed in the operating system of the electronic device, or at least one functional component in an application installed on the electronic device.
S55, displaying, superimposed at the display position on the static map, an icon corresponding to the virtual object.
The icon of the virtual object may likewise be displayed superimposed by the display component for displaying point location layers.
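Steps S52 and S53 can be sketched as follows. The patent gives no formula for the rotation angle, so this sketch assumes the map is counter-rotated so that the character's orientation points toward the top of the screen; the function names and the coordinate convention (degrees, 0 = up, clockwise) are assumptions:

```typescript
// Hedged sketch of S52-S53 under an assumed convention: rotate the map
// so the virtual character faces "up" in the game picture.
interface VirtualObjectData {
  spawn: { x: number; y: number }; // spawn position in map coordinates
  orientationDeg: number;          // facing direction in the scene
}

// Rotation angle applied to the static map (counter-rotation), in [0, 360).
function mapRotationDeg(obj: VirtualObjectData): number {
  return ((-obj.orientationDeg % 360) + 360) % 360;
}

// Display position of the character's icon: the spawn point re-expressed
// after rotating map coordinates about the origin by the same angle.
function displayPosition(obj: VirtualObjectData) {
  const rad = (mapRotationDeg(obj) * Math.PI) / 180;
  const { x, y } = obj.spawn;
  return {
    x: x * Math.cos(rad) - y * Math.sin(rad),
    y: x * Math.sin(rad) + y * Math.cos(rad),
  };
}

const obj: VirtualObjectData = { spawn: { x: 100, y: 0 }, orientationDeg: 90 };
console.log(mapRotationDeg(obj)); // 270
```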
As an implementation manner, the point location layer may further include a service trigger rule, which is used to display a corresponding trigger result when the user performs a trigger operation on the point location layer. In that case, after step S30, the method further includes the following steps:
S61, in response to a trigger operation performed by the user on the target point location layer in the game picture, acquiring the service trigger rule of the target point location layer;
where the user starts the service trigger rule by operating on the target point location layer;
S62, displaying a corresponding trigger result in the game picture according to the service trigger rule, where the trigger result includes a jump button for jumping to a corresponding display interface, prompt information indicating the next target position, or a scene opening element for switching to a new scene.
In some embodiments, the trigger result may jump to a corresponding display interface showing how to claim a game reward; the prompt information of the next target position may guide the user's offline activities; and a new scene may be entered by displaying a scene opening element, after whose triggering by the user the new scene is loaded according to steps S10, S20 and S30.
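The dispatch over the three trigger-result kinds in S61 and S62 can be sketched as a discriminated union. The patent only names the three result kinds; `TriggerResult`, `handleTrigger`, and the field names are illustrative:

```typescript
// Hedged sketch of S61-S62: evaluate a layer's service trigger rule and
// describe what the game picture should display.
type TriggerResult =
  | { kind: "jump"; targetInterface: string }      // jump button
  | { kind: "hint"; nextTargetPosition: string }   // prompt information
  | { kind: "openScene"; sceneElementId: string }; // scene opening element

interface PointLayer {
  gameId: string;
  triggerRule?: () => TriggerResult; // service trigger rule, if configured
}

function handleTrigger(layer: PointLayer): string {
  if (!layer.triggerRule) return "no trigger rule configured";
  const result = layer.triggerRule();
  switch (result.kind) {
    case "jump":
      return `show jump button -> ${result.targetInterface}`;
    case "hint":
      return `show hint: next target is ${result.nextTargetPosition}`;
    case "openScene":
      return `show scene opening element ${result.sceneElementId}`;
  }
}

const layer: PointLayer = {
  gameId: "g1",
  triggerRule: () => ({ kind: "hint", nextTargetPosition: "hall B" }),
};
console.log(handleTrigger(layer)); // "show hint: next target is hall B"
```

The exhaustive `switch` over the union means adding a fourth result kind later would be flagged by the TypeScript compiler at every dispatch site.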
As an embodiment, after step S30, the method further includes the following steps:
S71, receiving, in the game picture, a scanning operation performed by the user on a two-dimensional code for displaying corresponding prompt information;
S72, in response to the scanning operation, displaying the corresponding prompt information in the game picture.
In this embodiment, to further enhance game interaction, the user may trigger a scan button in the game picture to scan a two-dimensional code set up in the offline indoor scene. For example, after finding the next actual position from the strategy and clues of the scene, the user moves to that position and scans the two-dimensional code placed there to obtain the prompt information for the next actual position.
In some embodiments, the user may input game information in a target point location layer displayed in the game picture to mark the user's progress, and the input game information may be saved as the user's archived data.
As shown in fig. 2, an embodiment of the present application provides a game interaction device based on a view container. The device 20 includes a scene determining module 21, a scene data acquiring module 22 and a display loading module 23. The scene determining module 21 is configured to acquire, in response to a trigger operation performed by a user on a scene opening element of a game, scene map information of the scene opening element. The scene data acquiring module 22 is configured to acquire a target view container and at least one target point location layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relationship between the view containers in the view container component and the point location layers. The display loading module 23 is configured to display, in a game picture of the game through the view container component, the static map corresponding to the target view container, and to display, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of that target position.
In some embodiments, the scene data acquiring module 22 is further configured to acquire, according to the scene map information, the target view container corresponding to the scene opening element from the view containers of the view container component; and to acquire, according to the identification information of the game, at least one target point location layer corresponding to the scene opening element from the point location layers associated with the target view container.
In some embodiments, the scene data acquiring module 22 is further configured to acquire a corresponding static map for each different indoor scene, where the static map includes a plurality of labeled positions and the position parameters of those labeled positions; to establish the view container component, establish a corresponding view container for each static map within the view container component, and load the corresponding static map with that view container, the view container component being the parent container of the view containers; and, for each view container, to establish at least one point location layer for each labeled position in the corresponding static map and associate the view container with the established point location layers, where each point location layer includes display information, the position parameter of the labeled position, and identification information of the game.
In some embodiments, the display loading module 23 is further configured to obtain the service trigger rule of the target point location layer in response to a trigger operation performed by a user on the target point location layer in the game picture; and displaying a corresponding trigger result in the game picture according to the service trigger rule, wherein the trigger result comprises a jump button for jumping to a corresponding display interface, prompt information for representing the next target position or a scene starting element for switching a new scene.
In some embodiments, the scene determining module 21 is further configured to detect a trigger operation performed by a user on a scene opening element of the game, where the trigger operation includes a trigger operation on a display element in the game program interface or a scanning operation on a two-dimensional code for opening a corresponding game scene in the game program interface.
In some embodiments, the display loading module 23 is further configured to receive, in the game screen, a scanning operation of a user on a two-dimensional code for displaying corresponding prompt information; and responding to the scanning operation, and displaying the corresponding prompt information in the game picture.
In some embodiments, the scene map information includes a scene map identifier associated with the view container and display data characterizing a virtual object of the user. Correspondingly, the display loading module 23 is further configured to load, through the view container component, the static map corresponding to the target view container; acquire the rotation angle of the static map and the display position of the virtual object in the static map according to the display data of the virtual object; rotate the static map by the rotation angle and display the rotated static map, through the view container component, in the game picture of the game; display, superimposed at the target position of the static map, the target point location layer corresponding to the position parameter of that target position; and display, superimposed at the display position on the static map, an icon corresponding to the virtual object.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 30 includes a processor 31 and a memory 32 coupled to the processor 31.
The memory 32 stores program instructions for implementing the view container-based game interaction method of any of the above embodiments.
Processor 31 is operative to execute program instructions stored in memory 32 for view container based game interaction.
The processor 31 may also be referred to as a CPU (Central Processing Unit). The processor 31 may be an integrated circuit chip having signal processing capabilities. The processor 31 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium 40 of the embodiment of the present application stores program instructions 41 capable of implementing all of the methods described above. The program instructions 41 may be stored in the storage medium in the form of a software product and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings are included in the scope of the present disclosure.
While the foregoing is directed to embodiments of the present application, it will be appreciated by those skilled in the art that changes may be made in this embodiment without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims.

Claims (10)

1. A game interaction method based on a view container, characterized by comprising the following steps:
responding to the triggering operation of a user on a scene opening element of a game, and acquiring scene map information of the scene opening element;
acquiring a target view container and at least one target point position layer corresponding to the scene opening element according to the identification information of the game, the scene map information and the association relationship between the view container and each point position layer in the view container assembly;
displaying a static map corresponding to the target view container through the view container assembly in a game picture of the game, and superposing and displaying the target point position layer corresponding to the position parameter of the target position on the target position of the static map.
2. The method of claim 1, wherein the obtaining of the target view container and at least one target point position map layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and the association relationship between the view container and each point position map layer in the view container assembly comprises:
acquiring the target view container corresponding to the scene opening element from the view container of the view container assembly according to the scene map information;
and acquiring at least one target point position layer corresponding to the scene starting element from the point position layer having the association relation with the target view container according to the identification information of the game.
3. The method for game interaction based on the view container as claimed in claim 1, wherein before the step of responding to the user's trigger operation on the scene opening element of the game and acquiring the scene map information of the scene opening element, the method further comprises:
acquiring a corresponding static map according to different indoor scenes, wherein the static map comprises a plurality of marked positions and position parameters of the marked positions;
establishing the view container assembly, respectively establishing the corresponding view container for each static map in the view container assembly, and loading the corresponding static map by using the view container, wherein the view container assembly is a parent container of the view container;
and aiming at each view container, establishing at least one point position layer for each marking position in the corresponding static map, and associating the view container with the established point position layer, wherein the point position layer comprises display information, position parameters of the marking position and identification information of the game.
4. The view container based game interaction method of claim 3, wherein the point map layer further comprises a business triggering rule;
the displaying, in a game screen of the game, a static map corresponding to the target view container through the view container component, and displaying, in an overlay manner, the target point position layer corresponding to the position parameter of the target position on the target position of the static map, further includes:
responding to the triggering operation of the user on the target point location layer in the game picture, and acquiring the service triggering rule of the target point location layer;
and displaying a corresponding trigger result in the game picture according to the service trigger rule, wherein the trigger result comprises a jump button for jumping to a corresponding display interface, prompt information for representing the next target position or a scene starting element for switching a new scene.
5. The method for game interaction based on the view container as claimed in claim 1, wherein before the step of responding to the user's trigger operation on the scene opening element of the game and acquiring the scene map information of the scene opening element, the method further comprises:
the method comprises the steps of detecting triggering operation of a user on a scene opening element of a game, wherein the triggering operation comprises triggering operation on a display element in a game program interface or scanning operation on a two-dimensional code for opening a corresponding game scene in the game program interface.
6. The method of claim 1, wherein after the displaying a static map corresponding to the target view container in the game screen of the game through the view container component and displaying the target point position layer corresponding to the position parameter of the target position on the target position of the static map in an overlapping manner, the method further comprises:
receiving scanning operation of a user on a two-dimensional code for displaying corresponding prompt information in the game picture;
and responding to the scanning operation, and displaying the corresponding prompt information in the game picture.
7. The method of claim 1, wherein the scene map information comprises a scene map identifier associated with the view container and display data for representing a virtual object of a user;
correspondingly, the displaying a static map corresponding to the target view container through the view container component in a game picture of the game, and displaying the target point position layer corresponding to the position parameter of the target position in an overlaying manner on the target position of the static map includes:
loading a static map corresponding to the target view container through the view container component;
acquiring a rotation angle of the static map and a display position of the virtual object in the static map according to the display data of the virtual object;
rotating the static map according to the rotation angle, and displaying the rotated static map through the view container assembly in a game picture of the game;
superposing and displaying the target point position layer corresponding to the position parameter of the target position on the target position of the static map;
and overlaying and displaying an icon corresponding to the virtual object on the display position of the static map.
8. A view container based game interaction apparatus, comprising:
the scene determining module is used for responding to the triggering operation of a user on a scene opening element of a game and acquiring scene map information of the scene opening element;
a scene data acquisition module, configured to acquire a target view container and at least one target point location map layer corresponding to the scene opening element according to the identification information of the game, the scene map information, and an association relationship between a view container and each point location map layer in the view container assembly;
and the display loading module is used for displaying a static map corresponding to the target view container in a game picture of the game through the view container assembly, and overlaying and displaying the target point position layer corresponding to the position parameter of the target position on the target position of the static map.
9. An electronic device comprising a processor, and a memory coupled to the processor, the memory storing program instructions executable by the processor; the processor, when executing the program instructions stored in the memory, implements the view container based game interaction method of any one of claims 1-7.
10. A storage medium having stored therein program instructions which, when executed by a processor, implement a method of view container based game interaction according to any one of claims 1 to 7.
CN202210651093.9A 2022-06-10 2022-06-10 Game interaction method, device and equipment based on view container and storage medium Pending CN115177954A (en)


Publications (1)

Publication Number Publication Date
CN115177954A true CN115177954A (en) 2022-10-14

Family

ID=83513038



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116617670A (en) * 2023-05-30 2023-08-22 青岛意想意创技术发展有限公司 Method and device for setting interactive game equipment in irregular field
CN116617670B (en) * 2023-05-30 2023-12-19 青岛意想意创技术发展有限公司 Method and device for setting interactive game equipment in irregular field


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination