CN114504814A - Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus - Google Patents


Info

Publication number
CN114504814A
Authority
CN
China
Prior art keywords
image
target
equipment
target virtual
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210122903.1A
Other languages
Chinese (zh)
Inventor
许展豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210122903.1A
Publication of CN114504814A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an information processing method, an information processing apparatus, a storage medium, a processor, and an electronic apparatus. The method includes the following steps: a first device obtains identification information of a target virtual character in a first image, where the first image is the image currently displayed by the first device; the first device generates, based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image; and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction. The invention solves the technical problem in the prior art that accidental touches occur while controlling virtual characters because a plurality of virtual characters overlap.

Description

Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus
Technical Field
The present invention relates to the field of computers, and in particular, to an information processing method, an information processing apparatus, a storage medium, a processor, and an electronic apparatus.
Background
At present, when a player reaches a certain level in a turn-based game, a large number of game characters are present on the battlefield and the characters themselves are drawn at a large size, so that in some cases several game characters overlap one another. If the character on which an operation is to be performed must then be selected from among the overlapping game characters, accidental touches easily occur.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present invention provide an information processing method, an information processing apparatus, a storage medium, a processor, and an electronic apparatus, so as to at least solve the technical problem in the related art that accidental touches occur while controlling a virtual character because a plurality of virtual characters overlap.
According to an embodiment of the present invention, there is provided an information processing method including: a first device obtains identification information of a target virtual character in a first image, where the first image is the image currently displayed by the first device; the first device generates, based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image; and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
Optionally, after the first device overlays the target control on the target virtual character in the first image and generates the second image, the method further includes: the first device obtains size information and position information of the second device, where the second device is located in a preset area of the first device and the position information represents the position of the second device within the preset area; the first device crops the second image using the size information and the position information to generate a third image; and the first device sends the third image to the second device, where the second device is further configured to display the third image in response to the synchronous display instruction.
Optionally, the first device cropping the second image using the size information and the position information to generate the third image includes: the first device determines, based on the size information and the position information, the area in which the second device overlaps the first image on the first device; and the first device crops the second image using the overlapping area to generate the third image.
Optionally, the second device provides a target touch area, and the method further includes: the first device receives a control instruction sent by the second device; and the first device controls, based on the control instruction, the target virtual character corresponding to the target control in the first image, where the control instruction is generated based on a touch operation performed on the target control within the target touch area of the second device.
According to an embodiment of the present invention, there is also provided an information processing method including: a second device receives a first image sent by a first device, where the first image is the image currently displayed by the first device; the second device obtains identification information of a target virtual character in the first image; the second device generates, based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; the second device overlays the target control on the target virtual character in the first image to generate a second image; and the second device displays the second image in response to a synchronous display instruction, where the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
Optionally, the second device provides a target touch area, and after the second device overlays the target control on the target virtual character in the first image and generates the second image, the method further includes: the second device obtains its own size information and position information, where the second device is located in a preset area of the first device and the position information represents the position of the second device within the preset area; the second device crops the second image using the size information and the position information to generate a third image; and the second device displays the third image in response to the synchronous display instruction.
Optionally, the second device cropping the second image using the size information and the position information to generate the third image includes: the second device determines, based on the size information and the position information, the area in which the second device overlaps the first image on the first device; and the second device crops the second image using the overlapping area to generate the third image.
Optionally, the second device provides a target touch area, and the method further includes: the second device generates a control instruction in response to a touch operation performed on the target control within the target touch area; and the second device sends the control instruction to the first device, where the first device controls, based on the control instruction, the target virtual character corresponding to the target control in the first image.
According to an embodiment of the present invention, there is also provided an information processing apparatus including: a first obtaining module configured to obtain, through a first device, identification information of a target virtual character in a first image, where the first device is configured to display the first image; a first generation module configured to generate, through the first device and based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; a first covering module configured to overlay, through the first device, the target control on the target virtual character in the first image to generate a second image; and a first sending module configured to send the second image to a second device through the first device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
According to an embodiment of the present invention, there is also provided another information processing apparatus including: a receiving module configured to receive, through a second device, a first image sent by a first device, where the first device is configured to display the first image; a second obtaining module configured to obtain, through the second device, identification information of a target virtual character in the first image; a second generation module configured to generate, through the second device and based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; a second covering module configured to overlay, through the second device, the target control on the target virtual character in the first image to generate a second image; and a display module configured to display, through the second device, the second image in response to a synchronous display instruction, where the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
According to an embodiment of the present invention, there is also provided a nonvolatile storage medium having a computer program stored therein, wherein the computer program is configured to execute the information processing method in any one of the above when running.
According to an embodiment of the present invention, there is further provided a processor configured to run a program, where the program, when run, executes the information processing method in any one of the above.
According to an embodiment of the present invention, there is also provided an electronic apparatus including a memory and a processor, the memory storing a computer program therein, and the processor being configured to execute the computer program to perform the information processing method in any one of the above.
By the embodiments of the present application, the first device can obtain identification information of a target virtual character in a first image, where the first image is the image currently displayed by the first device, and the first device can generate, based on the identification information, a target control corresponding to the target virtual character, where the target virtual character can be controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image, and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device, thereby achieving the purpose of controlling the target virtual character through the target control and avoiding accidental touches. It is easy to note that, because the second image containing the target control is displayed on the second device, the user can control the target virtual character on the first device from the second device through the target control without affecting the game picture displayed on the first device. Therefore, when a plurality of virtual characters overlap on the first screen, the virtual character on which an operation is to be performed is controlled by displaying the controls corresponding to the plurality of virtual characters on another device, so that accidental touches are avoided and the user's game experience is improved, which solves the technical problem in the related art that accidental touches occur while operating on a virtual character because a plurality of virtual characters overlap.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of an information processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of information processing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first device and a second device of an embodiment of the invention;
FIG. 4 is a schematic view of a first image containing a plurality of overlapping game characters, in accordance with an embodiment of the present invention;
FIG. 5 is a diagram illustrating a second image containing controls corresponding to a plurality of overlapping game characters, in accordance with an embodiment of the present invention;
FIG. 6 is a flow chart of another information processing method of an embodiment of the present invention;
FIG. 7 is a schematic diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of another information processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking running on a mobile terminal as an example, the mobile terminal may be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or a game console. Fig. 1 is a block diagram of a hardware configuration of a mobile terminal for an information processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processing (DSP) chip, a microcontroller unit (MCU), a field-programmable gate array (FPGA), a neural network processing unit (NPU), a tensor processing unit (TPU), an artificial intelligence (AI) processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the information processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, that is, implementing the information processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Input to the input/output device 108 may come from a plurality of human interface devices (HIDs), for example a keyboard and mouse, a gamepad, or other dedicated game controllers (such as a steering wheel, a fishing rod, a dance mat, or a remote controller). Some human interface devices provide output functions in addition to input functions, such as force feedback and vibration of a gamepad, or audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human-machine interaction function optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
The information processing method in one embodiment of the disclosure can be operated in a local terminal device or a server. When the information processing method runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may run on the cloud interaction system, for example cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the cloud game operation mode, the entity that runs the game program and the entity that presents the game picture are separated: the storage and execution of the information processing method are completed on the cloud game server, while the client device is used to receive and send data and to present the game picture. For example, the client device may be a display device with data transmission functions close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, whereas the information processing itself is performed by the cloud game server in the cloud. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game picture, and returns them to the client device over the network; finally, the client device decodes the data and outputs the game picture.
In an optional implementation, taking a game as an example, the local terminal device stores the game program and is used to present the game picture. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the player in various ways; for example, it may be rendered on a display screen of the terminal or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation, an embodiment of the present invention provides an information processing method. Fig. 2 is a flowchart of the information processing method according to an embodiment of the present invention. As shown in fig. 2, the method may include the following steps:
step S202, the first device acquires identification information of the target virtual character in the first image.
The first image is an image currently displayed by the first device.
The first device may be a mobile phone, a tablet, a computer, or other terminal device. The second device may also be a terminal device such as a mobile phone, a tablet, a computer, or the like, and the second device may also be a display device.
The first image may be a graphical user interface currently displayed in a display screen of the first device.
The target virtual character may be one virtual character or a plurality of virtual characters displayed in the first image, and may also be all the virtual characters displayed in the first image. The identification information of the target virtual character may be the name of the target virtual character, or an icon of its avatar.
In an optional embodiment, when the target virtual character is a plurality of virtual characters, the first device may obtain the identification information of the target virtual character and generate, from the identification information, a target control corresponding to the target virtual character, so that the user can perform touch control conveniently.
In another alternative embodiment, the first device may perform the step of obtaining the identification information of the target avatar in the first image upon detecting that the second device is located above or overlaid on the first device.
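As a concrete illustration of this step, the sketch below shows one way a game client might collect identification information for virtual characters whose on-screen areas overlap. It is a minimal sketch: the VirtualCharacter type, its field names, and the overlap test are assumptions made for the example, not structures defined by this application.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom) in screen pixels


@dataclass
class VirtualCharacter:
    character_id: str   # engine-level identifier (hypothetical field)
    name: str           # display name, one possible piece of identification information
    avatar_icon: str    # key or path of the avatar icon, another possible piece
    screen_rect: Rect   # where the character is drawn in the first image


def rects_overlap(a: Rect, b: Rect) -> bool:
    # Axis-aligned rectangles overlap when they are not separated on either axis.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


def identification_info(characters: List[VirtualCharacter]) -> List[Dict[str, str]]:
    # Collect identification information (name and avatar icon) for characters
    # that overlap at least one other character on screen; the description also
    # allows simply taking every character shown in the first image.
    overlapping = [
        c for c in characters
        if any(c is not d and rects_overlap(c.screen_rect, d.screen_rect)
               for d in characters)
    ]
    return [{"id": c.character_id, "name": c.name, "icon": c.avatar_icon}
            for c in (overlapping or characters)]
```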
Step S204, the first device generates a target control corresponding to the target virtual character based on the identification information.
The target virtual character is controlled through the target control.
In an optional embodiment, the first device may generate a target control corresponding to the target virtual character according to the identification information, where the target control may be an icon of a head portrait of the target virtual character, and the target control may also be a name of the target virtual character.
In another alternative embodiment, the size of the target control corresponding to the target virtual character may be determined according to the number of characters in the target virtual character. That is, the size of the target control can be determined based on how many virtual characters the target virtual character comprises. The more characters there are, the smaller each corresponding target control becomes, which prevents the target controls from overlapping when there are too many of them and prevents the user from accidentally touching the wrong control while clicking. The fewer characters there are, the larger each target control becomes, which makes it easier for the user to click.
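The exact sizing rule is not specified here, so the sketch below is only one hedged interpretation: the control edge length shrinks as the character count grows and is clamped to a minimum tappable size. The constants and the square-root scaling are assumptions, not values from the application.

```python
def control_edge_length(num_characters: int,
                        base_size: int = 96,
                        min_size: int = 32) -> int:
    # More characters -> smaller controls, so that many controls still fit
    # without overlapping; fewer characters -> larger, easier-to-click controls.
    if num_characters <= 0:
        return base_size
    return max(min_size, int(base_size / num_characters ** 0.5))
```

Under these assumed constants, a single character keeps the full 96-pixel control, four overlapping characters get 48-pixel controls, and nine or more drop to the 32-pixel minimum.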
Step S206, the first device overlays the target control on the target virtual character in the first image, and generates a second image.
In an alternative embodiment, the first device may overlay the target control on the avatar of the target virtual character in the first image and generate a second image, where the second image includes the target control corresponding to the target virtual character. By overlaying the target control on the target virtual character, the user can easily see which target virtual character a control corresponds to, which makes it convenient to control the target virtual character through the target control.
In another optional embodiment, the first device may further replace the target virtual character in the first image with a target control, and generate a second image, where the second image may include the target control corresponding to the target virtual character.
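A minimal compositing sketch for step S206 is shown below, assuming Pillow as the image library and that each target control has already been rendered as an RGBA icon positioned over (or in place of) its character; the library choice and variable names are assumptions for illustration, not part of the application.

```python
from typing import Iterable, Tuple

from PIL import Image  # Pillow, assumed here purely for illustration


def compose_second_image(first_image: Image.Image,
                         controls: Iterable[Tuple[Image.Image, Tuple[int, int]]]
                         ) -> Image.Image:
    # controls: (icon, (x, y)) pairs, where icon is the rendered target control
    # and (x, y) is the top-left corner of the character it covers.
    second_image = first_image.copy()
    for icon, (x, y) in controls:
        # Use the icon's alpha channel as the paste mask so only the control
        # itself is drawn over (or instead of) the character underneath.
        mask = icon if icon.mode == "RGBA" else None
        second_image.paste(icon, (int(x), int(y)), mask)
    return second_image
```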
Step S208, the first device sends the second image to the second device.
The second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
As shown in fig. 3, the first device may be a device displaying a first image, the second device may be a device displaying a second image, the second device may be smaller in size than the first device, and the second device may be overlaid on or placed over the first device. The first image can display the image of the target virtual character, and the second image can display the target control corresponding to the target virtual character.
In an optional embodiment, after the first device generates the second image, the second image may be sent to the second device for display, so that the user clicks a target control shown in the second image on the second device to control the target virtual character corresponding to that control, which prevents accidental touches caused by too many virtual characters.
The synchronous display instruction can be issued by the first device when the second device is overlaid on the first device. When the first device detects that the second device covers it, the first device can send the synchronous display instruction together with the second image to the second device, so that the second device displays the second image containing the target control while the first device displays the first image, which makes it convenient for the user to control the target virtual character corresponding to the target control in the first image by clicking the target control in the second image.
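One possible shape for the synchronous display instruction and the accompanying image push is sketched below. The message fields, the framing, and the connection.send transport are all assumptions, since the application does not fix any wire format.

```python
import json


def push_second_image(connection, second_image_bytes: bytes) -> None:
    # Issued by the first device once it detects the second device covering it:
    # a small JSON header acts as the synchronous display instruction, followed
    # by the encoded second image for the second device to show in step.
    header = json.dumps({"type": "sync_display",
                         "length": len(second_image_bytes)})
    connection.send(header.encode("utf-8") + b"\n" + second_image_bytes)
```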
Through the above steps, the first device can obtain identification information of a target virtual character in a first image, where the first image is the image currently displayed by the first device; the first device can generate, based on the identification information, a target control corresponding to the target virtual character, and the target virtual character can be controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image, and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device, thereby achieving the purpose of controlling the target virtual character through the target control and avoiding accidental touches. It is easy to note that, because the second image containing the target control is displayed on the second device, the user can control the target virtual character on the first device from the second device through the target control without affecting the game picture displayed on the first device. Therefore, when a plurality of virtual characters overlap on the first screen, the virtual character on which an operation is to be performed is controlled by displaying the controls corresponding to the plurality of virtual characters on another device, so that accidental touches are avoided and the user's game experience is improved, which solves the technical problem in the related art that accidental touches occur while operating on a virtual character because a plurality of virtual characters overlap.
Optionally, after the first device overlays the target control on the target virtual character in the first image and generates the second image, the method further includes: the first device obtains size information and position information of the second device, where the second device is located in a preset area of the first device and the position information represents the position of the second device within the preset area; the first device crops the second image using the size information and the position information to generate a third image; and the first device sends the third image to the second device, where the second device is further configured to display the third image in response to the synchronous display instruction.
The size information of the second device may be a length and a width of the second device, and the position information of the second device may be a position of the second device in a preset area of the first device. The preset area may be an area within a preset height above the screen of the first device.
In an alternative embodiment, the second device may be provided with a sensor, and the first device locates the position of the second device by detecting the sensor on the second device to obtain the position information of the second device, where the sensor may be disposed at the upper left, the upper right, the lower left, the lower right, etc. of the second device, or may be disposed at the center of the second device, where the position where the sensor is disposed is not limited at all. In the present application, the sensor is provided at the center of the second device as an example.
In another alternative embodiment, after detecting the position information of the sensor located in the central position of the second device, the first device may determine, by combining with the size information of the second device, an area covered by the second device on the first device, and in particular, draw a coverage area of the second device on the first device by combining with the size information of the second device, with the position of the sensor as a reference point. After the coverage area is obtained, the second image containing the target control may be cropped according to the coverage area to generate a third image, where the third image corresponds to an area covered by the second device in the first image.
In another alternative embodiment, after generating the third image, the first device may transmit the third image to the second device, which displays the third image on the second device in response to the synchronized display instruction. It should be noted that the area displayed by the third image changes as the second device moves in the preset area of the first device, and when the second device is located above the upper left corner area of the first device, if the upper left corner of the first device displays the target virtual character, the second device correspondingly displays an image, which includes the target control corresponding to the target virtual character, in the upper left corner area.
Optionally, the first device cropping the second image using the size information and the position information to generate the third image includes: the first device determines, based on the size information and the position information, the area in which the second device overlaps the first image on the first device; and the first device crops the second image using the overlapping area to generate the third image.
The overlap region may be a region where the second device projects the first image in the first device, and the overlap region may also be a region where the second device overlays the first image in the first device.
In an alternative embodiment, after the first device detects the position information of the second device, the first device draws an overlapping area of the second device in the first device according to the position information and the size information, and after the overlapping area is determined, the first device may crop the second image by using the overlapping area to generate the third image.
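The overlap-and-crop computation described above can be sketched as follows, assuming the sensor reports the centre of the second device in the first device's screen coordinates and again using Pillow only as an example; the function names are illustrative, not taken from the application.

```python
from PIL import Image

Box = tuple  # (left, top, right, bottom)


def overlap_region(center_x: float, center_y: float,
                   device_w: float, device_h: float,
                   screen_w: int, screen_h: int) -> Box:
    # With the sensor at the centre of the second device, the covered rectangle
    # extends half the device width/height in each direction, clipped to the
    # first device's screen.
    left = max(0, int(center_x - device_w / 2))
    top = max(0, int(center_y - device_h / 2))
    right = min(screen_w, int(center_x + device_w / 2))
    bottom = min(screen_h, int(center_y + device_h / 2))
    return (left, top, right, bottom)


def crop_third_image(second_image: Image.Image, region: Box) -> Image.Image:
    # The third image is simply the part of the second image that the second
    # device currently covers; it changes as the second device is moved.
    return second_image.crop(region)
```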
Optionally, the second device provides a target touch area, and the method further includes: the first device receives a control instruction sent by the second device; and the first device controls, based on the control instruction, the target virtual character corresponding to the target control in the first image, where the control instruction is generated based on a touch operation performed on the target control within the target touch area of the second device.
In an optional embodiment, when the second device is located above the first device, the user performs a game operation by clicking a target control on the second device. The second device then generates a control instruction and sends it to the first device; on receiving the control instruction, the first device controls the target virtual character corresponding to the target control according to the instruction. In this way, the user controls the target virtual character on the first device through the target control on the second device, which avoids the accidental touches and failed operations that would otherwise occur when the user tries to control a virtual character while a plurality of virtual characters on the first device overlap. When the target control is displayed on the second device, it may be displayed at the position corresponding to the target virtual character underneath in the first device, so that the user can tell which target control belongs to which target virtual character.
For example, fig. 4 shows a first image containing a plurality of overlapping game characters: a character A, a character B, and a character C. When the user needs to perform an operation with a finger on one of the overlapping game characters displayed in the first image on the first device, the second device may be placed on the first device over the overlapping characters. As shown in fig. 5, the target control corresponding to each overlapping game character on the first device may then be displayed on the second device: a control 1, a control 2, and a control 3, where control 1 covers the image of character A, control 2 covers the image of character B, and control 3 covers the image of character C. The user can thus control each overlapping game character through its target control, and accidental touches are avoided.
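Continuing the example of controls 1 to 3, a minimal sketch of how the first device might dispatch a received control instruction is shown below; the instruction fields, the controls_to_characters mapping, and game.select_character are hypothetical names introduced for the example, not an API defined by the application.

```python
from typing import Dict


def handle_control_instruction(instruction: Dict[str, str],
                               controls_to_characters: Dict[str, "VirtualCharacter"],
                               game) -> None:
    # instruction: e.g. {"control_id": "control_2", "action": "select"} sent by
    # the second device after the user taps control 2 in the target touch area.
    character = controls_to_characters.get(instruction.get("control_id", ""))
    if character is None:
        return  # unknown or stale control, ignore it
    if instruction.get("action") == "select":
        game.select_character(character)  # stand-in for the game's real handler
```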
In a possible implementation, another information processing method is provided in an embodiment of the present invention. Fig. 6 is a flowchart of another information processing method according to an embodiment of the present invention. As shown in fig. 6, the method may include the following steps:
step S602, the second device receives the first image sent by the first device.
The first image is an image currently displayed by the first device.
The first device may be a mobile phone, a tablet, a computer, or other terminal device. The second device may also be a terminal device such as a mobile phone, a tablet, a computer, or the like, and the second device may also be a display device.
The first image may be a graphical user interface currently displayed in a display screen of the first device.
In another alternative embodiment, the first device may send the first image currently displayed by the first device to the second device for processing when detecting that the second device is located above or overlaid on the first device.
Step S604, the second device obtains identification information of the target virtual character in the first image.
The target virtual character may be one virtual character or a plurality of virtual characters displayed in the first image, and may also be all the virtual characters displayed in the first image. The identification information of the target virtual character may be the name of the target virtual character, or an icon of its avatar.
In an optional embodiment, when the target virtual character is a plurality of virtual characters, the second device may obtain the identification information of the target virtual character and generate, from the identification information, a target control corresponding to the target virtual character, so that the user can perform touch control conveniently.
Step S606, the second device generates a target control corresponding to the target virtual character based on the identification information.
The target virtual character is controlled through the target control.
In an optional embodiment, the second device may generate a target control corresponding to the target virtual character according to the identification information, where the target control may be an icon of a head portrait of the target virtual character, and the target control may also be a name of the target virtual character.
In another alternative embodiment, the size of the target control corresponding to the target virtual character may be determined according to the number of characters in the target virtual character. That is, the size of the target control can be determined based on how many virtual characters the target virtual character comprises. The more characters there are, the smaller each corresponding target control becomes, which prevents the target controls from overlapping when there are too many of them and prevents the user from accidentally touching the wrong control while clicking. The fewer characters there are, the larger each target control becomes, which makes it easier for the user to click.
Step S608, the second device overlays the target control on the target virtual character in the first image to generate a second image.
In an optional embodiment, the second device may overlay the target control on the avatar of the target virtual character in the first image and generate a second image, where the second image includes the target control corresponding to the target virtual character. By overlaying the target control on the target virtual character, the user can easily see which target virtual character a control corresponds to, which makes it convenient to control the target virtual character through the target control.
In another optional embodiment, the second device may further replace the target virtual character in the first image with a target control to generate a second image, where the second image may include the target control corresponding to the target virtual character.
In step S610, the second device displays the second image in response to the synchronous display instruction.
The synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device.
In an optional embodiment, after the second device generates the second image, the second device may display it, so that the user clicks a target control shown in the second image on the second device to control the target virtual character corresponding to that control, which prevents accidental touches caused by too many virtual characters.
The synchronous display instruction can be issued by the second device when the second device is overlaid on the first device. When the second device detects that it is covering the first device and the second image has been generated, it can further generate a synchronous display instruction, so that the second device displays the second image containing the target control while the first device displays the first image, which makes it convenient for the user to control the target virtual character corresponding to the target control in the first image by clicking the target control in the second image.
Through the above steps, the second device can receive a first image sent by the first device, where the first image is the image currently displayed by the first device; the second device obtains identification information of a target virtual character in the first image; the second device generates, based on the identification information, a target control corresponding to the target virtual character, where the target virtual character is controlled through the target control; the second device overlays the target control on the target virtual character in the first image to generate a second image; and the second device displays the second image in response to a synchronous display instruction, where the synchronous display instruction indicates that, while the first device displays the first image, the second image corresponding to the first image is synchronously displayed on the second device, thereby achieving the purpose of controlling the target virtual character through the target control and avoiding accidental touches. It is easy to note that, because the second image containing the target control is displayed on the second device, the user can control the target virtual character on the first device from the second device through the target control without affecting the game picture displayed on the first device. Therefore, when a plurality of virtual characters overlap on the first screen, the virtual character on which an operation is to be performed is controlled by displaying the controls corresponding to the plurality of virtual characters on another device, so that accidental touches are avoided and the user's game experience is improved, which solves the technical problem in the related art that accidental touches occur while operating on a virtual character because a plurality of virtual characters overlap.
Optionally, the second device provides a target touch area, and after the second device overlays the target control on the target virtual character in the first image and generates the second image, the method further includes: the second device obtains its own size information and position information, where the second device is located in a preset area of the first device and the position information represents the position of the second device within the preset area; the second device crops the second image using the size information and the position information to generate a third image; and the second device displays the third image in response to the synchronous display instruction.
The size information of the second device may be a length and a width of the second device, and the position information of the second device may be a position of the second device in a preset area of the first device. The preset area may be an area within a preset height above the screen of the first device.
In an alternative embodiment, a sensor may be disposed on the second device; the first device locates the second device by detecting this sensor, obtains the position information of the second device, and sends that position information to the second device. The sensor may be disposed at the upper left, upper right, lower left, or lower right of the second device, or at its center; the position of the sensor is not limited here. In the present application, the sensor being provided at the center of the second device is taken as an example.
In another alternative embodiment, the first device may send the location information of the second device to the second device after detecting the location information of the sensor located in the central location of the second device, and then may determine the area covered by the second device on the first device by combining with the size information of the second device, specifically, the coverage area of the second device on the first device is drawn by using the location of the sensor as a reference point and combining with the size information of the second device. After the coverage area is obtained, the second image containing the target control may be cropped according to the coverage area to generate a third image, where the third image corresponds to an area covered by the second device in the first image.
In another alternative embodiment, after generating the third image, the second device may display the third image on the second device in response to the synchronized display instruction. It should be noted that the area displayed by the third image changes as the second device moves in the preset area of the first device, and when the second device is located above the upper left corner area of the first device, if the upper left corner of the first device displays the target virtual character, the second device correspondingly displays an image, which includes the target control corresponding to the target virtual character, in the upper left corner area.
Optionally, the second device performs cropping processing on the second image by using the size information and the position information to generate a third image, including: the second device determines an overlapping area of the second device and the first image in the first device based on the size information and the position information; and the second device cuts the second image by using the overlapping area to generate a third image.
The overlap region may be a region where the second device projects the first image in the first device, and the overlap region may also be a region where the second device overlays the first image in the first device.
In an alternative embodiment, the second device may draw an overlapping area of the second device in the first device for the first image according to the position information and the size information, and after determining the overlapping area, the second device may crop the second image by using the overlapping area to generate the third image.
Optionally, the second device provides a target touch area, and the method further includes: the second device generates a control instruction in response to a touch operation performed on the target control within the target touch area; and the second device sends the control instruction to the first device, where the first device controls, based on the control instruction, the target virtual character corresponding to the target control in the first image.
In an optional embodiment, when the second device is located above the first device, the user performs a game operation by clicking a target control on the second device. The second device then generates a control instruction and sends it to the first device; on receiving the control instruction, the first device controls the target virtual character corresponding to the target control according to the instruction. In this way, the user controls the target virtual character on the first device through the target control on the second device, which avoids the accidental touches and failed operations that would otherwise occur when the user tries to control a virtual character while a plurality of virtual characters on the first device overlap. When the target control is displayed on the second device, it may be displayed at the position corresponding to the target virtual character underneath in the first device, so that the user can tell which target control belongs to which target virtual character.
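On the second device's side, the touch-to-instruction step might look like the sketch below: the touch point is hit-tested against the control rectangles in the target touch area and, on a hit, a control instruction is sent back to the first device. The field names, the rectangle layout, and connection.send are assumptions made for the example.

```python
import json
from typing import Dict, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom) in touch-area pixels


def on_touch(touch_x: int, touch_y: int,
             control_rects: Dict[str, Box], connection) -> None:
    # control_rects maps a control id to the rectangle it occupies in the
    # target touch area; a touch landing inside a control produces a control
    # instruction that the first device uses to act on the matching character.
    for control_id, (left, top, right, bottom) in control_rects.items():
        if left <= touch_x < right and top <= touch_y < bottom:
            instruction = {"control_id": control_id, "action": "select"}
            connection.send(json.dumps(instruction).encode("utf-8"))
            return
```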
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an information processing apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred embodiments, and details that have already been described are not repeated. As used hereinafter, the terms "unit" and "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a schematic diagram of an information processing apparatus according to an embodiment of the present invention. As shown in Fig. 7, the apparatus includes: a first obtaining module 702, a first generation module 704, a first covering module 706, and a first sending module 708.
The first obtaining module is configured to obtain, by the first device, identification information of the target virtual character in the first image, where the first device is configured to display the first image; the first generation module is configured to generate, by the first device, a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the first covering module is configured to overlay, by the first device, the target control on the target virtual character in the first image to generate a second image; and the first sending module is configured to send the second image to the second device through the first device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
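What the first covering module does could be sketched as follows, assuming the character's bounding box is already known from the identification step and the control is available as an RGBA icon; Pillow's paste with an alpha mask stands in for the overlay, and every name here is illustrative:

```python
from PIL import Image


def overlay_control(first_image: Image.Image, control_icon: Image.Image,
                    character_box: tuple) -> Image.Image:
    """Overlay the control icon over the target character's bounding box
    (left, top, right, bottom) to produce the second image."""
    second_image = first_image.copy()
    left, top, right, bottom = character_box
    icon = control_icon.resize((right - left, bottom - top))
    mask = icon if icon.mode == "RGBA" else None  # keep transparency if present
    second_image.paste(icon, (left, top), mask)
    return second_image
```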
Optionally, the apparatus further includes: a first cropping module.
The first obtaining module is further configured to obtain, by the first device, size information and position information of the second device, where the second device is located in a preset area of the first device, and the position information is used to represent the position of the second device in the preset area; the first cropping module is configured to crop, by the first device, the second image using the size information and the position information to generate a third image; and the first sending module is further configured to send the third image to the second device through the first device, where the second device is further configured to display the third image in response to the synchronous display instruction.
Optionally, the first cropping module includes: a first determining unit and a first cropping unit.
The first determining unit is configured to determine, by the first device, an overlapping area of the second device and the first image in the first device based on the size information and the position information; and the first cropping unit is configured to crop, by the first device, the second image using the overlapping area to generate the third image.
Optionally, the apparatus further includes: a first receiving module and a first control module.
The first receiving module is configured to receive, by the first device, a control instruction sent by the second device; and the first control module is configured to control, by the first device, the target virtual character corresponding to the target control in the first image based on the control instruction, where the control instruction is generated based on a touch operation performed on the target control in the target touch area of the second device.
In this embodiment, another information processing apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred embodiments, and details that have already been described are not repeated. As used hereinafter, the terms "unit" and "module" may refer to a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a schematic diagram of an information processing apparatus according to an embodiment of the present invention. As shown in Fig. 8, the apparatus includes: a receiving module 802, a second obtaining module 804, a second generation module 806, a second covering module 808, and a display module 810.
The receiving module is configured to receive, by the second device, a first image sent by the first device, where the first device is configured to display the first image; the second obtaining module is configured to obtain, by the second device, identification information of the target virtual character in the first image; the second generation module is configured to generate, by the second device, a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the second covering module is configured to overlay, by the second device, the target control on the target virtual character in the first image to generate a second image; and the display module is configured to display, by the second device, the second image in response to a synchronous display instruction, where the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
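Tying the second-device modules together, a hedged orchestration sketch might look like this; identify_character, overlay (for example the overlay_control sketch above), and the display object are placeholders rather than the patent's API:

```python
from PIL import Image


def second_device_pipeline(first_image: Image.Image, control_icon: Image.Image,
                           identify_character, overlay, display) -> Image.Image:
    """Receive the first image, overlay the control on the identified character,
    and show the result when the synchronous display instruction arrives."""
    character_box = identify_character(first_image)  # (left, top, right, bottom)
    second_image = overlay(first_image, control_icon, character_box)
    display.show(second_image)  # triggered by the synchronous display instruction
    return second_image
```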
Optionally, the second obtaining module is further configured to obtain, by the second device, size information and position information of the second device, where the second device is located in a preset area of the first device, and the position information is used to represent the position of the second device in the preset area; the apparatus further includes a second cropping module, configured to crop, by the second device, the second image using the size information and the position information to generate a third image; and the display module is further configured to display, by the second device, the third image in response to the synchronous display instruction.
Optionally, the second cropping module includes: a second determining unit and a second cropping unit.
The second determining unit is configured to determine, by the second device, an overlapping area of the second device and the first image in the first device based on the size information and the position information; and the second cropping unit is configured to crop, by the second device, the second image using the overlapping area to generate the third image.
Optionally, the second device provides a target touch area, and the apparatus further includes: a second sending module.
The second generation module is further configured to generate, by the second device, a control instruction in response to a touch operation performed on the target control in the target touch area; and the second sending module is configured to send, by the second device, the control instruction to the first device, where the first device controls the target virtual character corresponding to the target control in the first image based on the control instruction.
It should be noted that the above units and modules may be implemented by software or hardware. In the latter case, this may be achieved in, but is not limited to, the following ways: the units and modules are all located in the same processor, or the units and modules are located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the above-mentioned non-volatile storage medium may be configured to store a computer program for executing the following steps: the first device obtains identification information of a target virtual character in a first image, where the first image is an image currently displayed by the first device; the first device generates a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image; and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
Optionally, in this embodiment, the non-volatile storage medium may be further configured to store a computer program for executing the following steps: the second device receives a first image sent by a first device, where the first image is an image currently displayed by the first device; the second device obtains identification information of a target virtual character in the first image; the second device generates a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the second device overlays the target control on the target virtual character in the first image to generate a second image; and the second device displays the second image in response to a synchronous display instruction, where the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
Optionally, in this embodiment, the non-volatile storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, by means of a computer program, the following steps: the first device obtains identification information of a target virtual character in a first image, where the first image is an image currently displayed by the first device; the first device generates a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the first device overlays the target control on the target virtual character in the first image to generate a second image; and the first device sends the second image to a second device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
Optionally, in this embodiment, the processor may be further configured to execute, by means of the computer program, the following steps: the second device receives a first image sent by a first device, where the first image is an image currently displayed by the first device; the second device obtains identification information of a target virtual character in the first image; the second device generates a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled through the target control; the second device overlays the target control on the target virtual character in the first image to generate a second image; and the second device displays the second image in response to a synchronous display instruction, where the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative. For example, the division into units may be a division by logical function, and an actual implementation may adopt another division; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or take another form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be considered to fall within the protection scope of the present invention.

Claims (13)

1. An information processing method characterized by comprising:
a first device obtains identification information of a target virtual character in a first image, wherein the first image is an image currently displayed by the first device;
the first device generates a target control corresponding to the target virtual character based on the identification information, wherein the target virtual character is controlled through the target control;
the first device covers the target control on the target virtual character in the first image to generate a second image;
and the first device sends the second image to a second device, wherein the second device is used for responding to a synchronous display instruction to display the second image, and the synchronous display instruction is used for representing that the second image corresponding to the first image is synchronously displayed in the second device under the condition that the first device displays the first image.
2. The method of claim 1, wherein the first device covers the target control on the target virtual character in the first image, and wherein after generating the second image, the method further comprises:
the first device acquires size information and position information of the second device, wherein the second device is located in a preset area of the first device, and the position information is used for representing the position of the second device in the preset area;
the first device crops the second image by using the size information and the position information to generate a third image;
and the first device sends the third image to the second device, wherein the second device is further used for responding to the synchronous display instruction and displaying the third image.
3. The method of claim 2, wherein the first device crops the second image by using the size information and the position information to generate a third image, comprising:
the first device determines an overlapping area of the second device and the first image in the first device based on the size information and the position information;
and the first device crops the second image by using the overlapping area to generate the third image.
4. The method of claim 1, wherein the second device provides a target touch area, the method further comprising:
the first device receives a control instruction sent by the second device;
and the first device controls the target virtual character corresponding to the target control in the first image based on the control instruction, wherein the control instruction is generated based on a touch operation performed on the target control in the target touch area in the second device.
5. An information processing method characterized by comprising:
a second device receives a first image sent by a first device, wherein the first image is an image currently displayed by the first device;
the second device acquires identification information of a target virtual character in the first image;
the second device generates a target control corresponding to the target virtual character based on the identification information, wherein the target virtual character is controlled through the target control;
the second device covers the target control on the target virtual character in the first image to generate a second image;
and the second device responds to a synchronous display instruction to display the second image, wherein the synchronous display instruction is used for representing that the second image corresponding to the first image is synchronously displayed in the second device under the condition that the first device displays the first image.
6. The method of claim 5, wherein the second device covers the target control on the target virtual character in the first image and the second device provides a target touch area, and wherein after generating the second image, the method further comprises:
the second device obtains size information and position information of the second device, wherein the second device is located in a preset area of the first device, and the position information is used for representing the position of the second device in the preset area;
the second device crops the second image by using the size information and the position information to generate a third image;
the second device displays the third image in response to the synchronized display instruction.
7. The method of claim 6, wherein the second device crops the second image by using the size information and the position information to generate a third image, comprising:
the second device determines an overlapping area of the second device and the first image in the first device based on the size information and the position information;
and the second device crops the second image by using the overlapping area to generate the third image.
8. The method of claim 5, wherein the second device provides a target touch area, the method further comprising:
the second device responds to a touch operation performed on the target control in the target touch area and generates a control instruction;
and the second device sends the control instruction to the first device, wherein the first device controls the target virtual character corresponding to the target control in the first image based on the control instruction.
9. An information processing apparatus characterized by comprising:
a first obtaining module, configured to obtain, by a first device, identification information of a target virtual character in a first image, wherein the first device is used for displaying the first image;
a first generation module, configured to generate, by the first device, a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled by the target control;
a first covering module, configured to cover, by the first device, the target control on the target virtual character in the first image to generate a second image;
and a sending module, configured to send the second image to a second device through the first device, where the second device is configured to display the second image in response to a synchronous display instruction, and the synchronous display instruction is used to indicate that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
10. An information processing apparatus characterized by comprising:
a receiving module, configured to receive, by a second device, a first image sent by a first device, wherein the first device is used for displaying the first image;
a second obtaining module, configured to obtain, by the second device, identification information of a target virtual character in the first image;
a second generation module, configured to generate, by the second device, a target control corresponding to the target virtual character based on the identification information, where the target virtual character is controlled by the target control;
a second covering module, configured to cover, by the second device, the target control on the target virtual character in the first image to generate a second image;
and a display module, configured to display, by the second device, the second image in response to a synchronous display instruction, where the synchronous display instruction is used to characterize that the second image corresponding to the first image is synchronously displayed in the second device when the first device displays the first image.
11. A non-volatile storage medium, in which a computer program is stored, wherein the computer program is configured to execute the information processing method of any one of claims 1 to 8 when executed.
12. A processor for running a program, wherein the program is configured to execute the information processing method according to any one of claims 1 to 8 when running.
13. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and the processor is configured to execute the computer program to perform the information processing method according to any one of claims 1 to 8.
CN202210122903.1A 2022-02-09 2022-02-09 Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus Pending CN114504814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210122903.1A CN114504814A (en) 2022-02-09 2022-02-09 Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN114504814A true CN114504814A (en) 2022-05-17

Family

ID=81551424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210122903.1A Pending CN114504814A (en) 2022-02-09 2022-02-09 Information processing method, information processing apparatus, storage medium, processor, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN114504814A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination