WO2023241206A1 - Suit processing method and apparatus for virtual object, electronic device, storage medium, and program product - Google Patents

Suit processing method and apparatus for virtual object, electronic device, storage medium, and program product Download PDF

Info

Publication number
WO2023241206A1
WO2023241206A1 (PCT/CN2023/088657)
Authority
WO
WIPO (PCT)
Prior art keywords
color
component
area
virtual object
suit
Prior art date
Application number
PCT/CN2023/088657
Other languages
English (en)
French (fr)
Inventor
田聪
谢洁琪
刘博艺
崔维健
邓昱
黎智
何晶晶
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2023241206A1 publication Critical patent/WO2023241206A1/zh

Links

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 — Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 — Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/422 — Processing input control signals of video game devices by mapping the input signals into game commands automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
    • A63F13/50 — Controlling the output signals based on the game progress
    • A63F13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 — Controlling game characters or game objects based on the game progress

Definitions

  • Embodiments of the present application provide a suit processing method and apparatus for a virtual object, an electronic device, a computer-readable storage medium, and a computer program product, which can improve the efficiency of changing the outfit of a virtual object in a virtual scene.
  • An embodiment of the present application provides a suit processing method for a virtual object, executed by an electronic device.
  • The method includes:
  • displaying a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed at different parts of the first virtual object;
  • while the first virtual object is in a first area in the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replacing the first component with a second component, wherein the second component matches the color of the first area and is worn at the same part as the first component.
  • An embodiment of the present application provides a suit processing method for a virtual object, executed by an electronic device, including:
  • displaying a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, the plurality of components are distributed at different parts of the first virtual object, and the virtual scene also includes a color inversion control;
  • An embodiment of the present application provides a suit processing method for a virtual object, executed by an electronic device, including:
  • displaying a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed at different parts of the first virtual object;
  • An embodiment of the present application provides a suit processing apparatus for a virtual object, including:
  • a display module configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed at different parts of the first virtual object;
  • a suit switching module configured to, while the first virtual object is in a first area in the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replace the first component with a second component, wherein the second component matches the color of the first area and is worn at the same part as the first component.
  • An embodiment of the present application provides a suit processing apparatus for a virtual object, including:
  • a display module configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed at different parts of the first virtual object;
  • wherein the virtual scene also includes a color inversion control;
  • a suit switching module configured to, in response to a trigger operation on the color inversion control, replace the first component in the first suit that matches the color of the first area with a fifth component, wherein the fifth component has a color opposite to the color of the first area, and the wearing location of the fifth component is the same as the wearing location of the first component.
  • a display module configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed at different parts of the first virtual object;
  • a suit switching module configured to perform the following processing in response to the first virtual object leaving the first area and entering a second area:
  • replacing the entire first suit with a second suit that matches the color of the second area, and controlling the first virtual object to wear the second suit in the second area; or,
  • controlling the first virtual object to continue wearing the first suit in the second area.
  • An embodiment of the present application provides an electronic device, including:
  • a memory configured to store executable instructions; and
  • a processor configured to implement the method provided by the embodiments of the present application when executing the executable instructions stored in the memory.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which, when executed, cause the processor to implement the method provided by the embodiments of the present application.
  • An embodiment of the present application provides a computer program product, which includes a computer program or instructions; when the computer program or instructions are executed by a processor, the method provided by the embodiments of the present application is implemented.
  • By replacing at least some components of the virtual object's suit with components that match the color of the environment in the virtual scene, the suit components automatically change to follow the colors of the in-game scene, which reduces the possibility of the virtual object being exposed in the virtual scene. It reduces the interference of suit components with the interaction of the virtual object without the user paying any operating or thinking cost, supports the user in focusing on the interaction process in the virtual scene, and improves operating efficiency.
  • Figure 1A is a schematic diagram of the application mode of the virtual object package processing method provided by the embodiment of the present application.
  • Figure 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of the present application.
  • Figures 3A to 3D are schematic flow diagrams of a virtual object package processing method provided by an embodiment of the present application.
  • FIGS. 5A to 5C are schematic diagrams of the virtual scene interface provided by embodiments of the present application.
  • Figure 5D is a schematic diagram of the control status provided by the embodiment of the present application.
  • Figure 5E is a schematic diagram of the warehouse interface provided by the embodiment of the present application.
  • Figures 5F and 5G are schematic diagrams of the virtual scene interface provided by embodiments of the present application.
  • Figure 7 is a schematic diagram of a map of a virtual scene provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of a color histogram provided by an embodiment of the present application.
  • FIG. 9 is an optional flow diagram of a virtual object package processing method provided by an embodiment of the present application.
  • first ⁇ second ⁇ third are only used to distinguish similar objects and do not represent a specific ordering of objects. It is understandable that “first ⁇ second ⁇ third” is used in Where appropriate, the specific order or sequence may be interchanged so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
  • "In response to" is used to represent the condition or state on which a performed operation depends. When the dependent condition or state is met, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
  • Virtual objects: objects that interact in virtual scenes. They are controlled by users or by robot programs (for example, robot programs based on artificial intelligence), and can stand still, move, and perform various behaviors in virtual scenes, such as the various characters in a game.
  • For example, image A is a photo of a blue sky and image B is a photo of a blue sea; image A and image B represent different contents, but the smaller the distance between the color vectors corresponding to the color histograms of image A and image B, the higher the color similarity between image A and image B.
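As a concrete illustration of this comparison, here is a minimal sketch (not part of the patent text) that builds coarse RGB color histograms as color vectors and compares them by Euclidean distance; the bin count, the toy pixel data, and the function names are all illustrative assumptions:

```python
import math

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: quantize each channel into `bins` levels,
    count pixels per (r, g, b) cell, and normalize to proportions.
    The resulting vector is the image's color vector."""
    step = 256 // bins
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r // step) * bins + (g // step)) * bins + (b // step)
        hist[idx] += 1.0
    n = len(pixels)
    return [h / n for h in hist]

def histogram_distance(h1, h2):
    """Euclidean distance between two color vectors;
    a smaller distance means a higher color similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

# Toy "images" as pixel lists: sky and sea are both mostly blue,
# grass is green; sky and sea differ in content but are close in color.
sky = [(100, 150, 230)] * 90 + [(255, 255, 255)] * 10
sea = [(90, 140, 220)] * 95 + [(240, 240, 240)] * 5
grass = [(80, 200, 90)] * 100
```

With these toy inputs, the sky/sea distance comes out much smaller than the sky/grass distance, matching the image A/image B intuition above.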
  • The electronic device provided by the embodiments of the present application can be implemented as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device, or a vehicle-mounted terminal), or other various types of user terminal, and can also be implemented as a server.
  • Examples of graphics processing hardware include central processing units (CPUs) and graphics processing units (GPUs).
  • The terminal device 400 runs a client 401 (for example, a stand-alone version of a game application).
  • The virtual scene may be an environment in which game characters interact, for example, a plain, a street, or a valley where game characters fight. Taking the display of the virtual scene from a first-person perspective as an example, the first virtual object is displayed in the virtual scene together with a prop held by the first virtual object through a holding part (such as a hand).
  • The first virtual object can be a game character controlled by the user; that is, the first virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (such as a touch screen, a voice-activated switch, a keyboard, a mouse, or a joystick). For example, when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene; the user can also control the first virtual object to move, stay still in place, jump, perform shooting operations, and so on.
  • the first virtual object may be a virtual object controlled by the user.
  • As an example, the client 401 displays a virtual scene. The first virtual object in the virtual scene wears a first suit, and the first suit includes multiple components.
  • When the first virtual object moves and reaches a first area, in response to a first component in the first suit not matching the color of the first area, the first component is replaced with a second component that matches the color of the first area; the first component and the second component are worn at the same position.
  • For example, the first component is a green backpack component and the second component is a white backpack component. If the first area is a snow area and the first virtual object moves from a grassland area to the snow area, the green backpack is replaced with the white backpack that matches the color of the snow area.
  • Figure 1B is a schematic diagram of the application mode of the virtual object suit processing method provided by an embodiment of the present application, applied to the terminal device 400 and the server 200, and suitable for an application mode that relies on the computing power of the server 200 to complete virtual scene calculation and outputs the virtual scene on the terminal device 400.
  • The server 200 calculates display data related to the virtual scene (such as scene data) and sends it to the terminal device 400 through the network 300.
  • The terminal device 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the display data, and relies on graphics output hardware to output the virtual scene to form visual perception: for example, two-dimensional video frames can be presented on the display screen of a smartphone, or video frames achieving a three-dimensional display effect can be projected on the lenses of augmented reality/virtual reality glasses. For other forms of perception of the virtual scene, the corresponding hardware output of the terminal device 400 can be used, for example, a microphone to form auditory perception, a vibrator to form tactile perception, and so on.
  • the terminal device 400 runs a client 401 (for example, a network version of a game application), and interacts with other users by connecting to the server 200 (for example, a game server).
  • the terminal device 400 outputs the virtual scene 101 of the client 401.
  • A first virtual object and a launching prop (for example, a shooting prop or a throwing prop) held by the first virtual object through a holding part (for example, a hand) are displayed in the virtual scene. The first virtual object may be a game character controlled by a user; that is, the first virtual object is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (such as a touch screen, a voice-activated switch, a keyboard, a mouse, or a joystick). For example, when the real user moves the joystick to the right, the first virtual object moves to the right in the virtual scene.
  • the first virtual object may be a user-controlled virtual object.
  • As an example, the client 401 displays a virtual scene. The first virtual object in the virtual scene wears a first suit, and the first suit includes multiple components.
  • When the first virtual object moves to a first area, in response to a first component in the first suit not matching the color of the first area, the first component is replaced with a second component that matches the color of the first area; the first component and the second component are worn at the same position.
  • For example, the first component is a green backpack component and the second component is a white backpack component. If the first area is a snow area and the first virtual object moves from a grassland area to the snow area, the green backpack is replaced with a white backpack that matches the color of the snow area.
  • Cloud Technology refers to the unification of a series of resources such as hardware, software, and networks within a wide area network or a local area network to realize data calculation and storage.
  • Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, and application technology based on the cloud computing business model. It can form a resource pool that is used on demand, which is flexible and convenient. Cloud computing technology will become an important support, because the background services of technical network systems require a large amount of computing and storage resources.
  • Cloud gaming, also known as gaming on demand, is an online gaming technology based on cloud computing technology. Cloud gaming technology enables thin-client devices with relatively limited graphics processing and data computing capabilities to run high-quality games.
  • the game is not run on the player's game terminal, but runs on the cloud server, and the cloud server renders the game scene into a video and audio stream, which is transmitted to the player's game terminal through the network.
  • Player game terminals do not need to have powerful graphics computing and data processing capabilities. They only need to have basic streaming media playback capabilities and the ability to obtain player input instructions and send them to the cloud server.
  • The server 200 in Figure 1B can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
  • the terminal device 400 can be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal device 400 and the server 200 can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • The processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor can be a microprocessor or any conventional processor.
  • User interface 430 includes one or more output devices 431 that enable the presentation of media content, including one or more speakers and/or one or more visual displays.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, etc.
  • Memory 450 optionally includes one or more storage devices physically located remotely from processor 410 .
  • the memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • Operating system 451, including system programs for handling various basic system services and performing hardware-related tasks; network communication module 452, for reaching other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.
  • Presentation module 453, for enabling the presentation of information (for example, a user interface for operating peripheral devices and displaying content and information) through one or more output devices 431 (for example, a display screen, speakers, etc.); input processing module 454, for detecting one or more user inputs or interactions from one or more input devices 432 and translating the detected inputs or interactions.
  • the virtual object package processing device provided by the embodiment of the present application can be implemented in a software manner.
  • Figure 2 shows the virtual object suit processing device 455 stored in the memory 450, which can be software in the form of a program, a plug-in, etc., and includes the following software modules: a display module 4551 and a suit switching module 4552. These modules are logical, and can therefore be combined or further split arbitrarily according to the functions implemented.
  • the interactive processing method of the virtual scene provided by the embodiment of the present application will be described in detail below with reference to the accompanying drawings.
  • The interactive processing method of the virtual scene provided by the embodiments of the present application can be executed solely by the terminal device 400 in Figure 1A, or executed collaboratively by the terminal device 400 and the server 200 in Figure 1B.
  • FIG. 3A is a schematic flowchart of an interactive processing method for a virtual scene provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 3A .
  • The method shown in Figure 3A can be executed by various forms of computer programs running on the terminal device 400, and is not limited to the above-mentioned client 401; it can also be the above operating system, a software module, or a script. Therefore, the client should not be regarded as limiting the embodiments of this application.
  • the virtual scene includes a first virtual object wearing a first suit
  • the first suit includes a plurality of components
  • the plurality of components are distributed at different parts of the first virtual object.
  • The types of components include tops, pants, shoes, decorations (cloaks, hats, gloves, jewelry, etc.), pets, hanging pets (pets hung on the virtual object), attack props, and other items that the virtual object can wear on its body.
  • These components can serve as parts of a suit; at least two components are required to form a suit.
  • the first virtual object may be a user-controlled virtual object.
  • Before step 301, when the first virtual object enters a game session in the virtual scene, the first virtual object may not be wearing any components.
  • At this time, the first virtual object can wear components based on the environment color of the first area, or wear preset components (for example, a basic suit required for game play, a preset suit set by the player, etc.).
  • In step 302, while the first virtual object is in the first area in the virtual scene, in response to the color of the first area not matching the color of the first component in the first suit, the first component is replaced with a second component.
  • The second component matches the color of the first area and is worn at the same location as the first component.
  • As an example, the color similarity is calculated as follows: the grayscale values of the red, green, and blue channels of the first component and the grayscale values of the red, green, and blue channels of the first area are respectively mapped to points in the normalized Hue-Saturation-Value (HSV) color space, and the distance between the two points is calculated.
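A minimal sketch of this similarity computation, assuming a plain Euclidean distance between the two HSV points; the function names, the normalization by the maximal distance √3, and the linear treatment of hue are assumptions not spelled out in the text:

```python
import colorsys
import math

def rgb_to_hsv_point(r, g, b):
    """Map 0-255 RGB channel values to a point in the normalized HSV space."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def color_similarity(rgb_a, rgb_b):
    """Similarity in [0, 1]: 1 minus the Euclidean distance between the two
    HSV points, normalized by sqrt(3) (the largest possible distance).
    NOTE: hue is treated linearly here for simplicity; the text only
    specifies 'the distance between two points'."""
    pa = rgb_to_hsv_point(*rgb_a)
    pb = rgb_to_hsv_point(*rgb_b)
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))
    return 1.0 - dist / math.sqrt(3)
```

Under this sketch, a white backpack scores much higher against a snow-colored area than a green one does, which is the matching behavior the example describes.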
  • the first area can be any area in the virtual scene.
  • The area in the virtual scene can be divided according to any of the following methods: 1. Division by terrain, for example, into mountain areas, plain areas, basin areas, forest areas, lake areas, and so on. 2. Division by area, for example, dividing the virtual scene into multiple rectangles, squares, or circles of equal area according to the grid in the map of the virtual scene. 3. Division by function, for example, into a warehouse area, a residential area, a field area, an agricultural area, and so on.
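Method 2 (grid division into equal cells) can be sketched as follows; the cell size and the function name are illustrative assumptions:

```python
def area_id(x, y, cell_size=100.0):
    """Grid-based division: map a position in the virtual scene's map to the
    id of the equal-sized square cell that contains it. Positions that share
    a cell id belong to the same area."""
    return (int(x // cell_size), int(y // cell_size))
```

The area a virtual object is in can then be looked up from its coordinates alone, and a change of `area_id` marks the object leaving one area and entering another.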
  • Figure 5A is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • The first virtual object 502 is in the virtual scene, and the environment color is determined based on the color of the ground 503 of the virtual scene on which the first virtual object 502 stands.
  • Component 501 is a component located on the head (wearing position) of the virtual object, such as a helmet; component 501 does not match the environment color of the virtual scene.
  • Figure 5B is a schematic diagram of a virtual scene interface provided by an embodiment of the present application. Component 501 is replaced with a component 504 that matches the color of the virtual scene.
  • In some embodiments, the virtual scene also includes an automatic dressing control. Replacing the first component with the second component in response to the color of the first area not matching the color of the first component in the first suit can be implemented as follows: in response to an opening operation on the automatic dressing control, displaying the automatic dressing control in an on state; and in response to the color of the first area not matching the color of at least one first component in the first suit, automatically replacing the first component with the second component.
  • That is, when the automatic dressing control is on, component switching is performed automatically, and a first component that does not match the color of the first area is switched to the second component; when the automatic dressing control is off, no component switching is performed.
  • The opening operation can be the user's click, long press, or similar operation on the automatic dressing control. When the user clicks or long-presses an automatic dressing control that is on, the automatic dressing control switches to the off state.
  • FIG. 5C is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • the automatic dressing control 505 is displayed in the virtual scene in a floating layer manner.
  • Figure 5D is a schematic diagram of the control states provided by an embodiment of the present application. When the automatic dressing control is on, the automatic dressing function is executed. After at least one component in the suit is automatically switched, if the current number of changes reaches the upper limit of the number of changes (for example, 10 times), the automatic dressing control switches from the on state to the off state, the automatic dressing function is no longer executed, and when a trigger operation on the control is received, the control does not respond to the trigger operation.
  • After automatic component switching is performed in the suit, if the current number of changes has not reached the upper limit of the number of changes, the automatic dressing control enters a cooling state. The automatic dressing function is not executed in the cooling state, and a countdown corresponding to a preset cooling duration (for example, 60 seconds) is displayed on the automatic dressing control until the countdown ends. When the preset cooling duration is reached, the automatic dressing control returns to the on state.
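The on/off, change-cap, and cooling behavior described above can be sketched as a small state machine; the class and method names, and the injectable clock used for testing, are illustrative assumptions (the cap of 10 changes and the 60-second cooldown follow the examples in the text):

```python
import time

class AutoDressControl:
    """ON performs automatic replacement; each replacement below the cap
    starts a cooldown during which further replacements are refused;
    reaching the replacement cap switches the control OFF, after which
    triggers are ignored entirely."""

    def __init__(self, max_changes=10, cooldown=60.0, clock=time.monotonic):
        self.on = True
        self.changes = 0
        self.max_changes = max_changes
        self.cooldown = cooldown
        self.clock = clock
        self.ready_at = 0.0  # earliest time the next replacement may happen

    def try_replace(self):
        now = self.clock()
        if not self.on or now < self.ready_at:
            return False  # off (cap reached) or still in the cooling state
        self.changes += 1
        if self.changes >= self.max_changes:
            self.on = False  # cap reached: switch from on to off
        else:
            self.ready_at = now + self.cooldown  # enter the cooling state
        return True
```

A fake clock makes the cooldown and the cap easy to exercise without real waiting.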
  • the third component may be a component associated with the manual change control, or a component actively selected by the user.
  • For example, the user wants to try on a newly acquired hat B (the third component). In response to a trigger operation on the manual dressing control, hat A (the first component) currently worn by the first virtual object is replaced with hat B (the third component).
  • Before a wearing time threshold is reached, the first virtual object continues to wear hat B. When the wearing time threshold is reached, if the color of hat B does not match the environment color of the virtual scene, hat B is replaced with a hat D that matches the environment color (the fourth component).
  • Before responding to the trigger operation on the manual dressing control, the first component is determined in any of the following ways: 1. In response to a selection operation on any component in the first suit, the selected component is used as the first component. 2. The component with the largest color difference from the other components in the first suit is used as the first component; for example, calculate the color similarity between every two components in the first suit, obtain for each component the sum of its color similarities with the other components, and use the component with the smallest similarity sum as the component with the largest color difference from the others. 3. The component with the smallest performance parameters in the first suit is used as the first component.
  • the performance parameters include at least one of the following: protection performance parameters, attack performance parameters, the level of the virtual object required to wear the component, and movement speed performance parameters.
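Method 2 above (the component whose similarity sum is smallest) can be sketched as follows; the function names and the pluggable similarity function are illustrative assumptions:

```python
def most_different_component(components, similarity):
    """For each component, sum its color similarity to every other component
    in the suit; the component with the smallest sum has the largest color
    difference from the others. `similarity` is any symmetric function
    returning a value in [0, 1]."""
    def similarity_sum(c):
        return sum(similarity(c, other) for other in components if other is not c)
    return min(components, key=similarity_sum)
```

With a toy scalar "color" and `1 - |a - b|` as similarity, the outlier value is picked, as method 2 intends.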
  • The manual dressing control cannot be triggered in the cooling state, and a countdown corresponding to the preset cooling duration (for example, 60 seconds) is displayed on the manual dressing control until the countdown ends. When the preset cooling duration is reached, the manual dressing control returns to the usable state.
  • the automatic dress change control or the manual dress change control can also be hidden to indicate that the use of the automatic dress change control or the manual dress change control is prohibited.
  • the color similarity between the candidate part and the first region is greater than the color similarity threshold. For example, if the color similarity between the candidate component and the first region is greater than the color similarity threshold, it means that the color of the candidate component matches the color of the first region.
  • the second component may be the candidate component that has the highest color similarity to the first region among the candidate components that satisfy the filtering condition.
  • Figure 3B is a schematic flowchart of a virtual object suit processing method provided by an embodiment of the present application. The color similarity is determined through the following steps 311 to 312, as detailed below.
  • step 311 the color vector of the associated area of the first component in the first area is determined.
  • the associated area is the area corresponding to the virtual environment closest to the first component, and is a geometric area formed with the first virtual object as the center.
  • the virtual environment closest to the feet of the first virtual object is the ground.
  • a circle is formed on the ground of the virtual scene, and the circle is used as the associated area.
  • the area of the associated area is determined based on the area occupied by the first virtual object in the virtual scene and is positively related to the size of the first virtual object. For example, a circular area on the ground whose area is a preset multiple (e.g., 10 times) of the area occupied by the virtual object is used as the associated area.
  • the color vector is used to represent the color distribution characteristics of the environment of the component or virtual scene.
  • the color distribution characteristics refer to the types of colors included in the environment and the proportion of each color.
  • Step 311 can be implemented through the following steps 3111 to 3114, which are described in detail below.
  • the game screen within the virtual object's field of view can be captured every preset time period (for example, 10 seconds) to obtain a field of view screen image.
  • the field of view image does not include controls, minimaps and other parts displayed in the form of floating layers in the virtual scene, which avoids additional interference factors from being mixed into the field of view image, thus improving the accuracy of obtaining color vectors.
  • the virtual scene can be a 3D scene, so the plane where the associated area is located and the plane where the field-of-view image is located are not necessarily parallel. Based on the plane area to which the associated area is mapped in the field-of-view image, the field-of-view image is segmented to obtain the associated area image.
  • the texture material image of the virtual scene closest to the first virtual object may also be used as the associated area image. For example: based on the associated area, at least part of the texture material image of the ground where the first virtual object is standing is intercepted as the associated area image.
  • for example, when the associated area image of the upper body parts of the first virtual object 502 includes a part of the virtual obstacle 517, the upper body parts are switched to parts that match the color of the virtual obstacle 517; similarly, when the associated area image of the leg parts of the first virtual object 502 includes a part of the ground 503 of the virtual scene, the leg parts are switched to parts that match the color of the ground 503 of the virtual scene.
  • color ratio data can be presented in the form of data, tables, histograms, etc.
  • the color ratio data includes the proportion of each color in the associated area image to all colors in the associated area image.
  • Step 3113 can be implemented in the following manner: the associated area image is scaled down to a preset size (for example: 8 pixels × 8 pixels, 64 pixels in total; or 16 pixels × 16 pixels, 256 pixels in total), and the reduced image is converted into a grayscale image; proportion statistics are then performed on each color in the grayscale image to obtain the color ratio data of the associated area image. For example, for a reduced image of 8 pixels × 8 pixels, the reduced associated area image is downsampled based on a preset 64-level grayscale to obtain a grayscale image, so the maximum number of color types in the grayscale image is 64. The total number of pixels in the grayscale image is obtained, the number of pixels corresponding to each color in the grayscale image is counted, and the ratio between the number of pixels corresponding to each color and the total number of pixels is used as the proportion value of that color.
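The proportion statistics described above can be sketched as follows, assuming the associated area image has already been reduced to 8 × 8 pixels and quantized to 64 gray levels; all names are illustrative:

```python
from collections import Counter

def color_ratio_data(gray_pixels):
    """gray_pixels: flat list of 64 gray levels (0..63) for the 8x8 image.
    Returns {gray_level: proportion of all pixels}."""
    total = len(gray_pixels)
    counts = Counter(gray_pixels)
    return {level: n / total for level, n in counts.items()}

# 8 x 8 = 64 pixels: half at level 0, a quarter each at levels 17 and 42
pixels = [0] * 32 + [17] * 16 + [42] * 16
ratios = color_ratio_data(pixels)
print(ratios[0], ratios[17], ratios[42])  # 0.5 0.25 0.25
```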
  • a histogram can be made based on the color ratio data to obtain a color histogram.
  • Figure 8 is a schematic diagram of a color histogram provided by an embodiment of the present application; the length of each bar in the color histogram represents the proportion of a different type of color in the associated area image, and S1, S2, S3, S4, S5, S6, S7, S8, S9, and S10 correspond to different color systems.
  • Each color system includes multiple color types, and the number of color types corresponding to each color system is the same.
  • in step 3114, the color vector of the associated area is extracted from the color ratio data.
  • the color ratio data can be converted into a less complex color vector through a neural network model.
  • the color ratio vector of the color ratio data is determined based on the proportion value corresponding to each color in the color ratio data; that is, the proportion values corresponding to each color are combined into a vector to obtain the color ratio vector.
  • the total number of dimensions of the color ratio vector is the same as the number of color types in the color ratio data; the color ratio vector is then reduced in dimension and mapped to the color vector of the associated area.
  • dimensionality reduction mapping is implemented in the following way: the first total number of dimensions preconfigured for the reduced color vector is obtained, where the first total number of dimensions is smaller than the second total number of dimensions of the color ratio vector; all colors in the color ratio vector are divided into color intervals whose number equals the first total number of dimensions; the proportion values within each color interval are weighted and summed; the weighted summation result of each color interval is normalized; and the normalized results are combined into the reduced color vector, which is the color vector of the associated area.
  • the dimension of the color vector obtained by the dimensionality reduction mapping is 6 dimensions.
  • the dimensionality reduction mapping of the 64-dimensional color ratio vector can be achieved in the following way: all the colors in the 64-dimensional color ratio vector are divided into 6 color intervals; the weighted summation results of the proportion values of the colors in the 6 color intervals are obtained; and the 6 weighted summation results are normalized to obtain 6 normalized results, namely c, d, e, f, g, h.
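A minimal sketch of this 64-to-6 dimensionality reduction, assuming equal-width color intervals and uniform weights of 1 in the weighted summation (the embodiments leave the interval boundaries and weights unspecified):

```python
def reduce_color_vector(ratio_vector, target_dims=6):
    """Map a 64-dimensional color ratio vector to `target_dims` dimensions:
    split into nearly equal color intervals, sum each interval (weights
    of 1 assumed), then normalize the interval sums."""
    n = len(ratio_vector)
    bounds = [i * n // target_dims for i in range(target_dims + 1)]
    sums = [sum(ratio_vector[bounds[i]:bounds[i + 1]])
            for i in range(target_dims)]
    total = sum(sums) or 1.0
    return [s / total for s in sums]  # c, d, e, f, g, h above

uniform = [1.0 / 64] * 64
vector = reduce_color_vector(uniform)
print(len(vector), round(sum(vector), 6))  # 6 1.0
```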
  • Figure 3D is a schematic flowchart of the virtual object package processing method provided by the embodiment of the present application.
  • the color vector of each candidate component is determined through the following steps 3121 to 3123, which will be described in detail below.
  • in step 3121, the following processing is performed on each candidate part: each texture material of the candidate part is extracted, and the texture materials are combined into a candidate part image of the candidate part.
  • when the virtual scene is a two-dimensional virtual scene and the component is also a two-dimensional component, the three views or the front and rear views of the component are tiled to form a component image.
  • when the component is a three-dimensional component, all the texture materials on the outer surface of the component are obtained and tiled to form a component image.
  • the color vector of each part owned by the virtual object can be obtained in advance and stored in the database.
  • the dimensionality of a color vector is positively related to the level of refinement required for color recognition.
  • step 3122 the candidate part image is converted into color ratio data of the candidate part image.
  • step 3122 is implemented by: reducing the candidate component image and converting the reduced image into a grayscale image; the color ratio data of each color of the candidate component image is then obtained statistically from the grayscale image.
  • step 3122 is a conversion process performed on the candidate component image
  • step 3113 is a conversion process performed on the associated region image.
  • the principles of the two steps of conversion processing are the same.
  • for step 3122, please refer to step 3113, which will not be described again here.
  • in step 3123, the color vector of the candidate part is extracted from the color ratio data.
  • step 3123 may refer to step 3114, which will not be described again here.
  • the vector distance is used to characterize the color similarity between the candidate part and the first region, and the vector distance is negatively correlated with the color similarity.
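The relation between vector distance and color similarity can be illustrated with Euclidean distance; the specific metric is an assumption, since the embodiments only require that distance be negatively correlated with similarity:

```python
import math

def vector_distance(a, b):
    """Euclidean distance between two color vectors (an assumed metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

region        = [0.40, 0.30, 0.10, 0.10, 0.05, 0.05]
similar_part  = [0.38, 0.32, 0.10, 0.10, 0.05, 0.05]
contrast_part = [0.05, 0.05, 0.10, 0.10, 0.30, 0.40]

# Smaller distance <=> higher color similarity (negative correlation)
print(vector_distance(region, similar_part) <
      vector_distance(region, contrast_part))  # True
```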
  • frequent replacement of parts in a set of virtual objects may be avoided by replacing the first part with a second part in response to satisfying the replacement constraint, wherein the replacement constraint includes at least one of the following:
  • the number of costume changes of the first virtual object in the current game does not reach the upper limit of costume changes (for example: 10 times).
  • the following processing is performed: in response to the first area being a preset dressing area of the first virtual object and the wearing position corresponding to the first part being the preset wearing position of the preset dressing area, the preset part associated with the preset wearing position is used as the second part, and the first part is replaced with the second part.
  • the preset parts corresponding to the preset wearing positions of each area in the virtual scene can be preset. If the virtual object moves to such an area, the parts in the virtual object's suit that do not match the environment color are replaced with the preset part corresponding to the wearing position.
  • Figure 7 is a schematic diagram of a virtual scene map provided by an embodiment of the present application; in the virtual scene map 705, it is assumed that the area 701 is snow mountain terrain, the preset wearing position corresponding to the area 701 is the upper body, and the preset component corresponding to the preset wearing position is a white top.
  • when the first virtual object moves to the first area, if the upper body component (the first component) of the first virtual object does not match the color of the first area, the upper body component is switched to the white top.
  • the color of some components in the first set can be changed in the following manner so that the component colors match the environment color: in response to the color of the first area not matching the color of at least one first component in the first set, and the first component meeting the color change condition, the color of the first component is replaced with a target color that matches the color of the first area.
  • the color change conditions include at least one of the following:
  • each candidate part corresponding to the first part does not match the color of the first region, where the candidate parts are owned by the first virtual object. For example, for a wearing position, when the color similarity between the color of each candidate component and the color of the first area is less than the color similarity threshold, none of the candidate components corresponding to the first component matches the color of the first area.
  • the first component has a binding relationship with other components in the first set, where the binding relationship means that the components support each other functionally, and the virtual object can complete complex operations using the components.
  • the function of the first component is stronger than that of each candidate component corresponding to the first component, where the function includes at least one of the following: defense, attack, and movement speed.
  • in response to the color of the first area not matching the color of at least one first component in the first set, and the first component not meeting the color change condition, the first component is replaced with the second component.
  • the dress-up prompt is displayed in at least one of the following ways: a voice prompt, a text message prompt, or a special-effects animation prompt (for example: displaying a gradually disappearing aperture centered on the replaced part of the virtual object).
  • prompt information 516 is displayed in the virtual scene, the content of which is “Appearance has been changed”.
  • in step 402A, in response to a trigger operation on the color inversion control, the first component in the first set that matches the color of the first area is replaced with a fifth component.
  • FIG. 6D is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • the first virtual object 502 is in an open plain area. If the user wants to make the first virtual object 502 more prominent to help teammates distinguish it, the user can trigger the color inversion control 512 so that the worn component 513A of the first virtual object 502 that matches the environment color is replaced with a component of the opposite color.
  • Part 513A matches the color of ground 503A of the virtual scene.
  • Figure 6E is a schematic diagram of a virtual scene interface provided by an embodiment of the present application. Part 513A is replaced with a part 514 that does not match the color of the environment, making the first virtual object 502 more recognizable in the virtual scene.
  • replacing the components worn by the virtual object with components whose colors do not match the environment makes the virtual object more recognizable in the virtual scene, making it easier for the virtual object to perform tasks that do not require concealment.
  • FIG. 4B is a schematic flowchart of a virtual object package processing method provided by an embodiment of the present application, which will be described in conjunction with the steps shown in FIG. 4B .
  • step 401B the virtual scene is displayed.
  • the virtual scene includes a first virtual object wearing a first suit
  • the first suit includes a plurality of components
  • the plurality of components are distributed at different parts of the first virtual object.
  • step 401B can refer to step 301, which will not be described again here.
  • in step 402B, in response to the first virtual object leaving the first area and entering the second area, the following processing is performed: if the color difference between the second area and the first area is greater than the color difference threshold, the first suit is replaced as a whole with a second suit that matches the color of the second area, and the second suit continues to be worn in the second area.
  • the color difference can be characterized as the difference between 1 and color similarity.
  • the color difference threshold can be the difference between 1 and the color similarity threshold. Color similarity is negatively correlated with color difference: the higher the similarity, the smaller the difference. For example, if the color similarity threshold is 0.7, the color difference threshold is 0.3. When the color similarity between the first suit and the environment color is 0.6, the color difference is 0.4, which is greater than the color difference threshold of 0.3, so the first suit is replaced with the second suit.
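The worked example above can be checked with a few lines of code; the function name is illustrative:

```python
def needs_replacement(color_similarity, similarity_threshold=0.7):
    """Replace the suit when the color difference (1 - similarity)
    exceeds the color difference threshold (1 - similarity threshold)."""
    color_difference = 1 - color_similarity
    difference_threshold = 1 - similarity_threshold
    return color_difference > difference_threshold

print(needs_replacement(0.6))  # True: difference 0.4 > threshold 0.3
print(needs_replacement(0.8))  # False: difference 0.2 <= threshold 0.3
```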
  • step 403B if the color difference between the second area and the first area is less than or equal to the color difference threshold, then in the second area, the first virtual object is controlled to continue wearing the first suit.
  • the color difference within each area is smaller than the color difference between areas, that is, the color difference within the area is smaller.
  • the color difference may be less than the color difference threshold.
  • the replaced suit can be kept in the area until the virtual object enters another area, or until the color difference between the environment around the virtual object and the first suit is greater than the color difference threshold.
  • the first area and the second area are not adjacent, and there is a third area between the first area and the second area; when the first virtual object is in the third area, the first virtual object is controlled Continue wearing the first outfit.
  • the third area may be a transition area between the first area and the second area, and the color difference between the third area and the first area is small.
  • the third area is a snow terrain with a small color difference from the first area
  • the first virtual object is controlled to continue wearing the first suit.
  • the process jumps to controlling the first virtual object to continue wearing the second suit.
  • the third component can be determined in any of the following ways: 1. In response to a selection operation on any component in the first set, the selected component is taken as the third component. 2. The component in the first set with the largest color difference from the other components is taken as the third component; for example, the color similarity between every two components in the first set is calculated, and for each component the sum of its color similarities to the other components is obtained; the component with the smallest similarity sum is taken as the component with the largest color difference from the other components. 3. The component with the smallest performance parameters in the first set is taken as the third component.
  • the performance parameters include at least one of the following: protection performance parameters, attack performance parameters, the level of the virtual object required to wear the component, and movement speed performance parameters.
  • the first virtual object does not have a corresponding second suit in the second area; for example, the absence of a corresponding second suit means that the second suit corresponding to the second area is not preset.
  • the number of parts that do not match the color of the second area is less than the replacement quantity threshold; the replacement quantity threshold is positively related to the total number of parts in the suit, and can be half of the total number of parts. For example, a suit has six parts: hat, gloves, shoes, top, pants, and virtual attack prop, so the replacement quantity threshold is 3. When the number of parts that do not match the color of the second area is less than 3, only those parts are replaced instead of performing a whole-suit replacement.
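The per-part versus whole-suit decision described above can be sketched as follows; the part names and function name are illustrative assumptions:

```python
def replacement_mode(mismatched_parts, total_parts=6):
    """Per-part replacement when fewer parts mismatch than the
    replacement quantity threshold (half the suit's part count)."""
    threshold = total_parts // 2  # six parts -> threshold of 3
    return "per-part" if len(mismatched_parts) < threshold else "whole-suit"

print(replacement_mode(["hat", "gloves"]))                  # per-part
print(replacement_mode(["hat", "gloves", "shoes", "top"]))  # whole-suit
```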
  • the third component has no binding relationship with other components in the first set.
  • the binding relationship refers to the functional support between components, and virtual objects can use components to complete complex operations.
  • the components in the virtual object's suit automatically change following the color of the scene in the game, which reduces or avoids the possibility of the virtual object being exposed in the virtual scene.
  • users can automatically replace concealed suits, reducing the cost of operation and thinking in battle, and improving the user's gaming experience.
  • players can change the clothes of the virtual objects they control.
  • Players can dress up the virtual object by selecting parts from different wearing parts of the virtual object in the game warehouse.
  • players can also dress up the virtual object by picking up material packages in the virtual scene to obtain parts.
  • the terrain and environment of the virtual scene are changeable, and the game requires constant attention to changes in the environment and the movements of hostile virtual objects.
  • players cannot spare extra time and energy to match the components of the virtual object's suit, making it difficult to quickly put on the required suit (for example, a highly concealed suit) while moving through the virtual scene, and they lack a means of changing outfits quickly.
  • the virtual object suit processing method provided by this application can switch the parts in the virtual object suit according to the color of the virtual scene during the game, and replace the parts that do not match the color of the environment with the ones that match the color of the environment. components to improve the concealment of virtual objects in virtual scenes.
  • the virtual scene includes a virtual object, and the virtual object wears a suit.
  • the suit is the clothing of the virtual object in the game.
  • the suit is composed of a variety of components, and the components are items worn by the virtual object, such as tops, pants, shoes, etc.
  • the suit in the embodiment of this application covers all equipment, fashions, and accessories on the virtual object.
  • other forms of pets and carried objects may appear.
  • the warehouse of virtual objects includes: a warehouse that stores game props (this warehouse stores Ghillie Suit (clothes used for disguise)) and a player's fashion warehouse.
  • the virtual scene also includes automatic dress-up controls and manual dress-up controls.
  • the color of the area where the virtual object is located in the game can be intelligently identified to obtain the color types and the proportion of each color, and component replacement based on the regional color of the virtual scene can be performed automatically or manually.
  • the automatic method is: set the automatic dressing control option to on, and automatically switch the components in the virtual object's current outfit to components that match the environment color of the current scene.
  • the manual method is: when the user triggers the manual dressing control, the parts in the virtual object's current outfit are switched to parts that match the environment color of the current scene.
  • FIG. 9 is an optional flowchart of a virtual object package processing method provided by an embodiment of the present application. Taking the terminal device as the execution subject, the explanation will be given in conjunction with the steps shown in Figure 9 .
  • step 901 the automatic dress-up control is turned on, and it is determined whether there is a first component in the current suit of the virtual object that does not match the color of the current environment.
  • the automatic dress-up control is a control used to indicate whether the automatic dress-up function is turned on.
  • the automatic dress-up control is turned on, it is in the automatic dress-up mode and executes the automatic dress-up function.
  • the automatic dressing control is turned off, the automatic dressing function will not be executed.
  • the parts corresponding to each wearing part of the player's virtual object are automatically compared with the environment closest to the wearing part to determine whether the colors of the two match.
  • color matching means that the color difference between the component and the environment is small, that is, the color similarity between the color of the component and the color of the environment is greater than or equal to the similarity threshold, which can be 0.5 (the value range of the similarity is 0 ≤ similarity ≤ 1); when the color similarity between the color of the component and the color of the environment is less than the similarity threshold, the color of the component does not match the color of the environment.
  • step 901 determines whether there is a first component in the current suit of the virtual object that does not match the color of the current environment.
  • in step 902, the game screen corresponding to the virtual object is captured to obtain a field-of-view screen image.
  • the game screen within the virtual object's field of view can be captured every preset time period (for example, 10 seconds) to obtain a field of view screen image.
  • the controls in the virtual scene are not included in the visual field image.
  • step 903 the field of view screen image is divided to obtain the associated area image of the first component, the size of the associated area image is reduced, and grayscale conversion processing is performed on the reduced associated area image.
  • the segmentation of the field-of-view screen image is performed in the following manner: the virtual objects and environmental interference factors in the field-of-view screen image are segmented out to obtain the global environment image of the virtual scene in the game screen. For example, when the field-of-view screen image includes virtual objects, the sky of the virtual scene, virtual buildings, virtual vehicles (such as virtual aircraft, cars, etc.), and the ground of the virtual scene, the virtual objects and the sky are segmented out of the field-of-view image, and the segmented field-of-view image is used as the global environment image.
  • the area associated with each part of the virtual object is determined in the global environment image, and the global environment image is segmented to obtain the associated area image of each part.
  • step 904 the color histogram of the associated area image is extracted, and the multi-dimensional vector A is formed based on the color distribution data of the color histogram.
  • the dimensions of the multi-dimensional vector can be determined according to the precision required for game recognition, and the precision is positively related to the dimension.
  • a multi-dimensional vector is a six-dimensional vector as an example.
  • the proportion value of each color in the color histogram is combined into a color histogram vector corresponding to the color histogram (the color proportion vector above).
  • each color corresponds to one dimension.
  • the total number of dimensions of the color histogram vector is the same as the number of color types in the color histogram.
  • the color histogram vector is reduced in dimension and mapped based on the preset number of dimensions (for example, 6) to obtain a six-dimensional vector (the reduced color vector above).
  • Dimensionality reduction mapping is implemented in the following way: divide all colors in the color histogram vector into 6 color intervals, perform a weighted sum of each proportion value of each color interval, and calculate the weighted summation result corresponding to each color interval. Perform normalization and combine each normalized result into a six-dimensional vector.
  • the color data set includes seven colors, in order: red, orange, yellow, green, cyan, blue, and purple.
  • the embodiment of this application takes 64 colors as an example to illustrate.
  • the texture material image of the object in the virtual scene closest to the component can also be obtained, the color histogram is obtained based on the texture material, and the multi-dimensional vector A is obtained based on the color histogram.
  • the shoes of the virtual object are closest to the ground of the virtual scene, and the multi-dimensional vector A is obtained based on the texture material image of the ground.
  • steps 905 to 907 may be executed before step 901.
  • the color information of each part owned by the virtual object is obtained in advance and represented in the form of a color multi-dimensional vector: multiple parts form a suit, and for each part, the texture material on the outer surface of the part is tiled into a component image, and the color multi-dimensional vector is obtained based on the component image.
  • the color multi-dimensional vector corresponding to the new part is also stored in the database.
  • step 905 a component image of each component of the virtual object is obtained.
  • when the virtual scene is a two-dimensional virtual scene and the component is also a two-dimensional component, the three views or the front and rear views of the component are tiled to form a component image.
  • when the component is a three-dimensional component, all the texture materials on the outer surface of the component are obtained and tiled to form a component image.
  • step 906 a color histogram for each part is obtained based on the part image of each part.
  • step 907 the multi-dimensional vector B of each part is formed based on the color distribution data of the color histogram of each part.
  • the dimensions of multi-dimensional vector B are the same as the dimensions of multi-dimensional vector A.
  • step 906 and step 907 are the same as step 904 and will not be described again here.
  • step 908 the vector distance between the multi-dimensional vector A and each multi-dimensional vector B is determined.
  • the vector distance between color vectors can be used to characterize the color similarity between the component and the environment, and the vector distance is negatively correlated with the color similarity. The distance between multi-dimensional vector A and each multi-dimensional vector B is calculated; the two vectors with the smallest vector distance have the highest color similarity.
  • step 909 the component corresponding to the multi-dimensional vector B with the shortest vector distance is selected as the second component closest to the color of the current environment.
  • the component corresponding to the multi-dimensional vector Bi is the component that best matches the environment color for this wearing position; the non-matching part is replaced with the part that best matches the color.
  • step 910 the first part of the virtual object is replaced with a second part.
  • the second part corresponds to the same wearing part as the first part, and for the wearing part, the second part is the part owned by the virtual object that has the highest color similarity to the environment color of the virtual scene.
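Steps 908 to 910 can be sketched end to end as follows: among the candidate parts owned for one wearing position, the part whose color multi-dimensional vector B is nearest to the environment vector A is selected as the second part. The vectors and part names are illustrative assumptions:

```python
import math

def nearest_part(env_vector_a, candidate_vectors_b):
    """candidate_vectors_b: dict of part name -> color multi-dimensional
    vector B. Returns the part with the smallest vector distance to A."""
    def distance(name):
        return math.sqrt(sum((a - b) ** 2 for a, b in
                             zip(env_vector_a, candidate_vectors_b[name])))
    return min(candidate_vectors_b, key=distance)

vector_a = [0.50, 0.20, 0.10, 0.10, 0.05, 0.05]  # environment color vector
helmets = {
    "green helmet": [0.45, 0.25, 0.10, 0.10, 0.05, 0.05],
    "red helmet":   [0.05, 0.05, 0.10, 0.10, 0.25, 0.45],
}
print(nearest_part(vector_a, helmets))  # green helmet
```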
  • a prompt message may be displayed in the virtual scene interface to prompt the player that the component worn by the virtual object has been replaced.
  • Figure 5A is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • the first virtual object 502 is in the virtual scene.
  • the environment color of the virtual scene can be determined based on the ground 503 of the virtual scene on which the first virtual object 502 stands.
  • component 501 is a component located on the head (wearing position) of the virtual object, such as a helmet. Part 501 does not match the environment color of the virtual scene.
  • Figure 5B is a schematic diagram of a virtual scene interface provided by an embodiment of the present application. Component 501 is replaced with a component 504 that matches the color of the virtual scene.
  • embodiments of the present application improve the concealment of virtual objects in the virtual scene by replacing at least some components in the suit of the virtual object with components that match the environment color of the virtual scene, and facilitate rapid suit replacement for virtual objects during the game.
  • when the automatic dress-up control is turned off, if the user visually perceives that the suit of the virtual object differs from the environment color of the virtual scene, the automatic dress-up control may be triggered. In response to a trigger operation for the automatic dress-up control, components of the virtual object's suit that differ from the environment color are replaced with components that match the environment color. Alternatively, in response to the trigger operation for the automatic dress-up control, the suit of the virtual object is replaced with a suit preset by the player.
  • an automatic dress-up control and a manual dress-up control are displayed in the virtual scene.
  • Figure 5C is a schematic diagram of a virtual scene interface provided by an embodiment of the present application.
  • the automatic dress-up control 505 and the manual dress-up control 506 are displayed in the virtual scene in a floating layer manner.
  • the dress-up control is displayed in a cooling state. An upper limit may also be set for the number of dress changes.
  • Figure 5D is a schematic diagram of control states provided by an embodiment of the present application. When the automatic dress-up control is turned on, the automatic dress-up function is executed. After at least some components in the suit are automatically switched, if the current number of dress changes reaches the upper limit (for example, 10 times), the automatic dress-up control is switched from the on state to the off state, the automatic dress-up function is no longer executed, and trigger operations on the automatic dress-up control receive no response. If the current number of dress changes has not reached the upper limit, the automatic dress-up control enters the cooling state.
  • the automatic dress-up function is not executed in the cooling state, and a countdown corresponding to the preset cooling duration (for example, 60 seconds) is displayed on the automatic dress-up control until the preset cooling duration ends, at which point the automatic dress-up control returns to the on state.
  • when the current number of dress changes has not reached the upper limit (for example, 10 times), the manual dress-up control is in a usable state, and in response to a trigger operation for the manual dress-up control, at least some components of the virtual object's suit are switched; if the current number of dress changes has reached the upper limit, the manual dress-up control is displayed in a disabled state (refer to Figure 5D; the disabled state may be characterized as a grayscale display, or a disabled symbol displayed on the manual dress-up control).
  • after at least some components in the suit are switched, if the current number of dress changes has not reached the upper limit, the manual dress-up control enters the cooling state. In the cooling state, the manual dress-up control cannot be triggered, and a countdown corresponding to the preset cooling duration (for example, 60 seconds) is displayed on the manual dress-up control until the preset cooling duration ends, at which point the manual dress-up control returns to the usable state.
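The on/cooling/off behavior described above can be sketched as a small state machine. Everything here is illustrative (the class and method names, the use of `time.monotonic`); the 10-change limit and the 60-second cooldown mirror the examples in the text.

```python
import time

ON, OFF, COOLING = "on", "off", "cooling"

class DressUpControl:
    """Illustrative sketch of the dress-change control states:
    usable while under the change limit, cooling after each change,
    and off once the limit is reached."""

    def __init__(self, change_limit=10, cooldown=60.0):
        self.change_limit = change_limit
        self.cooldown = cooldown
        self.changes = 0
        self.cooling_until = 0.0

    def state(self, now=None):
        now = time.monotonic() if now is None else now
        if self.changes >= self.change_limit:
            return OFF          # limit reached: triggers receive no response
        if now < self.cooling_until:
            return COOLING      # countdown is shown on the control
        return ON

    def trigger(self, now=None):
        now = time.monotonic() if now is None else now
        if self.state(now) != ON:
            return False        # cooling or off: trigger is ignored
        self.changes += 1
        if self.changes < self.change_limit:
            self.cooling_until = now + self.cooldown
        return True
```

Passing an explicit `now` makes the state machine deterministic for testing; a client would call `trigger()` with no argument.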
  • the automatic dress change control or the manual dress change control can also be hidden to indicate that the use of the automatic dress change control or the manual dress change control is prohibited.
  • FIG. 5E is a schematic diagram of the warehouse interface provided by the embodiment of the present application; a warehouse control 507 is provided in the virtual scene. In response to a trigger operation for the warehouse control 507, a warehouse interface 508 is displayed.
  • the warehouse interface 508 includes an item bar and a fashion bar: the item bar stores virtual prop components and virtual equipment components owned by the virtual object, and the fashion bar stores fashion components owned by the virtual object.
  • in the warehouse interface, the automatic dress-up function can be performed; in response to a trigger operation for the manual dress-up control 506, at least some components in the suit of the virtual object are replaced with components that match the environment color; or, the suit of the virtual object is replaced with a preset suit corresponding to the manual dress-up control 506, and an "in use" indicator is displayed on the manual dress-up control 506 to indicate that the preset suit is being used.
  • setting the automatic dress-up control and the manual dress-up control in the warehouse interface avoids multiple controls blocking the virtual scene picture.
  • when the automatic dress-up control is turned on, if a wearing location of the virtual object does not wear any component, that location is automatically fitted with the component corresponding to it that best matches the environment color of the current area. For example: after the virtual object enters the game, if only a top and pants are worn and no components are worn on the feet and head, the virtual object is automatically fitted with shoes and a hat that match the environment color of the current area. Alternatively, if the virtual object is not wearing any components, the virtual object is automatically fitted with a suit that matches the environment color of the current area.
  • after the player turns on the automatic dress-up function, if the virtual object is not yet dressed, the suit closest to the environment color of the current area is automatically matched and put on; if the virtual object is already dressed and some components of the suit do not match the environment color, those components are replaced based on the environment; if no components match the environment, replacement is performed in units of suits.
  • the use of the automatic dressing control and the manual dressing control are not mutually exclusive.
  • the entire suit or some parts of the suit can be switched through the manual dressing control.
  • the user selects any part in the set of the virtual object as the part to be replaced, and triggers the manual dressing control to replace the part to be replaced with other parts associated with the manual dressing control.
  • the other components may be components that meet any of the following conditions: components that better match the color of the current area than the component to be replaced; components with better performance parameters than the component to be replaced; components with a higher frequency of use than the component to be replaced; components of the opposite color to the component to be replaced; components preferred by the user; and so on.
  • when the automatic dress-up control is turned on, if the player manually dresses the virtual object with any component through the manual dress-up control (for example, the player's favorite component), the automatic dress-up function is no longer performed within a preset time period. In response to the preset time period being reached and the environment color of the virtual scene not matching the color of at least some components in the suit of the virtual object, the at least some components are switched to the components that best match the current environment color.
  • Figure 6A is a schematic diagram of a virtual scene interface provided by an embodiment of the present application. The first virtual object 502 wears component 511 and component 510A, and the legs of the first virtual object are below the water surface 509 of the virtual scene. Component 511 is therefore occluded by the water in the virtual scene, making its color difficult to distinguish from above the water surface, and only the unoccluded component 510A of the first virtual object 502 can be replaced.
  • Figure 6B is a schematic diagram of a virtual scene interface provided by an embodiment of the present application; component 510A is replaced with a component 515 that matches the color of the virtual scene, while the component 511 obscured by water is not replaced.
  • the player may be prompted that the parts of the virtual object's set have been replaced by displaying prompt information 516 (refer to FIG. 6B , the content of the prompt information may be "Appearance has been changed").
  • users may need to improve the recognizability of the first virtual object in situations such as: multiple virtual objects being in a melee, the first virtual object acting collectively with teammate virtual objects, or the first virtual object being in a non-combat area of the game.
  • the weather factors and environmental factors of the virtual scene affect the visibility of virtual objects (for example: rain, snow, smoke, etc.).
  • the user may need to distinguish the first virtual object from other virtual objects and virtual scenes.
  • the virtual scene also includes an inverse color change control.
  • the user can trigger the inverse color change control to partially or entirely switch the suit of the virtual object to a suit with the opposite color to the environment of the virtual scene.
  • an opposite color is a color whose color similarity to the environment color is low.
  • the component with the lowest color similarity to the environment color of the current area is selected as the inverse-color component (the fifth component above) opposite to the environment color of the virtual scene. The currently worn component of the virtual object is replaced with the inverse-color component to improve the recognizability of the virtual object in the virtual scene.
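The inverse-color selection is the mirror image of the matching step: instead of minimizing the vector distance to the environment color, pick the candidate with the largest distance (lowest similarity). A minimal sketch with hypothetical names:

```python
def most_contrasting_component(env_vector, component_vectors):
    """Return the candidate component id with the LOWEST color similarity
    (largest squared distance) to the environment color vector --
    the inverse-color (fifth) component described above."""
    def sq_dist(cid):
        return sum((x - y) ** 2 for x, y in zip(env_vector, component_vectors[cid]))
    return max(component_vectors, key=sq_dist)
```

Squared distance is used since only the ordering matters for selecting the maximum.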
  • Figure 6D is a schematic diagram of a virtual scene interface provided by an embodiment of the present application; assuming that the current area is a grassland, the environment color of the virtual scene can be determined based on the ground 503 of the virtual scene in this area.
  • the ground 503 of the virtual scene is grass green
  • the component 513A worn by the virtual object 512 is a component that matches the color of the ground 503 of the virtual scene.
  • the component 513A is a green camouflage uniform.
  • the component 513A worn on the upper body of the virtual object 502 may be replaced with a component that does not match the environment color of the virtual scene.
  • Figure 6E is a schematic diagram of a virtual scene interface provided by an embodiment of the present application. The component 513A on the virtual object 502 is replaced with a component 514 that does not match the environment color of the virtual scene, so that the virtual object 502 has higher recognizability in the virtual scene.
  • by replacing at least some components of the virtual object's suit with components that do not match the environment color of the virtual scene, the virtual object becomes more recognizable in the virtual scene, making it easier for the user to observe the virtual object and to distinguish it from the virtual scene and from other virtual objects, which can improve the efficiency of human-computer interaction when the user controls the virtual object.
  • areas of the virtual scene are divided based on scene type (for example: city, ruins, snow, etc.), and the color difference within each area is smaller than the color difference between areas.
  • scene type for example: city, ruins, snow, etc.
  • the suit of the virtual object is partially or completely replaced according to the color corresponding to the area, and the replaced suit is kept in the area until the virtual object enters other areas.
  • FIG. 7 is a schematic diagram of a virtual scene map provided by an embodiment of the present application. In the virtual scene map 705, assume that area 701 is a snow mountain, area 702 is a grassland, and area 704 is a desert, and that area 703 exists between area 701 and area 702. Assume that the color difference within each of the three areas is small (the color similarity is greater than the color similarity threshold), while the color difference between areas is large (the color similarity is less than the color similarity threshold). Since the colors of the texture materials in the virtual scene are fixed, the color-matching suits or components for different areas can be predetermined based on the components owned by the virtual object.
  • Area 701, area 702, and area 704 are respectively provided with corresponding color-matching suits.
  • the color similarity between the colors at some positions in area 703 and area 701 is less than the color similarity threshold, while the color similarity between the colors at other positions and area 701 is greater than the color similarity threshold.
  • when the virtual object enters area 701, some components in the first suit of the virtual object are switched to preset components that match the environment color of the snow mountain scene in area 701 to form a third suit, and the third suit continues to be worn in area 701; or, the first suit is switched as a whole to a second suit that matches the environment color, and the second suit continues to be worn in area 701. When the virtual object enters area 703, in response to the environment color not matching the color of the components in the suit of the virtual object, the suit of the virtual object is switched to a suit that matches the environment color.
  • components in the suit of the virtual object are switched based on area switching in the virtual scene, which reduces the frequency of judging the similarity between the environment color and the suit color, and can reduce the consumption of computing resources and the memory usage of the client.
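The area-based switching above can be sketched as follows. The per-area suit table and the threshold value are assumptions (the area ids echo Figure 7), and real code would derive the color difference from the precomputed area colors rather than receive it as a parameter; the point is that the suit decision runs only on an area transition, not every frame.

```python
# Hypothetical table of predetermined color-matching suits per area.
AREA_SUITS = {
    "area_701": "snow_suit",     # snow mountain
    "area_702": "grass_suit",    # grassland
    "area_704": "desert_suit",   # desert
}

def on_area_change(current_suit, old_area, new_area, color_diff, threshold=0.3):
    """Return the suit to wear after the object moves from old_area to new_area.
    A small color difference keeps the current suit; a large one triggers a
    whole-suit switch to the area's predetermined suit (if any)."""
    if color_diff <= threshold:
        return current_suit                       # colors close enough: keep the suit
    return AREA_SUITS.get(new_area, current_suit) # otherwise switch as a whole
```

Entering transitional area 703 (no predetermined suit, small difference) keeps the first suit, while crossing into a contrasting area switches it.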
  • in embodiments of the present application, the components in the suit of the virtual object change automatically following the scene color in the game, which reduces the possibility of the virtual object being exposed in the virtual scene and avoids the negative interference on combat caused by the abundance of suit components.
  • the user can have a concealing suit replaced automatically, reducing the operation and thinking cost in combat and improving the user's gaming experience.
  • the software modules of the virtual object suit processing device 455 stored in the memory 450 may include: a display module 4551, configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed on different parts of the first virtual object; and a suit switching module 4552, configured to, during the period when the first virtual object is in a first area in the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replace the first component with a second component.
  • the second component matches the color of the first area and is worn at the same location as the first component.
  • the virtual scene also includes an automatic dress-up control;
  • the suit switching module 4552 is configured to, in response to an opening operation for the automatic dress-up control, display the automatic dress-up control in the on state; and, in response to the color of the first area not matching the color of the first component in the first suit, automatically replace the first component with the second component.
  • the virtual scene also includes a manual dress-up control. The suit switching module 4552 is configured to, in response to a trigger operation for the manual dress-up control, replace the first component in the first suit with a third component and maintain the switched first suit within a wear duration threshold, wherein the third component is any component with the same wearing location as the first component; and, in response to the switched first suit being maintained for the wear duration threshold and the color of the first area not matching the color of the third component in the first suit, replace the third component with a fourth component, wherein the fourth component matches the color of the first area and has the same wearing location as the third component.
  • the suit switching module 4552 is configured to determine the first component in any one of the following ways before responding to a trigger operation for the manual dress-up control: in response to a selection operation for any component in the first suit, take the selected component as the first component; take the component with the largest color difference from other components in the first suit as the first component; or take the component with the smallest performance parameters in the first suit as the first component.
  • the virtual scene also includes a manual dress-up control; the suit switching module 4552 is configured to display that the manual dress-up control is in a usable state in response to the manual dress-up condition being met; wherein the manual dress-up condition includes at least the following: One: the time interval between the current time and the last time of dressing up is greater than or equal to the interval threshold; the number of dressing up times of the first virtual object does not reach the upper limit of the number of dressing up times; in response to the color of the first area being different from the first The color of the first part in the set does not match, and a trigger operation is received on the manual change control to replace the first part with the second part.
  • the suit switching module 4552 is configured to display that the manual dress-up control is in a disabled state in any of the following ways in response to the manual dress-up condition not being met: hiding the manual dress-up control; displaying the manual dress-up control in grayscale Control; displays a disabled symbol on the manual dressup control.
  • the suit switching module 4552 is configured to, before replacing the first component with the second component, obtain multiple candidate components for the same wearing location as the first component, and filter out a candidate component that satisfies a filtering condition as the second component, wherein the multiple candidate components are owned by the first virtual object; the filtering conditions include any of the following: the function of the candidate component is the same as the function of the first component; the wearing location of the first component is not obscured by the virtual environment; the color similarity between the candidate component and the first area is greater than the color similarity threshold.
  • the suit switching module 4552 is configured to determine the color similarity, before replacing the first component with the second component, by: determining the color vector of the associated area of the first component in the first area; and determining the vector distance between the color vector of each candidate component and the color vector of the associated area, where the vector distance is used to characterize the color similarity between the candidate component and the first area, and the vector distance is negatively correlated with the color similarity.
  • the suit switching module 4552 is configured to obtain a field-of-view image corresponding to the first virtual object; segment the field-of-view image based on the associated area of the wearing location of the first component to obtain an associated area image; convert the associated area image to obtain color proportion data of the associated area image; and perform feature extraction on the color proportion data to obtain the color vector of the associated area.
  • the set switching module 4552 is configured to reduce the associated area image and convert the reduced image into a grayscale image; and collect color ratio data of each color in the associated area image from the grayscale image.
  • the suit switching module 4552 is configured to determine a color proportion vector of the color proportion data based on the proportion value corresponding to each color in the color proportion data, wherein the values of the dimensions of the color proportion vector correspond one-to-one with the proportion values; and to reduce the dimension of the color proportion vector and map it to the color vector of the associated area.
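The reduce → grayscale → proportion-statistics → vector pipeline can be sketched as below. The striding downsample, the luma weights, and the 4-band proportion vector are all illustrative choices; the source does not fix the reduction method, the grayscale formula, or the vector dimension.

```python
def shrink(img, factor=2):
    """Downsample an image (2-D list of RGB tuples) by striding -- a
    stand-in for the 'reduce' step; real code might average blocks."""
    return [row[::factor] for row in img[::factor]]

def to_gray(img):
    # Integer luma approximation of the grayscale conversion step.
    return [[(3 * r + 6 * g + b) // 10 for (r, g, b) in row] for row in img]

def color_proportions(gray, levels=4):
    """Proportion of pixels falling in each of `levels` gray bands --
    the 'color proportion data' of the associated-area image."""
    counts = [0] * levels
    total = 0
    for row in gray:
        for v in row:
            counts[min(v * levels // 256, levels - 1)] += 1
            total += 1
    return [c / total for c in counts]

def area_color_vector(img):
    # shrink -> grayscale -> proportion statistics -> (already low-dim) vector
    return color_proportions(to_gray(shrink(img)))
```

A uniformly white image concentrates all mass in the brightest band, a black one in the darkest; mixed areas spread across bands, giving the vector that is compared against candidate-component vectors.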
  • the suit switching module 4552 is configured to determine the color vector of each candidate component, before determining the color vector of the associated area of the first component in the first area, by performing the following processing on each candidate component: extracting each texture material of the candidate component and combining the texture materials into a candidate component image; converting the candidate component image into color proportion data of the candidate component image; and extracting the color vector of the candidate component from the color proportion data.
  • the suit switching module 4552 is configured to reduce the candidate component image and perform grayscale conversion on the reduced image to obtain a grayscale image; and to perform proportion statistics on each color in the grayscale image to obtain the color proportion data of the candidate component image.
  • the suit switching module 4552 is configured to determine a color proportion vector of the color proportion data based on the proportion value corresponding to each color in the color proportion data, wherein the values of the dimensions of the color proportion vector correspond one-to-one with the proportion values; and to reduce the dimension of the color proportion vector and map it to the color vector of the candidate component.
  • the suit switching module 4552 is configured to replace the first component with the second component in response to a replacement constraint being satisfied, wherein the replacement constraint includes at least one of the following: the number of dress changes of the first virtual object has not reached the upper limit; the first virtual object needs to be concealed; the stay time of the first virtual object in the first area is greater than a duration threshold; the area of the first area is greater than a dress-change area threshold.
  • the suit switching module 4552 is configured to identify the concealment requirement of the virtual object, before replacing the first component with the second component, in the following manner: calling a neural network model to perform concealment prediction on the first virtual object based on the environmental parameters of the first area and the attribute parameters of the virtual object, obtaining a concealment prediction result indicating whether the first virtual object needs to be concealed; wherein the attribute parameters of the virtual object include: the position information of the first virtual object, the position information of hostile virtual objects of the first virtual object, and the position information of teammate virtual objects of the first virtual object; and the environmental parameters of the first area include: the terrain information of the first area and the field of view of the first area.
  • the suit switching module 4552 is configured to train the neural network model in the following manner before calling the neural network model to perform concealment prediction on the first virtual object based on the environmental parameters of the first area and the attribute parameters of the virtual object: obtain the environmental parameters of the virtual scene and the game data of at least two camps, wherein the at least two camps include a losing camp and a winning camp, and the game data includes the positions where the virtual objects of the winning camp performed concealment behaviors and the positions where the virtual objects of the losing camp performed concealment behaviors; obtain annotated game data, in which the label of a position where a virtual object of the winning camp performed a concealment behavior is probability 1, and the label of a position where a virtual object of the losing camp performed a concealment behavior is probability 0; and train the initial neural network model based on the environmental parameters of the virtual scene and the annotated game data to obtain the trained neural network model.
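The training scheme above (winning-camp concealment positions labeled 1, losing-camp positions labeled 0) can be sketched with a logistic-regression stand-in for the neural network; the feature encoding and all names here are assumptions, chosen only to show the label-driven supervised setup.

```python
import math

def train_concealment_model(samples, epochs=200, lr=0.5):
    """Train on (feature_vector, label) pairs: label 1.0 for positions
    where winning-camp objects concealed themselves, 0.0 for losing-camp
    positions. Features are illustrative (e.g. normalized terrain /
    field-of-view values for the position)."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted concealment probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_concealment(model, x):
    """Probability that a position with features x conceals the object."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

After training, positions resembling winning-camp concealment spots score above 0.5, matching the probability-1/probability-0 labeling described in the source.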
  • the suit switching module 4552 is configured to perform the following processing in response to the color of the first area not matching the color of the first component in the first suit: in response to the first area being a preset dress-up area of the first virtual object and the wearing location corresponding to the first component being a preset wearing location of the preset dress-up area, take the preset component associated with the preset wearing location as the second component and replace the first component with the second component; wherein the color of the preset component matches the color of the first area.
  • the suit switching module 4552 is configured to replace the first suit as a whole with a second suit that matches the color of the first region in response to satisfying a global replacement condition, where the global replacement condition includes at least one of the following: A corresponding second suit is preset in the first area for the first virtual object; an overall replacement instruction for the first suit is received.
  • the suit switching module 4552 is configured to perform the following processing in response to the first virtual object leaving the first area and entering a second area: if the color difference between the second area and the first area is less than or equal to the color difference threshold, control the first virtual object to continue wearing the first suit in the second area; if the color difference between the second area and the first area is greater than the color difference threshold, replace the first suit as a whole with a second suit that matches the color of the second area, and continue wearing the second suit in the second area.
  • the suit switching module 4552 is configured to, before replacing the first suit as a whole with a second suit that matches the color of the second area, if a partial replacement condition is not satisfied, proceed to replacing the first suit as a whole with the second suit that matches the color of the second area; if the partial replacement condition is satisfied, replace a third component in the first suit with a fourth component, wherein the fourth component matches the color of the second area and has the same wearing location as the third component; wherein the partial replacement condition includes at least one of the following: the first virtual object does not have a corresponding second suit in the second area; the number of components that do not match the color of the second area is less than a replacement quantity threshold; the third component has no binding relationship with other components in the first suit.
  • the suit switching module 4552 is configured to replace the first component with the second component; wherein the color change condition includes at least one of the following: the color of each candidate component corresponding to the first component does not match the color of the first area, where the candidate components are owned by the first virtual object; the first component has a binding relationship with other components in the first suit; the function of the first component is stronger than that of each candidate component corresponding to the first component; the function of the first component is associated with the task currently performed by the first virtual object, wherein the second component does not have the function corresponding to the currently performed task.
  • the suit switching module 4552 is configured to, in response to the color of the first area not matching the color of the first component in the first suit and the first component satisfying the color change condition, change the color of the first component to a target color that matches the color of the first area.
  • the virtual scene also includes an inverse color change control;
  • the suit switching module 4552 is configured to, in response to the first virtual object not requiring concealment in the first area and a trigger operation being received for the inverse color change control, replace the first component in the first suit that matches the color of the first area with a fifth component; wherein the fifth component is a component with the opposite color to the first area, and the wearing location of the fifth component is the same as the wearing location of the first component.
  • the suit switching module 4552 is configured to, before replacing the first component in the first suit that matches the color of the first area with the fifth component, take, from among multiple candidate components, the candidate component with the lowest color similarity to the first area as the fifth component, where the multiple candidate components are owned by the first virtual object.
  • the display module 4551 is configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components, and the multiple components are distributed on different parts of the first virtual object; the virtual scene also includes an inverse color change control; the suit switching module 4552 is configured to, in response to a trigger operation for the inverse color change control, replace the first component in the first suit that matches the color of the first area with a fifth component; wherein the fifth component is a component with the opposite color to the first area, and the wearing location of the fifth component is the same as the wearing location of the first component.
  • the display module 4551 is configured to display a virtual scene, wherein the virtual scene includes a first virtual object wearing a first suit, the first suit includes a plurality of components, and the plurality of components are distributed on different parts of the first virtual object; the suit switching module 4552 is configured to, in response to the first virtual object leaving the first area and entering the second area, perform the following processing: if the color difference between the second area and the first area is greater than the color difference threshold, replace the first suit as a whole with a second suit that matches the color of the second area, and continue wearing the second suit in the second area; if the color difference between the second area and the first area is less than or equal to the color difference threshold, control the first virtual object to continue wearing the first suit in the second area.
  • the first area and the second area are not adjacent, and there is a third area between the first area and the second area; the suit switching module 4552 is configured to, while the first virtual object is in the third area, control the first virtual object to continue wearing the first suit.
  • the suit switching module 4552 is configured to, before the second suit continues to be worn in the second area, if the color distribution difference within the second area is less than or equal to the color difference threshold, proceed to the processing of controlling the first virtual object to continue wearing the second suit.
  • the suit switching module 4552 is configured to, before replacing the first suit as a whole with the second suit that matches the color of the second area, if a partial replacement condition is not satisfied, proceed to the processing of replacing the first suit as a whole with the second suit that matches the color of the second area; and, if the partial replacement condition is satisfied, replace a third component in the first suit with a fourth component, where the fourth component matches the color of the second area and is worn on the same part as the third component; the partial replacement condition includes at least one of the following: the first virtual object has no corresponding second suit in the second area; the number of components that do not match the color of the second area is less than a replacement quantity threshold; the third component has no binding relationship with other components in the first suit.
  • Embodiments of this application provide a computer program product or computer program. The computer program product or computer program includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the suit processing method for virtual objects described above in the embodiments of this application.
  • Embodiments of this application provide a computer-readable storage medium storing executable instructions. When executed by a processor, the executable instructions cause the processor to execute the suit processing method for virtual objects provided by the embodiments of this application, for example, the suit processing method for virtual objects shown in FIG. 3A, FIG. 4A, or FIG. 4B.
  • the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or a CD-ROM; it may also be various devices including one of, or any combination of, the above memories.
  • executable instructions may take the form of a program, software, software module, script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may be deployed to execute on one computing device, on multiple computing devices located at one location, or on multiple computing devices distributed across multiple locations and interconnected by a communications network.
  • By replacing at least some components in the virtual object's suit with components that match the environment color of the virtual scene, the components in the virtual object's suit change automatically with the color of the in-game scene, the likelihood of the virtual object being exposed in the virtual scene is reduced, and the adverse interference of rich suit components with combat is avoided. Users can automatically switch to concealing suits, reducing in-combat operation and thinking costs and improving the user's gaming experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application provides a suit processing method, apparatus, electronic device, and storage medium for virtual objects. The method includes: displaying a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components, and the multiple components are distributed on different parts of the first virtual object; while the first virtual object is in a first area of the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replacing the first component with a second component, where the second component matches the color of the first area and is worn on the same part as the first component.

Description

Suit processing method and apparatus for virtual objects, electronic device, storage medium, and program product
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 202210671738.5 filed on June 14, 2022, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to computer technology, and in particular to a suit processing method and apparatus for virtual objects, an electronic device, a storage medium, and a program product.
Background
Display technology based on graphics processing hardware has expanded the channels for perceiving the environment and obtaining information. In particular, virtual scene display technology can realize diversified interaction between virtual objects controlled by users or by artificial intelligence according to actual application needs, and has various typical application scenarios; for example, in virtual scenes such as games, real battles between virtual objects can be simulated.
A virtual object can wear various suits (for example, game appearances, game equipment, etc.) in a virtual scene. During a game match, a player cannot spare much time and energy to coordinate a suit, and the related art lacks a good solution for quick outfit changes.
Summary
Embodiments of this application provide a suit processing method and apparatus for virtual objects, an electronic device, a computer-readable storage medium, and a computer program product, which can improve the efficiency with which virtual objects change outfits in a virtual scene.
The technical solutions of the embodiments of this application are implemented as follows:
An embodiment of this application provides a suit processing method for virtual objects, executed by an electronic device, the method including:
displaying a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components, and the multiple components are distributed on different parts of the first virtual object;
while the first virtual object is in a first area of the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replacing the first component with a second component, where the second component matches the color of the first area and is worn on the same part as the first component.
An embodiment of this application provides a suit processing method for virtual objects, executed by an electronic device, including:
displaying a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components distributed on different parts of the first virtual object, and the virtual scene further includes an inverse-color outfit-change control;
in response to a trigger operation on the inverse-color outfit-change control, replacing a first component in the first suit that matches the color of a first area with a fifth component, where the fifth component is a component whose color is opposite to the color of the first area, and the fifth component is worn on the same part as the first component.
An embodiment of this application provides a suit processing method for virtual objects, executed by an electronic device, including:
displaying a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components distributed on different parts of the first virtual object;
in response to the first virtual object leaving a first area and entering a second area, performing the following processing:
if the color difference between the second area and the first area is greater than a color difference threshold, replacing the first suit as a whole with a second suit that matches the color of the second area, and continuing to wear the second suit in the second area;
if the color difference between the second area and the first area is less than or equal to the color difference threshold, controlling the first virtual object to continue wearing the first suit in the second area.
An embodiment of this application provides a suit processing apparatus for virtual objects, including:
a display module, configured to display a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components, and the multiple components are distributed on different parts of the first virtual object;
a suit switching module, configured to, while the first virtual object is in a first area of the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, replace the first component with a second component, where the second component matches the color of the first area and is worn on the same part as the first component.
An embodiment of this application provides a suit processing apparatus for virtual objects, including:
a display module, configured to display a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components distributed on different parts of the first virtual object, and the virtual scene further includes an inverse-color outfit-change control;
a suit switching module, configured to, in response to a trigger operation on the inverse-color outfit-change control, replace a first component in the first suit that matches the color of a first area with a fifth component, where the fifth component is a component whose color is opposite to the color of the first area, and the fifth component is worn on the same part as the first component.
An embodiment of this application provides a suit processing apparatus for virtual objects, including:
a display module, configured to display a virtual scene, where the virtual scene includes a first virtual object wearing a first suit, the first suit includes multiple components distributed on different parts of the first virtual object;
a suit switching module, configured to, in response to the first virtual object leaving a first area and entering a second area, perform the following processing:
if the color difference between the second area and the first area is greater than a color difference threshold, replacing the first suit as a whole with a second suit that matches the color of the second area, and continuing to wear the second suit in the second area;
if the color difference between the second area and the first area is less than or equal to the color difference threshold, controlling the first virtual object to continue wearing the first suit in the second area.
An embodiment of this application provides an electronic device, including:
a memory for storing executable instructions;
a processor, configured to implement the method provided by the embodiments of this application when executing the executable instructions stored in the memory.
An embodiment of this application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method provided by the embodiments of this application.
An embodiment of this application provides a computer program product including a computer program or instructions which, when executed by a processor, implement the method provided by the embodiments of this application.
The embodiments of this application have the following beneficial effects:
By replacing at least some components in the virtual object's suit with components that match the environment color of the virtual scene, the components in the virtual object's suit change automatically with the in-game scene color. The likelihood of the virtual object being exposed in the virtual scene is reduced, and, without any operation or thinking cost for the user, interference from the suit components with the virtual object's interactions is reduced, allowing the user to focus on the interaction process in the virtual scene and improving operation efficiency.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of an application mode of a suit processing method for virtual objects provided by an embodiment of this application;
FIG. 1B is a schematic diagram of an application mode of a suit processing method for virtual objects provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of a terminal device 400 provided by an embodiment of this application;
FIGS. 3A to 3D are schematic flowcharts of a suit processing method for virtual objects provided by an embodiment of this application;
FIGS. 4A and 4B are schematic flowcharts of a suit processing method for virtual objects provided by an embodiment of this application;
FIGS. 5A to 5C are schematic diagrams of a virtual scene interface provided by an embodiment of this application;
FIG. 5D is a schematic diagram of control states provided by an embodiment of this application;
FIG. 5E is a schematic diagram of a warehouse interface provided by an embodiment of this application;
FIGS. 5F and 5G are schematic diagrams of a virtual scene interface provided by an embodiment of this application;
FIGS. 6A to 6F are schematic diagrams of a virtual scene interface provided by an embodiment of this application;
FIG. 7 is a schematic diagram of a map of a virtual scene provided by an embodiment of this application;
FIG. 8 is a schematic diagram of a color histogram provided by an embodiment of this application;
FIG. 9 is an optional schematic flowchart of a suit processing method for virtual objects provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting this application, and all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
In the following description, "some embodiments" describes subsets of all possible embodiments. It can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and they can be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not imply any particular ordering of the objects. It can be understood that, where permitted, "first/second/third" may be interchanged in a specific order or sequence so that the embodiments described herein can be implemented in an order other than that illustrated or described here.
It should be noted that the embodiments of this application involve data such as user information and user feedback data. When the embodiments of this application are applied to specific products or technologies, user permission or consent is required, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein are only for the purpose of describing the embodiments of this application and are not intended to limit this application.
Before the embodiments of this application are described in further detail, the nouns and terms involved in the embodiments of this application are explained; the following interpretations apply to them.
1) Virtual scene: a scene output by a device that is distinct from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example, two-dimensional images output through a display screen, or three-dimensional images output through stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various real-world-simulating perceptions such as auditory, tactile, olfactory, and motion perception can be formed through various possible hardware.
2) In response to: used to indicate the condition or state on which an executed operation depends. When the condition or state is satisfied, the one or more executed operations may be real-time or may have a set delay; unless otherwise specified, there is no restriction on the execution order of the multiple operations performed.
3) Virtual object: an object that interacts in the virtual scene, is controlled by a user or a robot program (for example, an artificial-intelligence-based robot program), and can stay still, move, and perform various behaviors in the virtual scene, such as various characters in a game.
4) Color histogram: short for color distribution histogram, a histogram characterizing the global distribution of colors in an image; the length of each bar represents the proportion of a different color in the image. A color distribution histogram can be generated for every image, and a color vector of the image can be determined based on it. The vector distance between color vectors characterizes the color similarity of two images, and the vector distance is negatively correlated with the color similarity. For example, image A is a photo of a blue sky and image B is a photo of a blue sea; they depict different content, but the vector distance between the color vectors corresponding to their color histograms is small, so their color similarity is high.
5) Suit: the clothing of a virtual object in a game. A suit consists of multiple components; component types include tops, pants, shoes, decorations (capes, hats, gloves, jewelry, etc.), pets, hanging pets (pets hung on the virtual object), attack props, etc. Any item worn on the virtual object's body can be called a component of the suit.
Embodiments of this application provide a suit processing method and apparatus for virtual objects, an electronic device, a computer-readable storage medium, and a computer program product, which can realize quick outfit changes for virtual objects in a virtual scene and improve the concealment of virtual objects in the virtual scene.
The electronic device provided by the embodiments of this application may be implemented as various types of user terminals such as a laptop, tablet, desktop computer, set-top box, or mobile device (for example, a mobile phone, portable music player, personal digital assistant, dedicated messaging device, portable game device, or in-vehicle terminal), or may be implemented as a server.
In one implementation scenario, referring to FIG. 1A, FIG. 1A is a schematic diagram of an application mode of the suit processing method for virtual objects provided by an embodiment of this application, applicable to application modes in which the computation of virtual-scene-related data can be completed entirely by relying on the graphics processing hardware computing capability of the terminal device 400, such as a standalone/offline game, where the output of the virtual scene is completed through various types of terminal devices 400 such as smartphones, tablets, and virtual reality/augmented reality devices.
As an example, the types of graphics processing hardware include a central processing unit (CPU) and a graphics processing unit (GPU).
When forming visual perception of the virtual scene, the terminal device 400 computes the data required for display through the graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs video frames capable of forming visual perception of the virtual scene on the graphics output hardware, for example, presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames realizing a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses; in addition, to enrich the perception effect, the terminal device 400 may also form one or more of auditory, tactile, motion, and taste perception with the help of different hardware.
As an example, a client 401 (for example, a standalone game application) runs on the terminal device 400. During the running of the client 401, a virtual scene including role playing is output. The virtual scene may be an environment for game character interaction, for example, a plain, street, or valley for game characters to battle in. Taking display of the virtual scene from a first-person perspective as an example, a first virtual object is displayed in the virtual scene together with a launching prop (for example, a shooting prop or a throwing prop) held by the first virtual object through a holding part (for example, a hand). The first virtual object may be a game character controlled by the user; that is, the first virtual object is controlled by a real user and will move in the virtual scene in response to the real user's operation of a controller (for example, a touch screen, voice-activated switch, keyboard, mouse, or joystick). For example, when the real user moves the joystick to the right, the first virtual object will move to the right in the virtual scene; the first virtual object can also stay still, jump, and be controlled to perform shooting operations, etc.
For example, the first virtual object may be a virtual object controlled by the user. The client 401 displays the virtual scene; the first virtual object in the virtual scene wears a first suit including multiple components. When the first virtual object moves to a first area, in response to a first component in the first suit not matching the color of the first area, the first component is replaced with a second component that matches the color of the first area, the first and second components being worn at the same position. For example: the first component is a green backpack component and the second component is a white backpack component; assuming the first area is a snowfield area and the first virtual object moves from a grassland area to the snowfield area, the green backpack is replaced with a white backpack matching the color of the snowfield area.
In another implementation scenario, referring to FIG. 1B, FIG. 1B is a schematic diagram of an application mode of the suit processing method for virtual objects provided by an embodiment of this application, applied to the terminal device 400 and the server 200, and applicable to application modes that rely on the computing capability of the server 200 to complete the virtual scene computation and output the virtual scene on the terminal device 400.
Taking the formation of visual perception of the virtual scene as an example, the server 200 computes virtual-scene-related display data (for example, scene data) and sends it to the terminal device 400 through the network 300; the terminal device 400 relies on graphics computing hardware to complete the loading, parsing, and rendering of the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception, for example, presenting two-dimensional video frames on the display screen of a smartphone, or projecting video frames realizing a three-dimensional display effect on the lenses of augmented reality/virtual reality glasses. As for perception of the form of the virtual scene, it can be understood that it may be output with the help of the corresponding hardware of the terminal device 400, for example, using a microphone to form auditory perception, using a vibrator to form tactile perception, and so on.
As an example, a client 401 (for example, a networked game application) runs on the terminal device 400 and interacts with other users in the game by connecting to the server 200 (for example, a game server). The terminal device 400 outputs the virtual scene 101 of the client 401, in which a first virtual object is displayed together with a launching prop (for example, a shooting prop or a throwing prop) held by the first virtual object through a holding part (for example, a hand). The first virtual object may be a game character controlled by the user; that is, the first virtual object is controlled by a real user and will move in the virtual scene in response to the real user's operation of a controller (for example, a touch screen, voice-activated switch, keyboard, mouse, or joystick). For example, when the real user moves the joystick to the right, the first virtual object will move to the right in the virtual scene; the first virtual object can also stay still, jump, and be controlled to perform shooting operations, etc.
For example, the first virtual object may be a virtual object controlled by the user. The client 401 displays the virtual scene; the first virtual object in the virtual scene wears a first suit including multiple components. When the first virtual object moves to a first area, in response to a first component in the first suit not matching the color of the first area, the first component is replaced with a second component that matches the color of the first area, the first and second components being worn at the same position. For example: the first component is a green backpack component and the second component is a white backpack component; assuming the first area is a snowfield area and the first virtual object moves from a grassland area to the snowfield area, the green backpack is replaced with a white backpack matching the color of the snowfield area.
In some embodiments, the terminal device 400 may implement the suit processing method for virtual objects provided by the embodiments of this application by running a computer program. For example, the computer program may be a native program or software module in the operating system; a native application (APP), i.e., a program that must be installed in the operating system to run, such as a shooting game APP (the above client 401); or a mini program, i.e., a program that only needs to be downloaded into a browser environment to run. In short, the computer program may be an application, module, or plug-in in any form.
Taking the computer program being an application as an example, in actual implementation the terminal device 400 has installed and runs an application supporting virtual scenes. The application may be any one of a first-person shooting game (FPS), a third-person shooting game, a virtual reality application, a three-dimensional map program, or a multiplayer survival game. The user uses the terminal device 400 to operate a virtual object in the virtual scene to carry out activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and constructing virtual buildings. Illustratively, the virtual object may be a virtual character, such as a simulated human character or an anime character.
In other embodiments, the embodiments of this application may also be implemented with the help of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and so on applied based on the cloud computing business model; these can form a resource pool, used on demand, flexibly and conveniently. Cloud computing technology will become an important support, as the backend services of technical network systems require a large amount of computing and storage resources. Cloud gaming, also called gaming on demand, is an online gaming technology based on cloud computing. Cloud gaming technology enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud gaming scenario, the game does not run on the player's game terminal but in a cloud server, which renders the game scene into video and audio streams and transmits them to the player's game terminal over the network. The player's game terminal does not need powerful graphics computation and data processing capabilities; it only needs basic streaming media playback capability and the ability to obtain player input instructions and send them to the cloud server.
As an example, the server 200 in FIG. 1B may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smartphone, tablet, laptop, desktop computer, smart speaker, or smart watch. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of this application.
The structure of the terminal device 400 shown in FIG. 1A is described below. Referring to FIG. 2, FIG. 2 is a schematic structural diagram of the terminal device 400 provided by an embodiment of this application. The terminal device 400 shown in FIG. 2 includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The components in the terminal device 400 are coupled together through a bus system 440. It can be understood that the bus system 440 is used to realize connection and communication between these components. In addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 440 in FIG. 2.
The processor 410 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431 enabling the presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch-screen display, camera, and other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, etc. The memory 450 optionally includes one or more storage devices physically remote from the processor 410.
In some embodiments, the memory 450 can store data to support various operations; examples of the data include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks; a network communication module 452, for reaching other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.; a presentation module 453, for enabling the presentation of information (for example, a user interface for operating peripherals and displaying content and information) via one or more output devices 431 (for example, display screens, speakers) associated with the user interface 430; an input processing module 454, for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the suit processing apparatus for virtual objects provided by the embodiments of this application may be implemented in software. FIG. 2 shows a suit processing apparatus 455 for virtual objects stored in the memory 450, which may be software in the form of programs, plug-ins, etc., including the following software modules: a display module 4551 and a suit switching module 4552. These modules are logical, and thus can be arbitrarily combined or further split according to the functions implemented.
The suit processing method for virtual objects provided by the embodiments of this application is described in detail below with reference to the accompanying drawings. The method may be executed by the terminal device 400 in FIG. 1A alone, or executed cooperatively by the terminal device 400 and the server 200 in FIG. 1B.
Below, cooperative execution by the terminal device 400 and the server 200 in FIG. 1B is taken as an example. Referring to FIG. 3A, FIG. 3A is a schematic flowchart of the method provided by an embodiment of this application, described with reference to the steps shown in FIG. 3A. It should be noted that the method shown in FIG. 3A may be executed by various forms of computer programs running on the terminal device 400 and is not limited to the above client 401; it may also be the operating system, software modules, or scripts described above, so the client should not be regarded as limiting the embodiments of this application.
In step 301, a virtual scene is displayed.
Here, the virtual scene includes a first virtual object wearing a first suit; the first suit includes multiple components distributed on different parts of the first virtual object.
As an example, component types include tops, pants, shoes, decorations (capes, hats, gloves, jewelry, etc.), pets, hanging pets (pets hung on the virtual object), attack props, etc.; any component that a virtual object can wear on its body can serve as a component of a suit, and at least two components constitute a suit. The first virtual object may be a virtual object controlled by a user.
In some embodiments, before step 301, when the first virtual object enters a game match in the virtual scene, the first virtual object may not be wearing any component. In that case, components may be put on the first virtual object based on the environment color of the first area, or preset components may be put on the first virtual object (for example, a basic suit required by the game match, a preset suit configured by the player, etc.).
In step 302, while the first virtual object is in the first area of the virtual scene, in response to the color of the first area not matching the color of a first component in the first suit, the first component is replaced with a second component.
Here, the second component matches the color of the first area and is worn on the same part as the first component.
As an example, there may be at least one first component in the first suit that does not match the color of the first area; step 302 is executed for every such first component, so that the color of every component of the switched suit matches the color of the first area.
In some embodiments, whether the first component matches the color of the first area can be judged by color similarity. The color of the first area not matching the color of the first component means that the color similarity (taking values in [0, 1]) between the component's color and the color of the environment adjacent to the virtual object's current position is less than a color similarity threshold (for example, 0.5).
As an example, the color similarity is computed as follows: the gray values of the red, green, and blue channels of the first component, and the gray values of the red, green, and blue channels of the first area, are each mapped to a point in the normalized Hue-Saturation-Value (HSV) color space, and the distance between the two points is computed. The closer the colors of the first component and the first area, the closer the vector distance between the two points is to 0; conversely, the more the two colors differ, the closer the distance is to 1. The difference between 1 and the distance can therefore serve as the color similarity.
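The HSV-based similarity described above can be sketched in Python. This is an illustrative, hypothetical implementation, not part of the claimed embodiment: the function name is invented, and normalizing the Euclidean distance by √3 (so it stays in [0, 1] over the unit HSV cube) is an assumption made so that "1 minus the distance" behaves as described.

```python
import colorsys

def color_similarity(rgb_a, rgb_b):
    """Illustrative sketch: map two RGB triples (0-255 per channel) to
    points in the normalized HSV space and return 1 minus their
    normalized Euclidean distance as the color similarity in [0, 1]."""
    a = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb_a))
    b = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb_b))
    # divide the squared sum by 3 before the square root, i.e. scale the
    # Euclidean distance by 1/sqrt(3), so the distance lies in [0, 1]
    dist = (sum((x - y) ** 2 for x, y in zip(a, b)) / 3) ** 0.5
    return 1.0 - dist
```

With a similarity threshold of 0.5 as in the example above, identical colors score 1.0 and strongly contrasting colors (for example, black against white) fall below the threshold and would be treated as mismatched.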
As an example, the first area may be any area in the virtual scene, and areas in the virtual scene may be divided in any of the following ways: 1. by terrain, for example, into mountain, plain, basin, forest, and lake areas; 2. by area, for example, dividing the virtual scene into multiple rectangles, squares, or circles of equal area according to the grid of the virtual scene's map; 3. by function, for example, into warehouse, residential, wilderness, and agricultural areas.
Referring to FIG. 5A, FIG. 5A is a schematic diagram of the virtual scene interface provided by an embodiment of this application: the first virtual object 502 is in the virtual scene, and the environment color of the virtual scene is determined based on the color of the ground 503 on which the first virtual object 502 stands. The component 501 is a component at the virtual object's head (wearing position), for example, a helmet, and does not match the environment color of the virtual scene. Referring to FIG. 5B, a schematic diagram of the virtual scene interface, the component 501 is replaced with a component 504 that matches the color of the virtual scene.
In the embodiments of this application, by replacing components in the virtual object's suit with components matching the color of the virtual scene, the virtual object is automatically re-outfitted during the match, improving the concealment of the virtual object in the virtual scene and improving the user's gaming experience.
In some embodiments, the virtual scene further includes an automatic outfit-change control; replacing the first component with the second component in response to the color of the first area not matching the color of the first component can be implemented as follows: in response to an enabling operation on the automatic outfit-change control, displaying the automatic outfit-change control in the enabled state; in response to the color of the first area not matching the color of at least one first component in the first suit, automatically replacing the first component with the second component.
As an example, when the automatic outfit-change control is in the enabled state, component switching is performed automatically, switching a first component that does not match the color of the first area to the second component; when the control is in the disabled state, no component switching is performed. The enabling operation may be a click, long press, or similar operation on the automatic outfit-change control; when the user clicks or long-presses the control while it is enabled, it switches to the disabled state.
As an example, referring to FIG. 5C, a schematic diagram of the virtual scene interface: the automatic outfit-change control 505 is displayed in the virtual scene as a floating layer. Referring to FIG. 5D, a schematic diagram of control states: when the automatic outfit-change control is enabled, the automatic outfit-change function is executed. After at least one component in the suit is automatically switched, if the current number of outfit changes has reached the upper limit (for example, 10), the automatic outfit-change control switches from enabled to disabled, the automatic outfit-change function is no longer executed, and an enabling trigger operation on the control is not responded to. After a component is automatically switched, if the current number of outfit changes has not reached the upper limit, the control enters a cooldown state in which the automatic outfit-change function is not executed, and a countdown corresponding to a preset cooldown duration is displayed on the control until the countdown of the preset cooldown duration (for example, 60 seconds) ends. When the preset cooldown duration is reached, the automatic outfit-change control returns to the enabled state.
In some embodiments, the virtual scene further includes a manual outfit-change control. Continuing with FIG. 5C, the manual outfit-change control is likewise displayed in the virtual scene as a floating layer. While the automatic outfit-change mode is enabled, the user can switch the first component in the suit to another component through the manual outfit-change control, implemented as follows: in response to a trigger operation on the manual outfit-change control, replacing the first component in the first suit with a third component and keeping the switched first suit within a wearing duration threshold, where the third component is any component worn on the same part as the first component; in response to the switched first suit being kept up to the wearing duration threshold and the color of the first area not matching the color of the third component, replacing the third component with a fourth component, where the fourth component matches the color of the first area and is worn on the same part as the third component.
As an example, the third component may be a component associated with the manual outfit-change control, or a component actively selected by the user. For example, in automatic outfit-change mode, the user wants to try on a newly obtained hat B (the third component); in response to a trigger operation on the manual outfit-change control, hat A (the first component) currently worn by the first virtual object is replaced with hat B. The first virtual object keeps wearing hat B within the wearing duration threshold; when the threshold is reached, if the color of hat B does not match the environment color of the virtual scene, hat B is replaced with hat D (the fourth component) matching the environment color.
In some embodiments, before responding to the trigger operation on the manual outfit-change control, the first component is determined in any of the following ways: 1. in response to a selection operation on any component in the first suit, taking the selected component as the first component; 2. taking the component in the first suit with the largest color difference from the other components as the first component; for example, computing the color similarity between every two components in the first suit, obtaining for each component the sum of its color similarities to the other components, and taking the component with the smallest similarity sum as the one with the largest color difference from the others; 3. taking the component with the smallest performance parameter in the first suit as the first component, where the performance parameter includes at least one of: a protection performance parameter, an attack performance parameter, the virtual object level required to wear the component, and a movement speed performance parameter.
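The second option above (choosing the component whose summed similarity to the rest of the suit is smallest) can be sketched as follows. This is an illustrative, hypothetical helper: the function name is invented, and the pairwise similarity function is passed in as a parameter rather than fixed by the embodiment.

```python
def pick_most_mismatched(parts, similarity):
    """Illustrative sketch: for each component, sum its color similarity
    to every other component in the suit; return the component with the
    smallest sum, i.e. the largest color difference from the rest."""
    best, best_sum = None, float("inf")
    for p in parts:
        s = sum(similarity(p, q) for q in parts if q is not p)
        if s < best_sum:
            best, best_sum = p, s
    return best
```

For example, with components represented by scalar "colors" and similarity defined as 1 minus the absolute difference, the outlier component is selected as the first component to replace.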
In some embodiments, the virtual scene further includes a manual outfit-change control; replacing the first component with the second component in response to the color mismatch can be implemented as follows: in response to a manual outfit-change condition being satisfied, displaying the manual outfit-change control in the usable state; in response to the color of the first area not matching the color of at least one first component in the first suit, and a trigger operation on the manual outfit-change control being received, replacing the first component with the second component.
Here, the manual outfit-change condition includes at least one of the following: 1. the time interval between the current moment and the moment of the previous outfit change is greater than or equal to an interval threshold (for example, 60 seconds); 2. the number of outfit changes of the first virtual object in the current match has not reached the upper limit (for example, 10).
In some embodiments, in response to the manual outfit-change condition not being satisfied, the manual outfit-change control is displayed in the disabled state in any of the following ways: hiding the manual outfit-change control; displaying it in grayscale; or displaying a disabled symbol on it.
Continuing with FIG. 5D: if the manual outfit-change condition is satisfied, the manual outfit-change control is in the usable state, and in response to a trigger operation on it, at least some components in the virtual object's suit are switched; if the current number of outfit changes has reached the upper limit, the control is displayed in the disabled state (referring to FIG. 5D, the disabled state may be represented by a grayscale state, or by displaying a disabled symbol on the control). After at least one component in the suit is automatically switched, if the number of outfit changes has not reached the upper limit, the manual outfit-change control enters a cooldown state (a disabled state that can return to the usable state), in which it cannot be triggered, and a countdown corresponding to the preset cooldown duration is displayed on it until the preset cooldown duration (for example, 60 seconds) ends. When the preset cooldown duration is reached, the manual outfit-change control returns to the usable state.
In some embodiments, if the current number of outfit changes has reached the upper limit, the automatic or manual outfit-change control may also be hidden to indicate that its use is prohibited.
In some embodiments, before the first component is replaced with the second component, the second component is determined as follows: obtaining multiple candidate components for the same wearing part as the first component, and taking a candidate component that satisfies a filtering condition as the second component, where the multiple candidate components are owned by the first virtual object.
The filtering condition includes any of the following:
1. The candidate component has the same function as the first component, and the candidate component's function is stronger than the first component's; as an example, the candidate component's function includes at least one of: attack, defense, and movement speed.
2. The wearing part of the first component is not obscured by the virtual environment. As an example, referring to FIG. 6A, a schematic diagram of the virtual scene interface: the first virtual object 502 wears components 511 and 510A; the legs of the first virtual object are below the water surface 509 of the virtual scene, so the component 511 is obscured by the water in the virtual scene. Since the color of the underwater environment is hard to discern from above the water surface, only the unobscured component 510A of the first virtual object 502 may be replaced. Referring to FIG. 6B, a schematic diagram of the virtual scene interface: the component 510A is replaced with a component 515 matching the color of the virtual scene, while the water-obscured component 511 is not replaced.
3. The color similarity between the candidate component and the first area is greater than the color similarity threshold. As an example, if the candidate component's color similarity to the first area is greater than the threshold, the candidate component's color matches the color of the first area. The second component may be, among the candidate components satisfying the filtering condition, the one with the highest color similarity to the first area.
In some embodiments, before the first component is replaced with the second component, referring to FIG. 3B, a schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application, the color similarity is determined through the following steps 311 and 312, described in detail below.
In step 311, the color vector of the associated area of the first component in the first area is determined.
As an example, the associated area is the area of the virtual environment closest to the first component, a geometric area formed with the first virtual object as the center. For example, if the virtual environment closest to the feet of the first virtual object is the ground, a circle is formed on the ground of the virtual scene with the feet of the first virtual object as the center and a preset length as the radius, and the circle serves as the associated area; or a square with a preset side length is formed on the ground and serves as the associated area. The area of the associated region is determined according to the area occupied by the first virtual object in the virtual scene and is positively correlated with the size of the first virtual object's body part; for example, a circular region whose area is a preset multiple (for example, 10 times) of the area the virtual object occupies on the ground serves as the associated area.
As an example, the color vector characterizes the color distribution features of a component or of the virtual scene's environment; the color distribution features refer to the kinds of colors included in the environment and the proportion of each color.
In some embodiments, referring to FIG. 3C, a schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application, step 311 may be implemented through the following steps 3111 to 3114, described in detail below.
In step 3111, the field-of-view image corresponding to the first virtual object is obtained.
As an example, to save performance overhead, while the virtual object is in a game match, a frame of the game screen within the virtual object's field of view may be captured at preset intervals (for example, every 10 seconds) to obtain the field-of-view image. The field-of-view image does not contain elements displayed as floating layers in the virtual scene, such as controls and the mini-map, which avoids mixing additional interference factors into the image and thereby improves the accuracy of obtaining the color vector.
In step 3112, the field-of-view image is segmented based on the associated area of the wearing part of the first component to obtain an associated area image.
As an example, the virtual scene may be a 3D scene, so the plane of the associated area is not necessarily parallel to the plane of the field-of-view image; the field-of-view image is segmented based on the plane region onto which the associated area is mapped in the field-of-view image, yielding the associated area image.
In some embodiments, to improve the accuracy of determining the associated area image, the texture material image of the virtual scene closest to the first virtual object may also serve as the associated area image. For example, at least part of the texture material image of the ground at the position where the first virtual object stands is cropped based on the associated area and used as the associated area image.
As an example, referring to FIG. 5F, a schematic diagram of the virtual scene interface: the upper-body components of the virtual object 502 are closest to the virtual obstacle 517, and the leg components are closest to the ground 503 of the virtual scene, so the associated area of the upper-body components of the first virtual object 502 lies on the virtual obstacle 517, and that of the leg components lies on the ground 503. Referring to FIG. 5G, a schematic diagram of the virtual scene interface: the associated area image of the upper-body components of the first virtual object 502 includes part of the virtual obstacle 517, so the upper-body components are switched to components matching the color of the virtual obstacle 517; similarly, the associated area image of the leg components includes part of the ground 503, so the leg components are switched to components matching the color of the ground 503.
In step 3113, the color proportion data of the associated area image is determined.
As an example, the color proportion data may be presented in the form of data, tables, histograms, etc., and includes the proportion of each color in the associated area image among all the colors of the image. Step 3113 may be implemented as follows: shrinking the associated area image to a preset size (for example, 8×8 pixels, 64 pixels in total; or 16×16 pixels, 256 pixels in total) and converting the shrunken image into a grayscale image; then computing the proportion of each color in the grayscale image to obtain the color proportion data of the associated area image. For example, for an 8×8 shrunken image, the shrunken associated area image is downsampled based on a preset 64-level grayscale to obtain a grayscale image whose maximum number of color kinds is 64. The total number of pixels in the grayscale image is obtained, the number of pixels of each color is counted, and the ratio of each color's pixel count to the total pixel count serves as each color's proportion value.
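The proportion statistics in step 3113 can be sketched as follows. This is an illustrative helper with an invented name; it assumes the associated area image has already been shrunken and quantized (for example, to 64 gray levels), so the input is simply the flat list of quantized pixel values.

```python
from collections import Counter

def color_proportions(pixels):
    """Illustrative sketch of step 3113: given the quantized pixel values
    of an already-shrunken grayscale associated area image, return the
    fraction of the image occupied by each color level."""
    counts = Counter(pixels)  # pixel count per color level
    total = len(pixels)
    return {level: n / total for level, n in counts.items()}
```

Each returned value is one color's pixel count divided by the total pixel count, exactly the proportion values described above.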
In some embodiments, a histogram can be made based on the color proportion data to obtain a color histogram. Referring to FIG. 8, a schematic diagram of a color histogram provided by an embodiment of this application: the length of each bar represents the proportion of a different kind of color in the associated area image; S1, S2, S3, S4, S5, S6, S7, S8, S9, and S10 correspond to different color families, each color family includes multiple color kinds, and each color family corresponds to the same number of color kinds.
In step 3114, the color vector of the associated area is extracted from the color proportion data.
In some embodiments, the color proportion data can be converted into a lower-complexity color vector (for example, through a neural network model). For example: based on the proportion value of each color in the color proportion data, the color proportion vector of the color proportion data is determined; that is, the proportion values of the colors are combined into a vector to obtain the color proportion vector, whose total number of dimensions equals the number of color kinds in the color proportion data; the color proportion vector is then mapped by dimensionality reduction to the color vector of the associated area.
As an example, the dimensionality-reduction mapping is implemented as follows: obtaining a first total dimension number preconfigured for the reduced color vector, the first total dimension number being smaller than the second total dimension number of the color proportion vector; dividing all the colors in the color proportion vector into color intervals of the first total dimension number; performing weighted summation of the proportion values in each color interval; normalizing the weighted-sum result of each color interval; and combining the normalized results into one reduced color vector, i.e., the color vector of the associated area.
As an example, assume the color proportion data of the associated area image is characterized as a color data set X, X = {x1, x2 ... xn}, where xi is the color proportion (proportion value) of the i-th color in the set, with 1 ≥ xi ≥ 0, and the colors in the set are sorted by color family. For example, the set includes seven colors, ordered as red, orange, yellow, green, cyan, blue, purple. Taking 64 colors as an example, the proportion values in the color data set X = {x1, x2 ... x64} are combined into a color proportion vector (x1, x2 ... x64) of 64 dimensions, and the color proportion vector is mapped by dimensionality reduction to the color vector.
Assuming the color vector obtained by dimensionality reduction has 6 dimensions, the dimensionality reduction of the 64-dimensional color proportion vector may be implemented as follows: dividing all the colors of the 64-dimensional color proportion vector into 6 color intervals, obtaining the weighted-sum result of the proportion values of each color in each of the 6 intervals, and normalizing the 6 weighted-sum results to obtain 6 normalized results, denoted c, d, e, f, g, h. Characterizing the reduced color vector as A, A = (c, d, e, f, g, h).
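The interval-binning reduction described above can be sketched as follows. This is an illustrative implementation with invented names; uniform weights are assumed for the weighted sums (the embodiment leaves the weights unspecified), and normalization is done by dividing each bin sum by the total of all bin sums.

```python
def reduce_color_vector(props, n_bins=6, weights=None):
    """Illustrative sketch of the dimensionality reduction above: split a
    color-proportion vector into n_bins contiguous color intervals, take
    a weighted sum inside each interval, then normalize so the reduced
    vector sums to 1. Uniform weights are assumed by default."""
    if weights is None:
        weights = [1.0] * len(props)
    size = -(-len(props) // n_bins)  # ceiling division: interval width
    sums = [sum(p * w for p, w in zip(props[i:i + size], weights[i:i + size]))
            for i in range(0, len(props), size)]
    total = sum(sums) or 1.0
    return [s / total for s in sums]
```

With a 64-dimensional proportion vector and `n_bins=6`, this yields the 6-dimensional vector A = (c, d, e, f, g, h) used in formula (1).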
Referring to FIG. 3D, a schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application: before step 312, the color vector of each candidate component is determined through the following steps 3121 to 3123, described in detail below.
In step 3121, the following processing is performed for each candidate component: extracting each texture material of the candidate component, and combining the texture materials into a candidate component image of the candidate component.
As an example, when the virtual scene is a two-dimensional virtual scene, the components are also two-dimensional; the three views or the front and back views of a component are tiled to form one component image. When a component is three-dimensional, the texture materials of all outer surfaces of the component are obtained and tiled to form one component image. The color vector of each component owned by the virtual object may be obtained in advance and stored in a database. The dimensionality of the color vector is positively correlated with the fineness required for color recognition.
In step 3122, the candidate component image is converted into color proportion data of the candidate component image.
In some embodiments, step 3122 is implemented as follows: shrinking the candidate component image and converting the shrunken image into a grayscale image; then statistically obtaining the color proportion data of each color from the grayscale image.
As an example, step 3122 performs the conversion on the candidate component image, while step 3113 performs it on the associated area image; the principles of the two conversions are the same, so the execution of step 3122 may refer to step 3113 and is not repeated here.
In step 3123, the color vector of the candidate component is extracted from the color proportion data.
In some embodiments, step 3123 is implemented as follows: based on the proportion value of each color in the color proportion data, determining the color proportion vector of the color proportion data, where the value of each dimension of the color proportion vector corresponds one-to-one to each proportion value; then mapping the color proportion vector by dimensionality reduction to the color vector of the candidate component.
As an example, the execution of step 3123 may refer to step 3114 and is not repeated here.
Continuing with FIG. 3B, in step 312, the vector distance between the color vector of each candidate component and the color vector of the associated area is determined.
Here, the vector distance characterizes the color similarity between the candidate component and the first area, and the vector distance is negatively correlated with the color similarity.
As an example, assume the color vector corresponding to the i-th candidate component for the wearing part of the first component is Bi, characterized as Bi = (Ci, Di, Ei, Fi, Gi, Hi). The vector distance x between the color vector A and the color vector Bi is expressed as the following formula (1):

x = √((c − Ci)² + (d − Di)² + (e − Ei)² + (f − Fi)² + (g − Gi)² + (h − Hi)²)   (1)

The difference between 1 and x can serve as the color similarity; that is, when x is smallest (the color similarity is highest), the candidate component corresponding to the color vector Bi is the component at this wearing part that best matches the environment color; this candidate component can serve as the second component, and the first component is replaced with the second component.
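The selection in step 312 based on formula (1) can be sketched as follows. This is an illustrative helper with invented names; it assumes candidate color vectors are supplied as a dictionary from component identifier to vector, which is not specified by the embodiment.

```python
import math

def pick_matching_component(area_vec, candidates):
    """Illustrative sketch of step 312: compute the Euclidean distance
    (formula (1)) between the area color vector A and each candidate
    color vector Bi, and return the candidate with the smallest distance,
    i.e. the highest color similarity 1 - x."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidates, key=lambda name: dist(area_vec, candidates[name]))
```

For example, against a near-white area vector, a white candidate component is selected over a green one because its vector distance is smaller.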
In the embodiments of this application, comparing the local environment of the virtual scene with the colors of the virtual object's components improves the accuracy of determining the color similarity and of re-outfitting the virtual character, and improves the concealment of the virtual object in the virtual scene; it also avoids excessive memory usage in the client running the virtual scene caused by incorrect outfit changes, saving the resources of the terminal device and extending the battery life of the terminal device running the virtual scene.
In some embodiments, frequent replacement of components in the virtual object's suit can be avoided as follows: replacing the first component with the second component in response to a replacement restriction condition being satisfied, where the replacement restriction condition includes at least one of the following:
1. The number of outfit changes of the first virtual object in the current match has not reached the upper limit (for example, 10).
2. The first virtual object needs concealment. The virtual object's need for concealment can be identified as follows: based on the environment parameters of the first area and the attribute parameters of the virtual object, calling a neural network model to perform concealment prediction for the first virtual object, obtaining a concealment prediction result indicating whether the first virtual object needs concealment; the virtual object's attribute parameters include the position information of the first virtual object, of its hostile virtual objects, and of its teammate virtual objects; the environment parameters of the first area include the terrain information and the field of view of the first area.
In some embodiments, the trained neural network model can predict whether a virtual object needs concealment, to adapt to scenarios such as ambushes or escaping pursuit. The neural network model can be trained as follows: obtaining the environment parameters of the virtual scene and match data of at least two camps, where the at least two camps include a losing camp and a winning camp, and the match data includes the positions at which virtual objects of the winning camp performed concealment behaviors and the positions at which virtual objects of the losing camp performed concealment behaviors; annotating the match data to obtain annotated match data, where the positions of concealment behaviors of the winning camp are labeled with probability 1 and those of the losing camp with probability 0; and training the initial neural network model based on the environment parameters of the virtual scene and the annotated match data to obtain the trained neural network model.
During training, the neural network model outputs a predicted probability of needing concealment based on the environment parameters of the virtual scene and the match data; the difference from the actually annotated probability of the match data is substituted into a loss function (for example, a cross-entropy loss function) for backpropagation through the neural network model, updating the model's parameters layer by layer.
3. The dwell time of the first virtual object in the first area is greater than a duration threshold. As an example, the dwell time can be predicted as follows: based on the area of the first area and the attribute parameters of the virtual object, calling a neural network model to perform prediction for the first virtual object, obtaining the predicted dwell time. In some embodiments, the neural network model is trained as follows: obtaining the dwell time of virtual objects in each area of the virtual scene and the area of each region; training an initial neural network model on a large amount of data to learn the relationship between dwell time and area, obtaining the trained neural network model.
During training, the neural network model outputs a predicted dwell time based on the dwell time of virtual objects in each area of the virtual scene and the area of each region; the difference between the predicted dwell time and the annotated actual dwell time is substituted into a loss function (for example, a cross-entropy loss function) for backpropagation through the model, updating its parameters layer by layer.
4. The area of the first area is greater than an outfit-change area threshold. As an example, the area of the first area being smaller than the threshold means the virtual object may quickly move from the current area to another area; to avoid frequent switching, if the area of the current region is smaller than the outfit-change area threshold, the virtual object's suit is not switched.
In the embodiments of this application, the above solution limits the user from frequently triggering outfit changes through the manual outfit-change control, and limits frequent outfit changes while the automatic outfit-change control is enabled, avoiding excessive client memory usage, saving computing resources, and extending the battery life of the terminal device running the virtual scene.
In some embodiments, in response to the color of the first area not matching the color of at least one first component in the first suit, the following processing is performed: in response to the first area being a preset outfit-change area of the first virtual object and the wearing part corresponding to the first component being a preset wearing part of the preset outfit-change area, taking the preset component associated with the preset wearing part as the second component and replacing the first component with the second component.
Here, the color of the preset component matches the color of the first area.
As an example, to save computing resources, a preset component corresponding to the preset wearing position of each area of the virtual scene may be configured in advance; if the virtual object moves into that area, the components in the virtual object's suit that do not match the environment color are replaced with the preset components corresponding to the wearing parts. Referring to FIG. 7, a schematic diagram of a map of the virtual scene: in the virtual scene map 705, suppose area 701 is snow-mountain terrain, the preset wearing part corresponding to area 701 is the upper body, and the preset component for that wearing part is a white-family top. When the first virtual object moves into the first area, if the first virtual object's upper-body component (the first component) does not match the color of the first area, the upper-body component can be switched to a white-family top.
In the embodiments of this application, by configuring corresponding preset components for different areas of the virtual scene and switching automatically according to the area the virtual object is in, extra operations by the user in the virtual scene are saved, allowing the user to focus on interaction operations in the virtual scene and improving operation efficiency.
In some embodiments, the first suit can be replaced as a whole as follows: replacing the first suit as a whole with a second suit matching the color of the first area in response to a global replacement condition being satisfied, where the global replacement condition includes at least one of the following:
1. A corresponding second suit is preconfigured for the first virtual object in the first area. The second suit may be a suit manually configured by the player to match the environment color, or an automatically selected suit with the highest color similarity to the environment color.
2. A whole-suit replacement instruction for the first suit is received. For example, the player triggers the whole-suit replacement instruction through the manual outfit-change control, replacing the virtual object's suit as a whole with the second suit.
In some embodiments, the color of some components in the first suit can be changed so that the components' colors match the environment color, as follows: in response to the color of the first area not matching the color of at least one first component in the first suit, and the first component satisfying a color-change condition, replacing the color of the first component with a target color matching the color of the first area.
As an example, the target color is determined in at least one of the following ways: extracting the target color based on the color of the first area; presetting, for the first area, a target color matching the color of the first area.
The color-change condition includes at least one of the following:
1. The color of every candidate component corresponding to the first component does not match the color of the first area, where the candidate components are owned by the first virtual object. As an example, for a wearing part, when the color similarity between every candidate component's color and the color of the first area is less than the color similarity threshold, the color of every candidate component corresponding to the first component does not match the color of the first area.
2. The first component has a binding relationship with other components in the first suit, where a binding relationship means that the components functionally support each other and the virtual object can complete composite operations using them. When the virtual object wears components with a binding relationship, compared with wearing no components, the attribute parameters gained by the virtual object = the sum of the attribute parameters of each component + the attribute parameters corresponding to the binding relationship. If no bound components exist in the suit the virtual object currently wears, the gained attribute parameters are the sum of the attribute parameters of each component.
3. The function of the first component is stronger than that of each of its candidate components, where the function includes at least one of: defense, attack, and movement speed.
4. The function of the first component is associated with the task currently executed by the first virtual object, and the second component does not have the function corresponding to the currently executed task. For example, the virtual object's current task requires swimming and the virtual object wears a swim ring (the first component), which is associated with the current task; if the candidate components do not have the swim ring's function, the swim ring's color is changed. Or, the virtual object's current task requires a bulletproof vest (the first component) whose color does not match the environment color, so the vest's color is changed.
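The attribute rule for bound components stated in condition 2 above can be illustrated as follows. This is a hypothetical sketch with invented names and an invented data layout (components as name/value pairs, binding bonuses keyed by component pairs); it only demonstrates the stated arithmetic, not the embodiment's actual data model.

```python
def total_attributes(parts, bonus_pairs):
    """Illustrative sketch of the bound-components rule: the attribute
    gain equals the sum of each worn component's attribute parameters,
    plus the bonus of every binding relationship whose two components
    are both worn."""
    worn = {name for name, _ in parts}
    gain = sum(value for _, value in parts)
    gain += sum(bonus for (a, b), bonus in bonus_pairs.items()
                if a in worn and b in worn)
    return gain
```

With no binding relationship present, the gain reduces to the plain sum of the components' attribute parameters, matching the second half of the rule.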
As an example, continuing with FIG. 6A: the first virtual object 502 is in the water of the virtual scene, and the component 510A (a top) worn by the first virtual object 502 does not match the color of the virtual scene. If the color of every candidate component for the upper-body part (the wearing part) corresponding to component 510A does not match the color of the first area, the color of component 510A can be changed. Referring to FIG. 6C, a schematic diagram of the virtual scene interface: the style of component 510A is unchanged, but its color has been changed, forming component 510B.
In some embodiments, in response to the color of the first area not matching the color of at least one first component in the first suit, and the first component not satisfying the color-change condition, the first component is replaced with the second component.
As an example, since changing a component's color requires adjusting the component's texture materials or making new texture materials, to reduce the storage space occupied by a component's texture materials, replacing the component is preferred; the component's color is replaced only when the component does not satisfy the replacement condition.
By changing component colors, the embodiments of this application avoid the problem of the virtual object being unable to conceal itself in the virtual scene because it does not own a component of the corresponding color.
In some embodiments, when the colors of at least some components in the virtual object's suit are replaced, or at least some components are replaced, an outfit-change prompt is displayed in at least one of the following ways: a voice prompt, a text message prompt, or a special-effect animation prompt (for example, displaying a gradually disappearing halo centered on the replaced component of the virtual object). Referring to FIG. 6B: the upper-body component of the first virtual object 502 is replaced with component 515, and prompt information 516 with the content "Appearance changed" is displayed in the virtual scene.
In some embodiments, referring to FIG. 4A, a schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application, described with reference to the steps shown in FIG. 4A.
In step 401A, a virtual scene is displayed.
Here, the virtual scene includes a first virtual object wearing a first suit; the first suit includes multiple components distributed on different parts of the first virtual object; and the virtual scene further includes an inverse-color outfit-change control.
As an example, the processing of step 401A may refer to step 301 and is not repeated here.
In step 402A, in response to a trigger operation on the inverse-color outfit-change control, a first component in the first suit that matches the color of the first area is replaced with a fifth component.
Here, the fifth component is a component whose color is opposite to the color of the first area, and the fifth component is worn on the same part as the first component.
As an example, "color opposite" means that the color similarity between the component's color and the environment color of the first area is less than the color similarity threshold. Among the candidate components for the first component's wearing part, the fifth component may be the component whose color is opposite to the color of the first area and whose color similarity to the first area is the lowest.
As an example, step 402A may be implemented as follows: in response to the first virtual object not needing concealment in the first area, and a trigger operation on the inverse-color outfit-change control being received, replacing the first component in the first suit that matches the color of the first area with the fifth component.
The first virtual object does not need concealment in the following scenarios: no hostile virtual object exists around the first virtual object; a non-combat area; the virtual scene presents rain or snow weather with low visibility; the first virtual object participates in a multi-player melee.
In some embodiments, before step 402A, the fifth component is determined as follows: among the multiple candidate components worn on the same part as the first component, taking the candidate component with the lowest color similarity to the first area as the fifth component.
Referring to FIG. 6D, a schematic diagram of the virtual scene interface: the first virtual object 502 is in an open plain; if the user wants to make the first virtual object 502 more prominent so that teammates can distinguish it, the inverse-color outfit-change control 512 can be triggered to replace the component 513A, worn by the first virtual object 502 and matching the environment color, with an inverse-color component. The component 513A matches the color of the ground 503A of the virtual scene. Referring to FIG. 6E, a schematic diagram of the virtual scene interface: the component 513A is replaced with a component 514 that does not match the environment color, making the first virtual object 502 more recognizable in the virtual scene.
In the embodiments of this application, based on the environment color of the virtual scene, a component worn by the virtual object is replaced with one that does not match the environment color, making the virtual object more recognizable in the virtual scene and facilitating the execution of tasks that do not require concealment.
In some embodiments, the color-change condition also applies to replacing the color of a component of the virtual object with a color that does not match the environment color. Referring to FIG. 6F, a schematic diagram of the virtual scene interface: the color of the component 513A of the first virtual object 502 is replaced with a color that does not match the environment color, forming component 513B.
In some embodiments, referring to FIG. 4B, a schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application, described with reference to the steps shown in FIG. 4B.
In step 401B, a virtual scene is displayed.
Here, the virtual scene includes a first virtual object wearing a first suit; the first suit includes multiple components distributed on different parts of the first virtual object.
As an example, the processing of step 401B may refer to step 301 and is not repeated here.
In step 402B, in response to the first virtual object leaving the first area and entering a second area, the following processing is performed: if the color difference between the second area and the first area is greater than a color difference threshold, the first suit is replaced as a whole with a second suit matching the color of the second area, and the second suit continues to be worn in the second area.
As an example, the color difference can be characterized as 1 minus the color similarity, and the color difference threshold as 1 minus the color similarity threshold; the color similarity is negatively correlated with the color difference, so the higher the similarity, the smaller the difference. For example, if the color similarity threshold is 0.7, the color difference threshold is 0.3; when the color similarity between the first suit and the environment color is 0.6, the color difference is 0.4, which is greater than the threshold 0.3, and the first suit is replaced with the second suit.
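The difference/threshold rule above can be sketched as a one-line predicate. This is an illustrative helper with an invented name; it simply encodes "color difference = 1 − similarity" against "difference threshold = 1 − similarity threshold".

```python
def should_replace_suit(similarity, similarity_threshold=0.7):
    """Illustrative sketch of step 402B's rule: replace the whole suit
    only when the color difference (1 - similarity) exceeds the color
    difference threshold (1 - similarity threshold)."""
    return (1 - similarity) > (1 - similarity_threshold)
```

With the worked numbers above, a similarity of 0.6 yields a difference of 0.4 > 0.3, so the suit is replaced; a similarity at or above the threshold leaves the first suit in place.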
Explained with reference to the drawings, continuing with FIG. 7: suppose area 701 (the first area) is snow-mountain terrain, area 704 (the second area) is desert terrain, area 701 and area 704 are adjacent, and the color difference between them is greater than the color difference threshold; then the first suit is replaced as a whole with a second suit matching the color of the second area, and the second suit continues to be worn in the second area.
In step 403B, if the color difference between the second area and the first area is less than or equal to the color difference threshold, the first virtual object is controlled to continue wearing the first suit in the second area.
As an example, if the areas of the virtual scene are divided based on scene type (for example: city, ruins, snowfield), the color difference within each area is smaller than the difference between areas; that is, the difference within an area may be smaller than the color difference threshold. When the virtual object enters an area, the replaced suit can be kept in the area until the virtual object enters another area, or until the color difference between the environment around the virtual object and the color of the first suit is greater than the color difference threshold.
In some embodiments, the first area and the second area are not adjacent, and there is a third area between them; while the first virtual object is in the third area, the first virtual object is controlled to continue wearing the first suit.
As an example, the third area may be a transition area between the first area and the second area, with a small color difference from the first area. Continuing with FIG. 7: area 703 (the third area) exists between area 701 (the first area) and area 702 (the second area). Suppose the third area is snowfield terrain with a small color difference from the first area; then, while the first virtual object is in the third area, the first virtual object is controlled to continue wearing the first suit.
In some embodiments, before the second suit continues to be worn in the second area, if the color distribution difference within the second area is less than or equal to the color difference threshold, processing proceeds to controlling the first virtual object to continue wearing the second suit.
As an example, the colors distributed at different positions within the second area may differ; if the color difference between every pair of positions is less than or equal to the color difference threshold, processing proceeds to controlling the first virtual object to continue wearing the second suit.
Before the first suit is replaced as a whole with the second suit matching the color of the second area, if a local replacement condition is not satisfied, processing proceeds to replacing the first suit as a whole with the second suit matching the color of the second area;
if the local replacement condition is satisfied, a third component in the first suit is replaced with a fourth component, where the fourth component matches the color of the second area and is worn on the same part as the third component. The third component may be determined in any of the following ways: 1. in response to a selection operation on any component in the first suit, taking the selected component as the third component; 2. taking the component in the first suit with the largest color difference from the other components as the third component; for example, computing the color similarity between every two components in the first suit, obtaining for each component the sum of its color similarities to the other components, and taking the component with the smallest similarity sum as the one with the largest color difference from the others; 3. taking the component with the smallest performance parameter in the first suit as the third component, the performance parameter including at least one of: a protection performance parameter, an attack performance parameter, the virtual object level required to wear the component, and a movement speed performance parameter.
The local replacement condition includes at least one of the following:
1. The first virtual object has no corresponding second suit in the second area; as an example, having no corresponding second suit means that no second suit corresponding to the second area has been preconfigured.
2. The number of components not matching the color of the second area is less than a replacement quantity threshold; the replacement quantity threshold is positively correlated with the total number of components in the suit and may be half the total. For example: the suit has six components, including a hat, gloves, shoes, a top, pants, and a virtual attack prop; the replacement quantity threshold is 3, and when the number of components not matching the color of the second area is less than 3, only those components are replaced rather than performing a whole-suit replacement.
3. The third component has no binding relationship with the other components in the first suit, where a binding relationship means that the components functionally support each other and the virtual object can complete composite operations using them. When the virtual object wears components with a binding relationship, compared with wearing no components, the attribute parameters gained by the virtual object = the sum of each component's attribute parameters + the attribute parameters corresponding to the binding relationship. If no bound components exist in the currently worn suit, the gained attribute parameters = the sum of each component's attribute parameters.
In the embodiments of this application, by replacing at least some components in the virtual object's suit with components matching the environment color of the virtual scene, the components in the suit change automatically with the in-game scene color, reducing the likelihood of the virtual object being exposed in the virtual scene and avoiding the adverse interference of rich suit components with combat; the user can automatically switch to a concealing suit, reducing in-combat operation and thinking costs and improving the user's gaming experience.
An exemplary application of the embodiments of this application in a practical application scenario is described below.
The suit processing method for virtual objects provided by the embodiments of this application can be applied in the following application scenario:
In a virtual scene, a player can change the outfit of the virtual object they control, either by selecting components for different wearing parts of the virtual object in the game warehouse, or, during a game match, by picking up supply packs in the virtual scene to obtain components. The terrain and environment of the virtual scene are changeable, and within a match players must constantly watch environmental changes and the movements of hostile virtual objects, so they cannot spare much time and energy to coordinate the components of the virtual object's suit; it is therefore difficult to quickly wear the required suit in the virtual scene (for example, a highly concealing suit), and quick outfit-change means are lacking. The suit processing method for virtual objects provided by this application can, during a game match, switch the components of the virtual object's suit according to the color of the virtual scene, replacing components that do not match the environment color with components that do, improving the concealment of the virtual object in the virtual scene.
As an example, the virtual scene includes a virtual object wearing a suit; a suit is the in-game clothing of a virtual object, consisting of multiple components, a component being an item worn on the body, for example: a top, pants, or shoes. A suit in the embodiments of this application covers all equipment, fashion items, and pendants on the virtual object; depending on the game, other pets and carried items may appear, and as long as they change color intelligently with the scene, they fall within the scope of this application's solution. The virtual object's warehouses include a warehouse storing match props (which stores a Ghillie Suit, clothing used for camouflage) and the player's fashion warehouse.
The virtual scene also includes an automatic outfit-change control and a manual outfit-change control. Through the suit processing method for virtual objects provided by the embodiments of this application, the color of the area where the in-game virtual object is located can be intelligently recognized, and the color kinds and the proportion of each color obtained; based on the area color of the virtual scene, outfit changes can be realized automatically or manually. The automatic way is: setting the automatic outfit-change control option to the enabled state, components in the virtual object's current outfit are automatically switched to components matching the environment color of the current scene. The manual way is: when the user triggers the manual outfit-change control, components in the virtual object's current outfit are switched to components matching the environment color of the current scene.
Referring to FIG. 9, FIG. 9 is an optional schematic flowchart of the suit processing method for virtual objects provided by an embodiment of this application, explained with the terminal device as the executing entity in conjunction with the steps shown in FIG. 9.
In step 901, with the automatic outfit-change control enabled, it is judged whether a first component not matching the current environment color exists in the virtual object's current suit.
As an example, before the virtual object enters a game match, the user can assemble a suit for the virtual object; or, when the virtual object enters a game match, it wears no suit components, or only some wearing parts are dressed. With the automatic outfit-change control enabled, in response to the virtual object moving from the current area to another area, the judgment of whether a first component not matching the current environment color exists in the virtual object's current suit is performed. Alternatively, the judgment is performed when the time interval between the moment of the previous judgment and the current moment reaches a preset duration (for example, 10 seconds).
As an example, the automatic outfit-change control is a control indicating whether the automatic outfit-change function is enabled. When the control is in the enabled state, the automatic outfit-change mode is active and the automatic outfit-change function is executed; conversely, when the control is in the disabled state, the function is not executed. In automatic outfit-change mode, the component corresponding to each wearing part of the player's virtual object (the first virtual object above) is automatically compared with the environment closest to the wearing part to determine whether the two colors match. Color matching means that the color difference between the component and the environment is small, that is, the color similarity between the component's color and the environment color is greater than or equal to a similarity threshold, which may be 0.5 (with the similarity taking values 0 ≤ similarity ≤ 1); when the similarity between the component's color and the environment color is less than the similarity threshold, the component does not match the environment color.
When the judgment result of step 901 is yes, processing proceeds to step 902. When the judgment result of step 901 is no, step 901 continues to be executed, judging whether a first component not matching the current environment color exists in the virtual object's current suit.
In step 902, a frame of the game screen corresponding to the virtual object is captured to obtain a field-of-view image.
As an example, to save performance overhead, while the virtual object is in a game match, a frame of the game screen within the virtual object's field of view may be captured at preset intervals (for example, every 10 seconds) to obtain the field-of-view image. To improve the accuracy of obtaining the color multidimensional vector, the field-of-view image does not contain the controls in the virtual scene.
In step 903, the field-of-view image is segmented to obtain the associated area image of the first component; the size of the associated area image is reduced, and grayscale conversion is performed on the reduced associated area image.
As an example, segmenting the field-of-view image proceeds as follows: the virtual object and environmental interference factors in the field-of-view image are segmented out to obtain the global environment image of the virtual scene in the game screen (for example, the field-of-view image includes the virtual object, the sky of the virtual scene, virtual buildings, virtual vehicles (for example, virtual aircraft and cars), and the ground of the virtual scene; the virtual object and the sky are segmented out of the field-of-view image, and the segmented field-of-view image serves as the global environment image). The area associated with each part of the virtual object in the global environment image is determined, and the global environment image is segmented to obtain the associated area image of each part. The associated area image is reduced to a preset size (for example, 8×8 pixels, 64 pixels in total; or 16×16 pixels, 256 pixels in total) to remove the influence of picture detail. Grayscale conversion proceeds as follows: the reduced associated area image is downsampled based on a preset grayscale level (for example, 64 or 256 levels) to obtain a grayscale-processed associated area image whose maximum number of color kinds equals the grayscale level. For example, for an 8×8 reduced image downsampled at a preset 64-level grayscale, the resulting grayscale-processed associated area image has at most 64 color kinds.
In step 904, the color histogram of the associated area image is extracted, and the multidimensional vector A is composed based on the color distribution data of the color histogram.
As an example, extracting the color histogram of the associated area image proceeds as follows: the proportion of each color in the grayscale image is counted, and the color histogram is made based on each color's proportion. Making a color histogram is one way to summarize color data; in specific implementations, color data may also be summarized with tables, pie charts (the proportion of each color represented by the angle of its sector), and the like.
Referring to FIG. 8, FIG. 8 is a schematic diagram of a color histogram provided by an embodiment of this application: the length of each bar represents the proportion of a different kind of color in the associated area image; S1, S2, S3, S4, S5, S6, S7, S8, S9, and S10 correspond to different color families; each color family includes multiple color kinds, and each color family corresponds to the same number of color kinds.
As an example, in specific embodiments the dimensionality of the multidimensional vector (the color vector above) can be determined by the fineness required for in-game recognition; the fineness is positively correlated with the dimensionality. The embodiments of this application take a six-dimensional vector as an example: the proportion values of the colors in the color histogram are combined into the color histogram vector corresponding to the color histogram (the color proportion vector above), each color corresponding to one dimension, the total number of dimensions of the color histogram vector equaling the number of color kinds in the histogram; the color histogram vector is mapped by dimensionality reduction based on a preset dimension (for example, 6) to obtain the six-dimensional vector (the reduced color vector above).
The proportion of each color in the histogram serves as the value of the corresponding dimension of the vector, yielding the color histogram vector corresponding to the color histogram; that is, the proportion values of the colors in the histogram are combined into a vector, the color histogram vector, whose total number of dimensions equals the number of color kinds in the histogram. The dimensionality-reduction mapping is implemented as follows: all the colors of the color histogram vector are divided into 6 color intervals; the proportion values of each color interval are weighted and summed; the weighted-sum result of each color interval is normalized; and the normalized results are combined into one six-dimensional vector.
As an example, the color histogram is characterized as the color data set X, X = {x1, x2 ... xn}, xi being the color proportion (proportion value) of the i-th color in the set, with 1 ≥ xi ≥ 0; the colors in the set are sorted by color family. For example, the set includes seven colors ordered as red, orange, yellow, green, cyan, blue, purple. Taking 64 colors as an example: the color data set X = {x1, x2 ... x64} is converted into the color histogram vector (x1, x2 ... x64) of 64 dimensions; all the colors of the 64-dimensional vector are divided into 6 color intervals; the weighted-sum results of the proportion values of each color in the 6 intervals are obtained; the 6 weighted-sum results are normalized, giving 6 normalized results c, d, e, f, g, h. The six-dimensional vector is then characterized as A = (c, d, e, f, g, h).
In some embodiments, the texture material image of the object in the virtual scene closest to the component may also be obtained, the color histogram obtained based on the texture material, and the multidimensional vector A obtained based on the color histogram. For example: the virtual object's shoes are closest to the ground of the virtual scene, and the multidimensional vector A is obtained based on the ground's texture material image.
As an example, steps 905 to 907 may be executed before step 901. The color information (characterized as a color multidimensional vector) of each component owned by the virtual object is obtained in advance and stored in a database; for example, multiple components constitute a suit, and for each component, the texture materials of the component's outer surface are tiled into a component image, based on which the color multidimensional vector is obtained. When the player obtains a new component, the color multidimensional vector corresponding to the new component is stored in the database at the same time.
In step 905, the component image of each component of the virtual object is obtained.
As an example, when the virtual scene is a two-dimensional virtual scene, the components are also two-dimensional; the three views or the front and back views of a component are tiled into one component image. When a component is three-dimensional, the texture materials of all the outer surfaces of the component are obtained and tiled into one component image.
In step 906, the color histogram of each component is obtained based on each component's component image.
In step 907, the multidimensional vector B of each component is composed based on the color distribution data of each component's color histogram.
As an example, the dimensionality of the multidimensional vector B is the same as that of the multidimensional vector A. The principles of steps 906 and 907 are the same as step 904 and are not repeated here. Assume the multidimensional vector B is a six-dimensional vector, where the i-th multidimensional vector Bi is characterized as Bi = (Ci, Di, Ei, Fi, Gi, Hi).
In step 908, the vector distance between the multidimensional vector A and each multidimensional vector B is determined.
As an example, the vector distance between color vectors can characterize the color similarity between the component and the environment, and the vector distance is negatively correlated with the color similarity. The distance between the multidimensional vector A and each multidimensional vector B is computed; the two vectors with the smallest vector distance have the highest color similarity.
The vector distance x between the multidimensional vector A and the multidimensional vector Bi is expressed as the following formula (1):

x = √((c − Ci)² + (d − Di)² + (e − Ei)² + (f − Fi)² + (g − Gi)² + (h − Hi)²)   (1)

In step 909, the component corresponding to the multidimensional vector B with the shortest vector distance is selected as the second component closest to the current environment color.
As an example, when x is smallest, the component corresponding to the multidimensional vector Bi is the component at this wearing part that best matches the environment color. The mismatched component is replaced with the best color-matching component.
In step 910, the first component of the virtual object is replaced with the second component.
As an example, the second component is worn on the same part as the first component; for that wearing part, the second component is the component owned by the virtual object with the highest color similarity to the environment color of the virtual scene. After the first component is replaced with the second component, prompt information can be displayed in the virtual scene interface, reminding the player that the component worn by the virtual object has been replaced.
参考图5A,图5A是本申请实施例提供的虚拟场景界面的示意图,第一虚拟对象502处于虚拟场景中,虚拟场景的环境颜色可以基于第一虚拟对象502所站立的虚拟场景的地面503确定,部件501是处于虚拟对象头部(穿戴位置)的部件,例如:头盔。部件501与虚拟场景的环境颜色不匹配。参考图5B,图5B是本申请实施例提供的虚拟场景界面的示意图,部件501被替换为与虚拟场景的颜色匹配的部件504。
本申请实施例通过将虚拟对象的套装中的至少部分部件替换为与虚拟场景的环境颜色匹配的部件,提升虚拟对象在虚拟场景中的隐蔽程度,便于虚拟对象在游戏对局过程中进行快速换装。
在一些实施例中,当自动换装控件处于关闭状态时,若用户通过视觉感知虚拟对象的套装与虚拟场景的环境颜色不同,可以对自动换装控件进行触发。响应于针对自动换装控件的触发操作,将虚拟对象的套装中与环境颜色不同的部件替换为与环境颜色匹配的部件。或者,响应于针对自动换装控件的触发操作,将虚拟对象的套装替换为玩家预先设定的套装。
在一些实施例中,虚拟场景中的自动换装控件、手动换装控件显示在虚拟场景中。参考图5C,图5C是本申请实施例提供的虚拟场景界面的示意图。自动换装控件505、手动换装控件506以浮层方式显示在虚拟场景中。为避免频繁自动换装造成计算资源消耗过多,可以设置换装功能执行的冷却时间,在未达到冷却时间时,换装控件以冷却状态显示。还可以设置换装次数上限,在同一游戏对局中,虚拟对象的换装次数若达到了换装次数上限,则禁止通过自动换装模式进行换装、禁止通过手动换装控件进行换装(以禁用状态显示手动换装控件),避免占用客户端的运行内存,节省计算资源。
参考图5D,图5D是本申请实施例提供的控件状态的示意图;自动换装控件开启状态下,执行自动换装功能。在套装中至少部分部件自动切换后,若当前换装次数达到了换装次数上限(例如10次),自动换装控件由开启状态切换到关闭状态,不执行自动换装功能,且接收到针对自动换装控件的开启触发操作时,不响应。在套装中至少部分部件自动切换后,若当前换装次数没有达到换装次数上限,自动换装控件进入冷却状态,冷却状态下不执行自动换装功能,并在自动换装控件上显示预设冷却时长对应的倒计时,直至预设冷却时长(例如:60秒)结束。达到预设冷却时长时,自动换装控件恢复至开启状态。
同理地,若当前换装次数没有达到换装次数上限,手动换装控件处于可使用状态,响应于针对手动换装控件的触发操作,对虚拟对象的套装中至少部分部件进行切换;若当前换装次数达到了换装次数上限(例如10次),手动换装控件以禁用状态显示(参考图5D,禁用状态可以表征为:灰度状态显示,或者在手动换装控件上显示禁用符号)。在套装中至少部分部件自动切换后,若当前换装次数没有达到换装次数上限,手动换装控件进入冷却状态,冷却状态下手动换装控件无法被触发,并在手动换装控件上显示预设冷却时长对应的倒计时,直至预设冷却时长(例如:60秒)结束。达到预设冷却时长时,手动换装控件恢复至可使用状态。
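上述由换装次数上限与冷却时长共同决定控件状态的逻辑,可以用如下Python代码示意(10次上限、60秒冷却沿用文中示例值,类名与状态名为说明所设的假设):

```python
import time

class SwapController:
    """换装控件状态的示意实现:达到次数上限则禁用,否则每次换装后进入冷却。"""

    def __init__(self, max_swaps=10, cooldown=60.0, clock=time.monotonic):
        self.max_swaps = max_swaps
        self.cooldown = cooldown
        self.clock = clock          # 可注入时钟,便于测试
        self.count = 0
        self.last_swap = None

    def state(self):
        if self.count >= self.max_swaps:
            return "disabled"       # 禁用状态:不响应触发操作
        if self.last_swap is not None and self.clock() - self.last_swap < self.cooldown:
            return "cooling"        # 冷却状态:显示剩余倒计时
        return "ready"              # 可使用状态

    def try_swap(self):
        """仅在可使用状态下执行换装,并累计次数、记录换装时刻。"""
        if self.state() != "ready":
            return False
        self.count += 1
        self.last_swap = self.clock()
        return True
```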
在一些实施例中,若当前换装次数达到了换装次数上限,还可以隐藏自动换装控件或者手动换装控件,以表征禁止使用自动换装控件或者手动换装控件。
在一些实施例中,虚拟场景中的自动换装控件、手动换装控件显示在虚拟场景的仓库界面中,仓库用于存储虚拟对象拥有的虚拟道具与部件,仓库界面中玩家可以查看虚拟对象拥有的虚拟道具与部件。参考图5E,图5E是本申请实施例提供的仓库界面的示意图;虚拟场景中设置有仓库控件507,响应于针对仓库控件507的触发操作,显示仓库界面508,仓库界面508中包括物品栏与时装栏,物品栏中存储虚拟对象拥有的虚拟道具部件与虚拟装备部件,时装栏存储虚拟对象拥有的时装类部件。当自动换装控件505处于开启状态时,可以执行自动换装功能;响应于针对手动换装控件506的触发操作,将虚拟对象的套装中至少部分部件替换为与环境颜色匹配的部件;或者,将虚拟对象的套装替换为与手动换装控件506对应的预设套装,并在手动换装控件506上显示使用中,以表示正在使用该预设套装。
本申请实施例中,将自动换装控件、手动换装控件设置在仓库界面中,避免了多个控件对虚拟场景的画面造成遮挡。
在一些实施例中,当自动换装控件处于开启状态时,若虚拟对象的穿戴部位未穿戴任何部件,自动为该穿戴部位穿戴与当前区域的环境颜色最匹配的部件,该部件是与穿戴部位对应的部件。例如:虚拟对象进入游戏对局时,仅穿戴了上衣与裤子,脚部与头部没有穿戴任何部件,则自动为虚拟对象穿戴与当前区域的环境颜色匹配的鞋子与帽子。或者,虚拟对象未穿着任何部件,则自动为虚拟对象匹配与当前区域的环境颜色匹配的套装。
在一些实施例中,玩家打开自动换装功能后,如果虚拟对象还未着装,则自动匹配与当前区域的环境颜色最接近的套装,自动穿着;如果已经着装,且着装中的部分部件与环境颜色不匹配,则以部件为单位,根据所处环境替换与环境颜色适配的部件;如果全部部件均与环境颜色不匹配,则以套装为单位替换。
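上述“未着装则整套穿着、部分不匹配则按部件替换、全部不匹配则按套装替换”的判断逻辑,可以用如下Python代码示意(相似度阈值0.8与数据结构均为假设的示例设定):

```python
def plan_swap(worn, similarity, threshold=0.8):
    """根据着装情况决定换装粒度的示意逻辑。
    worn: {穿戴部位: 部件}(可为空);similarity: {穿戴部位: 与环境颜色的相似度}。
    返回 "full_suit"(以套装为单位)或需要替换的穿戴部位列表(以部件为单位)。"""
    if not worn:
        return "full_suit"                  # 未着装:自动匹配整套并穿着
    mismatched = [pos for pos in worn if similarity[pos] < threshold]
    if len(mismatched) == len(worn):
        return "full_suit"                  # 全部部件不匹配:以套装为单位替换
    return mismatched                       # 部分部件不匹配:以部件为单位替换

plan = plan_swap({"head": "头盔", "body": "上衣"}, {"head": 0.9, "body": 0.3})
```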
在一些实施例中,自动换装控件与手动换装控件的使用并不互斥,在自动换装模式下,可以通过手动换装控件对套装整体或者套装中的部分部件进行切换。例如:用户选择虚拟对象的套装中任意部件作为待替换部件,并触发手动换装控件,将待替换部件替换为与手动换装控件关联的其他部件。其他部件可以是满足以下任意条件的部件:相较于待替换部件与当前区域的颜色更匹配的部件、相较于待替换部件的性能参数更好的部件、相较于待替换部件的使用频率更高的部件、与待替换部件颜色相反的部件、用户喜好的部件等。
在一些实施例中,当自动换装控件处于开启状态时,若玩家通过手动换装控件为虚拟对象穿戴了任意部件(例如:玩家喜好的部件),在预设时长内不再执行自动换装功能。响应于达到预设时长,且虚拟场景的环境颜色与虚拟对象的套装中至少部分部件的颜色不匹配,将至少部分部件切换为与当前的环境颜色最匹配的部件。
在一些实施例中,当虚拟对象的部位被虚拟场景所遮蔽,可以不对该部位的部件进行替换。参考图6A,图6A是本申请实施例提供的虚拟场景界面的示意图;第一虚拟对象502穿着部件511与部件510A,第一虚拟对象的腿部处于虚拟场景的水面509之下,则部件511被虚拟场景中的水所遮蔽,在水面之上难以分辨水下环境的颜色,可以仅对第一虚拟对象502没有被遮蔽的部件510A进行替换。参考图6B,图6B是本申请实施例提供的虚拟场景界面的示意图;将部件510A替换为了与虚拟场景的颜色匹配的部件515,而被水遮蔽的部件511未替换。在替换了部件之后,还可以通过显示提示信息516(参考图6B,提示信息的内容可以为“外观已更换”)向玩家提示虚拟对象的套装的部件已经更换。
本申请实施例中,仅对虚拟对象未被虚拟场景所遮蔽的部件进行切换,避免了对虚拟对象穿戴的部件进行频繁替换,降低了判断环境颜色与套装的颜色之间相似度的频率,可以减少计算资源消耗以及客户端的内存占用率。
在如下几种场景中,用户可能存在提升第一虚拟对象的辨识度的需求:多个虚拟对象处于混战中、第一虚拟对象与队友虚拟对象集体行动、第一虚拟对象处于非游戏对局区域、虚拟场景的天气因素、环境因素影响虚拟对象的可见度(例如:雨雪天气、烟雾等)。在上述场景中,用户可能需要将第一虚拟对象与其他虚拟对象、虚拟场景进行区分。
在一些实施例中,虚拟场景还包括反色换装控件。用户可以通过触发反色换装控件,将虚拟对象的套装局部或者整体地切换为与虚拟场景的环境颜色相反的套装。颜色相反,也即颜色相似度低,在虚拟对象拥有的部件中选取与当前区域的环境颜色相似度最低的部件,作为与虚拟场景的环境颜色相反的反色部件(上文第五部件),将虚拟对象的当前穿着的部件替换为反色部件,提升虚拟对象在虚拟场景中的辨识度。
示例的,参考图6D,图6D是本申请实施例提供的虚拟场景界面的示意图;假设当前区域是草原,在该区域可以基于虚拟场景的地面503确定虚拟场景的环境颜色。虚拟场景的地面503为草绿色,第一虚拟对象502身上穿戴的部件513A是与虚拟场景的地面503的颜色匹配的部件,例如:部件513A是绿色迷彩服。响应于针对反色换装控件512的触发操作,将第一虚拟对象502上身穿戴的部件513A替换为与虚拟场景的环境颜色不匹配的部件。参考图6E,图6E是本申请实施例提供的虚拟场景界面的示意图;将第一虚拟对象502身上的部件513A替换为了与虚拟场景的环境颜色不匹配的部件514,使得第一虚拟对象502在虚拟场景中的辨识度更高。
本申请实施例中,通过将虚拟对象的套装至少部分部件替换为与虚拟场景的环境颜色不匹配的部件,使得虚拟对象在虚拟场景中的辨识度更高,便于用户对虚拟对象进行观察,便于虚拟对象与虚拟场景、其他虚拟对象进行区分,能够提升用户控制虚拟对象的人机交互效率。
在一些实施例中,基于场景类型(例如:城市、废墟、雪地等)对虚拟场景的区域进行划分,每个区域内部的颜色差异相较于区域之间的颜色差异较小。当虚拟对象进入区域中时,根据区域对应的颜色对虚拟对象的套装进行局部替换或者整体替换,且在区域中保持替换后的套装,直至虚拟对象进入其他的区域。
以下结合附图进行解释说明,参考图7,图7是本申请实施例提供的虚拟场景的地图的示意图;在虚拟场景地图705中,假设区域701为雪山、区域702为草原、区域704是沙漠。区域701与区域702之间存在区域703。假设,三个区域内部颜色差异较小(颜色相似度大于颜色相似度阈值),但区域之间的颜色差异较大(颜色相似度小于颜色相似度阈值),由于虚拟场景中的贴图素材的颜色是固定的,在虚拟对象进入游戏对局之前,可以基于虚拟对象拥有的部件预先确定不同区域的颜色匹配的套装,或者部件。区域701、区域702、区域704分别设置有对应的颜色匹配的套装。区域703中部分位置的颜色与区域701的颜色相似度小于颜色相似度阈值,部分位置的颜色与区域701的颜色相似度大于颜色相似度阈值。
当虚拟对象进入区域701中时,将虚拟对象的第一套装中的部分部件切换为预先设置的与区域701的雪山场景的环境颜色匹配的部件,形成第三套装,并在区域701中继续穿着第三套装;或者,将第一套装整体切换为与环境颜色匹配的第二套装,并在区域701中继续穿着第二套装;当虚拟对象进入区域703时,响应于环境颜色与虚拟对象的套装中部件的颜色不匹配,将虚拟对象的套装切换为与环境颜色匹配的套装。
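上述在进入对局前预先确定“区域→匹配套装”的映射、并在虚拟对象进入新区域时切换套装的处理,可以用如下Python代码示意(区域名与套装名均为假设的示例数据):

```python
# 进入对局前,基于虚拟对象拥有的部件预先确定的区域→套装映射(示例数据)
PRESET_SUITS = {
    "雪山区域701": "白色雪地套装",
    "草原区域702": "绿色迷彩套装",
    "沙漠区域704": "黄色沙漠套装",
}

def on_region_enter(current_suit, region):
    """进入新区域时的示意处理:该区域有预设套装则整体切换,否则保持原套装。"""
    return PRESET_SUITS.get(region, current_suit)

suit = on_region_enter("绿色迷彩套装", "雪山区域701")  # → "白色雪地套装"
```

对于颜色分布不均匀的过渡区域(如区域703),映射中没有预设套装,可退回到按环境颜色实时匹配的方式。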
本申请实施例中基于虚拟场景中的区域切换对虚拟对象的套装中的部件进行切换,降低了判断环境颜色与套装的颜色之间相似度的频率,可以减少计算资源消耗以及客户端的内存占用率。
本申请实施例中,通过将虚拟对象的套装中至少部分部件替换为与虚拟场景的环境颜色匹配的部件,使得虚拟对象的套装中的部件跟随游戏内场景的颜色自动变化,虚拟对象在虚拟场景中被暴露的可能性降低、避免了丰富的套装部件对战斗带来的不良干扰,用户可自动更换隐蔽性套装,减少战斗内操作和思考成本,提升了用户的游戏体验。
下面继续说明本申请实施例提供的虚拟对象的套装处理装置455的实施为软件模块的示例性结构,在一些实施例中,如图2所示,存储在存储器450的虚拟对象的套装处理装置455中的软件模块可以包括:显示模块4551,配置为显示虚拟场景,其中,虚拟场景包括穿着第一套装的第一虚拟对象,第一套装包括多个部件,多个部件分布在第一虚拟对象的不同部位;套装切换模块4552,配置为在第一虚拟对象处于虚拟场景中的第一区域的期间,响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,将第一部件替换为第二部件。这里,第二部件与第一区域的颜色匹配,且与第一部件的穿戴部位相同。
在一些实施例中,虚拟场景中还包括自动换装控件;套装切换模块4552,配置为响应于针对自动换装控件的开启操作,显示自动换装控件处于开启状态;响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,自动将第一部件替换为第二部件。
在一些实施例中,虚拟场景中还包括手动换装控件,套装切换模块4552, 配置为响应于针对手动换装控件的触发操作,将第一套装中的第一部件替换为第三部件,并在穿戴时长阈值内保持切换后的第一套装,其中,第三部件是与第一部件的穿戴部位相同的任意部件;响应于保持切换后的第一套装达到穿戴时长阈值,且第一区域的颜色与第一套装中的第三部件的颜色不匹配,将第三部件替换为第四部件;其中,第四部件与第一区域的颜色匹配,且与第三部件的穿戴部位相同。
在一些实施例中,套装切换模块4552,配置为在响应于针对手动换装控件的触发操作之前,通过以下任意一种方式确定第一部件:响应于针对第一套装中任意部件的选择操作,将选中的部件作为第一部件;将第一套装中与其他部件之间的颜色差异最大的部件作为第一部件;将第一套装中性能参数最小的部件作为第一部件。
在一些实施例中,虚拟场景中还包括手动换装控件;套装切换模块4552,配置为响应于满足手动换装条件,显示手动换装控件处于可使用状态;其中,手动换装条件包括以下至少之一:当前时刻与上一次换装的换装时刻之间的时间间隔大于或者等于间隔阈值;第一虚拟对象的换装次数未达到换装次数上限;响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,且接收到针对手动换装控件的触发操作,将第一部件替换为第二部件。
在一些实施例中,套装切换模块4552,配置为响应于不满足手动换装条件,通过以下任意一种方式显示手动换装控件处于禁用状态:隐藏手动换装控件;灰度化显示手动换装控件;在手动换装控件上显示禁用符号。
在一些实施例中,套装切换模块4552,配置为在将第一部件替换为第二部件之前,获取与第一部件用于相同的穿戴部位的多个候选部件;将多个候选部件中满足筛选条件的候选部件作为第二部件,其中,多个候选部件是第一虚拟对象拥有的;其中,筛选条件包括以下任意一项:候选部件的功能与第一部件的功能相同;第一部件的穿戴部位未被虚拟场景遮蔽;候选部件与第一区域的颜色相似度大于颜色相似度阈值。
在一些实施例中,套装切换模块4552,配置为在将第一部件替换为第二部件之前,通过以下方式确定颜色相似度:确定第一部件在第一区域中的关联区域的颜色向量;确定每个候选部件的颜色向量与关联区域的颜色向量之间的向量距离,其中,向量距离用于表征候选部件与第一区域之间的颜色相似度,且向量距离与颜色相似度负相关。
在一些实施例中,套装切换模块4552,配置为获取第一虚拟对象对应的视野画面图像;基于第一部件的穿戴部位的关联区域对视野画面图像进行分割处理,得到关联区域图像;对关联区域图像进行转换处理,得到关联区域图像的颜色比例数据;对颜色比例数据进行特征提取处理,得到关联区域的颜色向量。
在一些实施例中,套装切换模块4552,配置为对关联区域图像进行缩小,并将缩小图像转换为灰度图像;从灰度图像中统计关联区域图像中每种颜色的颜色比例数据。
在一些实施例中,套装切换模块4552,配置为基于颜色比例数据中每种颜色对应的比例值,确定颜色比例数据的颜色比例向量,其中,颜色比例向量的每个维度的数值与每个比例值一一对应;将颜色比例向量降维映射为关联区域的颜色向量。
在一些实施例中,套装切换模块4552,配置为在确定第一部件在第一区域中的关联区域的颜色向量之前,通过以下方式确定每个候选部件的颜色向量:对每个候选部件进行以下处理:提取候选部件的每个贴图素材,并将每个贴图素材组合为候选部件的候选部件图像;将候选部件图像转换为候选部件图像的颜色比例数据;从颜色比例数据提取候选部件的颜色向量。
在一些实施例中,套装切换模块4552,配置为对候选部件图像进行缩小,并对缩小得到的缩小图像进行灰度转换处理,得到灰度图像;对灰度图像中每种颜色进行比例统计,得到候选部件图像的颜色比例数据。
在一些实施例中,套装切换模块4552,配置为基于颜色比例数据中每种颜色对应的比例值,确定颜色比例数据的颜色比例向量,其中,颜色比例向量的每个维度的数值与每个比例值一一对应;将颜色比例向量降维映射为候选部件的颜色向量。
在一些实施例中,套装切换模块4552,配置为响应于满足替换限制条件,将第一部件替换为第二部件,其中,替换限制条件包括以下至少之一:第一虚拟对象的换装次数未达到换装次数上限;第一虚拟对象需要隐蔽;第一虚拟对象在第一区域的停留时长大于时长阈值;第一区域的面积大于换装面积阈值。
在一些实施例中,套装切换模块4552,配置为在将第一部件替换为第二部件之前,通过以下方式识别虚拟对象的隐蔽需求:基于第一区域的环境参数、以及虚拟对象的属性参数调用神经网络模型对第一虚拟对象进行隐蔽预测,得到表征第一虚拟对象是否需要隐蔽的隐蔽预测结果;其中,虚拟对象的属性参数包括:第一虚拟对象的位置信息、第一虚拟对象的敌对虚拟对象的位置信息、以及第一虚拟对象的队友虚拟对象的位置信息;第一区域的环境参数包括:第一区域的地形信息、第一区域的视野。
在一些实施例中,套装切换模块4552,配置为在基于第一区域的环境参数、以及虚拟对象的属性参数调用神经网络模型对第一虚拟对象进行隐蔽预测之前,通过以下方式训练神经网络模型:获取虚拟场景的环境参数与至少两个阵营的对局数据,其中,至少两个阵营包括失败阵营与胜利阵营,对局数据包括:胜利阵营的虚拟对象执行隐蔽行为的位置、失败阵营的虚拟对象执行隐蔽行为的位置;获取标注后的对局数据,其中,胜利阵营的虚拟对象执行隐蔽行为的位置的标签为概率1,失败阵营的虚拟对象执行隐蔽行为的位置的标签为概率0;基于虚拟场景的环境参数、标注后的对局数据对初始神经网络模型进行训练,得到训练后的神经网络模型。
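上述以胜负阵营的隐蔽行为位置构造带标签训练数据的标注方式(胜利阵营标签为概率1,失败阵营标签为概率0),可以用如下Python代码示意(对局数据的结构为说明所设的假设):

```python
def build_training_samples(match_records):
    """按文中标注方式构造训练样本:胜利阵营执行隐蔽行为的位置标签为 1.0,
    失败阵营的位置标签为 0.0。match_records: [(阵营结果 "win"/"lose", 位置), ...]"""
    return [(pos, 1.0 if outcome == "win" else 0.0)
            for outcome, pos in match_records]

# 假设的对局数据:胜利阵营在 (10, 20) 隐蔽,失败阵营在 (5, 8) 隐蔽
samples = build_training_samples([("win", (10, 20)), ("lose", (5, 8))])
```

这些样本再结合虚拟场景的环境参数,即可用于训练初始神经网络模型。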
在一些实施例中,套装切换模块4552,配置为响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,执行以下处理:响应于第一区域为第一虚拟对象的预设换装区域,且第一部件对应的穿戴部位为预设换装区域的预设穿戴部位,将预设穿戴部位关联的预设部件作为第二部件,将第一部件替换为第二部件;其中,预设部件的颜色与第一区域的颜色匹配。
在一些实施例中,套装切换模块4552,配置为响应于满足全局替换条件,将第一套装整体替换为与第一区域的颜色匹配的第二套装,其中,全局替换条件包括以下至少之一:针对第一虚拟对象在第一区域预先设置有对应的第二套装;接收到针对第一套装的整体替换指令。
在一些实施例中,套装切换模块4552,配置为响应于第一虚拟对象离开第一区域并进入第二区域,执行以下处理:若第二区域与第一区域之间的颜色差异小于或者等于颜色差异阈值,则在第二区域中,控制第一虚拟对象继续穿着第一套装;若第二区域与第一区域之间的颜色差异大于颜色差异阈值,则将第一套装整体替换为与第二区域的颜色匹配的第二套装,并在第二区域中继续穿着第二套装。
在一些实施例中,套装切换模块4552,配置为在将第一套装整体替换为与第二区域的颜色匹配的第二套装之前,若不满足局部替换条件,则转入将第一套装整体替换为与第二区域的颜色匹配的第二套装的处理;若满足局部替换条件,则将第一套装中的第三部件替换为第四部件,其中,第四部件与第二区域的颜色匹配,且与第三部件的穿戴部位相同;其中,局部替换条件包括以下至少之一:第一虚拟对象在第二区域不存在对应的第二套装;与第二区域的颜色不匹配的部件的数量小于替换数量阈值;第三部件与第一套装中的其他部件不具有绑定关系。
在一些实施例中,套装切换模块4552,配置为响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,且第一部件不满足换色条件,将第一部件替换为第二部件;其中,换色条件包括以下至少之一:第一部件对应的每个候选部件的颜色与第一区域的颜色不匹配,其中,候选部件是第一虚拟对象拥有的;第一部件与第一套装中的其他部件具有绑定关系;第一部件的功能强于第一部件对应的每个候选部件;第一部件的功能与第一虚拟对象当前执行的任务相关联,其中,第二部件不具备当前执行的任务对应的功能。
在一些实施例中,套装切换模块4552,配置为响应于第一区域的颜色与第一套装中的第一部件的颜色不匹配,且第一部件满足换色条件,将第一部件的颜色替换为与第一区域的颜色匹配的目标颜色。
在一些实施例中,虚拟场景中还包括反色换装控件;套装切换模块4552,配置为响应于第一虚拟对象在第一区域中不需要隐蔽,且接收到针对反色换装控件的触发操作,将第一套装中与第一区域的颜色匹配的第一部件替换为第五部件;其中,第五部件是与第一区域的颜色相反的部件,且第五部件的穿戴部位与第一部件的穿戴部位相同。
在一些实施例中,套装切换模块4552,配置为在将第一套装中与第一区域的颜色匹配的第一部件替换为第五部件之前,在与第一部件处于相同的穿戴部位的多个候选部件中,将与第一区域的颜色相似度最低的候选部件作为第五部件,其中,多个候选部件是第一虚拟对象拥有的。
在一些实施例中,显示模块4551,配置为显示虚拟场景,其中,虚拟场景包括穿着第一套装的第一虚拟对象,第一套装包括多个部件,多个部件分布在第一虚拟对象的不同部位,虚拟场景中还包括反色换装控件;套装切换模块4552,配置为响应于针对反色换装控件的触发操作,将第一套装中与第一区域的颜色匹配的第一部件替换为第五部件;其中,第五部件是与第一区域的颜色相反的部件,且第五部件的穿戴部位与第一部件的穿戴部位相同。
在一些实施例中,显示模块4551,配置为显示虚拟场景,其中,虚拟场景包括穿着第一套装的第一虚拟对象,第一套装包括多个部件,多个部件分布在第一虚拟对象的不同部位;套装切换模块4552,配置为响应于第一虚拟对象离开第一区域并进入第二区域,执行以下处理:若第二区域与第一区域之间的颜色差异大于颜色差异阈值,则将第一套装整体替换为与第二区域的颜色匹配的第二套装,并在第二区域中继续穿着第二套装;若第二区域与第一区域之间的颜色差异小于或者等于颜色差异阈值,则在第二区域中,控制第一虚拟对象继续穿着第一套装。
在一些实施例中,第一区域与第二区域之间不相邻,第一区域与第二区域之间具有第三区域;套装切换模块4552,配置为在第一虚拟对象处于第三区域中时,控制第一虚拟对象继续穿着第一套装。
在一些实施例中,套装切换模块4552,配置为在第二区域中继续穿着第二套装之前,若第二区域的颜色分布差异小于或者等于颜色差异阈值,则转入控制第一虚拟对象继续穿着第二套装的处理。
在一些实施例中,套装切换模块4552,配置为在将第一套装整体替换为与第二区域的颜色匹配的第二套装之前,若不满足局部替换条件,则转入将第一套装整体替换为与第二区域的颜色匹配的第二套装的处理;若满足局部替换条件,则将第一套装中的第三部件的替换为第四部件,其中,第四部件与第二区域的颜色匹配,且与第三部件的穿戴部位相同;其中,局部替换条件包括以下至少之一:第一虚拟对象在第二区域不存在对应的第二套装;与第二区域的颜色不匹配的部件的数量小于替换数量阈值;第三部件与第一套装中的其他部件不具有绑定关系。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例上述的虚拟对象的套装处理方法。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实施例提供的虚拟对象的套装处理方法,例如,如图3A、4A或者4B示出的虚拟对象的套装处理方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可被部署为在一个计算设备上执行,或者在位于一个地点的多个计算设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算设备上执行。
综上所述,本申请实施例通过将虚拟对象的套装中至少部分部件替换为与虚拟场景的环境颜色匹配的部件,使得虚拟对象的套装中的部件跟随游戏内场景的颜色自动变化,虚拟对象在虚拟场景中被暴露的可能性降低、避免了丰富的套装部件对战斗带来的不良干扰,用户可自动更换隐蔽性套装,减少战斗内操作和思考成本,提升了用户的游戏体验。
以上所述,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (36)

  1. 一种虚拟对象的套装处理方法,由电子设备执行,所述方法包括:
    显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位;
    在所述第一虚拟对象处于所述虚拟场景中的第一区域的期间,响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,其中,所述第二部件与所述第一区域的颜色匹配,且与所述第一部件的穿戴部位相同。
  2. 如权利要求1所述的方法,其中,所述虚拟场景中还包括自动换装控件;
    所述响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,包括:
    响应于针对所述自动换装控件的开启操作,显示所述自动换装控件处于开启状态;
    响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,自动将所述第一部件替换为第二部件。
  3. 如权利要求1或2所述的方法,其中,所述虚拟场景中还包括手动换装控件,所述方法还包括:
    响应于针对所述手动换装控件的触发操作,将所述第一套装中的第一部件替换为第三部件,并在穿戴时长阈值内保持切换后的第一套装,其中,所述第三部件是与所述第一部件的穿戴部位相同的任意部件;
    响应于保持切换后的第一套装达到所述穿戴时长阈值,且所述第一区域的颜色与所述第一套装中的第三部件的颜色不匹配,将所述第三部件替换为第四部件,其中,所述第四部件与所述第一区域的颜色匹配,且与所述第三部件的穿戴部位相同。
  4. 如权利要求3所述的方法,其中,在响应于针对所述手动换装控件的触发操作之前,所述方法还包括:
    通过以下任意一种方式确定所述第一部件:
    响应于针对所述第一套装中任意部件的选择操作,将选中的部件作为第一部件;
    将所述第一套装中与其他部件之间的颜色差异最大的部件作为第一部件;
    将所述第一套装中性能参数最小的部件作为第一部件。
  5. 如权利要求1所述的方法,其中,所述虚拟场景中还包括手动换装控件;
    所述响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,包括:
    响应于满足手动换装条件,显示所述手动换装控件处于可使用状态;其中,所述手动换装条件包括以下至少之一:当前时刻与上一次换装的换装时刻之间的时间间隔大于或者等于间隔阈值;所述第一虚拟对象的换装次数未达到换装次数上限;
    响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,且接收到针对所述手动换装控件的触发操作,将所述第一部件替换为第二部件。
  6. 如权利要求5所述的方法,其中,所述方法还包括:
    响应于不满足所述手动换装条件,通过以下任意一种方式显示所述手动换装控件处于禁用状态:隐藏所述手动换装控件;灰度化显示所述手动换装控件;在所述手动换装控件上显示禁用符号。
  7. 如权利要求1至6任一项所述的方法,其中,在将所述第一部件替换为第二部件之前,所述方法还包括:
    获取与所述第一部件用于相同的穿戴部位的多个候选部件;
    将所述多个候选部件中满足筛选条件的所述候选部件作为所述第二部件,其中,所述多个候选部件是所述第一虚拟对象拥有的,所述筛选条件包括以下任意一项:所述候选部件的功能与所述第一部件的功能相同;所述第一部件的穿戴部位未被所述虚拟场景遮蔽;所述候选部件与所述第一区域的颜色相似度大于颜色相似度阈值。
  8. 如权利要求7所述的方法,其中,在将所述第一部件替换为第二部件之前,所述方法还包括:
    通过以下方式确定所述颜色相似度:
    确定所述第一部件在所述第一区域中的关联区域的颜色向量;
    确定每个所述候选部件的颜色向量与所述关联区域的颜色向量之间的向量距离,其中,所述向量距离用于表征所述候选部件与所述第一区域之间的颜色相似度,且所述向量距离与所述颜色相似度负相关。
  9. 如权利要求8所述的方法,其中,所述确定所述第一部件在所述第一区域中的关联区域的颜色向量,包括:
    获取所述第一虚拟对象对应的视野画面图像;
    基于所述第一部件的穿戴部位的关联区域对所述视野画面图像进行分割处理,得到关联区域图像;
    确定所述关联区域图像的颜色比例数据,从所述颜色比例数据提取所述关联区域的颜色向量。
  10. 如权利要求9所述的方法,其中,所述确定所述关联区域图像的颜色比例数据,包括:
    对所述关联区域图像进行缩小,将缩小图像转换为灰度图像;
    从所述灰度图像中统计得到所述关联区域图像的每种颜色的颜色比例数据。
  11. 如权利要求9所述的方法,其中,所述从所述颜色比例数据提取所述关联区域的颜色向量,包括:
    基于所述颜色比例数据中每种颜色对应的比例值,确定所述颜色比例数据的颜色比例向量,其中,所述颜色比例向量的每个维度的数值与每个所述比例值一一对应;
    将所述颜色比例向量降维映射为所述关联区域的颜色向量。
  12. 如权利要求8所述的方法,其中,在确定所述第一部件在所述第一区域中的关联区域的颜色向量之前,所述方法还包括:
    通过以下方式确定每个所述候选部件的颜色向量:
    对每个所述候选部件进行以下处理:
    提取所述候选部件的每个贴图素材,并将每个所述贴图素材组合为所述候选部件的候选部件图像;
    将所述候选部件图像转换为所述候选部件图像的颜色比例数据;
    从所述颜色比例数据提取所述候选部件的颜色向量。
  13. 如权利要求12所述的方法,其中,所述将所述候选部件图像转换为所述候选部件图像的颜色比例数据,包括:
    对所述候选部件图像进行缩小,并对缩小得到的缩小图像进行灰度转换处理,得到灰度图像;
    从所述灰度图像统计得到所述候选部件图像中每种颜色的颜色比例数据。
  14. 如权利要求12所述的方法,其中,所述从所述颜色比例数据提取所述候选部件的颜色向量,包括:
    基于所述颜色比例数据中每种颜色对应的比例值,确定所述颜色比例数据的颜色比例向量,其中,所述颜色比例向量的每个维度的数值与每个所述比例值一一对应;
    将所述颜色比例向量降维映射为所述候选部件的颜色向量。
  15. 如权利要求1至6任一项所述的方法,其中,所述将所述第一部件替换为第二部件,包括:
    响应于满足替换限制条件,将所述第一部件替换为第二部件,其中,所述替换限制条件包括以下至少之一:
    所述第一虚拟对象的换装次数未达到换装次数上限;
    所述第一虚拟对象需要隐蔽;
    所述第一虚拟对象在所述第一区域的停留时长大于时长阈值;
    所述第一区域的面积大于换装面积阈值。
  16. 如权利要求15所述的方法,其中,在将所述第一部件替换为第二部件之前,所述方法还包括:
    通过以下方式识别所述虚拟对象的所述隐蔽需求:
    基于所述第一区域的环境参数、以及所述虚拟对象的属性参数调用神经网络模型对所述第一虚拟对象进行隐蔽预测,确定表征所述第一虚拟对象是否需要隐蔽的隐蔽预测结果,其中,所述虚拟对象的属性参数包括以下至少之一:所述第一虚拟对象的位置信息、所述第一虚拟对象的敌对虚拟对象的位置信息、以及所述第一虚拟对象的队友虚拟对象的位置信息;所述第一区域的环境参数包括:所述第一区域的地形信息、所述第一区域的视野。
  17. 如权利要求16所述的方法,其中,在所述确定表征所述第一虚拟对象是否需要隐蔽的隐蔽预测结果之前,所述方法还包括:
    通过以下方式训练所述神经网络模型:
    获取所述虚拟场景的环境参数与至少两个阵营的对局数据,其中,所述至少两个阵营包括失败阵营与胜利阵营,所述对局数据包括:胜利阵营的虚拟对象执行隐蔽行为的位置、失败阵营的虚拟对象执行隐蔽行为的位置;
    获取标注后的对局数据,其中,所述胜利阵营的虚拟对象执行隐蔽行为的位置的标签为概率1,所述失败阵营的虚拟对象执行隐蔽行为的位置的标签为概率0;
    基于所述虚拟场景的环境参数、所述标注后的对局数据对初始神经网络模型进行训练,得到训练后的所述神经网络模型。
  18. 如权利要求1至6任一项所述的方法,其中,所述响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,包括:
    响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,执行以下处理:
    响应于所述第一区域为所述第一虚拟对象的预设换装区域,且所述第一部件对应的穿戴部位为所述预设换装区域的预设穿戴部位,将所述预设穿戴部位关联的预设部件作为所述第二部件,将所述第一部件替换为所述第二部件;其中,所述预设部件的颜色与所述第一区域的颜色匹配。
  19. 如权利要求1至6任一项所述的方法,其中,所述方法还包括:
    响应于满足全局替换条件,将所述第一套装整体替换为与所述第一区域的颜色匹配的第二套装,其中,所述全局替换条件包括以下至少之一:
    针对所述第一虚拟对象在所述第一区域预先设置有对应的所述第二套装;
    接收到针对所述第一套装的整体替换指令。
  20. 如权利要求1所述的方法,其中,所述方法还包括:
    响应于所述第一虚拟对象离开所述第一区域并进入第二区域,执行以下处理:
    若所述第二区域与所述第一区域之间的颜色差异小于或者等于颜色差异阈值,在所述第二区域中,控制所述第一虚拟对象继续穿着所述第一套装;
    若所述第二区域与所述第一区域之间的颜色差异大于颜色差异阈值,将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装,并控制所述第一虚拟对象在所述第二区域中继续穿着所述第二套装。
  21. 如权利要求20所述的方法,其中,在将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装之前,所述方法还包括:
    若不满足局部替换条件,转入所述将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装的处理;
    若满足所述局部替换条件,将所述第一套装中的第三部件替换为第四部件,其中,所述第四部件与所述第二区域的颜色匹配,且与所述第三部件的穿戴部位相同,所述局部替换条件包括以下至少之一:
    所述第一虚拟对象在所述第二区域不存在对应的所述第二套装;
    与所述第二区域的颜色不匹配的部件的数量小于替换数量阈值;
    所述第三部件与所述第一套装中的其他部件不具有绑定关系。
  22. 如权利要求1至6任一项所述的方法,其中,所述响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,包括:
    响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,且所述第一部件不满足换色条件,将所述第一部件替换为第二部件,其中,所述换色条件包括以下至少之一:
    所述第一部件对应的每个候选部件的颜色与所述第一区域的颜色不匹配,其中,所述候选部件是所述第一虚拟对象拥有的;
    所述第一部件与所述第一套装中的其他部件具有绑定关系;
    所述第一部件的功能强于所述第一部件对应的每个候选部件;
    所述第一部件的功能与所述第一虚拟对象当前执行的任务相关联,其中,所述第二部件不具备当前执行的任务对应的功能。
  23. 如权利要求22所述的方法,其中,所述方法还包括:
    响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,且所述第一部件满足换色条件,将所述第一部件的颜色替换为与所述第一区域的颜色匹配的目标颜色。
  24. 如权利要求1至6任一项所述的方法,其中,所述虚拟场景中还包括反色换装控件;所述方法还包括:
    响应于所述第一虚拟对象在所述第一区域中不需要隐蔽,且接收到针对所述反色换装控件的触发操作,将所述第一套装中与所述第一区域的颜色匹配的第一部件替换为第五部件,其中,所述第五部件是与所述第一区域的颜色相反的部件,且所述第五部件的穿戴部位与所述第一部件的穿戴部位相同。
  25. 如权利要求24所述的方法,其中,在将所述第一套装中与所述第一区域的颜色匹配的第一部件替换为第五部件之前,所述方法还包括:
    在与所述第一部件处于相同的穿戴部位的多个候选部件中,将与所述第一区域的颜色相似度最低的所述候选部件作为所述第五部件,其中,所述多个候选部件是所述第一虚拟对象拥有的。
  26. 一种虚拟对象的套装处理方法,由电子设备执行,所述方法包括:
    显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位,所述虚拟场景中还包括反色换装控件;
    响应于针对所述反色换装控件的触发操作,将所述第一套装中与第一区域的颜色匹配的第一部件替换为第五部件;
    其中,所述第五部件是与所述第一区域的颜色相反的部件,且所述第五部件的穿戴部位与所述第一部件的穿戴部位相同。
  27. 一种虚拟对象的套装处理方法,由电子设备执行,所述方法包括:
    显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位;
    响应于所述第一虚拟对象离开第一区域并进入第二区域,执行以下处理:
    若所述第二区域与所述第一区域之间的颜色差异大于颜色差异阈值,将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装,并在所述第二区域中继续穿着所述第二套装;
    若所述第二区域与所述第一区域之间的颜色差异小于或者等于颜色差异阈值,控制所述第一虚拟对象在所述第二区域中继续穿着所述第一套装。
  28. 如权利要求27所述的方法,其中,所述第一区域与所述第二区域之间不相邻,所述第一区域与所述第二区域之间具有第三区域;
    所述方法还包括:
    在所述第一虚拟对象处于所述第三区域中时,控制所述第一虚拟对象继续穿着所述第一套装。
  29. 如权利要求28所述的方法,其中,在所述控制所述第一虚拟对象在所述第二区域中继续穿着所述第一套装之前,所述方法还包括:
    若所述第二区域的颜色分布差异小于或者等于颜色差异阈值,转入控制所述第一虚拟对象继续穿着所述第二套装的处理。
  30. 如权利要求28所述的方法,其中,在将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装之前,所述方法还包括:
    若不满足局部替换条件,转入所述将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装的处理;
    若满足所述局部替换条件,将所述第一套装中的第三部件替换为第四部件,其中,所述第四部件与所述第二区域的颜色匹配,且与所述第三部件的穿戴部位相同,所述局部替换条件包括以下至少之一:
    所述第一虚拟对象在所述第二区域不存在对应的所述第二套装;
    与所述第二区域的颜色不匹配的部件的数量小于替换数量阈值;
    所述第三部件与所述第一套装中的其他部件不具有绑定关系。
  31. 一种虚拟对象的套装处理装置,所述装置包括:
    显示模块,配置为显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位;
    套装切换模块,配置为在所述第一虚拟对象处于所述虚拟场景中的第一区域的期间,响应于所述第一区域的颜色与所述第一套装中的第一部件的颜色不匹配,将所述第一部件替换为第二部件,其中,所述第二部件与所述第一区域的颜色匹配,且与所述第一部件的穿戴部位相同。
  32. 一种虚拟对象的套装处理装置,所述装置包括:
    显示模块,配置为显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位,所述虚拟场景中还包括反色换装控件;
    套装切换模块,配置为响应于针对所述反色换装控件的触发操作,将所述第一套装中与第一区域的颜色匹配的第一部件替换为第五部件,其中,所述第五部件是与所述第一区域的颜色相反的部件,且所述第五部件的穿戴部位与所述第一部件的穿戴部位相同。
  33. 一种虚拟对象的套装处理装置,所述装置包括:
    显示模块,配置为显示虚拟场景,其中,所述虚拟场景包括穿着第一套装的第一虚拟对象,所述第一套装包括多个部件,所述多个部件分布在所述第一虚拟对象的不同部位;
    套装切换模块,配置为响应于所述第一虚拟对象离开第一区域并进入第二区域,执行以下处理:
    若所述第二区域与所述第一区域之间的颜色差异大于颜色差异阈值,将所述第一套装整体替换为与所述第二区域的颜色匹配的第二套装,并在所述第二区域中继续穿着所述第二套装;
    若所述第二区域与所述第一区域之间的颜色差异小于或者等于颜色差异阈值,控制所述第一虚拟对象在所述第二区域中继续穿着所述第一套装。
  34. 一种电子设备,所述电子设备包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至30任一项所述的方法。
  35. 一种计算机可读存储介质,存储有可执行指令,所述可执行指令被处理器执行时实现权利要求1至30任一项所述的方法。
  36. 一种计算机程序产品,包括计算机程序或指令,所述计算机程序或指令被处理器执行时实现权利要求1至30任一项所述的方法。
PCT/CN2023/088657 2022-06-14 2023-04-17 虚拟对象的套装处理方法、装置、电子设备、存储介质及程序产品 WO2023241206A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210671738.5A CN117258283A (zh) 2022-06-14 2022-06-14 虚拟对象的套装处理方法、装置、电子设备及存储介质
CN202210671738.5 2022-06-14

Publications (1)

Publication Number Publication Date
WO2023241206A1 true WO2023241206A1 (zh) 2023-12-21

Family

ID=89192132

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/088657 WO2023241206A1 (zh) 2022-06-14 2023-04-17 虚拟对象的套装处理方法、装置、电子设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN117258283A (zh)
WO (1) WO2023241206A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5423554A (en) * 1993-09-24 1995-06-13 Metamedia Ventures, Inc. Virtual reality game method and apparatus
KR20140108449A (ko) * 2013-02-28 2014-09-11 (주)클로버추얼패션 3차원 의상 착장 방법
CN104981830A (zh) * 2012-11-12 2015-10-14 新加坡科技设计大学 服装搭配系统和方法
CN109087369A (zh) * 2018-06-22 2018-12-25 腾讯科技(深圳)有限公司 虚拟对象显示方法、装置、电子装置及存储介质
CN110681157A (zh) * 2019-10-16 2020-01-14 腾讯科技(深圳)有限公司 控制虚拟对象更换穿戴部件的方法、装置、设备及介质
CN111494939A (zh) * 2020-04-16 2020-08-07 网易(杭州)网络有限公司 一种虚拟角色的穿戴控制方法、装置、计算机设备和介质
CN111672112A (zh) * 2020-06-05 2020-09-18 腾讯科技(深圳)有限公司 虚拟环境的显示方法、装置、设备及存储介质
CN113426107A (zh) * 2021-07-01 2021-09-24 腾讯科技(深圳)有限公司 虚拟装饰物的配置方法和装置、存储介质及电子设备
CN113476849A (zh) * 2021-07-21 2021-10-08 网易(杭州)网络有限公司 游戏中的信息处理方法、装置、设备及存储介质


Also Published As

Publication number Publication date
CN117258283A (zh) 2023-12-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23822777

Country of ref document: EP

Kind code of ref document: A1