CN110860084B - Virtual picture processing method and device - Google Patents

Virtual picture processing method and device

Info

Publication number
CN110860084B
Authority
CN
China
Prior art keywords
picture
virtual
resolution
area
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911115593.5A
Other languages
Chinese (zh)
Other versions
CN110860084A (en)
Inventor
李侃
苏泰梁
马钦
刘文剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd filed Critical Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN201911115593.5A priority Critical patent/CN110860084B/en
Publication of CN110860084A publication Critical patent/CN110860084A/en
Application granted granted Critical
Publication of CN110860084B publication Critical patent/CN110860084B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/31 Communication aspects specific to video games, e.g. between several handheld game devices at close range
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network

Abstract

The specification provides a virtual picture processing method and device, wherein the method comprises the following steps: acquiring the motion speeds of a plurality of virtual models in a virtual scene; dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of the plurality of virtual models, and setting a corresponding resolution for each picture area; generating image data corresponding to each picture area according to the resolution corresponding to that picture area; and transmitting the image data corresponding to each picture area to a client, so that the client synthesizes a complete display picture of the virtual scene.

Description

Virtual picture processing method and device
Technical Field
The present disclosure relates to the field of animation image rendering and transmission technologies, and in particular, to a virtual picture processing method, apparatus, computing device, and computer-readable storage medium.
Background
In existing three-dimensional scene rendering, the entire display picture of a virtual scene is rendered at a single resolution; for game pictures with a high resolution, this means a large amount of data to process. Meanwhile, with the development of computer science and technology, three-dimensional games have become a rising star in the field of electronic games, and cloud gaming is a game mode based on cloud computing: in this running mode, the game runs entirely on the server side, the rendered game pictures are compressed and then transmitted to the user over the network, and the client device needs only basic video decompression capability, without any high-end processor or graphics card, to enjoy high-quality game pictures. However, the transmission of three-dimensional game pictures in the prior art is limited by bandwidth cost and network resources; for three-dimensional games with a large amount of transmitted data, frames are often dropped during transmission, the game picture stutters, and the user's game experience is seriously affected.
Disclosure of Invention
In view of the foregoing, embodiments of the present disclosure provide a virtual frame processing method, apparatus, computing device, and computer readable storage medium, so as to solve the technical drawbacks in the prior art.
According to a first aspect of embodiments of the present disclosure, there is provided a virtual picture processing method, including:
acquiring the motion speeds of a plurality of virtual models in a virtual scene;
dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of a plurality of virtual models, and setting corresponding resolution for each picture area;
generating image data corresponding to each picture area according to the resolution corresponding to each picture area;
and respectively transmitting the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
According to a second aspect of embodiments of the present specification, there is provided an image processing apparatus comprising:
a speed acquisition module configured to acquire movement speeds of a plurality of virtual models in a virtual scene;
a picture dividing module configured to divide a display picture of the virtual scene into a plurality of picture areas according to moving speeds of a plurality of the virtual models, and set a corresponding resolution for each picture area;
the data generation module is configured to generate image data corresponding to each picture area according to the resolution corresponding to each picture area;
and the data transmission module is configured to respectively transmit the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
According to a third aspect of embodiments of the present specification, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the virtual picture processing method when executing the instructions.
According to a fourth aspect of embodiments of the present description, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the virtual picture processing method.
According to the method and the device, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the virtual models, and a corresponding resolution is set for each picture area: picture areas that move at high speed are displayed at a low resolution, and picture areas that move at low speed are displayed at a high resolution, which reduces the amount of image data the system has to process. In the case of a cloud game, the image data can be divided into several parts, transmitted to the client and synthesized there, which greatly saves bandwidth, improves the transmission of the cloud game's video pictures, and reduces picture stuttering, so that the player obtains a good game experience.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
fig. 2 is a flowchart of a virtual picture processing method provided in an embodiment of the present application;
FIG. 3 is another flowchart of a virtual frame processing method according to an embodiment of the present disclosure;
FIG. 4 is another flowchart of a virtual frame processing method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a virtual frame processing method according to an embodiment of the present application;
FIG. 6 is another flowchart of a virtual frame processing method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, this application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, a first may also be referred to as a second, and similarly, a second may also be referred to as a first. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
First, terms related to one or more embodiments of the present invention will be explained.
Virtual scene: generally refers to a three-dimensional virtual scene, such as a three-dimensional game scene, etc., generated by a client device.
Virtual model: generally refers to a three-dimensional model in a three-dimensional virtual scene, such as a virtual object, character, or environment in a three-dimensional game scene.
Resolution: the resolution of the screen display. The screen resolution determines how much information is shown on the screen and is measured in horizontal and vertical pixels, i.e. the number of pixels displayed on the screen. A resolution of 160×128 means 160 pixels in the horizontal direction and 128 pixels in the vertical direction. For the same screen size, the higher the resolution, the finer the display effect.
In the present application, a virtual picture processing method, apparatus, computing device, and computer-readable storage medium are provided, and detailed description is given in the following embodiments.
Fig. 1 shows a block diagram of a computing device 100 according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. Processor 120 is coupled to memory 110 via bus 130 and database 150 is used to store data.
Computing device 100 also includes access device 140, which enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100, as well as other components not shown in FIG. 1, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 1 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flow chart illustrating a virtual picture processing method according to an embodiment of the present application, including steps 202 to 208.
Step 202: the motion speeds of a plurality of virtual models in the virtual scene are obtained.
In the embodiment of the present application, a three-dimensional virtual scene contains a plurality of virtual models that form the virtual scene, and there is often relative motion between these virtual models and the user's virtual lens. This is especially true in virtual scenes of three-dimensional games, in particular first-person three-dimensional games, where the virtual lens observes the whole virtual scene of the game from the player's perspective. During the game, the virtual model controlled by the player moves relative to the other virtual models in the three-dimensional game, including virtual models controlled by other players. For example, in a first-person shooter or racing game, the player controls a specific virtual model (a character or a racing car) from the first-person perspective and walks, runs, shoots, or drives, so that relative motion arises between the virtual lens that follows the player and the plurality of virtual models in the virtual scene. The three-dimensional game may run on a server, and the server may acquire the motion speeds of the virtual models in the virtual scene.
Step 204: dividing a display picture of the virtual scene into a plurality of picture areas according to the moving speeds of the plurality of virtual models, and setting a corresponding resolution for each picture area.
In the embodiment of the present application, when a plurality of virtual models have relative motion with respect to the virtual lens, the server divides the display picture of the virtual scene into at least two picture areas according to the motion speed of each virtual model relative to the virtual lens, and sets a corresponding resolution for each picture area. The setting principle is that a picture area whose relative motion speed is fast is given a low resolution, and a picture area whose relative motion speed is slow is given a high resolution. For example, in the case of a cloud game, the server performs this division and resolution setting when it preprocesses the display picture of the cloud game.
Step 206: and generating image data corresponding to each picture area according to the resolution corresponding to each picture area.
In the embodiment of the present application, the server performs view cone clipping and image rendering for the virtual lens according to the resolution set for each picture area; that is, it draws each picture area at the resolution corresponding to that picture area and generates the image data of each picture area.
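The effect of drawing each picture area at its own resolution can be sketched as follows in Python. Here a full-resolution frame stands in for the renderer's output, and each area is simply downsampled to its assigned resolution to produce that area's image data; the function name, the nearest-neighbour downsampling, and the example values are assumptions made for illustration, not the server's actual rendering path.

```python
import numpy as np

def render_area(full_frame, rect, resolution):
    """Produce the image data of one picture area at its assigned resolution.

    full_frame: (H, W, 3) uint8 array standing in for a full-resolution render
                of the virtual scene's display picture.
    rect:       (left, top, width, height) of the picture area in the full frame.
    resolution: (width, height) assigned to this area; lower for fast-moving areas.
    """
    left, top, w, h = rect
    out_w, out_h = resolution
    area = full_frame[top:top + h, left:left + w]
    # Nearest-neighbour downsample to the assigned resolution, standing in for
    # the server rendering this area at the lower resolution.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return area[rows][:, cols]

# Example: the fast-moving track strip of a 1920x1080 frame rendered at 640x160.
full = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
track_data = render_area(full, (0, 600, 1920, 480), (640, 160))
print(track_data.shape)  # (160, 640, 3)
```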
Step 208: and respectively transmitting the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
In the embodiment of the application, after the server transmits the image data corresponding to each picture area to the client over the network, the client re-synthesizes the image data of the picture areas into a complete display picture of the virtual scene and displays it in real time. The client may be a desktop computer, a set-top box, a mobile terminal, or the like. In the case of a cloud game, the client is responsible only for transmitting and receiving data and for presenting the display picture of the cloud game, while the storage and running of the cloud game are completed on the server. When the game is played, the player operates the client to send data to the server; the server runs the game according to the operation, encodes and compresses the display picture of the cloud game, and returns it to the client over the network; finally, the client decodes the data and outputs the display picture of the cloud game.
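A minimal sketch of the client-side synthesis, assuming each picture area arrives as a decoded image at its reduced resolution together with its rectangle in the full frame; `numpy` and nearest-neighbour upscaling stand in for whatever decoder and scaler the client actually uses, and all names and values here are illustrative assumptions.

```python
import numpy as np

def compose_display_picture(frame_size, area_images):
    """Re-synthesize the complete display picture on the client.

    frame_size:  (width, height) of the full display picture.
    area_images: list of (rect, image) pairs, where rect is
        (left, top, width, height) of the picture area within the full frame
        and image is an (h, w, 3) uint8 array at the area's reduced resolution.
    """
    width, height = frame_size
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    for (left, top, w, h), image in area_images:
        # Nearest-neighbour upscale of the low-resolution area back to its
        # on-screen size, then paste it into the full frame.
        rows = np.arange(h) * image.shape[0] // h
        cols = np.arange(w) * image.shape[1] // w
        frame[top:top + h, left:left + w] = image[rows][:, cols]
    return frame

# Example: a 1920x1080 frame assembled from a fast-moving track strip rendered
# at 640x160 and a slower environment strip rendered at 1600x500.
track = np.zeros((160, 640, 3), dtype=np.uint8)
environment = np.full((500, 1600, 3), 128, dtype=np.uint8)
frame = compose_display_picture(
    (1920, 1080),
    [((0, 600, 1920, 480), track), ((0, 0, 1920, 600), environment)])
print(frame.shape)  # (1080, 1920, 3)
```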
According to the method and the device, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the virtual models, and a corresponding resolution is set for each picture area: picture areas that move at high speed are displayed at a low resolution, and picture areas that move at low speed are displayed at a high resolution, which reduces the amount of image data the system has to process. In the case of a cloud game, the image data can be divided into several parts, transmitted to the client and synthesized there, which greatly saves bandwidth, improves the transmission of the cloud game's video pictures, and reduces picture stuttering, so that the player obtains a good game experience.
In one embodiment of the present application, as shown in fig. 3, the obtaining the motion speeds of the plurality of virtual models in the virtual scene includes steps 302 to 304:
step 302: and acquiring a plurality of virtual models in the virtual scene.
In an embodiment of the present application, the server may acquire a plurality of virtual models in the virtual scene in real time. That is, as the virtual lens moves within the virtual scene, it has relative motion with different virtual models in the scene and observes them within its view cone. For example, in a first-person cloud game, the player character may move to different positions in the three-dimensional game virtual scene corresponding to the cloud game and observe various virtual models along the way, such as environments, non-player characters, and task props; the server may acquire, in real time, the plurality of virtual models observed by the virtual lens in the three-dimensional game virtual scene.
Step 304: and calculating the motion speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.
In the embodiment of the present application, the server may calculate, according to the position change condition of each virtual model in the current frame and the adjacent frame, the movement speed of each virtual model relative to the virtual lens at the current time.
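As a concrete illustration of this calculation, the following Python sketch estimates a model's speed relative to the virtual lens from its positions in two adjacent frames; the function, its arguments, and the fixed frame interval are assumptions made here for illustration, not details specified by the patent.

```python
import math

def model_speed(prev_pos, curr_pos, frame_interval):
    """Approximate a virtual model's speed (units per second) relative to the
    virtual lens from its positions in two adjacent frames.

    prev_pos / curr_pos: (x, y, z) positions expressed in the virtual lens
        (camera) coordinate system, so the result is a relative speed.
    frame_interval: time between the adjacent frames, in seconds.
    """
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    dz = curr_pos[2] - prev_pos[2]
    displacement = math.sqrt(dx * dx + dy * dy + dz * dz)
    return displacement / frame_interval

# Example: a model that moved 1.5 units along x between two frames at 60 fps.
speed = model_speed((0.0, 0.0, 10.0), (1.5, 0.0, 10.0), 1.0 / 60.0)
print(round(speed, 1))  # 90.0 units per second
```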
In another embodiment of the present application, as shown in fig. 4, the dividing the display screen of the virtual scene into a plurality of screen areas according to the motion speeds of a plurality of virtual models, and setting a corresponding resolution for each screen area includes steps 402 to 406:
step 402: and comparing the motion speed of each virtual model with a plurality of preset speed thresholds, and dividing the virtual models positioned in the same speed threshold interval.
In the embodiment of the application, a plurality of speed thresholds are preset in the server, and each pair of adjacent speed thresholds forms a speed threshold interval. The server compares the motion speed of each virtual model relative to the virtual lens with these thresholds and puts the virtual models that fall within the same speed threshold interval into the same group. For example, in the first-person racing game shown in fig. 5, speed thresholds of 20 m/s, 40 m/s, 60 m/s and 80 m/s are preset in the server. The virtual lens always observes the scene from the first-person perspective of the racing car model, and while the racing car is running, the track model and the environment models on both sides of the track move at different speeds relative to it; the server therefore groups the track model and the environment models into the same or different categories according to their motion speeds relative to the racing car during the game.
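A minimal sketch of this grouping step, assuming the relative speeds have already been computed as above; the thresholds follow the racing-game example (20, 40, 60 and 80 m/s), and the function name, data structures, and use of `bisect` are assumptions for illustration.

```python
from bisect import bisect_right
from collections import defaultdict

SPEED_THRESHOLDS = [20.0, 40.0, 60.0, 80.0]  # m/s, as in the fig. 5 example

def group_models_by_speed(model_speeds, thresholds=SPEED_THRESHOLDS):
    """Group virtual models whose relative speeds fall into the same
    speed threshold interval.

    model_speeds: dict mapping model id -> relative speed (m/s).
    Returns: dict mapping interval index -> list of model ids, where
    interval 0 is below the first threshold and interval len(thresholds)
    is above the last one.
    """
    groups = defaultdict(list)
    for model_id, speed in model_speeds.items():
        interval = bisect_right(thresholds, speed)
        groups[interval].append(model_id)
    return dict(groups)

# Example: the player's car, a trackside barrier, and distant hills.
print(group_models_by_speed({"car": 5.0, "barrier": 70.0, "hills": 25.0}))
# {0: ['car'], 3: ['barrier'], 1: ['hills']}
```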
Step 404: and dividing the display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model positioned in the same speed threshold interval.
In the embodiment of the present application, in the first-person racing game shown in fig. 5, when the track model and the environment models move at different speeds relative to the virtual lens that follows the racing car, the server divides the display picture of the virtual scene into three areas, namely an area A1, an area A2 and an area B, according to the portions of the display picture occupied by these models, where the area A1 and the area A2 both correspond to environment models.
Step 406: and setting corresponding resolution for each picture area according to a preset corresponding relation rule.
In the embodiment of the present application, the server sets a corresponding resolution for each picture area according to a preset correspondence rule, where the correspondence rule includes the correspondence between each speed threshold interval and a resolution. For example, in the first-person racing game shown in fig. 5, the resolution corresponding to the speed threshold interval between 20 m/s and 40 m/s may be 1600×1200, the resolution corresponding to the interval between 40 m/s and 60 m/s may be 1200×800, and the resolution corresponding to the interval between 60 m/s and 80 m/s may be 640×480. The correspondence rule follows the setting principle that a picture area with a fast relative motion speed corresponds to a low resolution and a picture area with a slow relative motion speed corresponds to a high resolution.
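The correspondence rule itself can be expressed as a simple lookup table. The sketch below uses the example values given above and the interval index produced by the grouping sketch; the entries for speeds below 20 m/s and above 80 m/s are assumptions added only to make the table complete.

```python
# Correspondence rule: speed threshold interval index -> resolution (width, height).
# Indices follow the grouping sketch above: 0 = below 20 m/s, 1 = 20-40 m/s,
# 2 = 40-60 m/s, 3 = 60-80 m/s, 4 = above 80 m/s.
RESOLUTION_RULE = {
    0: (1920, 1080),  # assumed value for nearly static areas
    1: (1600, 1200),  # 20-40 m/s, as in the fig. 5 example
    2: (1200, 800),   # 40-60 m/s
    3: (640, 480),    # 60-80 m/s
    4: (640, 480),    # assumed value above the last threshold
}

def resolution_for_interval(interval):
    """Return the resolution for a picture area whose models fall in `interval`,
    following the rule that faster motion maps to lower resolution."""
    return RESOLUTION_RULE[interval]

print(resolution_for_interval(3))  # (640, 480)
```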
In the above embodiment, the dividing the display screen of the virtual scene into a plurality of screen areas includes:
dividing the display picture of the virtual scene into a plurality of rectangular picture areas.
According to the method and the device, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the virtual models, and a corresponding resolution is set for each picture area. In the case of a cloud game, the pictures of the cloud game do not need to be rendered at a uniform resolution: areas in which the virtual models move quickly are rendered at a low resolution, which saves image data traffic and reduces stuttering of the cloud game pictures.
In another embodiment of the present application, as shown in fig. 6, after setting a corresponding resolution for each picture area, steps 602 to 604 are further included:
step 602: and setting a junction area at the junction of two adjacent picture areas.
Step 604: the resolution of the interface area is set to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.
In the embodiment of the present application, the server may set a junction area at the boundary between two adjacent picture areas, and the resolution within the junction area changes continuously, so that the resolution difference between the two picture areas is joined seamlessly and the resolution change between adjacent picture areas is not visually obvious. For example, in the first-person racing game shown in fig. 5, if the resolution corresponding to the track model is 640×480 and the resolution corresponding to the environment model is 1200×800, a junction area is set between the picture area corresponding to the track model and the picture area corresponding to the environment model, and the resolution of the junction area gradually changes from 640×480 to 1200×800.
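One way to realize such a gradual change is to split the junction area into a few sub-strips whose resolutions are interpolated between the two adjacent areas; the strip count and the linear interpolation in the following sketch are illustrative choices, not details specified by the patent.

```python
def junction_strip_resolutions(res_a, res_b, strips=4):
    """Grade the resolution across a junction area between two adjacent
    picture areas, from res_a on one side to res_b on the other.

    res_a, res_b: (width, height) resolutions of the adjacent picture areas.
    strips: number of sub-strips the junction area is divided into.
    Returns a list of per-strip resolutions that change gradually.
    """
    resolutions = []
    for i in range(1, strips + 1):
        t = i / (strips + 1)  # interpolation factor strictly between 0 and 1
        w = round(res_a[0] + (res_b[0] - res_a[0]) * t)
        h = round(res_a[1] + (res_b[1] - res_a[1]) * t)
        resolutions.append((w, h))
    return resolutions

# Example from fig. 5: grading from the 640x480 track area to the 1200x800
# environment area.
print(junction_strip_resolutions((640, 480), (1200, 800)))
# [(752, 544), (864, 608), (976, 672), (1088, 736)]
```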
According to the method and the device, a junction area used for transition is arranged between picture areas with different resolutions, so that the junctions between the picture areas appear visually smooth.
Corresponding to the above method embodiments, the present disclosure further provides an image processing apparatus embodiment, and fig. 7 shows a schematic structural diagram of the image processing apparatus according to one embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
a speed acquisition module 701 configured to acquire motion speeds of a plurality of virtual models in a virtual scene;
a screen dividing module 702 configured to divide a display screen of the virtual scene into a plurality of screen areas according to the moving speeds of a plurality of the virtual models, and set a corresponding resolution for each screen area;
a data generating module 703 configured to generate image data corresponding to each of the picture areas according to the resolution corresponding to each of the picture areas;
and the data transmission module 704 is configured to transmit the image data corresponding to each picture area to the client respectively, so that the client synthesizes the complete display picture of the virtual scene.
Optionally, the speed acquisition module 701 includes:
a model acquisition unit configured to acquire a plurality of the virtual models in the virtual scene;
and a speed calculation unit configured to calculate a movement speed of each virtual model according to a change in a position of each virtual model in a current frame and an adjacent frame.
Optionally, the picture segmentation module 702 includes:
a model dividing unit configured to compare a movement speed of each virtual model with a plurality of preset speed thresholds and divide the virtual models located in the same speed threshold section;
a picture dividing unit configured to divide a display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model located within the same speed threshold section;
and a resolution setting unit configured to set a corresponding resolution for each picture area according to a preset correspondence of speed and resolution.
Optionally, the picture dividing unit is further configured to:
dividing the display picture of the virtual scene into a plurality of rectangular picture areas.
Optionally, the apparatus further includes:
a junction area configuration module 705 configured to set a junction area at the junction of two adjacent picture areas;
a junction area setting module 706 configured to set the resolution of the junction area to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.
Optionally, the data generating module 703 includes:
and an image drawing unit configured to draw each picture region according to a resolution corresponding to each picture region, and generate image data of each picture region.
According to the method and the device, the display picture of the virtual scene is divided into a plurality of picture areas according to the motion speeds of the virtual models, and a corresponding resolution is set for each picture area: picture areas that move at high speed are displayed at a low resolution, and picture areas that move at low speed are displayed at a high resolution, which reduces the amount of image data the system has to process. In the case of a cloud game, the image data can be divided into several parts, transmitted to the client and synthesized there, which greatly saves bandwidth, improves the transmission of the cloud game's video pictures, and reduces picture stuttering, so that the player obtains a good game experience.
An embodiment of the present application also provides a computing device including a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the following steps when executing the instructions:
acquiring the motion speeds of a plurality of virtual models in a virtual scene;
dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of a plurality of virtual models, and setting corresponding resolution for each picture area;
generating image data corresponding to each picture area according to the resolution corresponding to each picture area;
and respectively transmitting the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the virtual picture processing method as described above.
The above is an exemplary solution of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the computer-readable storage medium and the technical solution of the virtual picture processing method belong to the same concept; for details not described in the technical solution of the computer-readable storage medium, reference may be made to the description of the technical solution of the virtual picture processing method.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all necessary for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (14)

1. A virtual picture processing method, comprising:
obtaining the motion speeds of a plurality of virtual models in a virtual scene, wherein the motion speeds are the relative motion speeds between a virtual lens taking a player character as a first-person perspective and the plurality of virtual models in the virtual scene;
dividing a display picture of the virtual scene into a plurality of picture areas according to the motion speeds of a plurality of virtual models, and setting corresponding resolution for each picture area;
generating image data corresponding to each picture area according to the resolution corresponding to each picture area;
and respectively transmitting the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
2. The method of claim 1, wherein the obtaining the motion speed of the plurality of virtual models in the virtual scene comprises:
acquiring a plurality of virtual models in the virtual scene;
and calculating the motion speed of each virtual model according to the position change of each virtual model in the current frame and the adjacent frames.
3. The method of claim 1, wherein dividing the display screen of the virtual scene into a plurality of screen regions according to the motion speeds of the plurality of virtual models, and setting a corresponding resolution for each screen region comprises:
comparing the motion speed of each virtual model with a plurality of preset speed thresholds, and dividing the virtual models in the same speed threshold interval;
dividing the display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model in the same speed threshold interval;
and setting corresponding resolution for each picture area according to a preset corresponding relation rule.
4. The method of claim 3, wherein the dividing the virtual scene picture into a plurality of picture regions comprises:
dividing the display picture of the virtual scene into a plurality of rectangular picture areas.
5. The method of claim 1, further comprising, after setting the corresponding resolution for each picture region:
setting a junction area at the junction of two adjacent picture areas;
the resolution of the junction area is set to gradually change from the resolution of one of the two adjacent picture areas to the resolution of the other picture area.
6. The method of claim 1, wherein generating image data corresponding to each of the picture regions comprises:
and drawing each picture area according to the resolution corresponding to each picture area, and generating the image data of each picture area.
7. A virtual picture processing apparatus, comprising:
a speed acquisition module configured to acquire movement speeds of a plurality of virtual models in a virtual scene, wherein the movement speeds are the relative movement speeds between a virtual lens taking a player character as a first-person perspective and the plurality of virtual models in the virtual scene;
a picture dividing module configured to divide a display picture of the virtual scene into a plurality of picture areas according to moving speeds of a plurality of the virtual models, and set a corresponding resolution for each picture area;
the data generation module is configured to generate image data corresponding to each picture area according to the resolution corresponding to each picture area;
and the data transmission module is configured to respectively transmit the image data corresponding to each picture area to the client so as to enable the client to synthesize a complete display picture of the virtual scene.
8. The apparatus of claim 7, wherein the speed acquisition module comprises:
a model acquisition unit configured to acquire a plurality of the virtual models in the virtual scene;
and a speed calculation unit configured to calculate a movement speed of each virtual model according to a change in a position of each virtual model in a current frame and an adjacent frame.
9. The apparatus of claim 7, wherein the picture segmentation module comprises:
a model dividing unit configured to compare a movement speed of each virtual model with a plurality of preset speed thresholds and divide the virtual models located in the same speed threshold section;
a picture dividing unit configured to divide a display picture of the virtual scene into a plurality of picture areas according to the display picture of the virtual scene corresponding to the virtual model located within the same speed threshold section;
and a resolution setting unit configured to set a corresponding resolution for each picture area according to a preset correspondence of speed and resolution.
10. The apparatus of claim 9, wherein the picture division unit is further configured to:
dividing the display picture of the virtual scene into a plurality of rectangular picture areas.
11. The apparatus as recited in claim 7, further comprising:
the junction area configuration module is configured to set a junction area at the junction of two adjacent picture areas;
and the junction area setting module is configured to set the resolution of the junction area to gradually change from the resolution of one picture area to the resolution of the other picture area in the two adjacent picture areas.
12. The apparatus of claim 7, wherein the data generation module comprises:
and an image drawing unit configured to draw each picture region according to a resolution corresponding to each picture region, and generate image data of each picture region.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the method of any of claims 1-6.
14. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
CN201911115593.5A 2019-11-14 2019-11-14 Virtual picture processing method and device Active CN110860084B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911115593.5A CN110860084B (en) 2019-11-14 2019-11-14 Virtual picture processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911115593.5A CN110860084B (en) 2019-11-14 2019-11-14 Virtual picture processing method and device

Publications (2)

Publication Number Publication Date
CN110860084A CN110860084A (en) 2020-03-06
CN110860084B true CN110860084B (en) 2024-02-23

Family

ID=69654206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911115593.5A Active CN110860084B (en) 2019-11-14 2019-11-14 Virtual picture processing method and device

Country Status (1)

Country Link
CN (1) CN110860084B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445560B (en) * 2020-03-18 2023-09-26 腾讯科技(深圳)有限公司 Animation playing method and device, storage medium and electronic device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001244818A (en) * 2000-02-25 2001-09-07 Seiko Epson Corp Device and method for reproducing moving picture and information recording medium
US6389072B1 (en) * 1998-12-23 2002-05-14 U.S. Philips Corp. Motion analysis based buffer regulation scheme
JP2009027535A (en) * 2007-07-20 2009-02-05 Sanyo Electric Co Ltd Image processor and imaging apparatus using same
CN101755460A (en) * 2007-07-20 2010-06-23 富士胶片株式会社 Image processing apparatus, image processing method, image processing system and program
CN102158636A (en) * 2010-09-30 2011-08-17 四川虹欧显示器件有限公司 Image processing method and device
JP2016066149A (en) * 2014-09-24 2016-04-28 株式会社コロプラ Display system, display method and program
CN107277474A (en) * 2017-06-26 2017-10-20 深圳看到科技有限公司 Panorama generation method and generating means
CN110166758A (en) * 2019-06-24 2019-08-23 京东方科技集团股份有限公司 Image processing method, device, terminal device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6106967B2 (en) * 2012-06-04 2017-04-05 ソニー株式会社 Information processing apparatus, information processing method, program, and image display apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6389072B1 (en) * 1998-12-23 2002-05-14 U.S. Philips Corp. Motion analysis based buffer regulation scheme
JP2001244818A (en) * 2000-02-25 2001-09-07 Seiko Epson Corp Device and method for reproducing moving picture and information recording medium
JP2009027535A (en) * 2007-07-20 2009-02-05 Sanyo Electric Co Ltd Image processor and imaging apparatus using same
CN101755460A (en) * 2007-07-20 2010-06-23 富士胶片株式会社 Image processing apparatus, image processing method, image processing system and program
CN102158636A (en) * 2010-09-30 2011-08-17 四川虹欧显示器件有限公司 Image processing method and device
JP2016066149A (en) * 2014-09-24 2016-04-28 株式会社コロプラ Display system, display method and program
CN107277474A (en) * 2017-06-26 2017-10-20 深圳看到科技有限公司 Panorama generation method and generating means
CN110166758A (en) * 2019-06-24 2019-08-23 京东方科技集团股份有限公司 Image processing method, device, terminal device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Real-time diversified rendering of large-scale virtual humans; Yan Guangwei; Hu Haitao; Zhao Jiantao; Journal of North China Electric Power University (Natural Science Edition), Issue 06; full text *

Also Published As

Publication number Publication date
CN110860084A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
US9589357B2 (en) Avatar-based video encoding
CN105791977B (en) Virtual reality data processing method, equipment and system based on cloud service
CN113287118A (en) System and method for face reproduction
CN109756511B (en) Data processing method and device, computing equipment and storage medium
EP3573026B1 (en) Information processing apparatus, information processing method, and program
CN111614993B (en) Barrage display method and device, computer equipment and storage medium
KR102491140B1 (en) Method and apparatus for generating virtual avatar
WO2021098338A1 (en) Model training method, media information synthesizing method, and related apparatus
WO2023045649A1 (en) Video frame playing method and apparatus, and device, storage medium and program product
EP3683656A1 (en) Virtual reality (vr) interface generation method and apparatus
JP7425196B2 (en) hybrid streaming
WO2023151525A1 (en) Method and apparatus for generating special-effect video, and electronic device and storage medium
CN112884908A (en) Augmented reality-based display method, device, storage medium, and program product
US20240095981A1 (en) Video generation method and apparatus, device and readable storage medium
CN105389090A (en) Game interaction interface displaying method and apparatus, mobile terminal and computer terminal
CN114253647A (en) Element display method and device, electronic equipment and storage medium
CN110860084B (en) Virtual picture processing method and device
US10839587B2 (en) Image processing methods and devices for moving a target object by using a target ripple
CN112604279A (en) Special effect display method and device
US20230347240A1 (en) Display method and apparatus of scene picture, terminal, and storage medium
JP7427786B2 (en) Display methods, devices, storage media and program products based on augmented reality
CN114219888A (en) Method and device for generating dynamic silhouette effect of three-dimensional character and storage medium
KR101741149B1 (en) Method and device for controlling a virtual camera's orientation
US20170176934A1 (en) Image playing method and electronic device for virtual reality device
CN111617470B (en) Interface special effect rendering method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.

GR01 Patent grant