CN116188736A - Degradation display method, device and equipment for 3D virtual scene - Google Patents

Degradation display method, device and equipment for 3D virtual scene

Info

Publication number: CN116188736A
Application number: CN202310098483.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 尹洁
Current assignee: Alipay Hangzhou Information Technology Co Ltd
Original assignee: Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202310098483.2A
Publication of CN116188736A
Prior art keywords: image, transparency, images, determining, window
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present specification disclose a degradation display method, apparatus and device for a 3D virtual scene. The scheme includes the following steps: determining the running environment of the device, and judging whether the running environment meets a preset degradation condition; if so, determining the 3D elements contained in the virtual scene that is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels; determining transparency dimension characteristics corresponding to the 2D images according to the transparency channels, and performing image compression on the 2D images according to the transparency dimension characteristics; and integrating and displaying the image-compressed 2D images according to the element attributes corresponding to the 3D elements, so as to display the virtual scene in a degraded 2D mode.

Description

Degradation display method, device and equipment for 3D virtual scene
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a degradation display method, device and equipment for a 3D virtual scene.
Background
With the development of computer and internet technologies, 3D technology is maturing, and many interactive virtual scenes (such as those in online e-commerce platforms, public welfare applications and game apps) can be displayed to users on the client in 3D.
In practice, 3D scenes place relatively high performance requirements on the device, while the processing capability of user-side terminal devices varies widely. Some terminal devices are therefore prone to abnormalities when displaying a 3D virtual scene, such as stuttering and device heating during display; in extreme cases the system may crash, or the device's performance may simply be too low to support the 3D scene at all.
A virtual scene display scheme with better compatibility on the user side is therefore needed.
Disclosure of Invention
One or more embodiments of the present disclosure provide a degradation display method, apparatus, device and storage medium for a 3D virtual scene, so as to solve the following technical problem: a virtual scene display scheme with better compatibility is needed on the user side.
To solve the above technical problems, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present disclosure provide a degradation display method for a 3D virtual scene, including:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
One or more embodiments of the present disclosure provide a degradation display device for a 3D virtual scene, including:
the degradation judging module is used for determining the running environment of the equipment and judging whether the running environment meets preset degradation conditions or not;
the 2D image acquisition module is used for, if the degradation condition is met, determining the 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
the image compression module is used for determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel and carrying out image compression on the 2D image according to the transparency dimension characteristics;
and the 2D mode display module is used for integrally displaying the 2D images after the image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene through the degraded 2D mode.
One or more embodiments of the present disclosure provide a degradation display device for a 3D virtual scene, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
One or more embodiments of the present specification provide a non-volatile computer storage medium storing computer-executable instructions configured to:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
The above-mentioned at least one technical solution adopted by one or more embodiments of the present disclosure can achieve the following beneficial effects:
according to the 2D image obtained by the 3D element, the output of visual resource materials can be realized without additional research and development departments, and the realization cost is effectively controlled. And only the expression form of the 3D element is converted into a 2D mode, the visual experience and the functional change of the 3D version are small, the display of the virtual scene by the low-performance equipment can be supported, and the user experience can be ensured.
The 2D images with different transparency dimension characteristics are subjected to a corresponding image compression mode, so that information carried by the 2D images can be further reduced, and the method is more friendly to low-performance equipment.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present specification, and that other drawings can be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flow diagram of a degradation display method of a 3D virtual scene according to one or more embodiments of the present disclosure;
fig. 2 is a schematic diagram of a degradation display method of a 3D virtual scene in an application scene according to one or more embodiments of the present disclosure;
FIG. 3a is a schematic diagram of an ineffective transparency portion in an application scenario before clipping, provided in one or more embodiments of the present disclosure;
FIG. 3b is a schematic diagram of a partially cropped ineffective transparency in an application scenario according to one or more embodiments of the present disclosure;
FIG. 4 is a schematic diagram of 2D image acquisition in an application scenario provided in one or more embodiments of the present disclosure;
fig. 5 is a schematic structural diagram of a degradation display device for a 3D virtual scene according to one or more embodiments of the present disclosure;
Fig. 6 is a schematic structural diagram of a degradation display device for a 3D virtual scene according to one or more embodiments of the present disclosure.
Detailed Description
The embodiment of the specification provides a degradation display method, device and equipment of a 3D virtual scene and a storage medium.
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
To address the above problem, two relatively conventional solutions are first described here.
Mode one: reduce the image quality of the 3D virtual scene to lower the application's requirements on the client device. For example, custom options such as texture quality, special effects and frame rate are provided in the application, together with effect levels, and the user performs custom settings based on these options.
However, this approach can only partially relieve the performance pressure; it does not fundamentally solve the problem that 3D scenes set a high threshold on device performance. Users whose devices do not meet the minimum performance requirement still find the application difficult to use, and after the image quality is reduced the user experience is often poor.
Mode two: generate a partially or completely independent 2D version of the application, which differs significantly from the 3D version in product visual presentation and even in functionality.
However, in this mode everything from the production of visual resource material through product development and even part of product function operation must be completed by developers dedicated to the 2D version, and later requirement iteration proceeds along two lines, so implementation and maintenance costs are high. Differences between applications are large, so the technique has low generality and reusability. Moreover, the considerable gap between the 3D and 2D versions in visual experience and functionality easily affects the experience of some users.
Based on this, fig. 1 is a schematic flow diagram of a degradation display method for a 3D virtual scene provided in one or more embodiments of the present disclosure. The method can be applied in different business fields, such as e-commerce, public welfare applications, internet finance, instant messaging, games and public affairs. The flow may be executed by a computing device in the relevant field (for example, a public welfare application server or a smart mobile terminal), and some input parameters or intermediate results in the flow allow manual intervention and adjustment to help improve accuracy.
The flow in fig. 1 may include the steps of:
s102: and determining the running environment of the equipment, and judging whether the running environment meets preset degradation conditions or not.
The device here mainly refers to user-side terminal devices, which may include non-mobile terminal devices such as personal computers (PCs) and mobile terminal devices such as smartphones, tablet computers and Internet of Things devices. In general, applications on mobile terminal devices are more widespread and their processing performance is relatively lower, so mobile terminal devices are mainly used as the example below.
The running environment includes a software environment and a hardware environment. The software environment includes the operating system, supported protocols, etc. of the device; the hardware environment includes the running memory, CPU processing capability, device model, etc.
After the user starts the corresponding application on the device (such as an online public welfare application or an e-commerce platform), the application obtains the required running environment information through a preset interface and compares it with the preset degradation condition to determine whether the condition is met. The degradation condition is mainly used to judge whether the device's performance is insufficient to support display of the 3D virtual scene; whether the device supports the degraded display also needs to be judged at the same time.
Specifically, for a mobile device, the software environment checks may include: whether the device's operating system is below a preset version (for example, Android ≤ 6 or iOS ≤ 10) and whether a preset protocol is supported (for example, whether WebGL is supported). The hardware environment checks may include: whether the running memory is below a preset size (for example, less than 4 GB) and whether the number of CPU cores is below a preset count (for example, fewer than 4). A universal rendering class library is thus realized: at the business logic level, the 3D or 2D scene is selected and displayed according to differences in device performance, without needing multiple sets of program code.
When the degradation condition contains multiple sub-conditions, it can be specified how a device is judged to meet it: for example, when any given number of the sub-conditions is met, or only when all of them are met. Alternatively, weights may be assigned to the sub-conditions and the total score computed to judge whether the device meets the degradation condition.
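By way of illustration only, the following TypeScript sketch shows what a weighted degradation judgment of this kind might look like in a browser client. The thresholds, weights, the use of navigator.deviceMemory and navigator.hardwareConcurrency, and the detectWebGL helper are all assumptions for the sketch, not part of the claimed scheme.

```typescript
// Minimal sketch of a weighted degradation check (all thresholds hypothetical).
interface DegradationRule {
  weight: number;
  met: () => boolean;
}

function detectWebGL(): boolean {
  // WebGL support probe via a throwaway canvas.
  try {
    const canvas = document.createElement("canvas");
    return !!(canvas.getContext("webgl") || canvas.getContext("experimental-webgl"));
  } catch {
    return false;
  }
}

const rules: DegradationRule[] = [
  // Hardware: deviceMemory / hardwareConcurrency are only hints and may be undefined.
  { weight: 0.4, met: () => ((navigator as any).deviceMemory ?? 8) < 4 },
  { weight: 0.3, met: () => (navigator.hardwareConcurrency ?? 8) < 4 },
  // Software: no WebGL support means 3D rendering is unavailable entirely.
  { weight: 0.3, met: () => !detectWebGL() },
];

function meetsDegradationCondition(threshold = 0.5): boolean {
  const score = rules.reduce((s, r) => s + (r.met() ? r.weight : 0), 0);
  return score >= threshold;
}
```

A simpler "any sub-condition met" or "all sub-conditions met" policy falls out of the same structure by setting the threshold to the smallest single weight or to the sum of all weights.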
In addition, for some applications that contain a display interface for the 3D virtual scene, the corresponding degradation judgment can be performed when the user's operation triggers that display interface; alternatively, after the user starts the application, the degradation judgment can be triggered in advance during an idle period of the user's use of the application.
Fig. 2 is a schematic diagram of a degradation display method of a 3D virtual scene in an application scenario according to one or more embodiments of the present disclosure. After the terminal device judges whether 3D can be supported: if so, the 3D virtual scene is displayed in 3D mode according to a 3D model file (for example, a gltf file); if not, 2D images are obtained and displayed in 2D mode through the corresponding rendering library (rendering package).
S104: if the condition is met, determining the 3D elements contained in the virtual scene that is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels.
The virtual scene mentioned here is at least partially composed of 3D elements. The 3D elements it contains are usually fixed and preset; for some virtual scenes, however, the displayed content may vary based on user information, real-time trending searches, etc. The total set of 3D elements is still fixed, and only different 3D elements are shown in different states. For example, in a marine public welfare application, different kinds of marine organisms are shown in the virtual scene based on the user's level and the cards the user has obtained, to stimulate the user's interest in marine public welfare. In an online e-commerce platform, 3D models of the relevant goods are displayed to the user based on the user's historical search records and the current real-time trending searches.
To improve the user-side experience, the 2D images corresponding to the 3D elements can be obtained and stored in advance and a corresponding mapping relationship established; when needed, the 2D images are simply retrieved.
Specifically, each 3D element corresponds to a 3D model file, which may be a file in gltf, glb or a similar format. After the 3D model file is imported, it is parsed and rendered, yielding the element attributes corresponding to the 3D element. The element attributes describe the position, dynamic effects, etc. of the 3D element: for example, the position attribute may include a coordinate position, and the dynamic effect attribute may include size, anchor point, rotation angle, zoom, illumination, movement effect, screen window size, etc. The coordinate position may refer to the display position relative to the canvas (the canvas refers to all presentation content within the virtual scene; in 2D mode it represents the maximum range of the virtual scene), or to the display position relative to a parent container.
of course, for some element attributes that are difficult to automatically identify, adjustment and addition may also be performed manually, such as element ID, auxiliary information, etc.
When the virtual scene is large, or to improve user experience and interaction, the entire content of the virtual scene need not be displayed to the user directly; instead, only part of it is shown in the window, and the display range of the window changes as the user drags or zooms the canvas, revealing other content of the virtual scene.
According to the element attributes of a 3D element, the corresponding display image in the window can be obtained, and the 2D image corresponding to the 3D element in the window is obtained through computer-graphics 3D-to-2D projection. The 2D image is in a file format carrying a transparency (alpha) channel, for example a png file.
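A minimal sketch of capturing such an alpha-carrying 2D image in a browser follows. The renderElementToCanvas helper is hypothetical and stands in for whatever 3D-to-2D projection step the rendering library performs; the scheme does not prescribe a specific library.

```typescript
// Sketch: render one 3D element offscreen and export a PNG with an alpha channel.
// renderElementToCanvas is a hypothetical helper standing in for the scheme's
// 3D-to-2D projection step (e.g. a render of a single glTF node).
declare function renderElementToCanvas(elementId: string, canvas: HTMLCanvasElement): void;

function captureElementAsPng(elementId: string, width: number, height: number): string {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  renderElementToCanvas(elementId, canvas);
  // PNG keeps the transparency (alpha) channel, as the scheme requires.
  return canvas.toDataURL("image/png");
}
```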
S106: determining transparency dimension characteristics corresponding to the 2D images according to the transparency channels, and performing image compression on the 2D images according to the transparency dimension characteristics.
For different 2D images, the transparency channel reflects the characteristics of the image in the transparency dimension, referred to here as transparency dimension characteristics. The transparency dimension characteristics may include: transparency values, invalid transparency portions, valid pixel portions, etc. The transparency value reflects the degree of pixel transparency and may range from 0 to 100, where 0 is opaque and 100 is fully transparent. An invalid transparency portion is a region in which the transparency value of every pixel is 100, while a valid pixel portion is a region whose pixels have transparency values other than 100.
In a conventional image compression process, if all images are compressed in the same way, either it is difficult to guarantee image quality and user experience, or the compression strength is insufficient and the compressed images still carry much useless information.
Here, images with different transparency dimension characteristics are compressed in correspondingly different ways, so that during compression the image quality and user experience are preserved while as much useless information as possible is screened out, reducing the space the images occupy.
Specifically, it is first determined from the transparency channel of the 2D image whether an invalid transparency portion exists at its edge. The invalid transparency portion contains no valid pixels of the 3D element, and removing it does not affect the content the image expresses. The range of the valid pixel portion can be obtained from the coordinate position, size and other information in the 3D element's element attributes, and the range of the invalid transparency portion then follows from the size of the 2D image. Alternatively, the invalid transparency portion can be obtained through edge analysis.
For a 2D image with an invalid transparency portion, that portion is directly cropped away and only the valid pixel portion of the 2D image is retained; image compression is thus achieved by removing the useless invalid transparency portion. Fig. 3a and fig. 3b are schematic diagrams of an invalid transparency portion before and after cropping in an application scenario, respectively, provided in one or more embodiments of the present disclosure. They show a 2D rabbit-shaped picture obtained by projecting a rabbit-shaped 3D element. The area around the rabbit shape still belongs to the 2D image but contains no substantive content; this is the invalid transparency portion. Although it carries no substantive content, it still has a transparency channel and the like, so it still occupies a certain amount of space, and it is therefore cropped directly. When determining the invalid transparency portion and the valid pixel portion, a rectangular bounding box may be used, as shown in fig. 3a and 3b; of course, a circular frame, per-edge-pixel analysis and the like may also be used.
For a 2D image with no invalid transparency portion, there is nothing that can be cropped away around the valid pixel portion, so image compression cannot be performed that way. In this case the 2D image is directly converted to an image format that does not carry a transparency channel, such as the jpg format. Each pixel of an image without a transparency channel uses only 3 × 8 bits; compared with the 4 × 8 bits needed with a transparency channel, this stores one quarter less data.
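As an illustration, the following TypeScript sketch implements both branches with plain canvas APIs. Note the scale difference: canvas ImageData stores opacity (alpha 0 means fully transparent, i.e. a transparency value of 100 in the terms above). The bounding-box scan and the JPEG quality factor are assumptions, not prescribed by the scheme.

```typescript
// Sketch: crop away fully transparent borders, or drop the alpha channel
// by re-encoding as JPEG when there is nothing to crop.
function compressByTransparency(src: HTMLCanvasElement): string {
  const ctx = src.getContext("2d")!;
  const { data, width, height } = ctx.getImageData(0, 0, src.width, src.height);
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (data[(y * width + x) * 4 + 3] > 0) { // alpha byte; 0 = fully transparent
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  const hasInvalidBorder = minX > 0 || minY > 0 || maxX < width - 1 || maxY < height - 1;
  const out = document.createElement("canvas");
  if (maxX >= 0 && hasInvalidBorder) {
    // Valid pixel portion only: keep PNG so inner transparency survives.
    out.width = maxX - minX + 1;
    out.height = maxY - minY + 1;
    out.getContext("2d")!.drawImage(src, minX, minY, out.width, out.height, 0, 0, out.width, out.height);
    return out.toDataURL("image/png");
  }
  // No invalid transparency portion: 24-bit JPEG drops the alpha channel.
  out.width = width;
  out.height = height;
  out.getContext("2d")!.drawImage(src, 0, 0);
  return out.toDataURL("image/jpeg", 0.9);
}
```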
Of course, in addition to the image compression approaches already described in the above embodiments, the 2D image may be further compressed with data compression algorithms for the picture format, for example lossless compression through Huffman coding or run-length coding, or lossy compression through DCT transform, wavelet coding, etc., which is not elaborated here.
S108: integrating and displaying the image-compressed 2D images according to the element attributes corresponding to the 3D elements, so as to display the virtual scene in a degraded 2D mode.
The element attributes mainly include two parts: dynamic effect attributes and position attributes. For static 3D elements, or 3D elements with relatively simple dynamic effects (for example, when the dynamic effect attribute is simple translation within the window), the 2D image can be integrated into the canvas according to the position attribute and presented to the user.
However, a 3D element with a complex dynamic effect attribute often deforms as it moves in the window, for example a shell opening and closing in the ocean, or a cartoon character that guides the user while performing anthropomorphic motions. In such cases it is difficult to integrate it into the canvas using the position attribute alone.
Based on this, for these 3D elements, 2D projections are acquired while the 3D element executes, in the window, the dynamic effect corresponding to its dynamic effect attribute; the acquisition frequency can be set based on user requirements or the actual situation. Multiple 2D images are thus acquired and spliced in time order to obtain a corresponding 2D dynamic image. The 2D dynamic image can express complex dynamic effects in 2D image form, preserving the user experience.
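A sketch of the time-ordered capture and playback follows. Collecting PNG data URLs and cycling them on a timer is just one assumed way of splicing the frames into a 2D dynamic image; a sprite sheet or an APNG/GIF encoder would serve equally well, and the scheme leaves the choice open.

```typescript
// Sketch: sample 2D projections of an animating 3D element at a fixed rate,
// then play the frame list back as a "2D dynamic image".
declare function renderElementToCanvas(elementId: string, canvas: HTMLCanvasElement): void;

function captureAnimationFrames(
  elementId: string,
  frameCount: number,
  stepAnimation: () => void, // advances the element's dynamic effect one tick
  width = 256,
  height = 256,
): string[] {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const frames: string[] = [];
  for (let i = 0; i < frameCount; i++) {
    stepAnimation();
    renderElementToCanvas(elementId, canvas);
    frames.push(canvas.toDataURL("image/png"));
  }
  return frames; // time-ordered; spliced/played back at the chosen frequency
}

function playFrames(img: HTMLImageElement, frames: string[], fps = 12): number {
  let i = 0;
  return window.setInterval(() => {
    img.src = frames[i];
    i = (i + 1) % frames.length;
  }, 1000 / fps);
}
```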
At this point, the virtual scene the user observes is composed entirely of 2D images (including 2D dynamic images), yet it approximately simulates the 3D-mode virtual scene for the user, allowing the user to enter the virtual scene on a low-performance device.
Because the 2D images are obtained from the 3D elements themselves, the output of visual resource material can be realized without an additional research and development department, effectively controlling implementation cost. Moreover, only the presentation form of the 3D elements is converted to a 2D mode, so the visual experience and functionality differ little from the 3D version; low-performance devices can thus be supported in displaying the virtual scene while the user experience is preserved.
Applying an image compression method matched to each 2D image's transparency dimension characteristics further reduces the information the 2D images carry, which is friendlier to low-performance devices.
Based on the method of fig. 1, the present specification also provides some specific embodiments and extensions of the method, and the following description will proceed.
In one or more embodiments of the present specification, it was mentioned above that image compression can be achieved by cropping the 2D image. Taking the conventional rectangular bounding box as an example, the valid pixel portion is cut in from four directions (up, down, left and right) relative to the original, uncropped 2D image, and the amount cropped from the 2D image in each direction is taken as the position offset in that direction.
The position of the 2D image in the canvas is then determined by preliminary positioning according to the position attribute among the element attributes. But because the 2D image has been cropped, it requires higher positioning accuracy than the position attribute alone provides. Accurate secondary positioning can therefore be performed on the preliminary result based on the position offsets, determining the final display position of the 2D image.
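Expressed as code, the two-step positioning might look like the following sketch; the field names are illustrative assumptions.

```typescript
// Sketch: preliminary positioning from the position attribute, then secondary
// positioning from the per-direction crop offsets recorded at compression time.
interface CropOffsets { left: number; top: number; right: number; bottom: number }

interface Element2D {
  x: number;            // position attribute: coordinate relative to the canvas
  y: number;
  offsets: CropOffsets; // how much was cropped from the original uniform image
}

function displayPosition(el: Element2D): { x: number; y: number } {
  // The preliminary position points at the uncropped image's origin; shifting by
  // the left/top crop amounts realigns the cropped image to where its pixels belong.
  return { x: el.x + el.offsets.left, y: el.y + el.offsets.top };
}
```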
Generally, to keep the files uniform and reduce extra projection settings, the 3D elements are projected onto a uniformly sized background when the 2D images are generated. Fig. 4 is a schematic diagram of 2D image acquisition in an application scenario provided in one or more embodiments of the present disclosure. As mentioned above, after importing the 3D model file, the user can obtain the corresponding 3D elements (such as the cylindrical, cuboid and cube-shaped 3D elements shown in fig. 4) and their element attributes by adjusting and supplementing information. The elements are then projected on the uniform background to obtain the corresponding 2D images, all of the same size; for convenience, this size matches the largest 2D image, which is often the bottom background layer and contains no invalid transparency portion. Finally, the invalid transparency portions are cropped, and the result is used as the image input, yielding the 2D images and the element-attribute layout data.
On this basis, the position offset can be used not only to determine the display position of the image: when the canvas is not fully displayed in the window, it also serves as one of the parameters for deciding the current display content when the user issues an interaction instruction to move the window (for example, dragging the canvas inside the window or clicking a window-direction movement button). For example, if the left-side position offset of a 2D image is 100 pixels and the window currently corresponds to the content within the leftmost 50 pixels, the 2D image is not displayed; only after the user, while sliding the window, moves the window's corresponding position 100 pixels to the left does the 2D image gradually begin to be displayed, according to the movement distance in the user's interaction instruction (which can be obtained by scaling the drag distance).
However, the movement of 2D images displayed this way is relatively rigid: the window simply moves with the user's interaction instruction, so the user seems to be looking at a flat drawing board, and in virtual scenes that depend on atmosphere it is difficult for the user to feel a sense of immersion.
Based on this, a reference layer is first set. After the virtual scene switches to 2D mode, the different 2D images lie in different layers, and an upper layer can occlude a lower one. To facilitate a unified standard and the subsequent layer selection and data adjustment, the layer holding a 2D image without an invalid transparency portion is used as the reference layer; if there are several such 2D images, the layer of the lowest one is chosen. A 2D image without an invalid transparency portion usually contains more content, in some cases the most of any layer, and is likely to be a background layer; setting its layer as the reference layer means that layer needs no adjustment in the subsequent process, reducing workload. Moreover, in some virtual scenes the background layer may contain very little content (for example, only a solid-color layer); selecting the reference layer in this way avoids that situation, compared with directly selecting the lowest background layer.
For each layer corresponding to a 2D image (referred to here as a designated layer for convenience), the higher the layer, the lower the corresponding movement coefficient. In a specific determination process, the movement coefficient of the reference layer may be set to 1, and the inter-layer distance between the designated layer and the reference layer then determined; the inter-layer distance refers to how many layers separate the two, and the more layers in between, the greater it is. For layers above the reference layer, the movement coefficients decrease in turn with inter-layer distance; for layers below the reference layer, they increase in turn with it.
After receiving an interaction instruction issued by the user, the actual movement distance of the 2D image in the window is obtained from the movement distance in the instruction and the movement coefficient. For example, when the user drags the window and thereby issues an interaction instruction to drag it by a first distance, a preset correspondence gives the 2D image's movement distance in the window as a second distance (that is, the movement distance in the interaction instruction); in a conventional scheme, the 2D image would simply be moved by this second distance. In the present scheme, the second distance is corrected by the obtained movement coefficient to yield a third distance (the actual movement distance), and the 2D image is moved by that third distance.
Thus, in the scene observed at the user side, the nearer an object is (that is, the higher the layer of its 2D image), the slower it moves when the user drags the window, and the farther an object is, the faster it moves, which better matches real-world viewing experience and increases the user's sense of immersion in the virtual scene.
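The layered movement coefficients can be sketched as follows, assuming layer indices increase upward, a reference-layer coefficient of 1, and a hypothetical per-layer step of 0.1; the scheme leaves the exact scale open.

```typescript
// Sketch: movement coefficient from inter-layer distance, applied to a drag.
const STEP = 0.1; // assumed per-layer change; not prescribed by the scheme

function movementCoefficient(layer: number, referenceLayer: number): number {
  const distance = Math.abs(layer - referenceLayer);
  // Above the reference layer: coefficient decreases with inter-layer distance.
  // Below it: coefficient increases with inter-layer distance.
  const coeff = layer >= referenceLayer ? 1 - STEP * distance : 1 + STEP * distance;
  return Math.max(coeff, 0); // clamp so very high layers never move backwards
}

function actualMove(dragDistance: number, layer: number, referenceLayer: number): number {
  // The second distance (from the interaction instruction) is corrected into the
  // third distance actually applied to the 2D image.
  return dragDistance * movementCoefficient(layer, referenceLayer);
}
```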
In one or more embodiments of the present specification, it was mentioned above that a 2D image with no invalid transparency portion can be compressed by deleting its transparency channel. In some virtual scenes, however, there are images whose valid pixel portion also carries transparency values; directly deleting the transparency channel would delete those values as well, harming the user's viewing experience.
Based on this, for a 2D image with no invalid transparency portion, whether effective transparency values exist in the valid pixel portion is determined from its transparency dimension characteristics. An effective transparency value means a value other than 0, i.e. the pixel is not completely opaque; of course, what counts as effective can be set according to the actual situation.
If such values exist, the transparency of the valid pixel portion in the 2D image is meaningful. The corresponding display range is then determined from the element attributes; it can be derived from the position, the size and the range covered by the dynamic effect. The other 2D images appearing within the display range are identified; an image can be considered within the range as long as part of it appears there. The 2D image is then merged with those other 2D images to obtain a combined image. Because the transparency channel has not yet been deleted at this point, the combined image still preserves the 2D image's transparency. After the 2D image is replaced by the combined image, the combined image is converted to an image format that does not carry a transparency channel. Since the other 2D images within the display range have been merged in, the combined image can reproduce the 2D image with its transparency values even though no transparency channel is retained, preserving the user experience.
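A sketch of this merge-then-convert step follows: every image intersecting the display range is composited in layer order, and the result is re-encoded without an alpha channel. The data shapes and the JPEG quality factor are assumptions.

```typescript
// Sketch: bake semi-transparent content onto what lies beneath it, then drop alpha.
interface PlacedImage {
  img: CanvasImageSource;
  x: number; y: number; w: number; h: number;
  layer: number;
}

function intersects(a: PlacedImage, b: PlacedImage): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;
}

function combineAndFlatten(target: PlacedImage, all: PlacedImage[]): string {
  const canvas = document.createElement("canvas");
  canvas.width = target.w;
  canvas.height = target.h;
  const ctx = canvas.getContext("2d")!;
  // Draw every image appearing in the display range, bottom layer first, so the
  // target's transparency blends over real underlying pixels before flattening.
  all
    .filter((p) => intersects(p, target))
    .sort((a, b) => a.layer - b.layer)
    .forEach((p) => ctx.drawImage(p.img, p.x - target.x, p.y - target.y, p.w, p.h));
  return canvas.toDataURL("image/jpeg", 0.9); // JPEG carries no transparency channel
}
```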
In one or more embodiments of the present description, converting the 3D mode to the 2D mode inevitably affects the viewing experience of the user to some extent. For example, in 3D mode an element can be observed omnidirectionally through 360 degrees and the viewing angle can be changed, which cannot be achieved in 2D mode.
Based on this, the user interaction level corresponding to a 3D element is determined in advance from the interaction attributes among its element attributes. The interaction attributes refer to the degree to which the 3D element can interact with the user: for example, whether the user can interact with the 3D element, which interaction means are supported (clicking, dragging, etc.), which interaction results exist (jumping to other web pages, displaying values related to the 3D element, etc.), and so on. The user interaction level can be computed by assigning weights to the different types of interaction attributes; the higher the user interaction level, the higher the degree of interaction between the 3D element and the user, and the more likely the user is to interact with that element.
If the user interaction level is judged to be above a preset threshold, 2D images at several different viewing angles, such as a left view, a right view, a magnified view and the like, are acquired when the corresponding 2D images are acquired. When the user issues an interaction instruction to change the viewing angle of a 2D image during 2D-mode display, the 2D image of the corresponding viewing angle can then be shown.
The interaction instruction for changing the viewing angle of the 2D image may be an instruction expressing a subjective intent to change the angle, such as a multi-finger zoom instruction or a rotate-view instruction. It may, of course, also be an instruction in which the user needs, or incidentally causes, a viewing-angle change when triggering something else, for example the 3D element automatically enlarging after the user clicks on it.
After the viewing angle changes, only some 3D elements have 2D images at multiple viewing angles; the other 2D images, for which no image at the corresponding viewing angle can be matched, can be handled with background blurring, defocus and similar treatments to maintain the user's viewing experience.
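The weighted interaction level and the conditional multi-view capture might be sketched as follows; the attribute set, the weights, the threshold and the captureView helper are illustrative assumptions.

```typescript
// Sketch: weighted user interaction level deciding whether extra views are captured.
interface InteractionAttrs {
  clickable: boolean;
  draggable: boolean;
  opensPage: boolean;   // interaction result: jumps to another page
  showsValues: boolean; // interaction result: displays element-related values
}

const WEIGHTS = { clickable: 1, draggable: 2, opensPage: 1, showsValues: 1 }; // assumed

function interactionLevel(a: InteractionAttrs): number {
  return (
    (a.clickable ? WEIGHTS.clickable : 0) +
    (a.draggable ? WEIGHTS.draggable : 0) +
    (a.opensPage ? WEIGHTS.opensPage : 0) +
    (a.showsValues ? WEIGHTS.showsValues : 0)
  );
}

declare function captureView(elementId: string, view: string): string; // hypothetical

function captureViews(elementId: string, attrs: InteractionAttrs, threshold = 2): Map<string, string> {
  const views = new Map<string, string>();
  views.set("front", captureView(elementId, "front"));
  if (interactionLevel(attrs) > threshold) {
    for (const v of ["left", "right", "zoom"]) views.set(v, captureView(elementId, v));
  }
  return views;
}
```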
Based on the same thought, one or more embodiments of the present disclosure further provide apparatuses and devices corresponding to the above method, as shown in fig. 5 and fig. 6.
Fig. 5 is a schematic structural diagram of a degradation display device for a 3D virtual scene according to one or more embodiments of the present disclosure, where the device includes:
the degradation judging module 502 determines the running environment of the equipment and judges whether the running environment meets preset degradation conditions;
the 2D image acquisition module 504, if the degradation condition is met, determines the 3D elements contained in the virtual scene that is triggered to be displayed, and obtains 2D images which correspond to the 3D elements in the window and carry transparency channels;
The image compression module 506 determines transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performs image compression on the 2D image according to the transparency dimension characteristics;
and the 2D mode display module 508 is used for integrally displaying the 2D images after the image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene through the degraded 2D mode.
Optionally, the image compression module 506 determines whether an invalid transparency portion exists at an edge of the 2D image according to the transparency channel;
if not, converting the 2D image into an image format which does not carry the transparency channel;
if so, the invalid transparency portion is cropped to preserve valid pixel portions in the 2D image.
Optionally, the 2D mode display module 508 uses the cropping amounts of the 2D image in different directions as position offsets in those directions;
and positions the display position of the 2D image according to the position attribute in the element attributes corresponding to the 3D element, and secondarily positions the display position of the 2D image within the positioning result according to the position offsets.
Optionally, the apparatus further includes:
the moving distance determining module 510, which determines the movement coefficient corresponding to the 2D image according to the designated layer corresponding to the 2D image and a preset reference layer, where the reference layer is the layer in which a 2D image without an invalid transparency portion is located, and the higher the designated layer, the lower its movement coefficient;
and receives an interaction instruction issued by the user for moving the window, and obtains the actual movement distance of the 2D image in the window according to the movement distance in the interaction instruction and the movement coefficient.
Optionally, if no invalid transparency portion exists, the image compression module 506 determines whether the valid pixel portion of the 2D image has effective transparency values;
if so, determining a corresponding display range according to the element attribute of the 2D image, and combining the 2D image with other 2D images appearing in the display range to obtain a combined image;
and replacing the 2D image with the combined image, and converting the combined image into an image format which does not carry the transparency channel.
Optionally, the 2D image obtaining module 504 determines, according to a dynamic effect attribute in the element attributes corresponding to the 3D element, that the 3D element generates deformation in the window when the 3D element executes the dynamic effect corresponding to the dynamic effect attribute;
2D projection is carried out on the 3D element executing the dynamic effect in the window, so that a plurality of 2D images carrying transparency channels are obtained;
and splicing the plurality of 2D images to obtain corresponding 2D dynamic images.
Optionally, the apparatus further includes:
the multi-view display module 512 determines interaction attributes in the element attributes corresponding to the 3D element, and determines a user interaction level corresponding to the 3D element;
and if the user interaction level is higher than a preset threshold, acquiring a plurality of 2D images which correspond to the 3D elements in the window and carry the transparency channels under a plurality of view angles, so that when an interaction instruction which is sent by the user and is used for changing the view angle of the 2D images is received, the 2D images corresponding to the view angles are adopted for displaying.
fig. 6 is a schematic structural diagram of a degradation display device for a 3D virtual scene according to one or more embodiments of the present disclosure, where the device includes:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
Based on the same considerations, one or more embodiments of the present specification further provide a non-volatile computer storage medium corresponding to the above method, storing computer-executable instructions configured to:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (for example, a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without needing a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code before compilation must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by lightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing various functions may also be regarded as structures within the hardware component. Indeed, means for performing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatus is described as being divided into various units by function. Of course, when implementing the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that the present description may be provided as a method, system, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
The specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, and non-volatile computer storage medium embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of one or more embodiments of the present description shall be included within the scope of the claims of the present description.

Claims (15)

1. A degradation display method of a 3D virtual scene, comprising:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
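As an illustration only (not part of the claimed subject matter), the following is a minimal Python sketch of one way the degradation condition of claim 1 might be checked, assuming the running environment is summarized by frame rate, available memory, and 3D backend support; all field names and thresholds below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class RuntimeEnvironment:
    fps_estimate: float   # measured frame rate on the current device
    memory_mb: int        # available device memory in megabytes
    supports_3d: bool     # whether a 3D rendering backend is available

def meets_degradation_condition(env: RuntimeEnvironment) -> bool:
    """Return True when the device should fall back to the degraded 2D mode."""
    return (not env.supports_3d) or env.fps_estimate < 24 or env.memory_mb < 512

# A low-end device that should be shown the degraded 2D scene:
print(meets_degradation_condition(RuntimeEnvironment(18.0, 256, True)))  # True
```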
2. The method of claim 1, wherein the determining, according to the transparency channel, a transparency dimension feature corresponding to the 2D image, and performing image compression on the 2D image according to the transparency dimension feature, specifically includes:
determining whether an invalid transparency portion exists at an edge of the 2D image according to the transparency channel;
if not, converting the 2D image into an image format which does not carry the transparency channel;
if so, cropping the invalid transparency portion to preserve the valid pixel portion in the 2D image.
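A minimal Pillow-based sketch of the compression branch in claim 2 (illustrative, not part of the claims): if the alpha channel shows no fully transparent border, the image is converted to an alpha-free format; otherwise the transparent border is cropped. The function name and the RGBA-input assumption are ours:

```python
from PIL import Image

def compress_by_transparency(img: Image.Image):
    """Return the compressed image and the (left, top) cropping amounts.

    `img` is assumed to be in RGBA mode.
    """
    alpha = img.getchannel("A")
    bbox = alpha.getbbox()  # bounding box of pixels with non-zero alpha
    if bbox == (0, 0, img.width, img.height):
        # Valid pixels reach every edge: no invalid transparency portion,
        # so convert to an image format without a transparency channel.
        return img.convert("RGB"), (0, 0)
    if bbox is None:
        bbox = (0, 0, 1, 1)  # fully transparent image: keep a 1x1 stub
    left, top, _, _ = bbox   # crop the invalid portion, keep valid pixels
    return img.crop(bbox), (left, top)
```

The returned cropping amounts are exactly what claim 3 reuses as position offsets.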
3. The method of claim 2, wherein the integrating and displaying the 2D image after image compression according to the element attribute corresponding to the 3D element specifically includes:
taking the cropping amounts of the 2D image in different directions as position offsets in those directions;
and positioning the display position of the 2D image according to the position attribute in the element attribute corresponding to the 3D element, and secondarily positioning the display position of the 2D image in a positioning result according to the position offset.
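An illustrative sketch of the secondary positioning in claim 3: the cropping amounts from the compression step shift the cropped image so its valid pixels land where the uncropped image would have placed them (names are assumptions):

```python
def display_position(element_pos: tuple[int, int],
                     crop_offset: tuple[int, int]) -> tuple[int, int]:
    """Primary positioning from the element attributes, then a secondary
    shift by the cropping amounts so the cropped image stays aligned."""
    x, y = element_pos     # position attribute of the 3D element
    dx, dy = crop_offset   # cropping amounts in the x and y directions
    return (x + dx, y + dy)

print(display_position((100, 40), (12, 8)))  # (112, 48)
```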
4. The method of claim 2, the method further comprising:
determining a movement coefficient corresponding to the 2D image according to a designated layer corresponding to the 2D image and a preset reference layer, wherein the reference layer is the layer on which 2D images without invalid transparency portions are located, and the higher the designated layer, the lower its corresponding movement coefficient;
and receiving an interaction instruction sent by the user for moving the window, and obtaining the actual moving distance of the 2D image in the window according to the moving distance in the interaction instruction and the movement coefficient.
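A sketch of the layer-dependent movement coefficient in claim 4, assuming layers are numbered upward from the reference layer; the specific decay formula is an assumption, chosen only to satisfy "higher layer, lower coefficient":

```python
def movement_coefficient(designated_layer: int, reference_layer: int = 0) -> float:
    """The higher the designated layer above the reference layer,
    the lower the coefficient (1.0 on the reference layer itself)."""
    height = max(designated_layer - reference_layer, 0)
    return 1.0 / (1.0 + 0.5 * height)

def actual_move_distance(instruction_distance: float, layer: int) -> float:
    # Scale the distance carried by the window-moving interaction instruction.
    return instruction_distance * movement_coefficient(layer)

print(actual_move_distance(100.0, 0))  # 100.0 on the reference layer
print(actual_move_distance(100.0, 2))  # 50.0 on a higher layer
```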
5. The method according to claim 2, wherein the converting, if no invalid transparency portion exists, of the 2D image into an image format that does not carry the transparency channel specifically comprises:
if not, judging whether the valid pixel portion of the 2D image has a valid transparency value;
if so, determining a corresponding display range according to the element attribute of the 2D image, and combining the 2D image with other 2D images appearing in the display range to obtain a combined image;
and replacing the 2D image with the combined image, and converting the combined image into an image format which does not carry the transparency channel.
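An illustrative Pillow sketch of claim 5: when the valid pixels still carry partial transparency, the image is composited over the other images in its display range and the flattened result is stored without an alpha channel. Alignment of all images to the same display range is assumed:

```python
from PIL import Image

def merge_and_flatten(top: Image.Image, underlying: list[Image.Image]) -> Image.Image:
    """Composite `top` over the images below it, then drop the alpha channel.

    All images are assumed to be RGBA and already cropped/aligned to the
    same display range.
    """
    merged = Image.new("RGBA", top.size, (0, 0, 0, 0))
    for layer in underlying:                     # bottom-most image first
        merged = Image.alpha_composite(merged, layer)
    merged = Image.alpha_composite(merged, top)  # the semi-transparent image
    return merged.convert("RGB")                 # combined image, no alpha
```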
6. The method according to claim 1, wherein the acquiring the 2D image of the 3D element corresponding to the window and carrying the transparency channel specifically comprises:
determining, according to the dynamic effect attribute among the element attributes corresponding to the 3D element, that the 3D element deforms in the window when executing the dynamic effect corresponding to the dynamic effect attribute;
performing 2D projection on the 3D element executing the dynamic effect in the window to obtain a plurality of 2D images carrying transparency channels;
and splicing the plurality of 2D images to obtain corresponding 2D dynamic images.
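A sketch of claim 6 under stated assumptions: each step of the deforming element is projected to a 2D frame with alpha (the placeholder `project_frame` stands in for the real projection), and the frames are spliced into one animated image; APNG is one format that preserves the transparency channel:

```python
from PIL import Image

def project_frame(step: int) -> Image.Image:
    """Placeholder for the real 2D projection of the deforming 3D element:
    here, a square whose opaque area grows with each animation step."""
    frame = Image.new("RGBA", (64, 64), (0, 0, 0, 0))
    size = 8 * (step + 1)
    frame.paste((255, 128, 0, 255), (0, 0, size, size))
    return frame

frames = [project_frame(step) for step in range(8)]
# Splice the 2D images into one 2D dynamic image; APNG keeps the alpha channel.
frames[0].save("element.apng", save_all=True, append_images=frames[1:], duration=80)
```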
7. The method according to claim 1, wherein the acquiring the 2D image of the 3D element corresponding to the window and carrying the transparency channel specifically comprises:
determining the interaction attribute among the element attributes corresponding to the 3D element, and determining the user interaction level corresponding to the 3D element;
and if the user interaction level is higher than a preset threshold, acquiring, at a plurality of view angles, a plurality of 2D images which correspond to the 3D element in the window and carry the transparency channel, so that when an interaction instruction sent by the user for changing the view angle of the 2D image is received, the 2D image corresponding to that view angle is used for display.
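An illustrative sketch of claim 7: for elements whose interaction level exceeds the threshold, one 2D image is pre-captured per view angle so that a view-change instruction swaps images rather than re-rendering; the angle sampling and the `capture_view` placeholder are assumptions:

```python
from PIL import Image

VIEW_ANGLES = (0, 45, 90, 135, 180)  # illustrative sampling of view angles

def capture_view(angle: int) -> Image.Image:
    """Placeholder for projecting the 3D element at the given view angle."""
    return Image.new("RGBA", (64, 64), (angle % 256, 0, 0, 255))

def build_view_cache(interaction_level: int, threshold: int = 5) -> dict[int, Image.Image]:
    if interaction_level <= threshold:
        return {0: capture_view(0)}  # a single static view is sufficient
    return {angle: capture_view(angle) for angle in VIEW_ANGLES}

views = build_view_cache(interaction_level=8)
# On a view-change interaction instruction, display the matching image:
current = views[45]
```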
8. A degraded presentation apparatus for a 3D virtual scene, comprising:
the degradation judging module is used for determining the running environment of the equipment and judging whether the running environment meets preset degradation conditions or not;
the 2D image acquisition module is used for determining, if the degradation condition is met, the 3D elements contained in the virtual scene that is triggered for display, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
the image compression module is used for determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel and carrying out image compression on the 2D image according to the transparency dimension characteristics;
and the 2D mode display module is used for integrally displaying the 2D images after the image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene through the degraded 2D mode.
9. The apparatus of claim 8, the image compression module to determine whether an invalid transparency portion exists at an edge of the 2D image according to the transparency channel;
if not, converting the 2D image into an image format which does not carry the transparency channel;
if so, cropping the invalid transparency portion to preserve the valid pixel portion in the 2D image.
10. The apparatus of claim 9, the 2D mode presentation module to take the cropping amounts of the 2D image in different directions as position offsets in those directions;
and positioning the display position of the 2D image according to the position attribute in the element attribute corresponding to the 3D element, and secondarily positioning the display position of the 2D image in a positioning result according to the position offset.
11. The apparatus of claim 9, further comprising:
the moving distance judging module is used for determining a movement coefficient corresponding to the 2D image according to a designated layer corresponding to the 2D image and a preset reference layer, wherein the reference layer is the layer on which 2D images without invalid transparency portions are located, and the higher the designated layer, the lower its corresponding movement coefficient;
and receiving an interaction instruction sent by the user for moving the window, and obtaining the actual moving distance of the 2D image in the window according to the moving distance in the interaction instruction and the movement coefficient.
12. The apparatus of claim 9, the image compression module further to judge, if no invalid transparency portion exists, whether the valid pixel portion of the 2D image has a valid transparency value;
if so, determining a corresponding display range according to the element attribute of the 2D image, and combining the 2D image with other 2D images appearing in the display range to obtain a combined image;
and replacing the 2D image with the combined image, and converting the combined image into an image format which does not carry the transparency channel.
13. The apparatus of claim 8, wherein the 2D image acquisition module determines, according to a dynamic effect attribute in the element attributes corresponding to the 3D element, that the 3D element is deformed in the window when the 3D element executes the dynamic effect corresponding to the dynamic effect attribute;
performs 2D projection on the 3D element executing the dynamic effect in the window to obtain a plurality of 2D images carrying transparency channels;
and splices the plurality of 2D images to obtain a corresponding 2D dynamic image.
14. The apparatus of claim 8, further comprising:
the multi-view display module is used for determining interaction attributes in element attributes corresponding to the 3D elements and determining user interaction levels corresponding to the 3D elements;
and if the user interaction level is higher than a preset threshold, acquiring, at a plurality of view angles, a plurality of 2D images which correspond to the 3D elements in the window and carry the transparency channels, so that when an interaction instruction sent by the user for changing the view angle of the 2D images is received, the 2D image corresponding to that view angle is used for display.
15. A degraded presentation apparatus of a 3D virtual scene, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
determining the running environment of equipment, and judging whether the running environment accords with a preset degradation condition;
if yes, determining 3D elements contained in the virtual scene which is triggered to be displayed, and acquiring 2D images which correspond to the 3D elements in the window and carry transparency channels;
determining transparency dimension characteristics corresponding to the 2D image according to the transparency channel, and performing image compression on the 2D image according to the transparency dimension characteristics;
and integrating and displaying the 2D images after image compression according to the element attributes corresponding to the 3D elements so as to display the virtual scene in a degraded 2D mode.
CN202310098483.2A 2023-01-13 2023-01-13 Degradation display method, device and equipment for 3D virtual scene Pending CN116188736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310098483.2A CN116188736A (en) 2023-01-13 2023-01-13 Degradation display method, device and equipment for 3D virtual scene

Publications (1)

Publication Number Publication Date
CN116188736A (en) 2023-05-30

Family

ID=86432250

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310098483.2A Pending CN116188736A (en) 2023-01-13 2023-01-13 Degradation display method, device and equipment for 3D virtual scene

Country Status (1)

Country Link
CN (1) CN116188736A (en)

Similar Documents

Publication Publication Date Title
US9240070B2 (en) Methods and systems for viewing dynamic high-resolution 3D imagery over a network
JP2020509647A (en) Image mapping and processing method, apparatus, and machine-readable medium
US8456467B1 (en) Embeddable three-dimensional (3D) image viewer
US11398059B2 (en) Processing 3D video content
CN110392904B (en) Method for dynamic image color remapping using alpha blending
KR20230007358A (en) Multilayer Reprojection Techniques for Augmented Reality
CN112868224B (en) Method, apparatus and storage medium for capturing and editing dynamic depth image
US10140729B2 (en) Data compression for visual elements
CN111275824A (en) Surface reconstruction for interactive augmented reality
CN114531553B (en) Method, device, electronic equipment and storage medium for generating special effect video
US9501812B2 (en) Map performance by dynamically reducing map detail
CN117280680A (en) Parallel mode of dynamic grid alignment
CN116188736A (en) Degradation display method, device and equipment for 3D virtual scene
Řeřábek et al. JPEG backward compatible coding of omnidirectional images
CN112843700B (en) Terrain image generation method and device, computer equipment and storage medium
CN114820988A (en) Three-dimensional modeling method, device, equipment and storage medium
CN115965519A (en) Model processing method, device, equipment and medium
KR20220077928A (en) Method, system, and computer readable medium for video coding
CN116095250B (en) Method and device for video cropping
EP3821602A1 (en) A method, an apparatus and a computer program product for volumetric video coding
CN117221504B (en) Video matting method and device
CN117541744B (en) Rendering method and device for urban live-action three-dimensional image
CN117915020A (en) Method and device for video cropping
CN116431851A (en) Method and related device for processing massive pictures
CN117714769A (en) Image display method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40089831
Country of ref document: HK