CN110956703B - Collision body mapping method and device, storage medium and electronic device


Info

Publication number
CN110956703B
CN110956703B (application CN201911135964.6A)
Authority
CN
China
Prior art keywords: target, map, collision, volume, operation position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911135964.6A
Other languages
Chinese (zh)
Other versions
CN110956703A (en)
Inventor
汪林
刘晶
任刚
钱策
何文清
陈伟
陈李羊
林洁
黄碧文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911135964.6A priority Critical patent/CN110956703B/en
Publication of CN110956703A publication Critical patent/CN110956703A/en
Application granted granted Critical
Publication of CN110956703B publication Critical patent/CN110956703B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures

Abstract

The invention discloses a collision body mapping method and device, a storage medium and an electronic device. The method comprises the following steps: receiving a mapping instruction requesting to add a target map to a target collision volume in a virtual three-dimensional scene; acquiring a first collision volume corresponding to the target collision volume, the first collision volume being a collision volume to which a color map has been added but a reflection map has not; determining a bounding volume of the first collision volume; adding the target map to the bounding volume to obtain a second collision volume; and adding the reflection map to the second collision volume to obtain the target collision volume with the target map added. The invention solves the technical problem in the related art that a map attached to a collision body has a poor display effect.

Description

Collision body mapping method and device, storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a collision volume mapping method and device, a storage medium and an electronic device.
Background
In the related art, a collision volume can generally be set in a virtual three-dimensional scene, and the collision volume performs collision detection in the scene. A color map is added to the outer surface of the collision volume, and then a reflection map is added, so that a user can see the external color, texture, and so on of the collision volume. In this case, when a further map is to be added to the collision body, the related art projects the map directly onto the collision body from the current view, and the added map shows large deformation when viewed from other directions.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a collision body mapping method and device, a storage medium and an electronic device, so as to at least solve the technical problem in the related art that a map attached to a collision body has a poor display effect.
According to an aspect of an embodiment of the present invention, there is provided a collision volume mapping method including: receiving a mapping instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, wherein the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records a surface-layer color of the target collision volume, the reflection map records a surface-layer texture of the target collision volume, and the mapping instruction carries a map identifier of the target map; acquiring a first collision body corresponding to the target collision body, wherein the first collision body is a collision body to which the color map has been added but the reflection map has not; determining a bounding volume of the first collision volume; adding the target map to the bounding volume to obtain a second collision volume; and adding the reflection map to the second collision body to obtain the target collision body with the target map added.
As an alternative embodiment, the target collision volume with the added target map is saved in a blockchain, or the target map is saved in the blockchain.
According to another aspect of an embodiment of the present invention, there is also provided a collision volume mapping apparatus including: a receiving unit, configured to receive a mapping instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, where the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records a surface-layer color of the target collision volume, the reflection map records a surface-layer texture of the target collision volume, and the mapping instruction carries a map identifier of the target map; an acquisition unit, configured to acquire a first collision body corresponding to the target collision body, the first collision body being a collision body to which the color map has been added but the reflection map has not; a determination unit, configured to determine a bounding volume of the first collision volume; a first adding unit, configured to add the target map to the bounding volume to obtain a second collision volume; and a second adding unit, configured to add the reflection map to the second collision body to obtain the target collision body with the target map added.
According to still another aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned collision volume mapping method when executed.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the collision volume mapping method through the computer program.
In the embodiment of the invention, a mapping instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene is received, wherein the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records the surface color of the target collision volume, the reflection map records the surface texture of the target collision volume, and the mapping instruction carries the map identifier of the target map; a first collision body corresponding to the target collision body is acquired, the first collision body being a collision body to which the color map has been added but the reflection map has not; a bounding volume of the first collision volume is determined; the target map is added to the bounding volume to obtain a second collision volume; and the reflection map is added to the second collision body to obtain the target collision body with the target map added. In this manner, when the target map is added to the collision body, the target map is not pasted directly onto the collision body; instead, the target map is added to a bounding volume of the collision body to which only the color map has been added, and the reflection map is then added to obtain the final result. By adding the target map to the bounding volume, the deformation of the target map is reduced and the display effect of the added target map is improved, thereby solving the technical problem in the related art that a map attached to a collision body has a poor display effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic illustration of an application environment for an alternative collision volume mapping method according to an embodiment of the invention;
FIG. 2 is a flow diagram of an alternative collision volume mapping method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an alternative collision volume mapping method according to an embodiment of the invention;
FIG. 4 is a schematic diagram of another alternative collision volume mapping method according to an embodiment of the invention;
FIG. 5 is a schematic diagram of yet another alternative collision volume mapping method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of yet another alternative collision volume mapping method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of yet another alternative collision volume mapping method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of yet another alternative collision volume mapping method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an alternative collision volume mapping apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present invention, there is provided a collision volume mapping method, which may optionally, as an alternative implementation, be applied, but not limited, to the environment shown in FIG. 1.
Human-computer interaction between the user 102 and the user device 104 in FIG. 1 is possible. The user device 104 comprises a memory 106 for storing interaction data and a processor 108 for processing the interaction data. The user device 104 may interact with the server 112 via the network 110. The server 112 includes a database 114 for storing interaction data and a processing engine 116 for processing the interaction data. The user device 104 may receive the mapping instruction, obtain a map identifier from the mapping instruction, and add the target map corresponding to the map identifier to a bounding volume of a first collision volume, where the first collision volume is the target collision volume to which the color map has been added but the reflection map has not. After the target map is added, a reflection map is added to the first collision volume with the target map, producing the final result: a target collision volume with the target map added, whose map deformation is reduced by the bounding volume. By this method, the display effect of the added target map is improved.
Alternatively, the user device 104 may be, but is not limited to, a terminal such as a mobile phone, a tablet computer, a notebook computer, a PC, and the like, and the network 110 may include, but is not limited to, a wireless network or a wired network. The wireless network includes Wi-Fi and other networks that enable wireless communication. The wired network may include, but is not limited to: wide area networks, metropolitan area networks, and local area networks. The server may include, but is not limited to, any hardware device capable of performing computations.
It should be noted that the user device 104 may also be a stand-alone device, i.e., one that does not interact with the server 112 via the network 110; this is also allowed.
Optionally, as an optional implementation, as shown in fig. 2, the method for mapping a collision volume includes:
s202, receiving a chartlet instruction for requesting to add a target chartlet to a target collision body in a virtual three-dimensional scene, wherein the target collision body is the collision body added with a color chartlet and a reflection chartlet, the color chartlet records the surface color of the target collision body, the reflection chartlet records the surface texture of the target collision body, and the chartlet instruction carries a chartlet mark of the target chartlet;
s204, acquiring a first collision body corresponding to the target collision body, wherein the first collision body is a collision body added with a color map but not added with a reflection map;
s206, determining a bounding volume of the first collision volume;
s208, adding the target map to the bounding volume to obtain a second collision volume;
s210, adding the reflection map to the second collision body to obtain the target collision body added with the target map.
Alternatively, the collision volume mapping method described above may be applied, but is not limited, to a game application, which may include, but is not limited to, one of the following: a three-dimensional (3D) game application, a Virtual Reality (VR) game application, an Augmented Reality (AR) game application, or a Mixed Reality (MR) game application. Alternatively, it may be applied in other 3D environments, for example during development and testing.
Taking a game as an example, a 3D game may include a racing car. When a user wishes to add a map, such as a heart-shaped map, to the racing car, the game client may be controlled to send a mapping instruction. On receiving the mapping instruction, the game client obtains a racing car to which the color map has been added but the reflection map has not, adds the heart-shaped map to a bounding volume of that racing car to obtain an intermediate racing car, and then adds the reflection map to the intermediate racing car to obtain the final racing car. On the final racing car, the heart-shaped map appears attached to the car body, and when viewed from various angles its deformation is reduced.
Through the embodiment, the method can improve the display effect of the added target map when the target map is added to the target collision body in the virtual three-dimensional scene.
The target map in the scheme can be a map with any content, and the target collision body can be a collision body added with a color map and a reflection map (in a general virtual three-dimensional space, after the color map is added to the collision body, a user can see the external color of the collision body during display, and after the reflection map is added, the user can see the reflection gloss during display). A first collision volume can be obtained by adding a color map to one collision volume, and a target collision volume can be obtained by adding a reflection map to the first collision volume.
Optionally, the bounding volume mentioned in this embodiment is a closed volume that completely encloses the collision body.
In this scheme, when adding the target map to the bounding volume of the first collision volume, the target position on the bounding volume needs to be determined. The map instruction carries a target position, after the map instruction is obtained, the target position can be obtained from the map instruction, the target map is added to the target position, and the central point of the target map is kept aligned with the target position.
Optionally, in the process of determining the target position, a trigger operation may be received through a display screen of the terminal. The trigger operation may be a touch operation performed on the touch screen, including a click operation, a long-press operation, and the like, or the trigger position may be input through an input device, for example by a mouse click or long press. The trigger operation is performed at one point on the display screen, and that point is the first operation position corresponding to the trigger operation. For example, the user touches a point on the touch screen, and the point is taken as the first operation position.
After the first operation position is acquired, a second operation position corresponding to the first operation position on the bounding volume is determined. The second operation position may be the intersection position of a target ray projected from the first operation position and the bounding volume, the target ray being a ray projected toward the bounding volume along the current viewing angle used for displaying the virtual three-dimensional scene.
After the second operation position is determined by the above method, the second operation position is taken as the target position. Once the target position is determined, the target map is pasted at the target position.
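The ray intersection that yields the second operation position can be sketched as follows. The patent's bounding volume is a convex mesh, but an axis-aligned box is used here as the simplest illustrative stand-in; in an engine this would be a physics raycast against the convex collider rather than a hand-rolled slab test.

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Nearest intersection point of a ray with an axis-aligned box,
    or None if the ray misses (standard slab test)."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:           # ray parallel to this slab
            if o < lo or o > hi:
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    if t_near > t_far or t_far < 0:  # no overlap, or box behind the ray
        return None
    t = t_near if t_near >= 0 else t_far
    return tuple(o + t * d for o, d in zip(origin, direction))

# Ray from the first operation position along the current viewing angle (+z)
hit = ray_aabb_intersect((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
                         (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
print(hit)  # (0.0, 0.0, -1.0): the second operation position
```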
Optionally, in this scheme, when the target map is pasted, the target map may be pasted onto the bounding volume at a certain angle.
Alternatively, after the target position is determined, a third operation position corresponding to the target position may be determined on the first collision volume. The third operating position is a position on the first collision body.
In a virtual three-dimensional scene, creating any three-dimensional collision volume requires determining a normal for each point on the collision volume, namely the normal of the plane at each point created in the modeling process. Therefore, after the third operation position is determined, the normal vector corresponding to the third operation position can be determined. When adding the target map to the first collision volume, the target map is pasted onto the target position of the bounding volume along the negative direction of this normal vector.
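The step of finding the point on the first collision body nearest the bounding-volume target position, together with its normal (whose negative direction gives the pasting direction), can be sketched with a sphere standing in for the arbitrary collision mesh. This is an illustrative simplification; on a real mesh this would be a closest-point query against the mesh surface.

```python
import math

def closest_point_and_normal_on_sphere(center, radius, q):
    """Closest point on a sphere collider to a point q on the bounding
    volume, and the outward unit normal at that point."""
    v = [qi - ci for qi, ci in zip(q, center)]
    length = math.sqrt(sum(x * x for x in v))
    n = [x / length for x in v]                      # unit normal
    p = [ci + radius * ni for ci, ni in zip(center, n)]
    return p, n

# Target position on the bounding volume (the second operation position)
target_pos = (0.0, 0.0, -2.0)
third_pos, normal = closest_point_and_normal_on_sphere(
    (0.0, 0.0, 0.0), 1.0, target_pos)
paste_direction = [-ni for ni in normal]  # paste the map along -n
print(third_pos, paste_direction)
```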
Optionally, in this scheme, after the target map is attached to the first collision body, the position of the target map on the first collision body may be adjusted at any time. For example, after the target map has been pasted onto the first collision body, the user may again trigger an operation at a point on the display screen to produce an adjustment instruction, where the adjustment instruction carries a fourth operation position triggered by the user on the display screen. The fourth operation position has a corresponding fifth operation position on the bounding volume, and the method of determining the fifth operation position is the same as the method of determining the second operation position: the fifth operation position is the intersection position of the target ray projected from the fourth operation position and the bounding volume, the target ray being a ray projected toward the bounding volume along the current viewing angle used for displaying the virtual three-dimensional scene. After the fifth operation position is determined, the target map that has been attached to the first collision body may be moved to the fifth operation position, thereby adjusting the position of the target map. After the target map is added to the first collision volume to obtain a second collision volume, the reflection map may be added to the second collision volume to obtain the target collision volume with the target map added. Compared with pasting the target map directly onto the target collision body, this reduces the deformation of the target map and improves the effect of the pasted map.
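The repositioning step can be sketched with a minimal, hypothetical data model; the function `move_decal` and the dictionary of decal positions are illustrative only, since the patent describes the operation positions but not a storage scheme.

```python
def move_decal(decals, decal_id, new_pos):
    """Reposition an already-pasted target map: the decal identified by
    decal_id is moved to the fifth operation position new_pos."""
    return {k: (new_pos if k == decal_id else p) for k, p in decals.items()}

decals = {"heart": (0.0, 1.0, -1.0)}       # initial pasted position
decals = move_decal(decals, "heart", (0.5, 0.2, -1.0))
print(decals["heart"])  # (0.5, 0.2, -1.0)
```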
Optionally, in this solution, the target map, or the target collision volume with the target map added, may also be stored in a blockchain.
The blockchain in this solution is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform may include processing modules such as user management, basic services, smart contracts, and operation monitoring. The user management module is responsible for the identity management of all blockchain participants, including maintaining public/private key generation (account management), key management, and the correspondence between users' real identities and their blockchain addresses (authority management); with authorization, it can supervise and audit the transactions of certain real identities and provide rule configuration for risk control (risk-control audit). The basic services module is deployed on all blockchain node devices and is used to verify the validity of service requests and record valid requests to storage after consensus is reached; for a new service request, the basic services module first performs interface adaptation, parsing, and authentication (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records it for storage. The smart contract module is responsible for contract registration and issuance, contract triggering, and contract execution; developers can define contract logic through a programming language and publish it to the blockchain (contract registration), and the contract is triggered by a key call or other events and executed according to the logic of the contract terms, with functions for upgrading and cancelling contracts also provided. The operation monitoring module is mainly responsible for deployment, configuration modification, contract setting, and cloud adaptation during product release, as well as the visual output of real-time states in product operation, such as alarms, monitoring of network conditions, and monitoring of node device health status.
The platform product services layer provides the basic capabilities and implementation framework of typical applications; developers can complete the blockchain implementation of their business logic based on these basic capabilities, superimposing the characteristics of their business. The application services layer provides blockchain-based application services for business participants to use.
For example, the scheme may be applied to a client running a game, and the client may be a mobile phone. The display screen of the mobile phone displays a target collision body in a virtual three-dimensional scene, such as a virtual racing car controlled by the user in the game. FIG. 3 may be a view of the virtual racing car in the virtual three-dimensional scene displayed on the display screen; the virtual racing car has the color map and the reflection map added, but no target map. If a target map needs to be added to the virtual racing car, the position where the target map is to be attached can be determined first. For example, as shown in FIG. 4, while the virtual racing car is displayed on the interface shown in FIG. 4, a first operation position on the display screen may be determined by clicking or long-pressing a position on the display screen, where the first operation position must fall within the area of the display screen in which the virtual racing car is displayed. For example, the user selects the position 402 as the first operation position (on a computer such as a PC, the first operation position may be selected with input hardware such as a mouse). After the first operation position is determined, a second operation position on the bounding volume of the virtual racing car may be determined. For example, as shown in FIG. 5, the user 502 in FIG. 5 may select a first operation position 506 on the display 504; a ray 512 is cast from the first operation position 506 along the current viewing angle (which may be the current direction of the camera 508) toward the virtual racing car 510, thereby identifying a second operation position on the bounding volume of the virtual racing car 510 (the bounding volume is not shown in FIG. 5), and that second operation position is taken as the target position to which the target map is to be attached.
After the target position is determined, the target map may be attached to it. The bounding volume may be implemented with a MeshCollider component; by checking the Convex option on the component, a layer of bounding volume can be set for the virtual racing car. For example, FIG. 6 is a schematic view of a bounding volume added for the virtual racing car. After the target position is determined, the pasting angle needs to be determined in the process of pasting the target map. Once the target position on the bounding volume is determined, the third operation position corresponding to the target position is determined on the first virtual racing car (the virtual racing car is a collision volume to which a color map and a reflection map have been added; the first virtual racing car is a collision volume to which the color map has been added but the reflection map has not). The third operation position is the position corresponding to the second operation position on the bounding volume; the position on the first virtual racing car at the shortest distance from the second operation position may be determined as the third operation position. The third operation position corresponds to a normal vector. For example, as shown in FIG. 7, the target map 704 is added to the bounding volume of the first virtual racing car along the negative direction of the normal vector 702 to obtain a second virtual racing car (the second virtual racing car is the first virtual racing car with the target map added), and then the reflection map is added to the second virtual racing car to obtain the virtual racing car with the target map added. After adding the target map, the user can also change its position. For example, as shown in FIG. 8, location 802 in FIG. 8 is the location where the target map was initially added.
The user may click a location on the display screen to determine the fourth operation position 804, and the target map is then added to the fifth operation position on the bounding volume of the virtual racing car corresponding to the fourth operation position 804. The target map at location 802 is deleted automatically or manually. Multiple controls may be displayed on the display screen so that different functions can be performed on the target map, e.g., rotate, delete, add, adjust, and so on. The positions 802 and 804 in FIG. 8 are only exemplary; in practical applications, the client generally displays them.
It should be noted that, if there are multiple target maps in this solution, the target maps may be merged into one map for rendering, so that multiple target maps can be rendered in a single rendering pass.
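Merging several target maps into one map so that they can be drawn in a single pass can be sketched as a naive atlas packer. The pixel lists and the left-to-right shelf layout are illustrative only; a real implementation would render the decals into a texture on the GPU.

```python
def merge_decals(decals, atlas_w, atlas_h):
    """Pack several small decal images (lists of pixel rows) side by side
    into one atlas, returning the atlas and each decal's uv rectangle."""
    atlas = [[0] * atlas_w for _ in range(atlas_h)]
    x = 0
    uv_rects = []
    for img in decals:
        h, w = len(img), len(img[0])
        for r in range(h):
            for c in range(w):
                atlas[r][x + c] = img[r][c]
        # uv rectangle of this decal inside the atlas: (u0, v0, u1, v1)
        uv_rects.append((x / atlas_w, 0.0, (x + w) / atlas_w, h / atlas_h))
        x += w
    return atlas, uv_rects

a = [[1, 1], [1, 1]]   # 2x2 decal
b = [[2], [2]]         # 2x1 decal
atlas, rects = merge_decals([a, b], 4, 2)
print(atlas[0])   # [1, 1, 2, 0]
print(rects[1])   # (0.5, 0.0, 0.75, 1.0)
```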
The collision volume in this scheme may be the model of the virtual racing car itself. The color map of the virtual racing car may be denoted d and sampled with the first set of uv coordinates (uv1), where the uv coordinates parameterize the outer surface of the model in three-dimensional space. Denoting the first virtual racing car (the color-mapped car) as c, then:
c = tex2D(d, uv1)    (1)
This yields the first virtual racing car with the color map added. After the first virtual racing car is obtained, a MeshCollider component can be added to it, and the Convex option of the MeshCollider component can be used to add a bounding volume for the first virtual racing car. The bounding volume encloses the first virtual racing car. A first operation position in the display area is determined through a trigger operation performed on the target collision body in the display area; then a second operation position on the bounding volume, denoted h, is acquired, the normal vector n at point h is acquired, and the target map is added at point h along the negative direction -n of n as the target direction. If there are multiple target maps, they are rendered into one map to obtain a map m, and the map m is added to the first virtual racing car to obtain a result g.
g.rgb = c.rgb * (1.0 - m.a) + m.rgb * m.a    (2)
Wherein c is the first virtual racing car, m is the map holding the one or more target maps, and m.a is the alpha channel of m.
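Reading equation (2) as a standard alpha blend of the merged decal map m over the base color c, a minimal numeric sketch is:

```python
def blend_decal(c_rgb, m_rgb, m_a):
    """Alpha-blend the merged decal map over the base color, per the
    alpha-blend reading of equation (2):
    g.rgb = c.rgb * (1 - m.a) + m.rgb * m.a"""
    return tuple(cc * (1.0 - m_a) + mc * m_a for cc, mc in zip(c_rgb, m_rgb))

# A fully opaque decal replaces the base color; zero alpha leaves it intact
print(blend_decal((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0))  # (0.0, 0.0, 1.0)
print(blend_decal((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.0))  # (1.0, 0.0, 0.0)
```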
After the above operation, the second virtual racing car, with the color map and the target map added, is obtained. A reflection map is then added to the second virtual racing car.
The reflection map may be denoted map 2, and map 2 is added to the second virtual racing car:
I = reflect(-v, n) (3)
f = texCUBE(_Cube, I) * fr (4)
wherein I is the reflection vector, n is the normal vector of the virtual racing car, v is the current view-angle vector, i.e. the sight-line vector, _Cube is the value of map 2, and fr is the reflection Fresnel value.
The reflection map f is added to the second virtual racing car g to obtain a final result p, where p is the virtual racing car with the target map added.
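The vector part of equations (3) and (4) can be illustrated on its own. This is a hedged Python sketch assuming the HLSL convention reflect(i, n) = i - 2·dot(n, i)·n with a unit-length normal; texCUBE itself samples a cube map on the GPU and is not reproduced here.

```python
# Illustrative sketch of equation (3): reflect the negated view vector about
# the surface normal, as the HLSL reflect intrinsic does (n assumed unit length).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    d = dot(n, i)
    return tuple(x - 2.0 * d * ny for x, ny in zip(i, n))

v = (0.0, 0.0, 1.0)  # sight-line vector toward the surface
n = (0.0, 0.0, 1.0)  # surface normal facing the viewer
I = reflect(tuple(-x for x in v), n)
# Looking straight along the normal, the reflection points back at the viewer.
print(I)  # (0.0, 0.0, 1.0)
```

The resulting vector I is then used as the lookup direction into the cube map _Cube and scaled by the Fresnel value fr, per equation (4).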
Through the above method, when the target map is added to the target collision volume in the virtual three-dimensional scene, the deformation of the target map is reduced and the display effect of the target map is improved.
As an alternative embodiment, adding the target map to the bounding volume, resulting in the second collision volume comprises:
s1, determining the target position of the target map on the bounding volume according to the map command;
s2, adding the target map to the target position, wherein the central point of the target map is aligned with the target position.
According to the embodiment, the target map is added to the target position after the target position of the target map on the bounding volume is determined, so that the deformation of the target map is reduced, and the display effect of the added target map is improved.
As an alternative embodiment, determining the target location of the target map on the bounding volume according to the mapping instructions comprises:
s1, acquiring a trigger operation executed on a display area for displaying the target collision body;
s2, determining a first operation position corresponding to the trigger operation in the display area;
s3, acquiring a second operation position corresponding to the first operation position on the bounding volume, wherein the second operation position is an intersection point position of the target ray projected from the first operation position and the bounding volume, and the target ray is a ray projected to the bounding volume along the current visual angle for displaying the virtual three-dimensional scene;
s4, the second operation position is determined as the target position.
Through the above method, the target map can be added at any of various positions according to the actual needs of the user, improving the flexibility of adding the target map.
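The ray test in step s3 above can be sketched as follows. This is a minimal Python illustration, not part of the scheme: a sphere stands in for the bounding volume, whereas an engine would normally raycast against the convex MeshCollider directly.

```python
# Illustrative sketch: the second operation position is the first intersection
# of the target ray (cast from the first operation position along the current
# viewing direction) with the bounding volume, here modeled as a sphere.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest point where ray origin + t*direction (t >= 0)
    intersects the sphere, or None if the ray misses.
    direction is assumed to be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic coefficient a = 1 for a unit direction
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0.0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera in front of the bounding volume, looking straight at it.
hit = ray_sphere_hit((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 1.0)
print(hit)  # (0.0, 0.0, -1.0) — the near surface of the bounding volume
```

The nearest intersection (smallest non-negative t) is taken because the map should land on the surface facing the viewer.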
As an alternative embodiment, adding the target map to the target location comprises:
s1, acquiring a third operation position on the first collision body corresponding to the second operation position;
s2, acquiring a normal vector of a third operation position;
s3, adding the target map to the target position of the bounding volume in the negative direction of the normal vector of the third operation position.
Through the above method, when the target map is added, the target map fits closely against the target collision volume, improving the display effect of the added target map.
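The direction used in step s3 above can be sketched as follows; this is a hedged Python illustration whose function names are this example's own, not the scheme's.

```python
# Illustrative sketch: the target map is projected onto the bounding volume
# along the negative direction -n of the normal at the third operation
# position, so it lies flush against the surface.

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def map_projection_direction(normal):
    """Return -n for the (normalized) surface normal n: the direction along
    which the target map is pressed onto the bounding volume."""
    n = normalize(normal)
    return tuple(-x for x in n)

d = map_projection_direction((1.0, 2.0, 2.0))
print(d)  # points opposite the surface normal
```

Projecting along -n rather than along the view ray is what keeps the map's shape stable as the viewing angle changes.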
As an alternative embodiment, after adding the target map to the target location, the method further comprises:
s1, receiving an adjusting instruction for adjusting the position of the target map on the target collision volume, wherein the adjusting instruction carries a fourth operation position determined in a display area for displaying the target collision volume;
s2, acquiring a fifth operation position corresponding to the fourth operation position on the bounding volume, wherein the fifth operation position is an intersection point position of the target ray projected from the fourth operation position and the bounding volume, and the target ray is a ray projected to the bounding volume along the current visual angle for displaying the virtual three-dimensional scene;
s3, responding to the adjusting instruction, and adjusting the target map to the fifth operation position.
By the method, the position of the target map can be flexibly adjusted, and the flexibility of adding the target map is improved.
As an alternative embodiment, adding the target map to the bounding volume of the first collision volume, resulting in the second collision volume comprises:
s1, when the target map comprises a plurality of target maps, performing a rendering operation on the first collision volume to which the plurality of target maps have been added, to obtain the second collision volume.
By the method, the performance overhead caused by multiple maps is reduced, and the efficiency of adding the target map is improved.
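Merging several target maps into one map, as in step s1 above, can be sketched as follows. This is a simplified Python illustration in which small pixel grids stand in for textures; the names and layout are assumptions of this example, not of the scheme.

```python
# Illustrative sketch: composite several small target maps into one merged
# map so that a single rendering pass (and a single blend, per equation (2))
# suffices. Each "map" is a 2D list of (r, g, b, a) pixels.

def merge_maps(width, height, decals):
    """decals: list of (x_offset, y_offset, pixels) tuples, where pixels is
    a 2D list of (r, g, b, a). Returns one merged map of the same format,
    initialized fully transparent."""
    merged = [[(0.0, 0.0, 0.0, 0.0) for _ in range(width)]
              for _ in range(height)]
    for x0, y0, pixels in decals:
        for dy, row in enumerate(pixels):
            for dx, px in enumerate(row):
                if px[3] > 0.0:  # copy only non-transparent decal pixels
                    merged[y0 + dy][x0 + dx] = px
    return merged

red = [[(1.0, 0.0, 0.0, 1.0)]]   # 1x1 opaque red decal
blue = [[(0.0, 0.0, 1.0, 1.0)]]  # 1x1 opaque blue decal
m = merge_maps(4, 4, [(0, 0, red), (2, 3, blue)])
print(m[0][0], m[3][2])  # both decals now live in one map
```

Because the merged map is applied once, adding more decals does not multiply the per-frame blending cost.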
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of an embodiment of the present invention, there is also provided a collision volume mapping apparatus for implementing the collision volume mapping method described above. As shown in fig. 9, the apparatus includes:
(1) a receiving unit 902, configured to receive a map instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, where the target collision volume is a collision volume to which a color map and a reflection map have been added, the surface color of the target collision volume is recorded in the color map, the surface texture of the target collision volume is recorded in the reflection map, and the map instruction carries a map identifier of the target map;
(2) an acquisition unit 904 configured to acquire a first collision volume corresponding to a target collision volume, wherein the first collision volume is a collision volume to which a color map has been added but to which a reflection map has not been added;
(3) a determining unit 906 for determining a bounding volume of the first collision volume;
(4) a first adding unit 908 for adding the target map to the bounding volume to obtain a second collision volume;
(5) a second adding unit 910, configured to add the reflection map to the second collision volume, so as to obtain the target collision volume to which the target map is added.
Alternatively, the collision volume mapping apparatus may be applied to, but is not limited to, a game application in the field of games, and the game application may include, but is not limited to, one of the following: three-dimensional (3D) game applications, Virtual Reality (VR) game applications, Augmented Reality (AR) game applications, and Mixed Reality (MR) game applications. Alternatively, it may be applied in other 3D environments, for example during development, testing, and the like.
Taking the application to a game process as an example: for a 3D game including a racing car, when a user wishes to add a map, such as a heart-shaped picture, to the racing car, the game client may be controlled to send a map instruction. Upon receiving the map instruction, the game client obtains a racing car to which the color map has been added but the reflection map has not, adds the heart-shaped picture to the bounding volume of the racing car to obtain an intermediate racing car, and adds the reflection map to the intermediate racing car to obtain the final racing car. On the final racing car, the heart-shaped picture can be seen attached to the car, and when viewed from various angles the deformation of the heart-shaped picture is reduced.
Through the embodiment, the method can improve the display effect of the added target map when the target map is added to the target collision body in the virtual three-dimensional scene.
The target map in this scheme may be a map with any content, and the target collision volume may be a collision volume to which a color map and a reflection map have been added (in a typical virtual three-dimensional space, after a color map is added to a collision volume, the user can see the exterior color of the collision volume when it is displayed; after a reflection map is added, the user can see reflective gloss when it is displayed). A first collision volume can be obtained by adding a color map to a collision volume, and a target collision volume can be obtained by adding a reflection map to the first collision volume.
Optionally, the bounding volume mentioned in the present embodiment is a closed space completely enclosing the collision volume.
In this scheme, when adding the target map to the bounding volume of the first collision volume, the target position on the bounding volume needs to be determined. The map instruction may carry the target position; after the map instruction is obtained, the target position can be read from it, the target map is added to the target position, and the center point of the target map is kept aligned with the target position.
Optionally, in the process of determining the target position, a trigger operation may be received through a display screen of the terminal. The trigger operation may be a touch operation performed on the touch screen, such as a tap or a long-press, or it may be input through an input device, for example a mouse click or press-and-hold. The trigger operation acts on a single point on the display screen; that point is the first operation position corresponding to the trigger operation. For example, if the user touches a point on the touch screen, that point is taken as the first operation position.
After the first operation position is acquired, a second operation position corresponding to the first operation position on the bounding volume is determined. The second operation position may be the intersection position of the target ray projected from the first operation position and the bounding volume, the target ray being a ray projected toward the bounding volume along the current viewing angle for displaying the virtual three-dimensional scene.
The second operation position determined in this way is the target position; once the target position is determined, the target map is pasted at the target position.
Optionally, in this scheme, when the target map is pasted, the target map may be pasted onto the bounding volume at a particular angle.
Alternatively, after the target position is determined, a third operation position corresponding to the target position may be determined on the first collision volume; the third operation position is a position on the first collision volume.
In the virtual three-dimensional scene, creating any three-dimensional collision volume requires determining a normal for each point on the collision volume; the normal is the normal of the surface at each point, determined when the three-dimensional collision volume is created. Therefore, after the third operation position is determined, the normal vector corresponding to the third operation position can be determined. When adding the target map to the first collision volume, the target map is pasted onto the target position of the bounding volume along the negative direction of the normal vector.
Optionally, in this scheme, after the target map is attached to the first collision volume, the position of the target map on the first collision volume may be adjusted at any time. For example, after the target map has been pasted on the first collision volume, the user may again trigger an operation at a point on the display screen to generate an adjustment instruction, where the adjustment instruction carries a fourth operation position triggered by the user on the display screen. The fourth operation position has a corresponding fifth operation position on the bounding volume, and the method of determining the fifth operation position is the same as that of determining the second operation position: the fifth operation position is the intersection position of the target ray projected from the fourth operation position and the bounding volume, the target ray being a ray projected toward the bounding volume along the current viewing angle for displaying the virtual three-dimensional scene. After the fifth operation position is determined, the target map already attached to the first collision volume may be moved to the fifth operation position, thereby adjusting the position of the target map. After the target map is added to the first collision volume to obtain a second collision volume, a reflection map may be added to the second collision volume to obtain the target collision volume to which the target map is added. Compared with pasting the target map directly onto the target collision volume, this reduces the deformation of the target map and improves the effect of the pasted target map.
Through the above method, when the target map is added to the target collision volume in the virtual three-dimensional scene, the deformation of the target map is reduced and the display effect of the target map is improved.
As an alternative embodiment, the first adding unit 908 includes:
(1) the first determination module is used for determining the target position of the target map on the bounding volume according to the map instruction;
(2) a first adding module for adding the target map to the target position, wherein the center point of the target map is aligned with the target position.
According to the embodiment, the target map is added to the target position after the target position of the target map on the bounding volume is determined, so that the deformation of the target map is reduced, and the display effect of the added target map is improved.
As an alternative embodiment, the determining unit 906 includes:
(1) a first acquisition module for acquiring a trigger operation performed on a display area for displaying a target collision volume;
(2) the second determining module is used for determining a first operation position corresponding to the trigger operation in the display area;
(3) the second acquisition module is used for acquiring a second operation position corresponding to the first operation position on the enclosure, wherein the second operation position is an intersection point position of a target ray projected from the first operation position and the enclosure, and the target ray is a ray projected to the enclosure along a current visual angle for displaying the virtual three-dimensional scene;
(4) and the third determining module is used for determining the second operation position as the target position.
Through the above method, the target map can be added at any of various positions according to the actual needs of the user, improving the flexibility of adding the target map.
As an alternative embodiment, the first adding unit 908 further includes:
(1) a third acquisition module, configured to acquire a third operation position, corresponding to the second operation position, on the first collision body;
(2) the fourth acquisition module is used for acquiring a normal vector of the third operation position;
(3) and the second adding module is used for adding the target map to the target position of the enclosure along the negative direction of the normal vector of the third operation position.
Through the above method, when the target map is added, the target map fits closely against the target collision volume, improving the display effect of the added target map.
As an alternative embodiment, the apparatus further comprises:
(1) a second receiving unit, configured to receive an adjustment instruction for adjusting a position of the target map on the target collision volume after the target map is added to the target position, where the adjustment instruction carries a fourth operation position determined in a display area where the target collision volume is displayed;
(2) the second acquisition unit is used for acquiring a fifth operation position corresponding to the fourth operation position on the bounding volume, wherein the fifth operation position is the intersection position of the target ray projected from the fourth operation position and the bounding volume, and the target ray is the ray projected to the bounding volume along the current visual angle for displaying the virtual three-dimensional scene;
(3) and the adjusting unit is used for responding to the adjusting instruction and adjusting the target map to a fifth operation position.
By the method, the position of the target map can be flexibly adjusted, and the flexibility of adding the target map is improved.
As an alternative embodiment, the first adding unit 908 includes:
(1) and the rendering module is used for performing rendering operation on the first collision body added with the plurality of target maps to obtain a second collision body under the condition that the plurality of target maps are included.
By the method, the performance overhead caused by multiple maps is reduced, and the efficiency of adding the target map is improved.
According to yet another aspect of an embodiment of the present invention, there is also provided an electronic device for implementing the collision volume mapping method, as shown in fig. 10, the electronic device includes a memory 1002 and a processor 1004, the memory 1002 stores a computer program, and the processor 1004 is configured to execute the steps in any one of the method embodiments through the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, receiving a map instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, wherein the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records the surface color of the target collision volume, the reflection map records the surface texture of the target collision volume, and the map instruction carries a map identifier of the target map;
s2, acquiring a first collision body corresponding to the target collision body, wherein the first collision body is a collision body added with a color map but not added with a reflection map;
s3, determining a bounding volume of the first collision volume;
s4, adding the target map to the bounding volume to obtain a second collision volume;
s5, adding the reflection map to the second collision body to obtain the target collision body added with the target map.
Alternatively, as can be understood by those skilled in the art, the structure shown in fig. 10 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, or a Mobile Internet Device (MID). Fig. 10 does not limit the structure of the electronic device; for example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 10, or have a different configuration from that shown in fig. 10.
The memory 1002 may be used to store software programs and modules, such as program instructions/modules corresponding to the collision volume mapping method and apparatus in the embodiment of the present invention, and the processor 1004 executes various functional applications and data processing by running the software programs and modules stored in the memory 1002, so as to implement the collision volume mapping method. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1002 may be specifically configured to store, but is not limited to, information such as the target map. As an example, as shown in fig. 10, the memory 1002 may include, but is not limited to, the receiving unit 902, the acquisition unit 904, the determining unit 906, the first adding unit 908, and the second adding unit 910 of the collision volume mapping apparatus. Other module units of the collision volume mapping apparatus may also be included, which are not described in detail in this example.
Optionally, the above-mentioned transmission device 1006 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1006 includes a network adapter (NIC) that can be connected via a network cable to a router or other network device so as to communicate with the internet or a local area network. In another example, the transmission device 1006 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1008 for displaying the target collision volume to which the target map is added; and a connection bus 1010 for connecting the respective module parts in the above-described electronic apparatus.
According to a further aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the steps in any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, receiving a map instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, wherein the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records the surface color of the target collision volume, the reflection map records the surface texture of the target collision volume, and the map instruction carries a map identifier of the target map;
s2, acquiring a first collision body corresponding to the target collision body, wherein the first collision body is a collision body added with a color map but not added with a reflection map;
s3, determining a bounding volume of the first collision volume;
s4, adding the target map to the bounding volume to obtain a second collision volume;
s5, adding the reflection map to the second collision body to obtain the target collision body added with the target map.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing the relevant hardware of the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (12)

1. A method of mapping a collision volume, comprising:
receiving a map instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, wherein the target collision volume is a collision volume to which a color map and a reflection map have been added, the color map records a surface color of the target collision volume, the reflection map records a surface texture of the target collision volume, and the map instruction carries a map identifier of the target map;
obtaining a first collision volume corresponding to the target collision volume, wherein the first collision volume is a collision volume to which the color map has been added but to which the reflection map has not been added;
determining a bounding volume of the first collision volume;
adding the target map to the bounding volume to obtain a second collision volume, specifically comprising: determining a target position of the target map on the bounding volume according to the map instruction; adding the target map to the target location, wherein a center point of the target map is aligned with the target location;
and adding the reflection map to the second collision volume to obtain the target collision volume added with the target map.
2. The method of claim 1, wherein said determining a target location of the target map on the bounding volume according to the map instructions comprises:
acquiring a trigger operation performed on a display area for displaying the target collision volume;
determining a first operation position corresponding to the trigger operation in the display area;
acquiring a second operation position corresponding to the first operation position on the enclosure, wherein the second operation position is an intersection position of a target ray projected from the first operation position and the enclosure, and the target ray is a ray projected to the enclosure along a current visual angle for displaying the virtual three-dimensional scene;
determining the second operating position as the target position.
3. The method of claim 2, wherein adding the target map to the target location comprises:
acquiring a third operation position corresponding to the second operation position on the first collision body;
acquiring a normal vector of the third operation position;
adding the target map to the target position of the bounding volume along a negative direction of a normal vector of the third operating position.
4. The method of claim 1, wherein after adding the target map to the target location, the method further comprises:
receiving an adjustment instruction for adjusting the position of the target map on the target collision volume, wherein the adjustment instruction carries a fourth operation position determined in a display area for displaying the target collision volume;
acquiring a fifth operation position corresponding to the fourth operation position on the bounding volume, wherein the fifth operation position is an intersection position of a target ray projected from the fourth operation position and the bounding volume, and the target ray is a ray projected to the bounding volume along a current visual angle for displaying the virtual three-dimensional scene;
and responding to the adjusting instruction, and adjusting the target map to the fifth operation position.
5. The method of any of claims 1 to 4, wherein the adding the target map to the bounding volume of the first collision volume resulting in a second collision volume comprises:
and in the case that the target map comprises a plurality of target maps, performing rendering operation on the first collision body added with the plurality of target maps to obtain the second collision body.
6. A collision volume mapping apparatus, comprising:
a receiving unit, configured to receive a map instruction for requesting to add a target map to a target collision volume in a virtual three-dimensional scene, where the target collision volume is a collision volume to which a color map and a reflection map have been added, a surface color of the target collision volume is recorded in the color map, a surface texture of the target collision volume is recorded in the reflection map, and the map instruction carries a map identifier of the target map;
an acquisition unit configured to acquire a first collision volume corresponding to the target collision volume, wherein the first collision volume is a collision volume to which the color map has been added but to which the reflection map has not been added;
a determination unit for determining an enclosure of the first collision volume;
the first adding unit is used for adding the target map to the bounding volume to obtain a second collision volume, and specifically comprises a first determining module used for determining a target position of the target map on the bounding volume according to the map instruction; a first adding module, configured to add the target map to the target position, where a center point of the target map is aligned with the target position;
a second adding unit, configured to add the reflection map to the second collision volume, so as to obtain the target collision volume to which the target map is added.
7. The apparatus of claim 6, wherein the determining unit comprises:
a first acquisition module, configured to acquire a trigger operation performed in a display area displaying the target collision volume;
a second determining module, configured to determine a first operation position in the display area corresponding to the trigger operation;
a second acquisition module, configured to acquire a second operation position corresponding to the first operation position on the bounding volume, wherein the second operation position is an intersection position of a target ray projected from the first operation position and the bounding volume, and the target ray is a ray projected toward the bounding volume along a current viewing angle at which the virtual three-dimensional scene is displayed; and
a third determining module, configured to determine the second operation position as the target position.
8. The apparatus according to claim 7, wherein the first adding unit further comprises:
a third acquisition module, configured to acquire a third operation position on the first collision volume corresponding to the second operation position;
a fourth acquisition module, configured to acquire a normal vector at the third operation position; and
a second adding module, configured to add the target map to the target position on the bounding volume along a negative direction of the normal vector at the third operation position.
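Placing the map along the negative normal, as claim 8 describes, can be sketched as below. The helper name, the small surface offset, and the vector representation are illustrative assumptions not fixed by the claim:

```python
import math

def place_map_along_negative_normal(target_pos, normal, offset=0.01):
    """Return an anchor point and facing direction for the target map:
    the map faces along the negative of the surface normal, so it is
    pressed flat onto the surface rather than floating above it."""
    length = math.sqrt(sum(c * c for c in normal))
    unit = tuple(c / length for c in normal)          # normalized surface normal
    facing = tuple(-c for c in unit)                  # negative normal direction
    anchor = tuple(p + offset * u for p, u in zip(target_pos, unit))
    return anchor, facing

# A point on a surface whose normal points along +x: the map is anchored
# just above the surface and faces back toward it along -x.
anchor, facing = place_map_along_negative_normal((1.0, 0.0, 0.0),
                                                 (2.0, 0.0, 0.0))
```

The small `offset` is a common way to avoid z-fighting between the map and the underlying surface; the claim itself only specifies the negative-normal direction.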
9. The apparatus of claim 6, further comprising:
a second receiving unit, configured to receive, after the target map is added to the target position, an adjustment instruction for adjusting a position of the target map on the target collision volume, wherein the adjustment instruction carries a fourth operation position determined in a display area displaying the target collision volume;
a second acquisition unit, configured to acquire a fifth operation position corresponding to the fourth operation position on the bounding volume, wherein the fifth operation position is an intersection position of a target ray projected from the fourth operation position and the bounding volume, and the target ray is a ray projected toward the bounding volume along a current viewing angle at which the virtual three-dimensional scene is displayed; and
an adjusting unit, configured to adjust the target map to the fifth operation position in response to the adjustment instruction.
10. The apparatus according to any one of claims 6 to 9, wherein the first adding unit includes:
a rendering module, configured to perform, in a case where the target map comprises a plurality of target maps, a rendering operation on the first collision volume to which the plurality of target maps have been added, to obtain the second collision volume.
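Rendering several target maps in one operation, as claims 5 and 10 describe, can be modelled as compositing each map in order over the color map. The flat RGBA pixel-buffer representation and the alpha-over blend are illustrative assumptions; the claims do not prescribe a blend mode:

```python
# Sketch of the multi-map rendering step: every target map is applied,
# in the order it was added, over the first collision volume's color map.

def alpha_over(dst, src):
    """Blend one RGBA pixel (src) over another (dst); channels in 0..1."""
    a = src[3] + dst[3] * (1.0 - src[3])
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple((src[i] * src[3] + dst[i] * dst[3] * (1.0 - src[3])) / a
                for i in range(3))
    return rgb + (a,)

def render_maps(color_map, target_maps):
    """Composite every target map, in order, over the color map buffer."""
    out = list(color_map)
    for target_map in target_maps:
        out = [alpha_over(d, s) for d, s in zip(out, target_map)]
    return out

base = [(1.0, 0.0, 0.0, 1.0)]        # a one-pixel red color map
decal = [(0.0, 0.0, 1.0, 1.0)]       # an opaque blue target map
result = render_maps(base, [decal])
```

Because later maps blend over earlier ones, the add order of the target maps determines which map is visible where they overlap.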
11. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed, performs the method of any one of claims 1 to 5.
12. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the method of any one of claims 1 to 5 by means of the computer program.
CN201911135964.6A 2019-11-19 2019-11-19 Collision body mapping method and device, storage medium and electronic device Active CN110956703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911135964.6A CN110956703B (en) 2019-11-19 2019-11-19 Collision body mapping method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911135964.6A CN110956703B (en) 2019-11-19 2019-11-19 Collision body mapping method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110956703A CN110956703A (en) 2020-04-03
CN110956703B true CN110956703B (en) 2021-03-16

Family

ID=69977822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911135964.6A Active CN110956703B (en) 2019-11-19 2019-11-19 Collision body mapping method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110956703B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101485934A (en) * 2008-01-16 2009-07-22 盛趣信息技术(上海)有限公司 Racing game livery conversion item and livery conversion method
CN109448099A (en) * 2018-09-21 2019-03-08 腾讯科技(深圳)有限公司 Rendering method, device, storage medium and the electronic device of picture
US20190333320A1 (en) * 2018-04-30 2019-10-31 Igt Augmented reality systems and methods for sports racing

Also Published As

Publication number Publication date
CN110956703A (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN110401715B (en) Resource collection task management method, device, storage medium and system
CN109598147B (en) Data processing method and device based on block chain and electronic equipment
CN106131057B (en) Certification based on virtual reality scenario and device
CN110033259B (en) Block chain-based data evidence storing method and device and electronic equipment
CN113592486B (en) Payment method, system and related equipment based on cloud application instance
CN106920079A (en) Virtual objects distribution method and device based on augmented reality
CN111124567B (en) Operation recording method and device for target application
CN110585698B (en) Virtual asset transaction method and related device
US20200097961A1 (en) Decentralized smart resource sharing between different resource providers
US11074351B2 (en) Location specific identity verification system
CN110187761A (en) Method for managing resource, device, equipment and system based on virtual reality
CN110581891A (en) Game data processing method, device, equipment and storage medium based on block chain
KR20210157738A (en) System for certificating and synchronizing virtual world and physical world
CN112288881B (en) Image display method and device, computer equipment and storage medium
CN110956703B (en) Collision body mapping method and device, storage medium and electronic device
CN110601850B (en) Scenic spot information recording method, related equipment and storage medium
KR20210157741A (en) Method and system for user check-in on certified space
CN115796997A (en) Air conditioner selective purchasing method and device and computer equipment
CN108959311A (en) A kind of social activity scene configuration method and device
CN111815784A (en) Method and device for presenting reality model, electronic equipment and storage medium
KR100395760B1 (en) A service method for online hair style management
CN110532324A (en) Notice information methods of exhibiting, device, equipment and storage medium based on block chain
US11153084B1 (en) System for certificating and synchronizing virtual world and physical world
CN115702435A (en) Object management system
CN113806728A (en) Data display method and system for racing field operation monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40021046
Country of ref document: HK

GR01 Patent grant