CN112435312B - Motion trajectory generation method and device, computer equipment and readable storage medium


Info

Publication number
CN112435312B
Authority
CN
China
Prior art keywords
model
motion
color
image
plane
Prior art date
Legal status
Active
Application number
CN202010920206.1A
Other languages
Chinese (zh)
Other versions
CN112435312A (en)
Inventor
杨意晨
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202010920206.1A
Publication of CN112435312A
Application granted
Publication of CN112435312B
Legal status: Active

Classifications

    • G06T13/00 - Animation
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/90 - Determination of colour characteristics
    • G06T2207/20221 - Image fusion; Image merging
    • G06T2207/30241 - Trajectory (indexing scheme for image analysis or enhancement)

Abstract

The application provides a motion trajectory generation method and device, a computer device, and a readable storage medium. The method comprises the following steps: creating a first model simulating the moving body; creating a second model corresponding to the concave portion of the motion trajectory and a third model corresponding to its convex portion; obtaining the color maps produced after each of the three models moves along the motion trajectory, yielding in order a first color map, a second color map and a third color map; obtaining the depth map produced after the third model moves along the motion trajectory; computing, from the first, second and third color maps, a planar projection map of the motion trajectory on a first plane; obtaining a fusion map from the depth map and the planar projection map; creating a model simulating the horizontal plane in which the motion trajectory lies, to obtain a plane model map; and setting the size of each point of the plane model map in a first direction according to the fusion map, to obtain the motion trajectory. Through the present application, the motion trajectory can be freely shaped as required.

Description

Motion trajectory generation method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a motion trajectory, a computer device, and a readable storage medium.
Background
In some games, animations and other scenes, it is often necessary to show the motion trail left by a moving body such as a person, an animal or an object after it moves across a surface of snow, desert or the like, and the trail needs to reflect the effect of snow, sand and the like being pushed up to rise and pressed down to sink.
Therefore, how to generate a motion trajectory having the above-mentioned effects becomes a technical problem to be solved in the art.
Disclosure of Invention
The present application aims to provide a method, an apparatus, a computer device and a readable storage medium for generating a motion trajectory, which are used to solve the above technical problems in the prior art.
In one aspect, a method for generating a motion trajectory is provided.
The generation method of the motion trail comprises the following steps: creating a first model of a simulated moving body; creating a second model corresponding to a concave part of the motion trail and a third model corresponding to a convex part of the motion trail, wherein the first model, the second model and the third model have the same height in a first direction, and the first direction is the depth direction of the motion trail;
respectively obtaining color images obtained after the first model, the second model and the third model move according to the motion trail, obtaining in order a first color image, a second color image and a third color image; obtaining a depth map obtained after the third model moves according to the motion trail; calculating a plane projection drawing of the motion trail on a first plane by using the first color drawing, the second color drawing and the third color drawing, wherein the first plane is perpendicular to the first direction; obtaining a fusion graph according to the depth graph and the plane projection graph; creating a model simulating a horizontal plane where the motion trail is located to obtain a plane model diagram; and setting the size of the point in the plane model diagram in the first direction according to the fusion diagram to obtain the motion trail.
Further, the step of creating a second model corresponding to a concave portion of the motion trajectory, and a third model corresponding to a convex portion of the motion trajectory includes: determining a first amplification factor and a second amplification factor according to the motion track; amplifying the first model in a second direction and a third direction according to the first amplification factor to obtain a second model, wherein the second direction and the third direction are perpendicular to each other and are parallel to the first plane; and amplifying the first model in the second direction and the third direction according to the second amplification factor to obtain a third model.
Further, the step of respectively obtaining the color images obtained after the first model, the second model and the third model move according to the motion trail, and correspondingly obtaining the first color image, the second color image and the third color image in sequence comprises: creating a first camera, a second camera and a third camera which are respectively identical in position and observation direction; setting the first model in a viewport of the first camera, and moving according to the motion trajectory to obtain a viewport image of the first camera to obtain the first color image; the second model is arranged in a viewport of the second camera, and a viewport image of the second camera is obtained according to the motion of the motion trail, so as to obtain a second color image; and setting the third model in a viewport of the third camera, and moving according to the motion trail to obtain a viewport image of the third camera to obtain the third color map.
Further, the step of obtaining the depth map obtained after the third model moves according to the motion trajectory includes: and acquiring a rendering depth map of the third camera to obtain the depth map.
Further, the step of calculating a planar projection of the motion trajectory on a first plane by using the first color map, the second color map and the third color map comprises: subtracting the pixel value of the R channel of the second color image from the pixel value of the G channel of the third color image to obtain a first image; subtracting the pixel value of the B channel of the first color image from the pixel value of the R channel of the second color image to obtain a second image; setting the pixel with the pixel value larger than 0.5 in the second image to be 0.5 to obtain a third image; superposing the first graph and the third graph to obtain a fourth graph; acquiring pixel values of a channel B of the second color image to construct a fifth image; and setting the pixel with the pixel value less than 0.5 in the fourth image as 0.5 by taking the fifth image as a mask to obtain the plane projection image.
Further, the step of obtaining a fusion map according to the depth map and the planar projection map comprises: subtracting each depth data of the depth map by using a preset depth basic value to obtain a sixth map; carrying out fuzzy processing on the plane projection drawing to obtain a seventh drawing; and fusing the sixth graph and the seventh graph to obtain the fused graph.
Further, creating a model simulating a horizontal plane in which the motion trail is located, and obtaining a plane model diagram includes: and establishing a model simulating the horizontal plane of the motion trail by using a tessellation shader to obtain a plane model diagram.
In another aspect, to achieve the above object, the present application provides a motion trajectory generation device.
The motion trail generation device comprises: the first creating module is used for creating a first model for simulating a motion body; a second creating module, configured to create a second model corresponding to a concave portion of a motion trajectory and a third model corresponding to a convex portion of the motion trajectory, where heights of the first model, the second model, and the third model in a first direction are the same, and the first direction is a depth direction of the motion trajectory; the first acquisition module is used for respectively acquiring the color images obtained after the first model, the second model and the third model move according to the motion trail, obtaining in order a first color image, a second color image and a third color image; the second obtaining module is used for obtaining a depth map obtained after the third model moves according to the motion trail; the calculating module is used for calculating a plane projection drawing of the motion trail on a first plane by utilizing the first color drawing, the second color drawing and the third color drawing, wherein the first plane is perpendicular to the first direction; the first processing module is used for obtaining a fusion map according to the depth map and the plane projection map; the third establishing module is used for establishing a model for simulating a horizontal plane where the motion trail is located to obtain a plane model diagram; and the second processing module is used for setting the size of the point in the plane model diagram in the first direction according to the fusion diagram to obtain the motion trail.
In another aspect, to achieve the above object, the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a further aspect, to achieve the above object, the present application further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method.
In the present application, a first model simulating the moving body, a second model corresponding to the concave portion of the motion trail and a third model corresponding to its convex portion are created, the three models having the same height in the depth direction of the trail. The color maps produced after the first, second and third models move along the motion trail are respectively obtained, yielding in order a first color map, a second color map and a third color map, and a depth map produced after the third model moves along the trail is obtained. A planar projection map of the motion trail on a first plane is calculated using the three color maps, the first plane being perpendicular to the first direction; a fusion map is obtained from the depth map and the planar projection map; a model simulating the horizontal plane where the trail is located is created to obtain a plane model map; and finally the size of each point in the plane model map in the first direction is set according to the fusion map, giving the motion trail. Through the present application, when a motion trail with sinking and bulging effects is realized, the sunken and raised portions of the trail are not limited by the outline of the moving body; the outline of the trail can be freely set as required, with greater freedom in shaping it.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a flowchart of a method for generating a motion trajectory according to an embodiment of the present application;
FIG. 2 is a diagram illustrating an effect of a motion trajectory provided by an embodiment of the present application;
FIG. 3 is an effect diagram of a depth map provided by an embodiment of the present application;
FIG. 4 is an effect diagram of the sixth diagram provided by the embodiment of the present application;
FIG. 5 is a diagram of an effect of creating a camera according to an embodiment of the present application;
FIG. 6 is a diagram illustrating effects of creating a first model, a second model, and a third model according to an embodiment of the present application;
FIG. 7 is a diagram of an effect of a superposition of a first color map, a second color map and a third color map provided by an embodiment of the present application;
FIG. 8 is a diagram illustrating a first diagram of effects provided by an embodiment of the present application;
FIG. 9 is an effect diagram of the third diagram provided by the embodiment of the present application;
FIG. 10 is a diagram illustrating an effect of the fifth drawing provided by an embodiment of the present application;
FIG. 11 is a diagram illustrating an effect of the fourth drawing provided by an embodiment of the present application;
FIG. 12 is a diagram illustrating an effect of a planar projection provided by an embodiment of the present application;
FIG. 13 is a graph illustrating the effects of a fusion map provided by an embodiment of the present application;
fig. 14 is a block diagram of a motion trajectory generation apparatus according to a third embodiment of the present application;
fig. 15 is a hardware configuration diagram of a computer device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
To realize a motion trajectory showing snow, sand and the like being pushed up to rise and pressed down to sink, the related art proposes the following method: first obtain a depth map corresponding to the motion trajectory, then apply edge-detection post-processing to delineate the inner and outer contours of the moving body, and from these further derive the detail effect, namely the masks of the raised and sunken portions.
The inventor finds that the mask obtained by this method necessarily has the same shape as the original moving body; that is, the concave and convex portions of the motion trajectory are limited by the contour of the moving body, and the contour of the trajectory cannot be freely set as required. How to generate a motion trajectory whose concave and convex contours can be freely set as required, without being limited by the contour of the moving body, is therefore the technical problem to be solved by the present application.
To solve the technical problem in the related art that the contour of a generated motion trail is dictated by the contour of the moving body and cannot be freely set as required, the present application provides a motion trail generation method and apparatus, a computer device, and a readable storage medium. After the three models are created (the first model simulating the moving body, and the second and third models corresponding to the concave and convex portions of the trail), the color maps produced after the first, second and third models move along the motion trail are respectively obtained, yielding in order a first color map, a second color map and a third color map; a depth map produced after the third model moves along the trail is obtained; a planar projection map of the motion trail on a first plane is computed from the three color maps, the first plane being perpendicular to the first direction; a fusion map is obtained from the depth map and the planar projection map; a model simulating the horizontal plane where the trail is located is created to obtain a plane model map; and finally the size of each point in the plane model map in the first direction is set according to the fusion map to obtain the motion trail.
It can be seen from the above that, when a motion trail with sinking and bulging effects is realized in this way, the sunken and raised portions of the trail are not limited by the contour of the moving body; the contour of the trail can be freely set as required, giving greater freedom in shaping it.
Specific embodiments of the method, apparatus, computer device and readable storage medium for generating a motion trajectory provided in the present application will be described in detail below.
Example one
An embodiment of the present application provides a method for generating the motion trajectory of a moving body. With this method, the concave and convex portions of the generated trajectory are not dictated by the contour of the moving body, so the contour of the trajectory enjoys greater freedom. Fig. 1 is a flowchart of the method provided by this embodiment; as shown in fig. 1, the method comprises the following steps S101 to S108.
Step S101: a first model is created that simulates a body of motion.
In step S101, a model for simulating the motion subject is created according to the requirements such as the shape and size of the motion subject, and is defined as the first model, which may specifically adopt any modeling method in the prior art, and details thereof are not repeated here.
Step S102: a second model corresponding to the concave portion of the motion profile and a third model corresponding to the convex portion of the motion profile are created.
The heights of the first model, the second model and the third model in the first direction are the same, and the first direction is the depth direction of the motion trail.
As shown in fig. 2, for a motion trajectory that needs to show snow, sand and the like being pushed up to rise and pressed down to sink as the moving body travels through a snow heap, sand or the like, this step models the sunken portion and the raised portion separately. It is only necessary to ensure that the first model, the second model and the third model have the same height in the depth direction of the motion trajectory; in the lateral directions perpendicular to the depth direction, the second model and the third model can be set according to the shape requirements of the sunken portion and the raised portion.
Specifically, when the second model and the third model are created, the second model and the third model may be separately created, or after the first model is created, the first model may be deformed to obtain the second model and the third model. Optionally, when creating a second model corresponding to the concave portion of the motion trajectory and a third model corresponding to the convex portion of the motion trajectory, the specifically executed steps include:
step S1021: and determining a first amplification factor and a second amplification factor according to the motion track.
Step S1022: and amplifying the first model in the second direction and the third direction according to the first amplification factor to obtain a second model.
The second direction and the third direction are perpendicular to each other and are parallel to the first plane. Specifically, the first amplification factor may be a single numerical value, and the first model is amplified in equal proportion in the second direction and the third direction to obtain the second model, or the first amplification factor includes two numerical values, and the amplification ratio of the first model in the second direction and the amplification ratio of the first model in the third direction are respectively defined, so that the first model is amplified in different amplification ratios in the second direction and the third direction to obtain the second model.
Step S1023: and amplifying the first model in the second direction and the third direction according to a second amplification factor to obtain a third model.
Specifically, the first amplification factor and the second amplification factor are different. In addition, the second amplification factor may be a single numerical value, and the first model is amplified in equal proportion in the second direction and the third direction to obtain the third model, or the second amplification factor includes two numerical values, and the amplification ratio of the first model in the second direction and the amplification ratio of the first model in the third direction are respectively defined, so that the first model is amplified in different amplification ratios in the second direction and the third direction to obtain the third model.
The second model and the third model are created in the above manner, and only the second model and the third model need to be amplified in the transverse direction perpendicular to the depth direction, and specifically, the amplification ratios in the second direction and the third direction can be correspondingly and respectively set according to the concave part and the convex part of the movement track.
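As an illustration of this enlargement, the following minimal sketch (Python with NumPy; not code from the patent, and the scale factors are hypothetical) scales a vertex array in the two lateral directions about its centre while leaving the height direction untouched:

```python
import numpy as np

def enlarge_laterally(vertices, factor_2nd, factor_3rd):
    """Scale vertices in the two lateral directions (X and Z here),
    leaving the depth direction (Y) unchanged, so the enlarged copy
    keeps the same height as the original model."""
    center = vertices.mean(axis=0)
    scaled = vertices - center
    scaled[:, 0] *= factor_2nd  # second direction
    scaled[:, 2] *= factor_3rd  # third direction
    return scaled + center

# A unit cube standing in for the first model (8 corner vertices).
first_model = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
second_model = enlarge_laterally(first_model, 1.3, 1.3)  # hypothetical first factor
third_model = enlarge_laterally(first_model, 1.6, 1.6)   # hypothetical second factor
```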
Step S103: and respectively obtaining a color image obtained after the first model, the second model and the third model move according to the motion trail, and sequentially and correspondingly obtaining a first color image, a second color image and a third color image.
Step S104: and obtaining a depth map obtained after the third model moves according to the motion trail.
In step S103 and step S104, controlling the first model to move according to the motion trajectory, and capturing the color map to obtain a first color map; controlling the second model to move according to the motion trail, and capturing the color image to obtain a second color image; and controlling the third model to move according to the motion trail, capturing the color image to obtain a third color image, and capturing the depth image at the same time.
Optionally, in step S103, when the color map obtained after the first model, the second model, and the third model move according to the motion trajectory is obtained, and the first color map, the second color map, and the third color map are obtained in sequence, the specifically executed steps include:
step S1031: a first camera, a second camera, and a third camera are created, the positions and the viewing directions of which are the same, respectively.
Specifically, three cameras with the same viewing direction are created at the same position, wherein the viewing direction is the depth direction of the motion trajectory, that is, the viewport and the viewing distance of the three cameras are the same.
Step S1032: and arranging the first model in a viewport of the first camera, and moving according to the motion trail to obtain a viewport image of the first camera to obtain a first color image.
Step S1033: and arranging the second model in a viewport of the second camera, and moving according to the motion trail to obtain a viewport image of the second camera to obtain a second color image.
Step S1034: and arranging the third model in a viewport of a third camera, and moving according to the motion trail to obtain a viewport image of the third camera to obtain a third color map.
The three models move according to the same motion trail, each camera correspondingly captures the motion process of one model, and the view port image of each camera is the color image corresponding to one model. Meanwhile, in step S104, a rendering depth map of the third camera is obtained, that is, a depth map is obtained.
Step S105: and calculating a plane projection drawing of the motion trail on the first plane by using the first color drawing, the second color drawing and the third color drawing.
Wherein the first plane is perpendicular to the first direction. The planar projection of the movement locus on the first plane is the top view of the movement locus.
Optionally, in step S105, when a planar projection of the motion trajectory on the first plane is calculated by using the first color map, the second color map, and the third color map, the specifically executed steps include:
step S1051: and subtracting the pixel value of the R channel of the second color image from the pixel value of the G channel of the third color image to obtain a first image.
Wherein the first figure is the edge of the raised portion.
Step S1052: and subtracting the pixel value of the B channel of the first color image from the pixel value of the R channel of the second color image to obtain a second image.
Wherein the second figure is the edge of the recessed portion.
Step S1053: and setting the pixel with the pixel value larger than 0.5 in the second image as 0.5 to obtain a third image.
By the processing of this step S1053, the edge of the ridge portion is distinguished from the edge of the depression portion.
Step S1054: and superposing the first graph and the third graph to obtain a fourth graph.
By the process of this step S1054, the edge of the ridge portion and the edge of the depression portion are superimposed.
Step S1055: and acquiring pixel values of a B channel of the second color image to construct a fifth image.
Step S1056: and taking the fifth image as a mask, and setting the pixel with the pixel value less than 0.5 in the fourth image to be 0.5 to obtain a planar projection image.
The plane projection drawing of the motion trail on the first plane can be obtained through the method.
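As a concrete reading of steps S1051 to S1056, the sketch below (Python with NumPy; an illustration rather than code from the patent, with the mask polarity taken from the description in the second embodiment) performs the channel arithmetic on three H x W x 3 color maps with values in [0, 1]:

```python
import numpy as np

def plane_projection(c1, c2, c3):
    """c1, c2, c3: first/second/third color maps as HxWx3 float arrays in [0, 1].
    Returns an HxW planar projection with 0.5 as the neutral ground level."""
    r2, b2 = c2[..., 0], c2[..., 2]

    first = c3[..., 1] - r2          # G(#3) - R(#2): edge of the raised portion
    second = r2 - c1[..., 2]         # R(#2) - B(#1): edge of the recessed portion
    third = np.minimum(second, 0.5)  # cap at 0.5 to keep it distinct from the ridge
    fourth = first + third           # superpose ridge and recess edges
    fifth = b2                       # mask built from the B channel of #2

    # Where the mask is off (B near 0), floor the result at the neutral 0.5;
    # where the mask is on (B near 1), leave the fourth map untouched.
    return np.where(fifth < 0.5, np.maximum(fourth, 0.5), fourth)
```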
Step S106: and obtaining a fusion map according to the depth map and the plane projection map.
Optionally, when obtaining the fusion map according to the depth map and the planar projection map, the specifically executed steps include: subtracting the depth map by using a preset depth basic value to obtain a sixth map; carrying out fuzzy processing on the plane projection drawing to obtain a seventh drawing; and fusing the sixth graph and the seventh graph to obtain a fused graph.
Specifically, the farther the moving body is from the camera, the greater its depth; the nearer, the smaller. In the depth map, 1 represents a large depth and 0 a small one. With the preset depth base value set to 1 and Reversed-Z depth in use, each depth datum of the depth map is subtracted from 1, so that the result lies between 0 and 1. The planar projection map is blurred so that the edges of the raised and sunken portions transition smoothly, and fusing it with the sixth map yields the fusion map, which contains the data of the motion trail in both the depth direction and the plane directions.
Alternatively, in one embodiment, the intensity of the planar projection is controlled by using the depth map as a mask. As shown in fig. 3, the farther the moving body is from the camera, the greater its depth, and the nearer, the smaller; in the depth map, 1 represents a large depth and 0 a small or zero depth. This embodiment adopts Reversed-Z depth, so that far maps to 0 and near maps to 1: the sixth map is obtained by subtracting each depth datum of the depth map from 1, as shown in fig. 4, and where the moving body is at its minimum depth its value in the sixth map is 1.
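A minimal sketch of step S106 under these conventions follows (Python with NumPy and SciPy; the patent does not name a blur kernel or a fusion operator, so the Gaussian blur and the multiplication below are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse(depth, projection, sigma=2.0):
    """depth: HxW depth map in [0, 1]; projection: HxW planar projection map."""
    sixth = 1.0 - depth                           # Reversed-Z: subtract each depth datum from 1
    seventh = gaussian_filter(projection, sigma)  # blur to soften ridge/recess edges
    return sixth * seventh                        # assumed fusion: depth-weighted intensity
```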
Step S107: And creating a model simulating the horizontal plane where the motion trail is located to obtain a plane model diagram.
Optionally, a model of a horizontal plane in which the simulated motion trajectory is located may be created by using a tessellation shader, so as to obtain a planar model diagram, and increase the number of meshes on the ground.
Step S108: and setting the size of the point in the plane model diagram in the first direction according to the fusion diagram to obtain the motion trail.
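As a sketch of step S108 (Python with NumPy; the nearest-neighbour sampling and the height_scale parameter are illustrative assumptions), the fusion map can be treated as a height field that sets the first-direction coordinate of each vertex of the plane model:

```python
import numpy as np

def displace_plane(grid_xz, fusion, height_scale=1.0):
    """grid_xz: N x 2 array of (x, z) vertex positions in [0, 1] texture space.
    fusion: HxW fusion map. Returns N x 3 displaced vertices (x, y, z)."""
    h, w = fusion.shape
    # Sample the fusion map at each vertex (nearest-neighbour, for brevity).
    u = np.clip((grid_xz[:, 0] * (w - 1)).astype(int), 0, w - 1)
    v = np.clip((grid_xz[:, 1] * (h - 1)).astype(int), 0, h - 1)
    y = (fusion[v, u] - 0.5) * height_scale  # 0.5 taken as the neutral ground level
    return np.column_stack([grid_xz[:, 0], y, grid_xz[:, 1]])
```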
Example two
The second embodiment of the present application provides a method for generating a motion trajectory, which specifically includes the following steps:
step S1: an orthogonal camera (i.e. the first camera described above) is created, as shown in fig. 5 in particular, the rectangular box in fig. 5 represents the field of view of the camera, and the straight line inside the box in fig. 5 represents the ground on which the footprint is to be made.
Step S2: A model corresponding to the object that is to leave the footprint (i.e. the above-mentioned moving body) is created, illustrated in this embodiment as a cube. After the model is created, it is placed in the camera's viewing range and processed as follows, with a sketch of the vertex offset given after this paragraph: the model is copied twice, the copies' model-space vertex positions and normals are transformed into world space, and the vertices are then offset along the XZ components of the world normal (here Y is the up direction by default, namely the axis the camera looks along, i.e. the depth direction of the motion trail). As shown in fig. 6, the smallest cube on the far right is the original cube, i.e. the first model; the other two are obtained by laterally enlarging the first model along the normal direction, the middle one being the second model and the largest the third model. When the second and third models are produced, the lateral enlargement ratio is chosen to match the desired trail effect: the larger the ratio, the larger the affected region.
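The vertex offset of step S2 can be pictured with the following sketch (Python with NumPy; in practice this would run in a vertex shader, and the offset amount is an assumed parameter):

```python
import numpy as np

def inflate_along_normals(vertices, normals, amount):
    """Push each world-space vertex outwards along the X/Z components of its
    world normal, leaving Y (the camera-facing depth direction) unchanged."""
    offset = normals.copy()
    offset[:, 1] = 0.0  # drop the vertical component so the height stays the same
    lengths = np.linalg.norm(offset, axis=1, keepdims=True)
    offset = np.divide(offset, lengths, out=np.zeros_like(offset), where=lengths > 0)
    return vertices + amount * offset
```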
Step S3: Two more cameras (i.e., the second camera and the third camera) are created with the same position and orientation as the first, so the scene contains three cameras with identical parameters. Each camera, however, renders a different model, corresponding one-to-one to the first, second and third models, which yields three color maps plus a depth map for the third model.
Step S4: Each camera's view is rendered to its own RenderTexture, giving three RenderTextures.
Step S5: The trail is rendered. Specifically, for each camera the RenderTexture obtained in step S4 is not cleared during rendering; each new view is written on top of the previous ones, producing the first color map, the second color map and the third color map, respectively.
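The accumulate-without-clearing behaviour of step S5 can be imitated on the CPU as follows (Python with NumPy standing in for the RenderTexture; the stamp, canvas size and trajectory are illustrative only):

```python
import numpy as np

def write_view(canvas, silhouette, top_left):
    """Write one rendered view on top of the canvas without clearing it,
    so earlier positions of the model remain and a trail accumulates."""
    y, x = top_left
    h, w = silhouette.shape
    region = canvas[y:y + h, x:x + w]
    np.maximum(region, silhouette, out=region)  # keep the strongest value ever written

canvas = np.zeros((64, 64))  # stands in for the persistent RenderTexture
stamp = np.ones((8, 8))      # the model's silhouette in the current frame
for step in range(5):        # a straight-line trajectory, for illustration
    write_view(canvas, stamp, (28, 8 * step))
```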
Step S6: For each position of the moving body, the superposition of the first, second and third color maps is shown in fig. 7: the central rectangular block marked 1 corresponds to the first model and represents the original object; the outermost rectangular ring marked 3 corresponds to the third model and represents the raised edge; and the rectangular ring between them corresponds to the second model and represents the recessed edge. Subtracting the R channel of #2 (i.e., the second color map) from the G channel of #3 (i.e., the third color map) gives fig. 8 (i.e., the first map). Subtracting the B channel of #1 (i.e., the first color map) from the R channel of #2 gives the second map, whose maximum is then limited to 0.5 to give fig. 9 (i.e., the third map). The B channel of #2 is taken to construct fig. 10 (i.e., the fifth map).
Step S7: Fusion calculation is then carried out. Fig. 8 and fig. 9 are added to obtain fig. 11 (i.e., the fourth map); then, with fig. 10 as a mask (where its B value is 1 a pixel is left unchanged, and where it is 0 the pixel is affected), the pixels of fig. 11 whose value is less than 0.5 are set to 0.5, giving fig. 12 (i.e., the planar projection map).
Step S8: Depth processing is added to control the intensity of fig. 12, i.e., the depth map is used as a mask so that depth controls intensity. The farther an object is from the camera, the greater its depth; the nearer, the smaller. In the depth map, 1 represents a large depth and 0 a small or zero depth; this embodiment adopts Reversed-Z, so far maps to 0 and near maps to 1, and when the object is at its minimum depth its value in the "depth map" is 1. The corresponding depths in the depth map are continuously written into a RenderTexture and accumulated, yielding the fusion map shown in fig. 13.
Step S9: and establishing a model corresponding to the ground, and applying Tessellation to the ground to obtain a plane model diagram, so that the number of grids on the ground is greatly increased.
Step S10: The fusion map calculated in step S8 is used as a mask for the Y-axis movement of the vertices of the plane model diagram, and a motion trajectory having the concave-convex effect is obtained, as shown in fig. 2.
EXAMPLE III
Corresponding to the first embodiment, a third embodiment of the present application provides a motion trajectory generation device; for details of the technical features, reference may be made to the first embodiment, and they are not repeated here. Fig. 14 is a block diagram of the motion trajectory generation apparatus according to the third embodiment of the present application, and as shown in fig. 14, the apparatus includes: a first creation module 301, a second creation module 302, a first acquisition module 303, a second acquisition module 304, a calculation module 305, a first processing module 306, a third creation module 307, and a second processing module 308.
The first creating module 301 is used for creating a first model for simulating a motion subject; the second creating module 302 is configured to create a second model corresponding to a concave portion of a motion trajectory and a third model corresponding to a convex portion of the motion trajectory, wherein heights of the first model, the second model and the third model in a first direction are the same, and the first direction is a depth direction of the motion trajectory; the first obtaining module 303 is configured to obtain color images obtained after the first model, the second model, and the third model move according to the motion trajectory, and obtain a first color image, a second color image, and a third color image in sequence; the second obtaining module 304 is configured to obtain a depth map obtained after the third model moves according to the motion trajectory; the calculating module 305 is configured to calculate a plane projection diagram of the motion trajectory on a first plane by using the first color map, the second color map, and the third color map, where the first plane is perpendicular to the first direction; the first processing module 306 is configured to obtain a fusion map according to the depth map and the planar projection map; the third creating module 307 is configured to create a model for simulating a horizontal plane where the motion trajectory is located, so as to obtain a plane model diagram; and the second processing module 308 is configured to set the size of a point in the plane model diagram in the first direction according to the fusion diagram, so as to obtain the motion trajectory.
Optionally, in an embodiment, the second creating module 302 includes: the device comprises a determining unit, a first amplifying unit and a second amplifying unit. The determining unit is used for determining a first amplification factor and a second amplification factor according to the motion track; the first amplification unit is used for amplifying the first model in a second direction and a third direction according to the first amplification factor to obtain a second model, wherein the second direction and the third direction are perpendicular to each other and are parallel to the first plane; the second amplifying unit is configured to amplify the first model in the second direction and the third direction according to the second amplification factor to obtain the third model.
Optionally, in an embodiment, the first obtaining module 303 includes: the device comprises a first creating unit, a first acquiring unit, a second acquiring unit and a third acquiring unit. The first creating unit is used for creating a first camera, a second camera and a third camera which are respectively identical in position and observation direction; the first obtaining unit is configured to set the first model in a viewport of the first camera, and obtain a viewport image of the first camera according to the motion trajectory to obtain the first color map; the second obtaining unit is configured to set the second model in a viewport of the second camera, and obtain a viewport image of the second camera according to the motion trajectory, so as to obtain the second color map; and the third obtaining unit is configured to set the third model in a viewport of the third camera, and obtain a viewport image of the third camera according to the motion trajectory, so as to obtain the third color map.
Optionally, in an embodiment, the second obtaining module 304 includes a fourth obtaining unit, configured to obtain a rendered depth map of the third camera, so as to obtain the depth map.
Optionally, in an embodiment, the calculation module 305 includes: the device comprises a first calculating unit, a second calculating unit, a third calculating unit, a fourth acquiring unit and a fifth calculating unit. The first calculating unit is used for subtracting the pixel value of the R channel of the second color image from the pixel value of the G channel of the third color image to obtain a first image; the second calculating unit is used for subtracting the pixel value of the B channel of the first color image from the pixel value of the R channel of the second color image to obtain a second image; the third calculating unit is used for setting the pixel with the pixel value larger than 0.5 in the second image to be 0.5 to obtain a third image; the fourth calculating unit is used for superposing the first graph and the third graph to obtain a fourth graph; the fourth acquisition unit is used for acquiring pixel values of a B channel of the second color image to construct a fifth image; and the fifth calculating unit is used for setting the pixel with the pixel value less than 0.5 in the fourth image as 0.5 to obtain the plane projection image by taking the fifth image as a mask.
Optionally, in an embodiment, the first processing module 306 includes: the device comprises a sixth calculating unit, a fuzzy processing unit and a seventh calculating unit, wherein the sixth calculating unit is used for subtracting the depth map by using a preset depth basic value to obtain a sixth map; the fuzzy processing unit is used for carrying out fuzzy processing on the plane projection graph to obtain a seventh graph; and the seventh calculation unit is used for fusing the sixth graph and the seventh graph to obtain the fused graph.
Optionally, in an embodiment, the third creating module 307 is specifically configured to create a model simulating a horizontal plane where the motion trajectory is located by using a tessellation shader, so as to obtain a planar model map.
Example four
The fourth embodiment further provides a computer device, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server or a cabinet server (including an independent server or a server cluster composed of a plurality of servers) capable of executing programs, and the like. As shown in fig. 15, the computer device 01 of the present embodiment at least includes but is not limited to: a memory 011 and a processor 012 which are communicatively connected to each other via a system bus. It is noted that fig. 15 only shows the computer device 01 having the memory 011 and the processor 012, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
In this embodiment, the memory 011 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 011 can be an internal storage unit of the computer device 01, such as a hard disk or a memory of the computer device 01. In other embodiments, the memory 011 can also be an external storage device of the computer device 01, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device 01. Of course, the memory 011 can also include both internal and external memory units of the computer device 01. In this embodiment, the memory 011 is generally used to store an operating system installed in the computer device 01 and various application software, for example, the program code of the motion trajectory generation apparatus in the third embodiment. Further, the memory 011 can also be used to temporarily store various kinds of data that have been output or are to be output.
The processor 012 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip in some embodiments. The processor 012 is generally used to control the overall operation of the computer device 01. In the present embodiment, the processor 012 is configured to execute a program code stored in the memory 011 or process data, such as a method of generating a motion trajectory.
EXAMPLE five
The fifth embodiment further provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of this embodiment is used to store a motion trajectory generation device, and when executed by a processor, implements the motion trajectory generation method of the first embodiment.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all the equivalent structures or equivalent processes that can be directly or indirectly applied to other related technical fields by using the contents of the specification and the drawings of the present application are also included in the scope of the present application.

Claims (10)

1. A method for generating a motion trail is characterized by comprising the following steps:
creating a first model of a simulated moving body;
creating a second model corresponding to a concave part of the motion trail and a third model corresponding to a convex part of the motion trail, wherein the heights of the first model, the second model and the third model in a first direction are the same, and the first direction is the depth direction of the motion trail;
respectively obtaining color images obtained after the first model, the second model and the third model move according to the motion trail, and sequentially and correspondingly obtaining a first color image, a second color image and a third color image;
obtaining a depth map obtained after the third model moves according to the motion trail;
calculating a plane projection drawing of the motion trail on a first plane by using the first color drawing, the second color drawing and the third color drawing, wherein the first plane is perpendicular to the first direction;
obtaining a fusion map according to the depth map and the plane projection map;
creating a model simulating a horizontal plane where the motion trail is located to obtain a plane model diagram; and
and setting the size of the point in the plane model diagram in the first direction according to the fusion diagram to obtain the motion trail.
2. The method for generating a motion trajectory according to claim 1, wherein the step of creating a second model corresponding to a concave portion of the motion trajectory, and a third model corresponding to a convex portion of the motion trajectory includes:
determining a first amplification factor and a second amplification factor according to the motion track;
amplifying the first model in a second direction and a third direction according to the first amplification factor to obtain a second model, wherein the second direction and the third direction are perpendicular to each other and are parallel to the first plane;
and amplifying the first model in the second direction and the third direction according to the second amplification factor to obtain a third model.
3. The method for generating a motion trail according to claim 1, wherein the step of obtaining the color map obtained by the first model, the second model and the third model after moving according to the motion trail respectively and obtaining the first color map, the second color map and the third color map sequentially comprises:
creating a first camera, a second camera and a third camera which are respectively identical in position and observation direction;
setting the first model in a viewport of the first camera, and moving according to the motion trajectory to obtain a viewport image of the first camera to obtain the first color image;
setting the second model in a viewport of the second camera, and moving according to the motion trajectory to obtain a viewport image of the second camera to obtain the second color image;
and setting the third model in a viewport of the third camera, and moving according to the motion trail to obtain a viewport image of the third camera to obtain the third color map.
4. The method for generating the motion trail according to claim 3, wherein the step of obtaining the depth map obtained after the third model moves according to the motion trail comprises:
and acquiring a rendering depth map of the third camera to obtain the depth map.
5. The method of generating motion trajectories of claim 3, wherein the step of calculating a planar projection of the motion trajectories on a first plane using the first color map, the second color map and the third color map comprises:
subtracting the pixel value of the R channel of the second color image from the pixel value of the G channel of the third color image to obtain a first image;
subtracting the pixel value of the B channel of the first color image from the pixel value of the R channel of the second color image to obtain a second image;
setting the pixel with the pixel value larger than 0.5 in the second image as 0.5 to obtain a third image;
superposing the first graph and the third graph to obtain a fourth graph;
acquiring pixel values of a channel B of the second color image to construct a fifth image;
and setting the pixel with the pixel value less than 0.5 in the fourth image as 0.5 by taking the fifth image as a mask to obtain the plane projection image.
6. The method for generating a motion trajectory according to claim 5, wherein the step of obtaining a fusion map from the depth map and the planar projection map comprises:
subtracting each depth data of the depth map by using a preset depth basic value to obtain a sixth map;
carrying out fuzzy processing on the plane projection drawing to obtain a seventh drawing;
and fusing the sixth graph and the seventh graph to obtain the fused graph.
7. The method for generating a motion trail according to claim 1, wherein the step of creating a model simulating a horizontal plane in which the motion trail is located to obtain a planar model map comprises:
and establishing a model simulating the horizontal plane of the motion trail by using a tessellation shader to obtain a plane model diagram.
8. An apparatus for generating a motion trajectory, comprising:
the first creating module is used for creating a first model for simulating a motion body;
a second creating module, configured to create a second model corresponding to a concave portion of a motion trajectory and a third model corresponding to a convex portion of the motion trajectory, where heights of the first model, the second model, and the third model in a first direction are the same, and the first direction is a depth direction of the motion trajectory;
the first acquisition module is used for respectively acquiring the color images obtained after the first model, the second model and the third model move according to the motion trail, and sequentially and correspondingly obtaining a first color image, a second color image and a third color image;
the second obtaining module is used for obtaining a depth map obtained after the third model moves according to the motion trail;
the calculating module is used for calculating a plane projection drawing of the motion trail on a first plane by utilizing the first color drawing, the second color drawing and the third color drawing, wherein the first plane is perpendicular to the first direction;
the first processing module is used for obtaining a fusion graph according to the depth graph and the plane projection graph;
the third establishing module is used for establishing a model for simulating a horizontal plane where the motion trail is located to obtain a plane model diagram; and
and the second processing module is used for setting the size of the point in the plane model diagram in the first direction according to the fusion diagram to obtain the motion trail.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented by the processor when executing the computer program.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 7.
CN202010920206.1A 2020-09-04 2020-09-04 Motion trajectory generation method and device, computer equipment and readable storage medium Active CN112435312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010920206.1A CN112435312B (en) 2020-09-04 2020-09-04 Motion trajectory generation method and device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010920206.1A CN112435312B (en) 2020-09-04 2020-09-04 Motion trajectory generation method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112435312A CN112435312A (en) 2021-03-02
CN112435312B (en) 2023-04-11

Family

ID=74689960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010920206.1A Active CN112435312B (en) 2020-09-04 2020-09-04 Motion trajectory generation method and device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112435312B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI530909B (en) * 2013-12-31 2016-04-21 財團法人工業技術研究院 System and method for image composition
KR101888872B1 (en) * 2018-05-28 2018-08-16 한국지질자원연구원 A method for analysis technique of fines migration in sediments with multiphase flow, using x-ray images
US11270111B2 (en) * 2019-02-04 2022-03-08 International Business Machines Corporation Automated management of potentially hazardous objects near power lines

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751691A (en) * 2008-12-11 2010-06-23 虹软(杭州)科技有限公司 Image processing method for simulating realistic natural weather effects in film
CN102243770A (en) * 2011-07-19 2011-11-16 南昌航空大学 Method for fast realistic rendering of a naval battlefield based on OSG
CN102722859A (en) * 2012-05-31 2012-10-10 北京像素软件科技股份有限公司 Method for rendering a computer simulation scene
CN108345006A (en) * 2012-09-10 2018-07-31 广稹阿马斯公司 Unit and system for capturing a moving scene
WO2016015544A1 (en) * 2014-07-28 2016-02-04 努比亚技术有限公司 Method and device for shooting the track of a moving object
CN105243682A (en) * 2015-09-25 2016-01-13 翟翊民 Limb element model, character and two-dimensional animation production method
CN106127847A (en) * 2016-06-30 2016-11-16 刘姗姗 Method for generating and suitably rendering real-time landscape-painting feature lines
CN109803575A (en) * 2016-07-22 2019-05-24 爱脉(知识产权)有限公司 Electronic device and method for measuring physiological information
CN110249626A (en) * 2017-10-26 2019-09-17 腾讯科技(深圳)有限公司 Implementation method, apparatus, terminal device and storage medium for augmented reality images
CN110956679A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN109448137A (en) * 2018-10-23 2019-03-08 网易(杭州)网络有限公司 Interaction method, interaction apparatus, electronic device and storage medium
CN109814559A (en) * 2019-01-25 2019-05-28 北京百度网讯科技有限公司 Method and apparatus for controlling excavator excavation
CN110599583A (en) * 2019-07-26 2019-12-20 深圳眸瞳科技有限公司 Unmanned aerial vehicle flight trajectory generation method and device, computer equipment and storage medium
CN110706291A (en) * 2019-09-26 2020-01-17 哈尔滨工程大学 Visual measurement method for the three-dimensional trajectory of a moving object in pool experiments
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and apparatus, computer-readable storage medium and computer device
CN111028323A (en) * 2019-11-27 2020-04-17 深圳奇迹智慧网络有限公司 Method, apparatus and device for simulating water ripples in a map, and readable storage medium
CN111467807A (en) * 2020-05-18 2020-07-31 网易(杭州)网络有限公司 Snow-melting effect rendering method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN112435312A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
CN107251101B (en) Scene modification for augmented reality using markers with parameters
CN107798725B (en) Android-based two-dimensional house type identification and three-dimensional presentation method
CN110163831B (en) Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment
CN110910452B (en) Low-texture industrial part pose estimation method based on deep learning
CN111583381B (en) Game resource map rendering method and device and electronic equipment
CN116012843B (en) Virtual scene data annotation generation method and system
CN110930503A (en) Method and system for establishing three-dimensional model of clothing, storage medium and electronic equipment
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
CN110428504B (en) Text image synthesis method, apparatus, computer device and storage medium
CN115937461A (en) Multi-source fusion model construction and texture generation method, device, medium and equipment
CN115409957A (en) Map construction method based on illusion engine, electronic device and storage medium
CN112435312B (en) Motion trajectory generation method and device, computer equipment and readable storage medium
CN110321184B (en) Scene mapping method and computer storage medium
CN112419460B (en) Method, apparatus, computer device and storage medium for baking model map
CN115409962A (en) Method for constructing coordinate system in illusion engine, electronic equipment and storage medium
TW202312100A (en) Grid generation method, electronic device and computer-readable storage medium
CN116962816B (en) Method and device for setting implantation identification, electronic equipment and storage medium
CN113409385B (en) Characteristic image identification and positioning method and system
CN116645299B (en) Method and device for enhancing depth fake video data and computer equipment
CN117788524A (en) Plane target tracking method, device, equipment and medium based on multitask learning
CN117635634A (en) Image processing method, device, electronic equipment, chip and storage medium
CN112419459A (en) Method, apparatus, computer device and storage medium for baked model AO mapping
CN113144615A (en) 3D scene modeling system from single design picture
CN116012699A (en) Vectorized skeleton extraction method and system for indoor scene point cloud data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant