CN114972583A - User motion trajectory generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114972583A
CN114972583A (application number CN202110203540.XA)
Authority
CN
China
Prior art keywords: user, dimensional, motion, track, generating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110203540.XA
Other languages
Chinese (zh)
Inventor
黄欢
邓明育
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinghong Technology Co ltd
Original Assignee
Shenzhen Jinghong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinghong Technology Co ltd filed Critical Shenzhen Jinghong Technology Co ltd
Priority to CN202110203540.XA priority Critical patent/CN114972583A/en
Publication of CN114972583A publication Critical patent/CN114972583A/en
Pending legal-status Critical Current

Classifications

    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T15/005: General purpose rendering architectures
    • G06T17/05: Geographic models
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/10021: Stereoscopic video; Stereoscopic image sequence
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a device for generating a user motion trail, an electronic device, and a storage medium, belonging to the field of computer technology. The method comprises the following steps: generating static three-dimensional identity information based on an acquired user image; generating dynamic three-dimensional identity information from the static three-dimensional identity information, wherein the dynamic three-dimensional identity information describes a deformable three-dimensional human body model of the user; determining a three-dimensional track route from the user's sampling points and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map; and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, which is then fused with the three-dimensional track map to generate a user motion track video. The user motion track video expresses the user's real image more intuitively and vividly, and a better personalized image and personalized experience can be obtained.

Description

Method and device for generating user motion trail, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a user motion trajectory, an electronic device, and a storage medium.
Background
Along with the diversification of people's lives, more and more people rely on electronic devices to record and share their life tracks, such as running and riding. In an application program on the electronic device, an arrow or a dot representing the user can be seen on a plane map, a line segment records the user's running track from a starting point to an end point, and a track animation video recording the user's motion process can be generated.
The user may be marked in the application as a dot, an arrow, a 2D character graphic, or a 3D character graphic; moving from one dimension to two and then three dimensions makes the trajectory of a past activity more intuitive to the user. However, these user marks cannot represent the user himself truly and vividly, and auxiliary marks are still needed to represent the user more fully; at the same time, the presentation is not intuitive enough, and the user motion track in the prior art is generally displayed as a recording, without a real-time live broadcast function, so personalized image and personalized experience are lacking.
Disclosure of Invention
The invention provides a method and a device for generating a user motion track, an electronic device, and a storage medium, to solve the problems in the prior art that the existing marks of a user cannot well represent the user's life track and that the personalized experience is insufficient.
The invention provides a method for generating a user motion trail, which comprises the following steps:
generating static three-dimensional identity information based on the acquired user image;
generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user;
determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map;
and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
According to the method for generating the user motion trail provided by the invention, after the user motion trail video is generated, the method further comprises the following steps:
acquiring the motion track video of the user;
rendering the user motion track video, wherein the rendering comprises adding one or more combinations of materials, effects, subtitles, icons and music;
and displaying the rendered user motion track video on a preset application program.
According to the method for generating the user motion trail provided by the invention, the rendering processing of the user motion trail video comprises the following steps:
calling one or more pieces of static three-dimensional identity information stored in advance;
and generating a corresponding user motion track video based on the one or more pieces of pre-stored static three-dimensional identity information and the static three-dimensional identity information generated from the user image.
According to the method for generating the user motion trail provided by the invention, the rendering processing of the user motion trail video comprises the following steps:
personalized modification is carried out on the static three-dimensional identity information;
and generating a corresponding user motion track video according to the personalized and modified static three-dimensional identity information.
According to the method for generating the user motion trail provided by the invention, after the user motion trail video is generated, the method further comprises the following steps:
live broadcasting the user motion track video in real time; or
carrying out video recording and playing on the user motion track video.
According to the method for generating the user motion trail provided by the invention, the video playing of the user motion trail video comprises the following steps:
and performing one or more combined operations of editing, forwarding, storing and playing back the user motion track video.
According to the method for generating the user motion trail provided by the invention, the user image comprises a first image and a second image, and the step of generating the static three-dimensional identity information based on the acquired user image comprises the following steps:
acquiring a first image and a second image of a user, wherein the first image comprises user characteristic information, and the second image comprises three-dimensional information corresponding to the first image;
extracting user characteristics from the first image and the second image respectively to generate corresponding user texture information and user three-dimensional information;
and mapping the user texture information to the user three-dimensional information to generate the static three-dimensional identity information.
According to the method for generating the motion trail of the user, provided by the invention, the user texture information is one or more combinations of human face features, human head features and human body features.
According to the method for generating the motion trail of the user, provided by the invention, the three-dimensional information of the user is one or more combinations of human face characteristics, human head characteristics and human body characteristics.
According to the method for generating the motion trail of the user, the second image is obtained by any one of a 3D TOF camera, a 3D structured light camera, a binocular stereo vision camera, a coded structured light camera and a three-dimensional scanner.
According to the method for generating the motion trail of the user, the user image is one or more of a real human face model, a real human head model, a real human body model and an edited real human model which are pre-stored in the system.
According to the method for generating the user motion trail provided by the invention, the step of generating the dynamic three-dimensional identity information according to the static three-dimensional identity information comprises the following steps:
carrying out displacement parameter adjustment on the static three-dimensional identity information to generate a three-dimensional human body model, wherein the displacement parameter is adjusted to a parameter value which is moved from a first coordinate system to a second coordinate system;
and adjusting the motion parameters of the three-dimensional human body model to generate the dynamic three-dimensional identity information, wherein the motion parameters comprise one or more combinations of parameters of mass center motion, parameters of upper body motion and parameters of lower body motion.
According to the method for generating the motion trail of the user provided by the invention, the step of determining the three-dimensional trail route of the sampling point of the user comprises the following steps:
the three-dimensional track route is generated by system preset setting according to user requirements, and the three-dimensional track route describes a track route which a user wants to move; or the three-dimensional track route is generated according to the real-time motion of the user, and the three-dimensional track route describes the track route generated by the real-time motion of the user.
According to the method for generating the user motion trail provided by the invention, the step of determining the three-dimensional trail route of the sampling point of the user, fusing the three-dimensional trail route with the preset map and generating the three-dimensional trail map comprises the following steps:
acquiring geographic information of a user sampling point, sequentially connecting the geographic information of the sampling point to generate a three-dimensional track route, and marking a starting point and an end point of the three-dimensional track route;
mapping data of all sampling points to the preset map, and drawing a connecting line from the starting point to the next adjacent point according to a preset time beat until the connecting line reaches the end point;
and unifying the coordinate system of the sampling point and the coordinate system of the preset map into the same world coordinate system so as to realize the fusion of the three-dimensional track route and the preset map.
According to the method for generating the user motion trail provided by the invention, the step of fusing the dynamic three-dimensional identity information with the three-dimensional trail route to generate the motion trail frame sequence comprises the following steps:
mapping the dynamic three-dimensional identity information to the sampling points to create real figure image information of the user;
and fusing the image information of the real person with the three-dimensional track route to generate the motion track frame sequence.
According to the method for generating the user motion trail provided by the invention, the step of fusing the motion trail frame sequence and the three-dimensional trail map to generate the user motion trail video comprises the following steps:
synthesizing the motion track frame sequence and the three-dimensional track map, wherein the synthesizing comprises binding coordinate data, motion data and physical data between the image information of the real person of the user and the three-dimensional track map;
and zooming the image information of the real person and the three-dimensional track map according to a preset proportion so as to ensure that the frame sequence of the image information of the real person is matched and adjusted with the time sequence of the three-dimensional track map.
The invention also provides a device for generating the motion trail of the user, which comprises:
the user static identity information generation module is used for generating static three-dimensional identity information based on the acquired user image;
the user dynamic identity information generation module is used for generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing a three-dimensional human body model of the user;
the three-dimensional track map generation module is used for determining a three-dimensional track route of a sampling point of a user, fusing the three-dimensional track route with a preset map and generating a three-dimensional track map;
and the user motion track video generation module is used for fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of any one of the methods for generating the motion trail of the user.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for generating a user motion trajectory as described in any one of the above.
According to the method and the device for generating the user motion trail, the electronic device, and the storage medium provided by the invention, the user image is acquired and processed to generate dynamic three-dimensional identity information presenting an augmented reality character image; a three-dimensional trail map is generated from the user's three-dimensional trail route; and the dynamic three-dimensional identity information and the three-dimensional trail map are fused to generate a user motion trail video. The user motion trail video expresses the user's real image more intuitively and vividly, and a better personalized image and personalized experience can be obtained.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for generating a user motion trajectory according to the present invention;
FIG. 2 is a schematic diagram of a user motion trajectory video provided by the present invention;
FIG. 3 is a schematic flow chart of generating static three-dimensional identity information according to the present invention;
FIG. 4 is a schematic diagram of a process for generating dynamic three-dimensional identity information according to the present invention;
FIG. 5 is a schematic diagram of a skeletal tree provided by the present invention;
FIG. 6 is a schematic flow chart of generating a three-dimensional trajectory map according to the present invention;
FIG. 7 is a schematic flow chart of generating a user motion trajectory video according to the present invention;
fig. 8 is a block diagram of a structure of a device for generating a motion trail of a user according to the present invention;
fig. 9 is a block diagram of an electronic device according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and in the claims, and in the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein.
The following describes a method, an apparatus, an electronic device, and a storage medium for generating a user motion trajectory according to the present invention with reference to fig. 1 to 9.
Fig. 1 is a schematic flow chart of a method for generating a user motion trajectory according to the present invention. As shown in fig. 1, the method comprises:
step 101, generating static three-dimensional identity information based on the acquired user image.
Optionally, the user image may be obtained by instant shooting with a camera, or by acquiring an existing user image. The user image may be an image of the user himself, which is more authentic: the user's real image can be presented directly and intuitively from the user image, without adding extra marks, and in a more personalized way.
It is to be understood that the user image of the present invention is not limited to the real image of the user himself, but may be other images, such as cartoon image, landscape image, etc., and the present invention is not limited thereto.
Optionally, the static three-dimensional identity information may be generated in real time, or may be one or more pieces of static three-dimensional identity information stored in advance, and the generated static three-dimensional identity information may be subjected to personalized modifications such as editing and the like, for example, a static model corresponding to the static three-dimensional identity information may be modified, including but not limited to personalized dress up such as adding eyebrows, adding beards, and loading glasses.
It should be noted that the purpose of generating the static three-dimensional identity information in step 101 is to ensure the integrity of the three-dimensional information of the user image, so as to better show the real personal image of the user. It is to be understood that the static three-dimensional identity information may be generated in real time, or may be static three-dimensional identity information pre-stored by the system, and the present invention is not limited thereto.
And 102, generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user.
It should be noted that the static three-dimensional identity information ensures the integrity of the user's three-dimensional information, and the dynamic three-dimensional identity information is then generated from the static three-dimensional identity information. The purpose of generating the dynamic three-dimensional identity information is to enable the three-dimensional human body model it describes to perform deformable motion, wherein the motion of the three-dimensional human body model comprises one or more combinations of centroid motion, upper body motion, and lower body motion. The theorem of motion of the centre of mass is one of the general theorems of dynamics and is not described here.
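As a loose illustration of the motion-parameter grouping named above (centroid motion, upper-body motion, lower-body motion), one might organize the parameters as follows. All names here are hypothetical and not taken from the patent; the centroid step shown is simple uniform motion:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MotionParams:
    # Hypothetical grouping following the three categories in the text.
    centroid_velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))
    upper_body_angles: dict = field(default_factory=dict)  # joint name -> angle (rad)
    lower_body_angles: dict = field(default_factory=dict)

def step_centroid(position: np.ndarray, params: MotionParams, dt: float) -> np.ndarray:
    """Advance the model's centre of mass by one time step (uniform motion)."""
    return position + params.centroid_velocity * dt

# Move the centroid at 1.5 m/s along x for 2 s.
pos = step_centroid(np.zeros(3),
                    MotionParams(centroid_velocity=np.array([1.5, 0.0, 0.0])),
                    dt=2.0)
```

Upper- and lower-body angles would drive the skeletal deformation of the model (see the skeletal tree of fig. 5); only the centroid component is exercised in this sketch.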
Step 103, determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map.
Optionally, the three-dimensional trajectory route may be generated according to user requirements and may be preset by a system, and the three-dimensional trajectory route describes a trajectory route which a user wants to move. For example, when the user uses the invention, a three-dimensional trajectory route which the user wants to move can be preset on the mobile phone app, and the user can also set the three-dimensional trajectory route which the user wants to move into three-dimensional trajectory routes of various shapes, such as a golden cow shape, a duck shape and the like, so that personalized requirements of the user can be met.
Optionally, the three-dimensional trajectory route may also be generated according to the real-time motion of the user, and the three-dimensional trajectory route describes a trajectory route generated by the real-time motion of the user.
Optionally, the user's activity track from the starting point to the end point is connected, through the geographic information of the sampling points and according to the changes in time sequence and spatial position, to generate a three-dimensional track route. The three-dimensional track route represents the actual activity track of the user.
Optionally, the three-dimensional trajectory route is fused with a preset map, specifically, data of all sampling points of the user are mapped onto the preset map, and a coordinate system of the sampling points and a coordinate system of the preset map are unified into the same world coordinate system, so that the fusion of the three-dimensional trajectory route and the preset map is ensured, and the three-dimensional trajectory route of the user can be presented in the preset map.
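The unification of sampling-point coordinates and map coordinates into one world coordinate system can be sketched, for instance, by projecting GPS samples into a local planar frame anchored at the starting point. This is an equirectangular approximation chosen for illustration; the projection and the sample coordinates are assumptions, not taken from the patent:

```python
import math

def to_world_xy(lat, lon, lat0, lon0, R=6_371_000.0):
    """Equirectangular approximation: project a GPS sample into the map's
    local planar world frame anchored at (lat0, lon0), in metres."""
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * R
    return x, y

# Hypothetical sampling points, already ordered by time.
samples = [(22.5431, 114.0579), (22.5440, 114.0590), (22.5452, 114.0601)]
lat0, lon0 = samples[0]                      # anchor the world frame at the start
route = [to_world_xy(lat, lon, lat0, lon0)   # start-to-end polyline in map coords
         for lat, lon in samples]
start, end = route[0], route[-1]
```

Once all samples live in the same planar frame as the preset map, the polyline can simply be drawn over the map, which is the fusion described above.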
And 104, fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
Optionally, a reference value for the normal walking speed of the augmented reality character image presented by the dynamic three-dimensional identity information may be set first. Different motion types can then be distinguished, since the moving speed along the three-dimensional track route is in a positive linear relation with this reference walking speed, and from this relation the motion track frame sequence can be obtained.
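The positive linear relation between the route speed and the walking-speed reference can be sketched as follows. The function names, the 1.4 m/s reference value, and the frame-sampling scheme are illustrative assumptions, not specified by the patent:

```python
def animation_rate(route_speed_mps, reference_walk_mps=1.4):
    """Playback-rate multiplier for the character's walk cycle: positively,
    linearly related to the moving speed along the trajectory route."""
    return route_speed_mps / reference_walk_mps

def frame_positions(route_len_m, speed_mps, fps=30):
    """Distance along the route at each video frame, clamped to the route end;
    one entry per frame of the motion track frame sequence."""
    n = int(route_len_m / speed_mps * fps)
    return [min(i * speed_mps / fps, route_len_m) for i in range(n + 1)]

rate = animation_rate(2.8)  # a 2.8 m/s run plays the walk cycle at twice normal speed
```

Sampling the character's position along the route per frame at this rate yields a frame sequence in which faster motion produces both faster limb animation and faster travel.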
Optionally, the motion track frame sequence is fused with the three-dimensional track map to generate a user motion track video. Specifically, the augmented reality character image of the user and the coordinate data, the motion data, the physical data and the like between the three-dimensional track map are bound, so that the matching adjustment of the reality character image information and the three-dimensional track map is ensured.
It should be noted that, the augmented reality character image presented by the dynamic three-dimensional identity information is fused with the three-dimensional motion track route to generate a motion track frame sequence, and then the motion track frame sequence is fused with the three-dimensional track map to generate the user motion track video, so that the augmented reality character image of the user can be effectively realized, the three-dimensional motion track and the three-dimensional track map are seamlessly connected, the reality sense is stronger, the effect is more personalized, and better personalized experience is achieved.
In an embodiment of the present invention, the method for generating the user motion trajectory further includes the following steps:
the generated user motion trajectory video may be displayed on a preset application program after being rendered (as shown in fig. 2). Fig. 2 shows a map which is a three-dimensional map, in which a three-dimensional character image of a user is positioned in the middle, and a darkened line positioned behind the character image is a three-dimensional movement track route of the user, which is a track connection line from a starting point to a current position.
Optionally, in order to enable the user motion trajectory video to have better personalized experience, rendering processing may be performed according to the generated user motion trajectory video, where the rendering processing includes adding one or more combinations of materials, effects, subtitles, icons, and music.
Specifically, after the user motion track video is generated, additional materials can be displayed together with it, such as added effects, subtitles, icons, music, and control identifiers. If subtitles are added, contents such as the total mileage, elapsed time, average speed, altitude, climbing speed, descending speed, maximum altitude difference, and even the user's heart rate can be displayed.
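The subtitle statistics mentioned above (total mileage, elapsed time, average speed, altitude difference) can be computed directly from the timestamped sampling points, for example as below. The sample layout (time, x, y, altitude) is an assumption for illustration:

```python
import math

def track_stats(samples):
    """samples: list of (t_seconds, x_m, y_m, altitude_m) sampling points,
    ordered by time. Returns subtitle-style statistics."""
    # Sum planar distances between consecutive sampling points.
    dist = sum(math.dist(a[1:3], b[1:3]) for a, b in zip(samples, samples[1:]))
    elapsed = samples[-1][0] - samples[0][0]
    alts = [s[3] for s in samples]
    return {
        "total_mileage_m": dist,
        "elapsed_s": elapsed,
        "avg_speed_mps": dist / elapsed if elapsed else 0.0,
        "max_altitude_diff_m": max(alts) - min(alts),
    }
```

Climbing and descending speeds could be derived the same way from consecutive altitude differences; heart rate would need a separate sensor stream.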
Further, the rendering processing of the user motion trajectory video includes:
and calling one or more pieces of static three-dimensional identity information which are stored in advance, generating a corresponding user motion track video according to the static three-dimensional identity information generated by calling the user image based on the one or more pieces of static three-dimensional identity information which are stored in advance.
For example, for the generated user motion track video, the user may switch to a user motion track video generated with another type of real person model. That is, different pieces of static three-dimensional identity information may be stored in advance, and the static three-dimensional identity information generated from the user image may then be combined with them to synthesize a new real person model, for example a real face combined with a virtual model, or a real head combined with a virtual model. Replacing the real person model in the generated video with the new model yields the corresponding user motion track video, thereby realizing the rendering processing of the user motion track video.
Furthermore, the user motion track video can be displayed not only as a real-time live broadcast but also offline, for example as video playback. In real-time live display, the user can watch in real time, invite friends to participate and interact, and transmit the live stream over the network to another display terminal so that more users can watch in real time. Offline display produces a video clip file after the user finishes the exercise, which the user can view, edit, store, forward, and share.
It should be noted that the user may choose whether or not to render the generated user motion trail video, depending on the user's personalized settings; the display mode of the user motion trail video may likewise be determined by the user's personalized settings, and the embodiment of the invention is not limited in this respect.
The specific implementation of steps 101 to 104 in fig. 1 will be further described below by using specific embodiments.
Fig. 3 is a schematic flow chart of generating static three-dimensional identity information according to the present invention, as shown in fig. 3. In step 101, the user image includes a first image and a second image, and the step of generating the static three-dimensional identity information based on the acquired user image includes:
step 301, acquiring a first image and a second image of a user, where the first image includes user feature information, and the second image includes three-dimensional information corresponding to the first image.
Alternatively, a first image and a second image of the user may be captured by cameras, wherein the first image may be a color image and the second image may be a depth image. The color image is an RGB image photographed by a color camera; each pixel has a color, and the color of a pixel represents the texture information of the user image. The depth image contains a depth value in each pixel; like a gray-scale image, it can store a value from 0 to 255 representing the distance of each pixel point, and the depth information may be captured with any one of a 3D TOF camera, a 3D structured light camera, a binocular stereo vision camera, a coded structured light camera, or a three-dimensional scanner. The depth image is colorless; each pixel represents a point coordinate X, Y, Z, and together these point coordinates form the surface position information of each target in space.
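The back-projection of a depth image into X, Y, Z point coordinates can be sketched with the standard pinhole camera model. The intrinsics fx, fy, cx, cy are assumed inputs here; the patent does not specify a camera model:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres per pixel) into X, Y, Z camera
    coordinates using the pinhole model: X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column / row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3): one XYZ point per pixel

# Toy 2x2 depth image, everything 2 m from the camera, made-up intrinsics.
pts = depth_to_points(np.full((2, 2), 2.0), fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Each pixel of the resulting array holds the surface position that the text describes, which is exactly the user three-dimensional information extracted from the second image.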
Optionally, the first image includes user feature information, and the user feature information may be a human face, or feature information such as a human head or a whole body. The second image includes three-dimensional information corresponding to the first image of the user, and the three-dimensional information of the user is position information of point coordinates of X, Y, and Z included in each pixel.
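To make the per-pixel X, Y, Z coordinates above concrete, the following sketch (not part of the embodiment; the intrinsic parameters fx, fy, cx and cy are assumed to come from camera calibration) back-projects a depth pixel into a 3D point with a standard pinhole camera model:

```python
def depth_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project depth pixel (u, v) with depth z into a 3D point (X, Y, Z).

    Pinhole camera model; fx, fy are focal lengths in pixels and (cx, cy)
    is the principal point. These intrinsics are assumed known from
    calibration -- they are not given in the patent text.
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

Applying this to every valid depth pixel yields the point cloud from which the surface position information of each target is formed.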
Optionally, the user image is obtained in real time, or may be one or a combination of a human face model, a human head model, a human body model and an edited human model of the user pre-stored in the system.
Step 302, extracting user features from the first image and the second image respectively, and generating corresponding user texture information and user three-dimensional information.
Optionally, the user texture information may be one or a combination of human face features, human head features, and human body features; the three-dimensional information of the user can be one or more combinations of human face features, human head features and human body features.
The user texture information refers to the RGB image (color image) shot by the color camera, in which each pixel has a color representing the texture of the object; the user three-dimensional information refers to the position information of the X, Y and Z point coordinates contained in each pixel.
Optionally, extracting the user features from the first image requires at least one frame of image. If a single frame is shot at a fixed preset viewing angle, the contour information of the user features in that frame can be detected from the shot image, and the user texture information within the contour extracted to obtain initial user texture information.
Further, if multiple frames are shot at the preset viewing angle, the contour information of the user features can be detected across frames, the texture information within the contours extracted, and the extracted texture information spliced to obtain more realistic user texture information.
Optionally, the user features are likewise extracted from the second image. If a single frame is shot at a fixed preset viewing angle, the contour information of the user features in that frame can be detected, and the user depth information within the contour segmented to obtain initial user three-dimensional information.
Further, if multiple frames are shot at the fixed viewing angle, the contour information of the user features can be detected across frames, segmented, and the segmented contour information spliced to obtain more realistic user three-dimensional information.
Optionally, multiple frames can also be shot at different preset viewing angles, so as to obtain user texture information and user three-dimensional information covering a larger viewing angle, making both more realistic.
Therefore, the generation of the user texture information and the user three-dimensional information may be performed in a suitable manner according to the actual effect requirement, for example, in a manner of shooting a frame of image at a fixed preset angle, or in a manner of shooting multiple frames of images at different preset angles, which is not limited in the present invention.
Step 303, mapping the user texture information to the user three-dimensional information, and generating the static three-dimensional identity information.
Optionally, the static identity information of the user refers to a computer virtual character image with a three-dimensional model, and is static.
Optionally, after the segmentation is completed, the three-dimensional information of the user is subjected to three-dimensional modeling processing to obtain a three-dimensional model, and then the texture information of the user is mapped to the three-dimensional model of the user. Meanwhile, corresponding filling processing is carried out according to the integrity of the three-dimensional information of the user and the category of the texture information of the user so as to generate the static three-dimensional identity information of the user.
Specifically, after the user three-dimensional information is segmented, the steps of judging whether it is complete are as follows:
Step one, judging whether the segmented user three-dimensional information is complete. If it is incomplete, the incomplete user three-dimensional information is filled by identifying the user texture information and determining its specific category (for example, determining whether it is a human face, a human head, a human body, and the like).
It should be noted that the reason for judging whether the segmented user three-dimensional information is complete is that data loss, a so-called hole, may occur in the three-dimensional signal during acquisition. If data is missing, the three-dimensional model will be distorted when it is formed. Whether the point cloud data of a pixel is missing is therefore determined from the relationship between point clouds of the same frame, the relationship (temporal, spatial and the like) with adjacent previous frames, or the synchronized image of another RGB camera; the missing data is then filled in to restore the point cloud in the hole. Hole filling of the three-dimensional point cloud is a basic step before three-dimensional modeling and is not described in detail here.
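As a minimal illustration of the hole filling described above (a simplified, hypothetical stand-in — a real implementation would also exploit adjacent frames or the synchronized RGB camera), missing depth values can be filled from valid neighbours in the same frame:

```python
def fill_depth_holes(depth, missing=0):
    """Fill missing depth pixels (holes) with the mean of valid 8-neighbours.

    `depth` is a 2D list of depth values; cells equal to `missing` are
    treated as holes. Holes with no valid neighbour are left untouched.
    """
    rows, cols = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] != missing:
                continue
            neighbours = [
                depth[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c) and depth[rr][cc] != missing
            ]
            if neighbours:
                out[r][c] = sum(neighbours) / len(neighbours)
    return out
```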
And step two, after the three-dimensional information of the user is complete, three-dimensional modeling processing can be carried out to obtain an initial three-dimensional model of the user.
And step three, mapping the user texture information to the user initial three-dimensional model to generate the user static three-dimensional identity information.
It should be noted that the static three-dimensional identity information of the user includes the real identity information of the user, and default predefined skin information may also be set. The real identity information refers to information inside the contour of the user image, and the non-real identity information refers to information outside the contour of the user image, such as filled-in information. If the shot user image only contains the face information of the user, the information other than the face is non-real identity information; if the shot user image only contains the head information, the information other than the head is non-real identity information; if the shot user image is whole-body information, there is no non-real identity information.
Optionally, if non-real identity information needs to be set, the user image may be skinned: predefined skin information is adapted to the non-real identity information, overlaps the non-real identity information, and is spliced with the real identity information.
Further, the skin information may also include other defined skin information, which may be predefined or defined later. By changing the skin information of the image, the user can obtain more personalized choices, and the personalized experience is better.
The skin information includes, but is not limited to, eyebrow, hair, clothes, and the like.
Fig. 4 is a schematic flow chart of generating dynamic three-dimensional identity information according to the present invention. As shown in fig. 4, in step 102, the step of generating dynamic three-dimensional identity information according to the static three-dimensional identity information includes:
step 401, performing displacement parameter adjustment on the static three-dimensional identity information to generate a three-dimensional human body model, wherein the displacement parameter adjustment is a parameter value moving from a first coordinate system (namely, a current coordinate system) to a second coordinate system (namely, a next coordinate system).
In order to make the user image in the application program more realistic, more activity parameters need to be given to the static three-dimensional identity information generated above.
Specifically, the step of generating the three-dimensional human body model includes:
Step one, performing skeletonization processing on the static three-dimensional identity information to obtain a skeleton tree and movable nodes. The skeleton tree is formed by dividing the body and limbs of a human body into different sections and then connecting the sections at the movable nodes into the shape of a tree (as shown in fig. 5).
The skeleton tree mainly comprises basic components such as the head, the body trunk, the left arm, the left forearm, the right arm, the right forearm, the left thigh, the left shank, the right thigh and the right shank. Each bone and node changes position according to the input in its local coordinate system, and a child bone changes position along with its parent bone. For example, when the trunk of the body rotates, the left arm rotates along with the body, and the left forearm rotates along with the left arm.
And step two, marking the relationship from the parent node to the child node according to the sequence of the skeleton tree from top to bottom.
Optionally, as a specific labeling method: if the trunk is taken as the main trunk, the shoulder is labeled as a parent node and the arm as its child node; when the arm is labeled as a parent node, the wrist is labeled as its child node. That is, a child node moves with its parent node, but the parent node does not necessarily move when a child node moves.
And step three, endowing each movable node with transformation matrix parameters synthesized by translation components and rotation components to obtain a deformable three-dimensional human body model, and simultaneously respectively obtaining a human body coordinate system and a local coordinate system of each joint.
The translation component comprises the movement position parameters of the movable node on the x, y and z axes of its local coordinate system, limited within a reasonable range according to the actual situation; when the value of a translation component equals the median of its range, the movable node is regarded as stationary.
The rotation component is a rotation quaternion parameter of the movable node on the x, y, z and w axes of the corresponding local coordinate system, and is limited within a reasonable range according to practical situations, and when the w value of the rotation component is equal to the median value, the rotation component is regarded as not rotating.
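The node parameters of steps one to three can be sketched as follows; this is an illustrative data structure, not the embodiment's actual implementation, with the per-axis translation range and the "median means stationary" convention taken from the description above:

```python
from dataclasses import dataclass, field

@dataclass
class MovableNode:
    """A skeleton-tree node with clamped translation and rotation parameters.

    Translation (x, y, z) and rotation quaternion (x, y, z, w) live in the
    node's local coordinate system; translation values are limited to
    `t_range`, and a translation equal to the median of that range means
    the node is stationary. Child nodes move along with their parent.
    """
    name: str
    t_range: tuple = (-1.0, 1.0)  # allowed translation range per axis (assumed)
    translation: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    rotation: list = field(default_factory=lambda: [0.0, 0.0, 0.0, 1.0])
    children: list = field(default_factory=list)

    def set_translation(self, x, y, z):
        # Clamp each component into the reasonable range.
        lo, hi = self.t_range
        self.translation = [min(max(v, lo), hi) for v in (x, y, z)]

    def is_stationary(self):
        lo, hi = self.t_range
        median = (lo + hi) / 2.0
        return all(v == median for v in self.translation)
```

A parent node (e.g. the shoulder) would hold its child (the arm) in `children`, so that applying the parent transform implicitly carries the child along.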
Step 402, adjusting the motion parameters of the three-dimensional human body model to generate the dynamic three-dimensional identity information, wherein the motion parameters comprise one or more combinations of parameters of mass center motion, parameters of upper body motion and parameters of lower body motion.
For a human body model, embodiments of the present invention may divide the overall body motion into a centroid motion, an upper body motion and a lower body motion. Since the motion of the human body is a relative motion between limbs, it can be regarded as a periodic, continuous and stable coordinated motion. From kinematic analysis, the trajectory of the center of mass of a human body in walking motion approximates a smooth sine curve; as applied in this method, however, only the horizontal displacement and the vertical displacement need to be acquired. On this basis, the upper body motion and the lower body motion can be seen as swings, i.e. each motion posture is a combination of spatial angles between adjacent limbs.
Taking the lower body motions as an example, the three most basic motions of walking, running and stepping are all similar and can each be simplified as one stride consisting of two single steps, so the motion frequency f is 2. The movement speed P equals the stride length SL divided by the movement period, and the time taken to complete one stride SL (i.e. from the start of one pose to the next occurrence of the same pose) is the movement period TC. According to the kinematic analysis, in linear walking motion the swing foot does not move at a constant speed: it is slowest at the beginning and end of the swing phase and fastest in the middle period. A smooth curve function describing the swing ankle motion trajectory with a quadratic Bezier curve is:
B(u) = (1 - u)^2 · P0 + 2u(1 - u) · P1 + u^2 · P2, u ∈ [0, 1]

wherein u ∈ [0, 1] represents the position ratio along the curve, P0 and P2 are the starting and ending points of the curve, and P1 is the control point, obtained by back-solving from the coordinates of the starting point, the end point and the highest point passed by the foot during the pedaling motion. This yields the position coordinates of the ankle, i.e. the parameters of the lower body motion.
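The quadratic Bezier ankle trajectory and the back-solving of the control point P1 from the starting point, end point and highest point can be sketched as follows (assuming, for illustration, that the highest point is reached at u = 0.5, which holds when the starting and ending points are at equal height):

```python
def bezier2(p0, p1, p2, u):
    """Evaluate the quadratic Bezier B(u) = (1-u)^2*P0 + 2u(1-u)*P1 + u^2*P2."""
    return tuple(
        (1 - u) ** 2 * a + 2 * u * (1 - u) * b + u ** 2 * c
        for a, b, c in zip(p0, p1, p2)
    )

def control_from_apex(p0, p2, apex):
    """Back-solve the control point P1 so that B(0.5) equals the apex point.

    B(0.5) = 0.25*P0 + 0.5*P1 + 0.25*P2  =>  P1 = 2*apex - 0.5*(P0 + P2).
    """
    return tuple(2 * m - 0.5 * (a + c) for a, c, m in zip(p0, p2, apex))
```

Sampling `bezier2` at increasing u then gives successive ankle positions along the swing, naturally slow near the endpoints of the arc and fast in the middle.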
In addition, from the coordinated relationship between the swinging of the hands and the legs, the angle function of the front-back swing completed in one period is

φ = h · a + θ

wherein φ is the back-and-forth swing angle of the shoulder joint, h is a scale factor, a is the back-and-forth swing angle of the hip joint of the leg on the opposite side of the body, and θ is the offset. This yields the parameters of the upper body motion.
In summary, the dynamic three-dimensional identity information is generated by adjusting the motion parameters of the three-dimensional human body model, wherein the motion parameters can be obtained by the above method.
Fig. 6 is a schematic flow chart of generating a three-dimensional trajectory map according to the present invention. As shown in fig. 6, in step 103, the step of determining a three-dimensional trajectory route of the sampling points of the user, and fusing the three-dimensional trajectory route with a preset map to generate a three-dimensional trajectory map, includes:
Step 601, acquiring the geographic information of the user sampling points, sequentially connecting the geographic information of the sampling points to generate a three-dimensional trajectory route, and marking the starting point and the end point of the three-dimensional trajectory route. The specific steps are as follows:
step one, acquiring geographic information of a user.
The embodiment of the invention can communicate with one or more of a satellite, a base station and other instruments through equipment (such as electronic equipment, a mobile phone and the like) carried by a user to acquire the geographic information of the user, wherein the geographic information comprises information such as longitude, latitude or altitude.
And step two, establishing a coordinate system with time and space.
When the user selects start, the system records the start time, simultaneously acquires the current geographic information as the geographic coordinate origin, establishes a coordinate system with time and space, and marks the coordinate origin as the start position, obtaining the start coordinate (X0, Y0, Z0, T0). Then, at preset intervals of time or/and longitude or/and latitude, the coordinates (Xn, Yn, Zn, Tn; where n = 1, 2, … k) of further sampling points are obtained in turn; once the user selects end, the last sampling point is marked with the end coordinate and end time. Sequentially connecting the k + 1 coordinate points yields the three-dimensional trajectory route.
And step three, calculating the speed of any two sampling points.
The speed between any two sampling points can be calculated: from any two coordinates (Xn, Yn, Zn) and (Xn-1, Yn-1, Zn-1), the straight-line distance Sn between the two sampling points is obtained, and the ratio of Sn to the time difference between the two sampling points gives the speed Vn in that time period. Finally, the straight-line distances of all sampling points are added to obtain the total distance S of the track, the total time t is obtained by subtracting the start time from the end time, and the total distance S divided by the total time t gives the average speed v.
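The speed calculation of step three can be sketched as follows, with each sampling point given as a tuple (x, y, z, t) in meters and seconds (an assumed representation):

```python
import math

def trajectory_stats(samples):
    """Per-segment speeds, total distance and average speed for (x, y, z, t) samples.

    Sn is the straight-line distance between consecutive sampling points,
    Vn = Sn / (tn - tn-1); the average speed is total distance / total time.
    """
    seg_speeds = []
    total = 0.0
    for (x0, y0, z0, t0), (x1, y1, z1, t1) in zip(samples, samples[1:]):
        sn = math.dist((x0, y0, z0), (x1, y1, z1))
        total += sn
        seg_speeds.append(sn / (t1 - t0))
    avg = total / (samples[-1][3] - samples[0][3])
    return seg_speeds, total, avg
```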
Furthermore, the sampling points are obtained jointly from time and space. The representation unit of time is the second; the default representation unit of space is the meter or kilometer, and the unit is switched automatically through a spatial threshold, i.e. the meter is used when the distance from the starting point to the end point is less than 1 kilometer, and the kilometer otherwise. In order to reduce the huge data processing brought by an increase of sampling points, time sampling can be reduced according to certain rules. Time sampling is very accurate, but spatial sampling has large errors, ranging from tens of centimeters to tens of meters, so a flexible sampling strategy is needed: meter-level interval sampling is used when the represented distance is less than ten meters, ten-meter interval sampling when it is less than a hundred meters, and hundred-meter interval sampling when it exceeds a hundred meters, so that sufficiently accurate sampling data can be obtained.
Step 602, mapping data of all sampling points to the preset map, and drawing a connection line from the starting point to the next adjacent point according to a preset time beat until the connection line reaches the end point.
Specifically, after the starting point and the end point are determined, the intermediate sampling points need to be supplemented or deleted according to the characteristic distance precision. If the characteristic distance precision is meter-level and the distance between adjacent points exceeds the meter level, points are supplemented by combining the relationship between the motion speed and the time interval. If the characteristic distance precision is ten-meter-level, points closer than 10 meters to their neighbors are deleted, and gaps exceeding ten meters are filled by combining the relationship between the motion speed and the time interval. If the characteristic distance precision is hundred-meter-level, points closer than 100 meters are deleted, and gaps exceeding a hundred meters are filled likewise. This process optimizes redundant or missing sampling points to obtain continuous, equally spaced adjacent points, after which all sampling points from the starting point to the end point can be connected to obtain the three-dimensional trajectory route.
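The supplement/delete optimization above can be sketched as a simple resampling to roughly equidistant points (an illustrative simplification: points closer than the precision to the last kept point are dropped, and longer gaps are filled by linear interpolation rather than by the speed-time relationship):

```python
import math

def resample(points, spacing):
    """Optimize a sampled route to (roughly) equidistant points.

    Points closer than `spacing` to the last kept point are dropped
    (redundant samples); gaps longer than `spacing` are filled with
    interpolated points (missing samples). `spacing` must be positive.
    """
    out = [points[0]]
    for p in points[1:]:
        while True:
            last = out[-1]
            d = math.dist(last, p)
            if d < spacing:
                break  # too close to the last kept point: drop p
            # Insert a point exactly `spacing` away from `last`, towards p.
            ratio = spacing / d
            out.append(tuple(l + ratio * (q - l) for l, q in zip(last, p)))
    if out[-1] != points[-1]:
        out.append(points[-1])  # always keep the end point
    return out
```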
Step 603, unifying the coordinate system of the sampling point and the coordinate system of the preset map into the same world coordinate system, so as to realize the fusion of the three-dimensional track route and the preset map.
Specifically, data of all sampling points are mapped to a preset map, connection drawing is carried out on the data and the next adjacent point according to the time and the beat from the starting point until the time and the beat reach the end point, and then the coordinate system of the sampling points and the coordinate system of the preset map are unified into the same world coordinate system.
In summary, the coordinate system of the sampling point and the coordinate system of the preset map are unified into the same world coordinate system, so that the three-dimensional trajectory route and the preset map are fused.
Fig. 7 is a schematic flow chart of generating a user motion trajectory video provided by the present invention, and as shown in fig. 7, in the step 104, the dynamic three-dimensional identity information is fused with the three-dimensional trajectory route to generate a motion trajectory frame sequence, and the motion trajectory frame sequence is fused with the three-dimensional trajectory map to generate the user motion trajectory video, including:
step 701, mapping the dynamic three-dimensional identity information to the sampling points to create the image information of the real person of the user.
Specifically, the centroid position of the three-dimensional human body model of the user is mapped to the sampling point and relatively offset according to the size of the three-dimensional human body model, and the human body coordinate system of the three-dimensional human body model of the user is then bound to the world coordinate system to create the real character image information of the user. The body of the human body is perpendicular to the ground surface, and the orientation of the human face is consistent with the direction of the line connecting adjacent points.
It should be noted that the relative offset according to the body size of the three-dimensional human body model is needed because, if the center of mass of the human body model is located at the waist while the reference point of the motion trajectory is at the heel, the centers of mass are misaligned; if they were forcibly overlapped, the lower half of the human body model would sink into the ground and look unreal. The foot surface of the human body model must be placed on the ground surface with the body perpendicular to the ground, so the embodiment of the present invention sets an offset parameter to adjust the intersection with different surfaces, making the real character image of the user more realistic.
And 702, fusing the image information of the real person with the three-dimensional track route to generate the motion track frame sequence.
Specifically, it is assumed that the walking speed of a normal person is about 1.2 m/s, the normal running speed is about 3 times the walking speed, and the normal riding speed is about 4 times the walking speed. The uphill speed is about half the walking speed, and the downhill speed about twice the walking speed.
Assuming the walking speed of a normal person as the reference, the normal walking motion frequency of the augmented reality character image of the user defaults to 4 steps/second. Other motion types can use the positively correlated linear relation between the moving speed along the three-dimensional trajectory route and the normal walking motion frequency; for example, if the running motion speed is 4 times the normal walking speed, the running motion frequency equals 16 steps/second. The motion track frame sequence is thus obtained.
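The linear relation between route speed and step frequency can be sketched as follows, using the assumed constants from the text (walking speed 1.2 m/s, default walking frequency 4 steps/second):

```python
WALK_SPEED = 1.2       # m/s, assumed normal walking speed
WALK_STEP_FREQ = 4.0   # steps/s, default walking step frequency

def step_frequency(speed):
    """Step frequency from route speed, using the positive linear relation
    between movement speed and the normal walking step frequency."""
    return WALK_STEP_FREQ * (speed / WALK_SPEED)
```

As in the text, a running speed of 4 times the walking speed yields a frequency of 16 steps/second; the coefficient per motion type could be updated through static configuration.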
It should be noted that the step frequency and the linear relationship coefficient of any operation type can be updated through static configuration, and are not described in detail herein.
And 703, synthesizing the motion track frame sequence and the three-dimensional track map, wherein the synthesizing process comprises binding the real character image information of the user with coordinate data, motion data and physical data between the three-dimensional track map.
Specifically, the motion track frame sequence and the three-dimensional track map are subjected to synthesis processing, including binding of coordinate data, motion data, physical data and the like between the augmented reality character image information of the user and the three-dimensional track map, so that the augmented reality character image of the user is ensured to stand on the ground surface and an interference phenomenon is avoided.
Step 704, zooming the real character image information and the three-dimensional track map according to a preset scale to ensure that the frame sequence of the real character image information is matched and adjusted with the time sequence of the three-dimensional track map.
Specifically, the augmented reality character image information of the user and the three-dimensional track map are scaled in space according to a certain size proportion, and the two are linked when the map is zoomed: the character image is enlarged as the map is enlarged and reduced as it is reduced. In time, the frame sequence of the augmented reality character image information of the user is matched and adjusted with the time sequence of the three-dimensional track map.
It should be noted that the spatial parameters are configured because the map can be enlarged or reduced; if the human body model remained unchanged, phenomena such as a relatively large sense of distortion and incongruity would occur. Therefore, through the spatial parameter configuration function, a suitable proportion can be set for automatic following, and the user is also allowed to set the zoom scale himself.
Similar to the spatial parameter configuration, the time parameter is configured because, if the user motion track video is live in real time, its duration cannot be predicted, so a suitable proportion can be set in real time for the motion swing to follow automatically, maintaining a rhythmic motion feeling. When the user motion track video is finished, however, the duration is fixed; some videos are very long and some very short, and playing all of them at normal speed would produce phenomena such as a large sense of distortion and incongruity. The user is therefore allowed to compress the time to a short duration (such as tens of seconds or minutes) to represent the original actual time, which increases the compression of the track and accelerates the motion swing; time parameters are thus needed to reduce the acquisition points and the swing to a reasonable state.
The above-mentioned spatial parameters and temporal parameters may be updated by static configuration, which is not described in detail herein.
The following describes the user motion trajectory generation device provided by the present invention, and the user motion trajectory generation device described below and the user motion trajectory generation method described above may be referred to in correspondence with each other.
Fig. 8 is a block diagram of the structure of a user motion trajectory generation device according to the present invention. As shown in fig. 8, a user motion trajectory generation device 800 comprises a user static identity information generating module 801, a user dynamic identity information generating module 802, a three-dimensional trajectory map generating module 803 and a user motion trajectory video generating module 804.
A user static identity information generating module 801, configured to generate static three-dimensional identity information based on the obtained user image.
Specifically, the static identity information generating module 801 is configured to obtain a first image and a second image of a user, where the first image includes user feature information, the second image includes three-dimensional information corresponding to the first image, and the user image includes the first image and the second image; extracting user characteristics from the first image and the second image respectively to generate corresponding user texture information and user three-dimensional information; and mapping the user texture information to the user three-dimensional information to generate the static three-dimensional identity information.
Further, the user static identity information generating module 801 is configured to obtain the user static identity information after processing the identity image information of the user. The user static identity information is augmented reality character image information obtained by synthesizing the real identity information of the user with the identity information of a virtual character. The real identity information of the user includes at least one or more of a human face, a human head and a human body, so that an observer can readily tell who the augmented reality object represents without any additional supplementary marks. The real identity information of the user can be acquired by shooting the user to obtain a color image and a depth image, processing the two images respectively to obtain the augmented reality three-dimensional model information of the user, and then fusing it with the three-dimensional model information of the virtual character, thereby obtaining a combination of real character information and virtual character information.
A user dynamic identity information generating module 802, configured to generate dynamic three-dimensional identity information according to the static three-dimensional identity information, where the dynamic three-dimensional identity information is information describing a three-dimensional human body model of the user.
Specifically, the user dynamic identity information generating module 802 is configured to perform displacement parameter adjustment on the static three-dimensional identity information to generate a three-dimensional human body model, where the displacement parameter is adjusted to a parameter value that moves from a first coordinate system to a second coordinate system; and adjusting the motion parameters of the three-dimensional human body model to generate the dynamic three-dimensional identity information, wherein the motion parameters comprise one or more combinations of parameters of mass center motion, parameters of upper body motion and parameters of lower body motion.
Further, the user dynamic identity information generating module 802 is configured to process the augmented reality character image of the user with motion parameter data to obtain a motion mechanism with a real character. The augmented reality character image of the user with the movement capability is obtained by skeletonizing the augmented reality character image information of the user according to character kinematics to obtain a skeleton tree which mainly comprises a head, a body trunk, a left arm, a left forearm, a right arm, a right forearm, a left thigh, a left calf, a right thigh, a right calf and the like, marking the skeleton and the nodes, establishing a coordinate system related to the human body, and finally determining corresponding movement parameters according to the relation among the skeleton tree and the nodes.
The three-dimensional trajectory map generation module 803 is configured to determine a three-dimensional trajectory route of a sampling point of a user, and fuse the three-dimensional trajectory route with a preset map to generate a three-dimensional trajectory map.
Specifically, the three-dimensional trajectory map generating module 803 is configured to obtain geographic information of a user sampling point, sequentially connect the geographic information of the sampling point to generate a three-dimensional trajectory route, and mark a start point and an end point on the three-dimensional trajectory route; mapping data of all sampling points to the preset map, and drawing a connecting line from the starting point to the next adjacent point according to a preset time beat until the connecting line reaches the end point; and unifying the coordinate system of the sampling point and the coordinate system of the preset map into the same world coordinate system so as to realize the fusion of the three-dimensional track route and the preset map.
Further, the three-dimensional trajectory map generating module 803 is configured to acquire and process time information and spatial information simultaneously while the user is moving, so as to obtain the three-dimensional trajectory route. When the user moves, a portable electronic device acquires time and spatial information at the same time; samples are collected from the start point through the intermediate adjacent points to the end point according to a given strategy, and redundant sampling points are deleted or missing sampling points are filled in, so that every sampling point carries both time and spatial information and is equivalent to the user's actual movement. Finally, the sampling points are connected in chronological order to obtain the three-dimensional trajectory line.
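The cleanup strategy described above — deleting redundant sampling points and filling in missing ones — can be sketched as follows, assuming each sample is a `(t, x, y, z)` tuple. Deduplication by repeated position and midpoint interpolation are illustrative choices; the patent leaves the strategy open.

```python
def clean_samples(samples, max_gap):
    """Drop consecutive samples with an unchanged position and interpolate
    a midpoint into any time gap larger than max_gap.

    Assumes a non-empty, time-ordered list of (t, x, y, z) samples; a real
    implementation might interpolate more than one point per gap.
    """
    # remove redundant (repeated-position) samples
    deduped = [samples[0]]
    for s in samples[1:]:
        if s[1:] != deduped[-1][1:]:
            deduped.append(s)
    # fill missing samples by midpoint interpolation
    filled = [deduped[0]]
    for prev, cur in zip(deduped, deduped[1:]):
        if cur[0] - prev[0] > max_gap:
            filled.append(tuple((a + b) / 2 for a, b in zip(prev, cur)))
        filled.append(cur)
    return filled
```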
Further, the three-dimensional trajectory map generating module 803 is configured to synthesize the three-dimensional trajectory line with the user identity information to obtain the user motion trajectory. The user's static or dynamic identity information is mapped onto the three-dimensional trajectory line, and the human-body coordinate system is bound to the world coordinates of the trajectory line so that the front of the human body stays aligned with the direction of the trajectory; in addition, the limb motion parameters of the human body are determined so that the arms and legs swing in a coordinated, rhythmic manner as the body moves along the three-dimensional trajectory.
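Keeping the front of the body aligned with the trajectory direction amounts to deriving a heading from consecutive route points. A minimal sketch, assuming a yaw-only alignment in the horizontal plane (the patent does not restrict the alignment to yaw):

```python
import math

def heading_along_route(p_cur, p_next):
    """Yaw angle in radians that faces the model toward the next route
    point, so its front orientation follows the trajectory direction."""
    dx = p_next[0] - p_cur[0]
    dy = p_next[1] - p_cur[1]
    return math.atan2(dy, dx)
```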
Further, the three-dimensional trajectory map generating module 803 is configured to synthesize the three-dimensional trajectory line with the preset map to obtain the three-dimensional trajectory map: the trajectory line is mapped into the world coordinate system of the preset map and bound to it, and a connecting line is drawn from the start point to the end point through the sampling points of the trajectory line in chronological order.
The user motion track video generating module 804 is configured to fuse the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and to fuse the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
Specifically, the user motion trajectory video generating module 804 is configured to map the dynamic three-dimensional identity information onto the sampling points to create real-person image information of the user; fuse the real-person image information with the three-dimensional track route to generate the motion track frame sequence; synthesize the motion track frame sequence with the three-dimensional track map, the synthesis including binding the coordinate data, motion data, and physical data between the user's real-person image information and the three-dimensional track map; and scale the real-person image information and the three-dimensional track map at a preset ratio so that the frame sequence of the real-person image information stays matched to the time sequence of the three-dimensional track map.
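Matching the frame sequence of the real-person image information to the time sequence of the three-dimensional track map can be sketched as a nearest-timestamp lookup. This alignment strategy is an assumption; the patent only requires that the two sequences be matched and adjusted.

```python
def match_frames_to_map(frame_times, map_times):
    """For each map timestamp, pick the index of the nearest character
    frame so the two time sequences stay aligned during synthesis."""
    return [min(range(len(frame_times)),
                key=lambda i: abs(frame_times[i] - t))
            for t in map_times]
```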
Further, the user motion trajectory video generating module 804 is configured to fuse the user's augmented-reality character image with the three-dimensional track map to obtain the user trajectory video. The coordinate parameters, motion parameters, physical parameters, and the like of the augmented-reality character image are bound to the three-dimensional track map so that the human body does not interfere with the map, and the video data is generated according to a consistent temporal and spatial relation.
In an embodiment of the present invention, the apparatus for generating a user motion trajectory further includes a user trajectory video display module.
The user track video display module is configured to acquire the user motion track video; render the user motion track video, where the rendering includes adding one or more of materials, effects, subtitles, icons, and music; and display the rendered user motion track video in a preset application program.
Specifically, the user trajectory video display module is configured to render the user trajectory video, adding further content including, but not limited to, text, pictures, expressions, music, and effects as the situation requires, so that the user trajectory video is vivid and expressive.
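The rendering step above — attaching one or more of materials, effects, subtitles, icons, music, and the like to the video — can be sketched as below. The record layout, the overlay kinds, and the `render_track_video` helper are hypothetical, not part of the patent.

```python
def render_track_video(video, overlays):
    """Attach overlay assets to a track-video record before display,
    rejecting overlay kinds the renderer does not know about.

    The allowed set mirrors the content kinds named in the text and is
    an illustrative assumption.
    """
    allowed = {"material", "effect", "subtitle", "icon", "music",
               "text", "picture", "expression"}
    unknown = {o["kind"] for o in overlays} - allowed
    if unknown:
        raise ValueError(f"unsupported overlay kinds: {sorted(unknown)}")
    rendered = dict(video)  # leave the source record untouched
    rendered["overlays"] = list(overlays)
    return rendered
```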
In this way, the augmented-reality character image presented by the dynamic three-dimensional identity information is fused with the three-dimensional motion track route to generate the motion track frame sequence, and the frame sequence is then fused with the three-dimensional track map to generate the user motion track video. This effectively achieves seamless connection among the user's augmented-reality character image, three-dimensional motion track, and three-dimensional track map, yielding a stronger sense of realism and a better personalized experience.
Fig. 9 illustrates a schematic structural diagram of an electronic device. As shown in Fig. 9, the device may include a processor (processor) 910, a communications interface (Communications Interface) 920, a memory (memory) 930, and a communication bus 940, where the processor 910, the communications interface 920, and the memory 930 communicate with one another via the communication bus 940, and the memory 930 stores user identity information, track information, map information, and other motion-related video data. The processor 910 may invoke logic instructions in the memory 930 and is configured to process the user's identity information, track information, map information, and other motion-related video data, including but not limited to performing a method for generating a motion trajectory of the user, the method including:
generating static three-dimensional identity information based on the acquired user image;
generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user;
determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map;
and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
Furthermore, the logic instructions in the memory 930 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product that is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions, which when executed by a computer, enable the computer to perform the method for generating a motion trajectory of a user provided by the above methods, the method comprising:
generating static three-dimensional identity information based on the acquired user image;
generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user;
determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map;
and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to perform the method for generating a motion trajectory of a user provided in the above aspects, the method comprising:
generating static three-dimensional identity information based on the acquired user image;
generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user;
determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map;
and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (19)

1. A method for generating a motion trail of a user is characterized by comprising the following steps:
generating static three-dimensional identity information based on the acquired user image;
generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing the deformable three-dimensional human body model of the user;
determining a three-dimensional track route of a sampling point of a user, and fusing the three-dimensional track route with a preset map to generate a three-dimensional track map;
and fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
2. The method for generating a user motion trail according to claim 1, wherein after generating the user motion trail video, the method further comprises:
acquiring the motion track video of the user;
rendering the user motion track video, wherein the rendering comprises adding one or more combinations of materials, effects, subtitles, icons and music;
and displaying the rendered user motion track video on a preset application program.
3. The method for generating the user motion trail according to claim 2, wherein rendering the user motion trail video comprises:
calling one or more pieces of static three-dimensional identity information stored in advance;
and generating a corresponding user motion track video based on the one or more pieces of static three-dimensional identity information which are stored in advance and according to the static three-dimensional identity information generated by calling the user image.
4. The method for generating the user motion trail according to claim 2, wherein the rendering process of the user motion trail video comprises:
performing personalized modification on the static three-dimensional identity information;
and generating a corresponding user motion track video according to the personalized and modified static three-dimensional identity information.
5. The method for generating a user motion trail according to claim 1, wherein after generating the user motion trail video, the method further comprises:
live broadcasting the user motion track video in real time; or
carrying out video recording and playback of the user motion track video.
6. The method of claim 5, wherein the recording and playing the video of the user motion trajectory comprises:
and performing one or more combined operations of editing, forwarding, storing and playing back the user motion track video.
7. The method for generating the motion trail of the user according to claim 1, wherein the user image comprises a first image and a second image, and the step of generating the static three-dimensional identity information based on the acquired user image comprises:
acquiring a first image and a second image of a user, wherein the first image comprises user characteristic information, and the second image comprises three-dimensional information corresponding to the first image;
extracting user characteristics from the first image and the second image respectively to generate corresponding user texture information and user three-dimensional information;
and mapping the user texture information to the user three-dimensional information to generate the static three-dimensional identity information.
8. The method according to claim 7, wherein the user texture information is one or more of a human face feature, a human head feature, and a human body feature.
9. The method for generating the motion trail of the user according to claim 7, wherein the three-dimensional information of the user is one or more combinations of human face features, human head features and human body features.
10. The method for generating the motion trail of the user according to claim 7, wherein the second image is obtained by any one of a 3D TOF camera, a 3D structured light camera, a binocular stereo vision camera, a coded structured light camera, and a three-dimensional scanner.
11. The method for generating the motion trail of the user according to claim 1, wherein the user image is one or more of a real human face model, a real human head model, a real human body model, and an edited real human model of the user pre-stored in the system.
12. The method for generating a motion trail of a user according to claim 1, wherein the step of generating dynamic three-dimensional identity information according to the static three-dimensional identity information comprises:
carrying out displacement parameter adjustment on the static three-dimensional identity information to generate a three-dimensional human body model, wherein the displacement parameter is adjusted to a parameter value which is moved from a first coordinate system to a second coordinate system;
and adjusting the motion parameters of the three-dimensional human body model to generate the dynamic three-dimensional identity information, wherein the motion parameters comprise one or more combinations of parameters of mass center motion, parameters of upper body motion and parameters of lower body motion.
13. The method for generating a motion trail of a user according to claim 1, wherein the step of determining a three-dimensional trail route of the sampling points of the user comprises:
the three-dimensional track route is generated by system preset setting according to user requirements, and the three-dimensional track route describes a track route which a user wants to move; or the three-dimensional track route is generated according to the real-time motion of the user, and the three-dimensional track route describes the track route generated by the real-time motion of the user.
14. The method for generating the motion trail of the user according to claim 1, wherein the step of determining a three-dimensional trail route of the sampling point of the user and fusing the three-dimensional trail route with a preset map to generate a three-dimensional trail map comprises the following steps:
acquiring geographic information of a user sampling point, sequentially connecting the geographic information of the sampling point to generate a three-dimensional track route, and marking a starting point and an end point of the three-dimensional track route;
mapping data of all sampling points to the preset map, and drawing a connecting line from the starting point to the next adjacent point according to a preset time beat until the connecting line reaches the end point;
and unifying the coordinate system of the sampling point and the coordinate system of the preset map into the same world coordinate system so as to realize the fusion of the three-dimensional track route and the preset map.
15. The method for generating a motion trail according to claim 1, wherein the step of fusing the dynamic three-dimensional identity information with the three-dimensional trail route to generate a motion trail frame sequence comprises:
mapping the dynamic three-dimensional identity information to the sampling points to create real figure image information of the user;
and fusing the image information of the real person with the three-dimensional track route to generate the motion track frame sequence.
16. The method for generating a user motion trail according to claim 15, wherein the step of fusing the motion trail frame sequence with the three-dimensional trail map to generate a user motion trail video comprises:
synthesizing the motion track frame sequence and the three-dimensional track map, wherein the synthesizing comprises binding coordinate data, motion data and physical data between the image information of the real person of the user and the three-dimensional track map;
and zooming the image information of the real person and the three-dimensional track map according to a preset proportion so as to ensure that the frame sequence of the image information of the real person is matched and adjusted with the time sequence of the three-dimensional track map.
17. An apparatus for generating a motion trajectory of a user, the apparatus comprising:
the user static identity information generation module is used for generating static three-dimensional identity information based on the acquired user image;
the user dynamic identity information generation module is used for generating dynamic three-dimensional identity information according to the static three-dimensional identity information, wherein the dynamic three-dimensional identity information is information describing a three-dimensional human body model of the user;
the three-dimensional track map generation module is used for determining a three-dimensional track route of a sampling point of a user, fusing the three-dimensional track route with a preset map and generating a three-dimensional track map;
and the user motion track video generation module is used for fusing the dynamic three-dimensional identity information with the three-dimensional track route to generate a motion track frame sequence, and fusing the motion track frame sequence with the three-dimensional track map to generate a user motion track video.
18. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method for generating a motion profile of a user according to any one of claims 1 to 16 when executing the program.
19. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for generating a trajectory of motion of a user according to any one of claims 1 to 16.
CN202110203540.XA 2021-02-23 2021-02-23 User motion trajectory generation method and device, electronic equipment and storage medium Pending CN114972583A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110203540.XA CN114972583A (en) 2021-02-23 2021-02-23 User motion trajectory generation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114972583A true CN114972583A (en) 2022-08-30

Family

ID=82973394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110203540.XA Pending CN114972583A (en) 2021-02-23 2021-02-23 User motion trajectory generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114972583A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385608A (en) * 2023-06-05 2023-07-04 广州悦跑信息科技有限公司 Running route track reproduction method of virtual character
CN116385608B (en) * 2023-06-05 2023-08-18 广州悦跑信息科技有限公司 Running route track reproduction method of virtual character

Similar Documents

Publication Publication Date Title
CN103106604B (en) Based on the 3D virtual fit method of body sense technology
CN107274469A (en) The coordinative render method of Virtual reality
CN113012282B (en) Three-dimensional human body reconstruction method, device, equipment and storage medium
KR20180100476A (en) Virtual reality-based apparatus and method to generate a three dimensional(3d) human face model using image and depth data
Roth et al. A simplified inverse kinematic approach for embodied VR applications
CN111729283B (en) Training system and method based on mixed reality technology
CN109671141B (en) Image rendering method and device, storage medium and electronic device
JP2012181688A (en) Information processing device, information processing method, information processing system, and program
CN108564642A (en) Unmarked performance based on UE engines captures system
CN107185245B (en) SLAM technology-based virtual and real synchronous display method and system
CN109242950A (en) Multi-angle of view human body dynamic three-dimensional reconstruction method under more close interaction scenarios of people
CN106095094A (en) The method and apparatus that augmented reality projection is mutual with reality
Gonzalez-Franco et al. Movebox: Democratizing mocap for the microsoft rocketbox avatar library
CN108564643A (en) Performance based on UE engines captures system
CN114821675B (en) Object processing method and system and processor
JP4695275B2 (en) Video generation system
CN107862718A (en) 4D holographic video method for catching
US20140249789A1 (en) Virtual testing model for use in simulated aerodynamic testing
CN114972583A (en) User motion trajectory generation method and device, electronic equipment and storage medium
EP4160545A1 (en) Three-dimensional avatar generation device, three-dimensional avatar generation method, and three-dimensional avatar generation program
CN109829960A (en) A kind of VR animation system interaction method
JP2020071718A (en) Information processing device, information processing method, and program
CN206039650U (en) Mutual application system of architectural design based on virtual reality
CN106960467A (en) A kind of face reconstructing method and system with bone information
Ami-Williams et al. Digitizing traditional dances under extreme clothing: The case study of eyo

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination