CN114979708A - Video pushing method and device, server equipment and readable storage medium - Google Patents

Video pushing method and device, server equipment and readable storage medium

Info

Publication number
CN114979708A
Authority
CN
China
Prior art keywords
athlete
target
rendering
real
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210555431.9A
Other languages
Chinese (zh)
Other versions
CN114979708B (en)
Inventor
周吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Digital Media Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Digital Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd, China Mobile Communications Group Co Ltd, MIGU Digital Media Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202210555431.9A priority Critical patent/CN114979708B/en
Publication of CN114979708A publication Critical patent/CN114979708A/en
Application granted granted Critical
Publication of CN114979708B publication Critical patent/CN114979708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a video pushing method and apparatus, a server device, and a readable storage medium, belonging to the field of communication technology. The video pushing method of the embodiments of the application comprises the following steps: determining an athlete's motion trajectory according to the athlete's real-time position data during a race; rendering a pre-established model according to the motion trajectory to obtain a rendering result, where the pre-established model is a three-dimensional model of race-related objects; converting the rendering result into a target video stream; and pushing the target video stream. In this way, a live view from the athlete's perspective can be presented in the event broadcast, improving the spectators' viewing experience.

Description

Video pushing method and device, server equipment and readable storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video pushing method, a video pushing device, a server device and a readable storage medium.
Background
In the prior art, broadcast images of snowmobile/sled races are produced mainly by relaying pictures collected by fixed cameras at the venue. However, fixed cameras cannot track an athlete's movement in real time, so spectators only see fixed pictures and scenes from a few positions, which leads to a poor viewing experience.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video pushing method, an apparatus, a server device, and a readable storage medium, so as to solve the problem of poor spectator viewing experience in current event broadcasts.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, a video push method is provided, including:
determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process;
rendering a pre-established model according to the motion track to obtain a rendering result, wherein the pre-established model is a three-dimensional model of a game related object;
and converting the rendering result into a target video stream, and pushing the target video stream.
Optionally, the determining the motion trajectory of the athlete according to the real-time position data of the athlete in the competition process includes:
acquiring real-time position data of the athlete in the current track area;
correcting a first motion track according to the real-time position data to obtain the motion track of the athlete in the current track area; the first motion trail is a predicted motion trail of the athlete in the current track area based on real-time position data of the athlete in a last track area of the current track area.
Optionally, before the pre-established model is rendered according to the motion trajectory and a rendering result is obtained, the method further includes:
determining a target perspective of the athlete during the race;
the rendering the pre-established model according to the motion trail to obtain a rendering result comprises the following steps:
and rendering the pre-established model according to the motion track and the target view angle to obtain the rendering result.
Optionally, the determining the target view angle of the athlete during the game comprises any one of:
determining a true perspective of the athlete during the race;
and under the condition that the target action occurs to the athlete, determining the target visual angle according to the angular speed and the angular acceleration of the athlete before exceeding the shooting range.
Optionally, after determining the real viewing angle of the athlete during the game, when the real viewing angle is greater than a preset viewing angle threshold, the determining the target viewing angle of the athlete during the game further includes:
and correcting the real visual angle by utilizing the preset visual angle threshold value to obtain the target visual angle.
Optionally, the modifying the real view angle by using the preset view angle threshold to obtain the target view angle includes: and calculating to obtain the target visual angle beta by adopting the following formula:
[formula published only as an image in the original document: β is computed from the preset view-angle threshold f and the real view angle θ]
wherein f represents the preset view angle threshold, and θ represents the real view angle.
Optionally, the pushing the target video stream includes:
and mixing the target video stream with other video streams and then pushing the mixed video streams to a client, wherein the other video streams are game video streams of the athlete collected by a camera.
Optionally, the pre-established model includes at least one of: a three-dimensional model of a track, a three-dimensional model of a sports implement, and a three-dimensional model of the athlete.
In a second aspect, a video push apparatus is provided, including:
the first determination module is used for determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process;
the rendering module is used for rendering a pre-established model according to the motion track to obtain a rendering result, wherein the pre-established model is a three-dimensional model of a game related object;
and the pushing module is used for converting the rendering result into a target video stream and pushing the target video stream.
In a third aspect, a server device is provided, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a readable storage medium is provided, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In the embodiments of the application, the athlete's motion trajectory can be determined from the athlete's real-time position data during the race; a pre-established model, which is a three-dimensional model of race-related objects, is rendered according to the motion trajectory to obtain a rendering result; the rendering result is converted into a target video stream, and the target video stream is pushed, the target video stream being a race video stream based on the athlete's perspective. In this way, a live view from the athlete's perspective can be presented in the event broadcast on the basis of virtual reality techniques, shortening the distance between spectators and athletes, strengthening the spectators' sense of immersion, and improving their viewing experience.
Drawings
Fig. 1 is a flowchart of a video push method provided in an embodiment of the present application;
FIG. 2 is a schematic view of an athlete viewing angle in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video pushing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
Optionally, the applicable scenarios of the embodiments of the present application include, but are not limited to, racing scenarios such as snowmobiles/skis.
The video pushing method, the video pushing apparatus, the server device and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of a video pushing method provided in an embodiment of the present application, where the method is applied to a server device, and as shown in fig. 1, the method includes the following steps:
step 11: and determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process.
In this embodiment, the athlete's real-time position data during the race may be determined by arranging several groups of ultrasonic devices along the track, for example three devices per group. Positioning and tracking are based on ultrasound: a transmitting end (an ultrasonic device) emits a sound-wave signal and simultaneously notifies the receiving end (for example, a receiver carried by the athlete) via a radio-frequency signal to start timing; when the receiving end receives the sound wave, timing stops, and the distance between the transmitting end and the receiving end is the speed of sound multiplied by the elapsed time t. The distances between the three transmitting ends and the receiving end are calculated in turn. Let the receiving end's coordinates be (x, y, z); the coordinates of the three transmitting ends and their distances d to the receiving end are known from the installation layout, so three equations of the form (x − x')² + (y − y')² + (z − z')² = d² can be written from the distance formula between two points. Solving this system of equations yields the coordinates, i.e. the athlete's real-time position data during the race.
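The position solve described above can be illustrated with a short Python sketch. It is not the patent's implementation: the beacon layout, the times of flight, the fixed speed of sound, and the use of a nonlinear least-squares solver are assumptions standing in for whatever equation-set solver the system actually uses.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 °C; a real deployment would calibrate for track-side air temperature


def locate_receiver(transmitters, times_of_flight, initial_guess):
    """Estimate the receiver position (x, y, z) from three ultrasonic
    time-of-flight measurements against transmitters at known coordinates.

    transmitters:    (3, 3) array of transmitter coordinates, in metres
    times_of_flight: length-3 array of measured propagation times, in seconds
    initial_guess:   rough starting position, e.g. the previous fix on the track
    """
    p = np.asarray(transmitters, dtype=float)
    d = SPEED_OF_SOUND * np.asarray(times_of_flight, dtype=float)

    # Residuals of the three sphere equations (x - x')^2 + (y - y')^2 + (z - z')^2 = d^2
    def residuals(xyz):
        return np.linalg.norm(p - xyz, axis=1) - d

    return least_squares(residuals, np.asarray(initial_guess, dtype=float)).x


# Illustrative beacon group around one curve, with plausible times of flight
beacons = [(0.0, 0.0, 2.0), (4.0, 0.0, 2.0), (2.0, 3.0, 2.0)]
print(locate_receiver(beacons, times_of_flight=[0.0078, 0.0087, 0.0070],
                      initial_guess=(2.0, 1.0, 0.5)))
```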
In some embodiments, a camera, such as a high speed black and white camera, may be provided on the ultrasonic devices disposed in the track for capturing motion image data of the athlete.
Step 12: and rendering the pre-established model according to the motion track to obtain a rendering result.
Optionally, the pre-established model is a three-dimensional model of race-related objects and may include, but is not limited to, at least one of the following: a three-dimensional model of the track, a three-dimensional model of the sports equipment, and a three-dimensional model of the athlete. Taking a snowmobile/sled competition as an example, because snowmobiles and sleds share the same track in an event, differing only in starting point, and the track itself is fixed, point-cloud data can be captured with a lidar and a map built using a lidar-odometry-and-mapping (LOAM) SLAM method, yielding a three-dimensional (3D) model of the track. For the 3D models of the sports equipment and the athlete, body and equipment data can be obtained in advance, mainly the athlete's height, weight, helmet dimensions, upper- and lower-limb and face measurements, together with the snowmobile/sled's length, width, weight, and height above the ground; 3D modelling with these data produces the models of the equipment and the athlete. In addition, the model's viewing angle can be set according to eye height and fine-tuned according to the athlete's head tilt, so that the rendered track scene reflects the real athlete's point of view.
It should be noted that the three-dimensional model of the race-related objects may be built using existing modeling approaches, such as polygon modeling, parametric modeling, reverse modeling, or NURBS surface modeling, without limitation.
In some embodiments, rendering may use a real-time 3D rendering tool such as UE (Unreal Engine) or Unity. After the athlete's motion trajectory and target viewing angle are obtained, an athlete's-eye picture close to the real race can be rendered with the 3D rendering tool, and the rendered result is converted into a video stream and output to the live-broadcast side in real time.
Step 13: and converting the rendering result into a target video stream, and pushing the target video stream.
In this embodiment, after the movement track of the athlete is determined, the visual range of the athlete is basically determined, so that the three-dimensional model of the game related object is rendered according to the movement track of the athlete, and the rendering result is converted into the target video stream, so that the game video stream based on the visual angle of the athlete can be obtained. The target video stream is a game video stream based on the view angle of the athlete.
Optionally, when the target video stream is pushed, the target video stream and other video streams may be pushed to the client after being mixed, and the other video streams are game video streams of athletes acquired by the camera, so that the spectators can watch the game videos from multiple angles conveniently.
In some embodiments, the target video stream may be presented to viewers as the live stream of an additional camera position, or merged with the video streams of the other live camera positions into a single stream. The target video stream based on the athlete's perspective can be played in a video widget or shown to the audience as a live stream; for the mixed stream, a viewer can switch streams to watch the target video stream computed in real time.
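As a sketch of the conversion-and-push step (turning rendered frames into a live stream), the snippet below pipes raw frames from the rendering engine into ffmpeg, which encodes them and pushes an RTMP stream. The patent does not name a streaming tool or protocol, so ffmpeg, the RTMP URL, the resolution and frame rate, and the encoder settings are all illustrative assumptions; mixing with the other camera streams is not shown.

```python
import subprocess
import numpy as np


def open_rtmp_pipe(width: int, height: int, fps: int, rtmp_url: str) -> subprocess.Popen:
    """Start an ffmpeg process that encodes raw RGB frames written to its stdin
    and pushes them to an RTMP ingest point as an H.264/FLV live stream."""
    cmd = [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                                  # frames arrive on stdin
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-f", "flv", rtmp_url,
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)


# One rendered frame from the 3D engine (here just a black placeholder image)
pipe = open_rtmp_pipe(1920, 1080, 50, "rtmp://example.invalid/live/athlete_view")
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
pipe.stdin.write(frame.tobytes())
```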
According to the video pushing method above, the athlete's motion trajectory can be determined from the athlete's real-time position data during the race, a pre-established model is rendered according to the motion trajectory to obtain a rendering result, the rendering result is converted into a target video stream, and the target video stream, which is a race video stream based on the athlete's perspective, is pushed. In this way, a live view from the athlete's perspective can be presented in the event broadcast on the basis of virtual reality techniques, shortening the distance between spectators and athletes, strengthening the spectators' sense of immersion, and improving their viewing experience.
Furthermore, the video pushing method provided by the embodiments of the application requires no camera mounted on the athlete and does not depend on external cameras: a near-real race video is obtained by computation, the impact of extra equipment on the athlete is reduced, and the whole race can be watched continuously instead of as segments cut between camera positions.
Optionally, in order to obtain a reasonable motion trajectory of the athlete, the process of determining the athlete's motion trajectory may include: first, acquiring the athlete's real-time position data in the current track area, for example based on the ultrasonic devices arranged along the track; then, correcting a first motion trajectory according to the real-time position data to obtain the athlete's motion trajectory in the current track area, where the first motion trajectory is the athlete's motion trajectory in the current track area as predicted from the athlete's real-time position data in the previous track area. In this way, through prediction followed by correction, the athlete's motion trajectory can be computed continuously, a reasonable trajectory is obtained, and sufficient time is left for subsequent rendering.
In some embodiments, trajectory prediction may use a semi-supervised generative adversarial network (SGAN) model, i.e. an optimized variant of the generative adversarial network (GAN) in which a module that generates the latent-code distribution is added to the generator structure of the original GAN to improve prediction accuracy.
For example, taking a ski race as an example: the athlete's average speed is about 90 km/h, the track is about 1.5 to 2 km long, and there are about 15 to 20 curves of different angles and inclinations. The athlete's position and motion trajectory on each straight and each curve are obtained from the multi-beacon ultrasonic position computation. Before the athlete enters the next track area, the motion trajectory for that area is computed in advance by a pre-trained prediction model; after the athlete reaches that area, the predicted key-point trajectory is corrected in advance using the key points acquired by the ultrasonic devices. A reasonable motion trajectory is thus available before the relay camera would capture the athlete, leaving sufficient time for rendering.
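As a rough illustration of this predict-then-correct step, the sketch below corrects a predicted segment trajectory against ultrasonic key points. The SGAN predictor itself is out of scope here, and the linear error-spreading correction, the array shapes, and the names are illustrative assumptions rather than the patent's correction scheme.

```python
import numpy as np


def correct_segment(predicted_points: np.ndarray,
                    measured_keypoints: np.ndarray,
                    keypoint_indices: np.ndarray) -> np.ndarray:
    """Blend a trajectory predicted for the current track area with the key
    points actually measured there by the ultrasonic beacons.

    predicted_points:   (N, 3) trajectory predicted before the athlete entered
                        the area (the patent uses an SGAN-style predictor; any
                        predictor fits this interface)
    measured_keypoints: (K, 3) positions computed from ultrasonic fixes
    keypoint_indices:   (K,) sorted indices of the predicted samples that the
                        fixes correspond to
    """
    corrected = predicted_points.astype(float).copy()
    errors = measured_keypoints - corrected[keypoint_indices]
    n = len(corrected)
    for axis in range(3):
        # Spread each key-point error smoothly over the neighbouring samples
        corrected[:, axis] += np.interp(np.arange(n), keypoint_indices, errors[:, axis])
    return corrected
```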
In the embodiment of the present application, after the motion trail of the athlete is determined, the visual range of the athlete is substantially determined, however, the visual range is the head-up range of the athlete, and generally has a large difference from the actual visual angle of the athlete. Therefore, in order to obtain a more real competition video stream based on the view angle of the athlete, when the three-dimensional model of the competition related object is rendered according to the movement track of the athlete, the rendering process can be corrected by using the view angle of the athlete in the competition process.
Optionally, before the pre-established model is rendered according to the athlete's motion trajectory, the athlete's target viewing angle during the race may be determined; the pre-established model is then rendered according to the motion trajectory and the target viewing angle to obtain the rendering result, which is converted into the target video stream and pushed. In this way, the athlete's target viewing angle during the race is taken into account in rendering instead of defaulting to a head-up view, and a realistic race video stream based on the athlete's perspective can be obtained.
Optionally, the determining the target view angle of the athlete during the game may include: the actual viewing angle of the athlete during the game is determined. Therefore, the match video stream can be generated according to the real visual angle of the athlete in the process of the match, so that the substitution feeling of the audience is improved.
In an actual race the athlete is usually scanning the surroundings rapidly; if the captured athlete's-view picture were shown to users exactly as is, it would change very quickly and look poor to spectators. A preset viewing-angle threshold f can therefore be introduced: when the athlete's real viewing angle exceeds the threshold, the real viewing angle is corrected and the race video stream is generated from the corrected angle, so that the spectators' viewing experience is not affected.
Optionally, when the actual viewing angle of the athlete is greater than the preset viewing angle threshold, the determining the target viewing angle of the athlete during the game may include: and correcting the real visual angle of the athlete by utilizing a preset visual angle threshold value to obtain the target visual angle of the athlete in the competition process. The target perspective may then be used to generate a game video stream, thereby avoiding affecting the viewer's look and feel.
In some embodiments, when the real view angle θ of the athlete is corrected by using the preset view angle threshold f, the target view angle β may be calculated by using the following formula:
[formula published only as an image in the original document: β is computed from the preset view-angle threshold f and the real view angle θ]
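Because the exact correction formula is published only as an image, the sketch below substitutes a simple compression of the part of the angle that exceeds the threshold; it reproduces only the stated behaviour (angles at or below f pass through, larger angles are reined in) and is an assumption, not the patent's formula.

```python
import math


def soften_view_angle(theta_deg: float, f_deg: float) -> float:
    """Limit fast swings of the broadcast viewing angle.

    Angles within the preset threshold f pass through unchanged; the part of
    the real angle theta that exceeds f is compressed so the output grows
    slowly instead of jumping.  This mapping is an assumption, not the
    patent's formula.
    """
    if abs(theta_deg) <= f_deg:
        return theta_deg
    excess = abs(theta_deg) - f_deg
    softened = f_deg + 0.5 * f_deg * math.tanh(excess / f_deg)
    return math.copysign(softened, theta_deg)
```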
in some embodiments, such as a ski game scenario, after obtaining the predicted player's trajectory, the player's field of view is substantially determined, but the actual viewing angle is also related to the player's head-up height and steering at that time. The helmet can be worn by athletes in sled sports, and the athletes can not clearly obtain facial data of the athletes to calculate visual angles in high-speed sports, but the sled sports helmet has specific specifications, a black fixing position exists in the middle of the head of the helmet, and data of the helmet, the eyes and the nose bridge can be collected in advance. As shown in fig. 2, if a fixed position in the middle of the head of the helmet is taken as a center, instead of the position of the nose bridge, the distance between the centers of the two eyes of the person is defined as a, an included angle between the extension line of the fixed position and the centers of the two eyes is defined as b, and the offset of the intersection point of the fixed position and the centers of the two eyes relative to the midpoint of the two eyes is defined as c, then: when b is 0 and c is 0, the player's view angle θ is f (a, 0, 0) is 0, that is, the person is looking straight, to calculate the deviation of the view angle, the dark position (i.e., the fixed position and the eyeball) is locked by the high-speed black-and-white camera, the projection position of the fixed position in the whole helmet is d (d can be obtained by rotating the circle because the helmet is circular), and θ is corrected to eliminate the view angle influenced by the deviation of the helmet, where θ is f (a, b, c) + d, for example, θ is calculated to be 15 °, but the actual human eye deviation is 20 ° after the helmet is rotated by 5 ° again; in order to assist the view angle drift caused by the fact that the face cannot be accurately captured at high speed, FaceNet pre-training data can be used, face data are generated in advance to increase the calculation speed, face state data can be quickly obtained after the player appears in a high-speed camera, after the view angle of the player is obtained, because the player is in motion and finally adds the projection angle e of the sled, the real view angle theta of the player is f (a, b, c) + d + e, if the projection angle of the sled is 180 degrees when the length of the sled steel rail is 1 degrees and the projection angle of the sled steel rail is 0 degrees when the length of the sled steel rail is 0 degrees, when the length of the sled steel rail in a picture is sequentially changed from 0 to 1 degrees, the corresponding projection angle can be linearly obtained, and if the length of the sled steel rail is 0.5 degrees, the corresponding projection angle is 90 degrees. By the aid of the view angle and corner data, the data can be converted into modeling data, and then the view angle obtained by motion trajectory prediction is corrected, so that a real view angle range is obtained.
Optionally, in order to cope with various abnormal race situations, determining the athlete's target viewing angle during the race may include: in the case where a target action occurs to the athlete (for example, the athlete deviates from the track), determining the target viewing angle from the athlete's angular velocity and angular acceleration before the athlete exceeded the shooting range. It should be noted that various abnormal states occur in a real race, so abnormal-scene detection needs to be added for the athlete. When the athlete makes a mistake (for example, in a sled race, the athlete's body is detected to have left the sled, or the sled has left the track), the limit of the preset viewing-angle threshold f is removed in order to give the audience a stronger visual impact, and the race picture is expressed through the violent shaking of the real viewing angle. Because the athlete may also be shaking violently in this case and the face cannot be captured completely in real time, the athlete's viewing angle θ within a preset time before the anomaly (for example 1 s) can be sampled, e.g. 50 times at 0.02 s intervals, and the corresponding angular velocity ω = dθ/dt and angular acceleration dω/dt calculated. While the athlete is in a position the cameras cannot observe, the athlete's actual viewing angle is advanced using the final angular velocity and angular acceleration recorded before the shooting range was exceeded, until the athlete appears in the race picture again. If the athlete later leaves the camera range again, the process can be repeated to determine the viewing angle until the athlete stabilises.
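A minimal sketch of the out-of-view handling described above, assuming evenly spaced samples and a constant-acceleration model over the short gap; the sample values and function name are illustrative.

```python
import numpy as np


def extrapolate_view_angle(theta_samples, dt: float, t_gone: float) -> float:
    """Carry the viewing angle forward while the athlete is outside camera
    coverage, using the last angular velocity and angular acceleration
    estimated from the pre-anomaly samples (e.g. 50 samples at 0.02 s).

    theta_samples: viewing angles (degrees) sampled just before coverage was lost
    dt:            sampling interval in seconds
    t_gone:        time elapsed since the athlete left the shooting range
    """
    theta = np.asarray(theta_samples, dtype=float)
    omega = np.gradient(theta, dt)      # angular velocity, d(theta)/dt
    alpha = np.gradient(omega, dt)      # angular acceleration, d(omega)/dt
    # Constant-acceleration extrapolation from the final sample
    return theta[-1] + omega[-1] * t_gone + 0.5 * alpha[-1] * t_gone ** 2


samples = np.linspace(20.0, 45.0, 50)   # hypothetical 1 s of samples at 0.02 s
print(extrapolate_view_angle(samples, dt=0.02, t_gone=0.3))
```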
It should be noted that the video pushing method provided in the embodiments of the present application may be executed by a video pushing apparatus, or by a control module in the video pushing apparatus for executing the video pushing method. The embodiments of the present application describe the video pushing apparatus by taking the case where the video pushing apparatus executes the video pushing method as an example.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a video pushing apparatus according to an embodiment of the present application, where the video pushing apparatus is applied to a server device, and as shown in fig. 3, the video pushing apparatus 30 includes:
the first determining module 31 is used for determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process;
the rendering module 32 is configured to render a pre-established model according to the motion trajectory to obtain a rendering result, where the pre-established model is a three-dimensional model of a game related object;
a pushing module 33, configured to convert the rendering result into a target video stream, and push the target video stream, where the target video stream is a game video stream based on the view angle of the athlete.
Optionally, the pre-established model includes at least one of: a three-dimensional model of a track, a three-dimensional model of a sports implement, and a three-dimensional model of the athlete.
Optionally, the first determining module 31 is specifically configured to: acquiring real-time position data of the athlete in the current track area; correcting a first motion track according to the real-time position data to obtain the motion track of the athlete in the current track area; the first motion trail is a predicted motion trail of the athlete in the current track area based on real-time position data of the athlete in a last track area of the current track area.
Optionally, the video pushing apparatus 30 further includes:
a second determination module for determining a target perspective for the athlete during the race;
the rendering module 32 is specifically configured to: and rendering the pre-established model according to the motion track and the target view angle to obtain the rendering result.
Optionally, the second determining module is specifically configured to: determining the actual viewing angle of the athlete during the game.
Optionally, the second determining module is specifically configured to: and when the real visual angle is larger than a preset visual angle threshold value, correcting the real visual angle by using the preset visual angle threshold value to obtain the target visual angle.
Optionally, the second determining module is specifically configured to: and calculating to obtain the target visual angle beta by adopting the following formula:
[formula published only as an image in the original document: β is computed from the preset view-angle threshold f and the real view angle θ]
wherein f represents the preset view angle threshold, and θ represents the real view angle.
Optionally, the second determining module is specifically configured to: and under the condition that the target action occurs to the athlete, determining the target visual angle according to the angular speed and the angular acceleration of the athlete before exceeding the shooting range.
Optionally, the pushing module 33 is specifically configured to: mix the target video stream with other video streams and then push the mixed stream to a client, where the other video streams are race video streams of the athlete acquired by cameras.
The video push apparatus 30 of the embodiment of the present application can implement each process of the method embodiment shown in fig. 1, and can achieve the same technical effect, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 4, an embodiment of the present application further provides a server device 40, which includes a processor 41, a memory 42, and a program or an instruction stored in the memory 42 and capable of being executed on the processor 41, where the program or the instruction is executed by the processor 41 to implement each process of the video push method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present application further provides a readable storage medium, on which a program or an instruction is stored, where the program or the instruction, when executed by a processor, can implement each process of the method embodiment shown in fig. 1 and achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
Computer-readable media, which include permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A video push method, comprising:
determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process;
rendering a pre-established model according to the motion track to obtain a rendering result, wherein the pre-established model is a three-dimensional model of a game related object;
And converting the rendering result into a target video stream, and pushing the target video stream.
2. The method of claim 1, wherein determining the player's motion trajectory based on real-time player position data during the game comprises:
acquiring real-time position data of the athlete in the current track area;
correcting a first motion track according to the real-time position data to obtain the motion track of the athlete in the current track area; the first motion trail is a predicted motion trail of the athlete in the current track area based on real-time position data of the athlete in a last track area of the current track area.
3. The method of claim 1, wherein before the rendering the pre-established model according to the motion trajectory and obtaining a rendering result, the method further comprises:
determining a target perspective of the athlete during the race;
the rendering the pre-established model according to the motion trail to obtain a rendering result comprises the following steps:
and rendering the pre-established model according to the motion track and the target view angle to obtain the rendering result.
4. The method of claim 3, wherein said determining a target perspective for said athlete during said race comprises any one of:
determining a true perspective of the athlete during the game;
and under the condition that the target action occurs to the athlete, determining the target visual angle according to the angular speed and the angular acceleration of the athlete before exceeding the shooting range.
5. The method of claim 4, wherein after determining the actual perspective of the athlete during the race, when the actual perspective is greater than a preset perspective threshold, the determining the athlete's target perspective during the race further comprises:
and correcting the real visual angle by utilizing the preset visual angle threshold value to obtain the target visual angle.
6. The method of claim 1, wherein the pushing the target video stream comprises:
and mixing the target video stream with other video streams and then pushing the mixed video streams to a client, wherein the other video streams are game video streams of the athlete collected by a camera.
7. The method according to any one of claims 1 to 6, wherein the pre-established model comprises at least one of: a three-dimensional model of a track, a three-dimensional model of a sports implement, and a three-dimensional model of the athlete.
8. A video push apparatus, comprising:
the first determination module is used for determining the motion trail of the athlete according to the real-time position data of the athlete in the competition process;
the rendering module is used for rendering a pre-established model according to the motion track to obtain a rendering result, wherein the pre-established model is a three-dimensional model of a game related object;
and the pushing module is used for converting the rendering result into a target video stream and pushing the target video stream.
9. A server device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video push method according to any one of claims 1 to 7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video push method according to any one of claims 1 to 7.
CN202210555431.9A 2022-05-20 2022-05-20 Video pushing method, device, server equipment and readable storage medium Active CN114979708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210555431.9A CN114979708B (en) 2022-05-20 2022-05-20 Video pushing method, device, server equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210555431.9A CN114979708B (en) 2022-05-20 2022-05-20 Video pushing method, device, server equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114979708A true CN114979708A (en) 2022-08-30
CN114979708B CN114979708B (en) 2023-10-17

Family

ID=82984646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210555431.9A Active CN114979708B (en) 2022-05-20 2022-05-20 Video pushing method, device, server equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114979708B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160361658A1 (en) * 2015-06-14 2016-12-15 Sony Interactive Entertainment Inc. Expanded field of view re-rendering for vr spectating
CN106454401A (en) * 2016-10-26 2017-02-22 乐视网信息技术(北京)股份有限公司 Method and device for playing video
US20170124769A1 (en) * 2014-07-28 2017-05-04 Panasonic Intellectual Property Management Co., Ltd. Augmented reality display system, terminal device and augmented reality display method
US20170289617A1 (en) * 2016-04-01 2017-10-05 Yahoo! Inc. Computerized system and method for automatically detecting and rendering highlights from streaming videos
US20180061130A1 (en) * 2016-09-01 2018-03-01 Avid Technology, Inc. Personalized video-based augmented reality
CN108379809A (en) * 2018-03-05 2018-08-10 宋彦震 Skifield virtual track guiding based on AR and Training Control method
US10325410B1 (en) * 2016-11-07 2019-06-18 Vulcan Inc. Augmented reality for enhancing sporting events
WO2019128787A1 (en) * 2017-12-26 2019-07-04 阿里巴巴集团控股有限公司 Network video live broadcast method and apparatus, and electronic device
CN112184920A (en) * 2020-10-12 2021-01-05 中国联合网络通信集团有限公司 AR-based skiing blind area display method and device and storage medium
JP2021074277A (en) * 2019-11-08 2021-05-20 株式会社コナミデジタルエンタテインメント Game program, information processing device and method
CN113971693A (en) * 2021-10-29 2022-01-25 咪咕文化科技有限公司 Live broadcast picture generation method, system and device and electronic equipment
CN113992974A (en) * 2021-10-14 2022-01-28 咪咕视讯科技有限公司 Method and device for simulating competition, computing equipment and computer-readable storage medium
CN114268827A (en) * 2021-12-22 2022-04-01 咪咕互动娱乐有限公司 Game viewing interaction method, device, equipment and computer readable storage medium
CN114359343A (en) * 2021-12-31 2022-04-15 北京市商汤科技开发有限公司 Motion trail management method, device and equipment and computer readable storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170124769A1 (en) * 2014-07-28 2017-05-04 Panasonic Intellectual Property Management Co., Ltd. Augmented reality display system, terminal device and augmented reality display method
US20160361658A1 (en) * 2015-06-14 2016-12-15 Sony Interactive Entertainment Inc. Expanded field of view re-rendering for vr spectating
US20170289617A1 (en) * 2016-04-01 2017-10-05 Yahoo! Inc. Computerized system and method for automatically detecting and rendering highlights from streaming videos
US20180061130A1 (en) * 2016-09-01 2018-03-01 Avid Technology, Inc. Personalized video-based augmented reality
CN106454401A (en) * 2016-10-26 2017-02-22 乐视网信息技术(北京)股份有限公司 Method and device for playing video
US10325410B1 (en) * 2016-11-07 2019-06-18 Vulcan Inc. Augmented reality for enhancing sporting events
WO2019128787A1 (en) * 2017-12-26 2019-07-04 阿里巴巴集团控股有限公司 Network video live broadcast method and apparatus, and electronic device
CN108379809A (en) * 2018-03-05 2018-08-10 宋彦震 Skifield virtual track guiding based on AR and Training Control method
JP2021074277A (en) * 2019-11-08 2021-05-20 株式会社コナミデジタルエンタテインメント Game program, information processing device and method
CN112184920A (en) * 2020-10-12 2021-01-05 中国联合网络通信集团有限公司 AR-based skiing blind area display method and device and storage medium
CN113992974A (en) * 2021-10-14 2022-01-28 咪咕视讯科技有限公司 Method and device for simulating competition, computing equipment and computer-readable storage medium
CN113971693A (en) * 2021-10-29 2022-01-25 咪咕文化科技有限公司 Live broadcast picture generation method, system and device and electronic equipment
CN114268827A (en) * 2021-12-22 2022-04-01 咪咕互动娱乐有限公司 Game viewing interaction method, device, equipment and computer readable storage medium
CN114359343A (en) * 2021-12-31 2022-04-15 北京市商汤科技开发有限公司 Motion trail management method, device and equipment and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王艾莎; 刘梅; 周锐; 牛鹤璇; 徐妹妍; 周宁: "Research on the application of VR and AR technologies in winter sports events based on sports fans' viewing experience", 新媒体研究 (New Media Research), no. 07 *
郭洋; 马翠霞; 滕东兴; 杨; 王宏安: "Visualization and correlation analysis methods for three-dimensional trajectories of moving targets", 软件学报 (Journal of Software), no. 05 *

Also Published As

Publication number Publication date
CN114979708B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
US11223821B2 (en) Video display method and video display device including a selection of a viewpoint from a plurality of viewpoints
US10922879B2 (en) Method and system for generating an image
Bao et al. Shooting a moving target: Motion-prediction-based transmission for 360-degree videos
US20200193671A1 (en) Techniques for rendering three-dimensional animated graphics from video
US10152826B2 (en) Augmented reality display system, terminal device and augmented reality display method
US10771760B2 (en) Information processing device, control method of information processing device, and storage medium
EP3413570B1 (en) Video display method and video display device
KR102346437B1 (en) Methods, devices and systems for automatic zoom when playing an augmented reality scene
US9298986B2 (en) Systems and methods for video processing
US20160379415A1 (en) Systems and Methods for Generating 360 Degree Mixed Reality Environments
US9251603B1 (en) Integrating panoramic video from a historic event with a video game
US20190287310A1 (en) Generating three-dimensional content from two-dimensional images
CN112819852A (en) Evaluating gesture-based motion
CN114097248B (en) Video stream processing method, device, equipment and medium
CN111951325B (en) Pose tracking method, pose tracking device and electronic equipment
JP2020086983A (en) Image processing device, image processing method, and program
WO2017092432A1 (en) Method, device, and system for virtual reality interaction
CN114520920B (en) Multi-machine-position video synchronization method and system and computer program product
US9906769B1 (en) Methods and apparatus for collaborative multi-view augmented reality video
JP2008194095A (en) Mileage image generator and generation program
CN113515187B (en) Virtual reality scene generation method and network side equipment
CN114979708A (en) Video pushing method and device, server equipment and readable storage medium
KR20190019407A (en) Server, method and user device for providing time slice video
WO2023061356A1 (en) Advertisement serving method and apparatus, device, storage medium and computer program product
Carrillo et al. Automatic football video production system with edge processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant