CN114425162A - Video processing method and related device

Video processing method and related device

Info

Publication number: CN114425162A
Application number: CN202210130431.4A
Authority: CN (China)
Prior art keywords: game, target, video, video data, target game
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 王梦佳, 何秋豪, 郑学权, 丁亮亮
Current and original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority: CN202210130431.4A

Classifications

    • A: HUMAN NECESSITIES
        • A63: SPORTS; GAMES; AMUSEMENTS
            • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F 13/50: Controlling the output signals based on the game progress
                        • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
                    • A63F 13/80: Special adaptations for executing a specific game genre or game mode
                        • A63F 13/822: Strategy games; Role-playing games
                • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
                    • A63F 2300/60: Methods for processing data by generating or executing the game program
                        • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
                            • A63F 2300/6615: Methods for processing data by generating or executing the game program for rendering three dimensional images using models with different levels of detail [LOD]
    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                                • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this application disclose a video processing method and a related apparatus, which can be applied to various scenarios such as cloud technology, artificial intelligence, intelligent transportation, and assisted driving. A game environment image of a target game is displayed in a configured display area; because that image serves as the background of the entity object standing in the area, the entity object in the captured video data appears to be right inside the target game scene, creating a sense of presence. By identifying shape parameters of the entity object in the video data, a target game object can be determined from the game objects of the target game based on those parameters, and the virtual item corresponding to the target game object is rendered onto the entity object in the video data to obtain a target video. Since the target game object is determined from the shape parameters, its appearance correlates with that of the entity object, so the entity object appears to play the target game object within the target game scene.

Description

Video processing method and related device
Technical Field
The present application relates to the field of data processing, and in particular, to a video processing method and related apparatus.
Background
For game applications, besides providing game services through the games themselves, immersive experiences can also be offered through offline game environments.
At present, such offline experiences mainly rely on manufactured physical game props (such as costumes, hand-held props, and ornaments) to let users play game characters. Frequently putting on and taking off these props is not only cumbersome but also makes genuine immersion hard to achieve.
Therefore, how to provide a high-quality offline game immersive experience is a technical problem that currently needs to be solved.
Disclosure of Invention
To solve this technical problem, this application provides a video processing method and a related apparatus that generate a target video in which an entity object appears to play a target game object within a target game scene, bringing both a better immersive experience and an element of fun.
The embodiment of the application discloses the following technical scheme:
in one aspect, an embodiment of the present application provides a video processing method, where the method includes:
acquiring video data captured for a display area, where the display area is used for displaying a game environment image of a target game and an entity object is present in the display area;
determining appearance parameters of the entity objects in the video data;
determining a target game object in a plurality of game objects of the target game according to the shape parameters;
rendering the virtual item corresponding to the target game object on the entity object in the video data to obtain a target video.
On the other hand, an embodiment of the present application provides a video processing apparatus, which includes an obtaining unit, a determining unit, and a rendering unit:
the acquisition unit is used for acquiring video data captured for a display area, the display area being used for displaying a game environment image of a target game and having an entity object in it;
the determining unit is used for determining the appearance parameters of the entity objects in the video data;
the determining unit is further used for determining a target game object in a plurality of game objects of the target game according to the shape parameter;
and the rendering unit is used for rendering the virtual prop corresponding to the target game object on the entity object in the video data to obtain a target video.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of the above aspect according to instructions in the program code.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a computer program, where the computer program is configured to execute the method according to the foregoing aspect.
In another aspect, the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above aspect.
According to the above technical solution, the game environment image of the target game can be displayed in the configured display area. When an entity object enters the display area, video data can be captured for that area; since the displayed game environment image serves as the background of the entity object, the entity object in the video data appears to be right inside the target game scene, creating a sense of presence. By identifying shape parameters of the entity object in the video data, a target game object can be determined from the game objects of the target game based on those parameters, and the virtual item corresponding to the target game object is rendered onto the entity object in the video data to obtain the target video. Because the target game object is determined from the shape parameters, its appearance correlates with that of the entity object, so the entity object appears to play the target game object within the target game scene, which brings a better immersive experience as well as an element of fun.
Drawings
To illustrate the embodiments of this application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a video processing scene according to an embodiment of the present application;
fig. 2 is a flowchart of a video processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of key points of a joint of a human body according to an embodiment of the present disclosure;
fig. 4 is a schematic view of rendering special effects for different human bodies according to an embodiment of the present disclosure;
fig. 5a is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 5b is a schematic diagram of an offline experience scenario provided in the embodiment of the present application;
FIG. 6a is a diagram illustrating a display of an operation panel according to an embodiment of the present disclosure;
fig. 6b is a second display content of an operation console according to an embodiment of the present application;
fig. 6c is a third display content of an operation console according to an embodiment of the present application;
FIG. 7 is a game scene diagram of a three-screen display according to an embodiment of the present disclosure;
fig. 8 is an effect diagram of rendering of a virtual item according to an embodiment of the present application;
FIG. 9a is a schematic flow chart of a single-person experience provided by an embodiment of the present application;
FIG. 9b is a schematic flowchart of a multi-user experience provided by an embodiment of the present application;
fig. 10 is a device structure diagram of a video processing device according to an embodiment of the present application;
fig. 11 is a structural diagram of a terminal device according to an embodiment of the present application;
fig. 12 is a block diagram of a server according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Constructing a high-quality offline game immersive experience is an important way to enhance user stickiness and broaden a game's audience. The related art mainly relies on manufactured physical game props, whose cumbersome handling makes sufficient immersion hard to achieve.
Therefore, the video processing method provided by the embodiment of the application plays the role of the entity object playing the target game object in the target game scene by generating the target video, so that better immersive experience is brought, and interestingness is achieved.
The video processing method provided by the embodiment of the application can be implemented by computer equipment, and the computer equipment can be terminal equipment or a server, wherein the server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and a cloud server for providing cloud computing service. The terminal devices include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, intelligent household appliances, vehicle-mounted terminals, and the like. The terminal device and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein. The embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent traffic, driving assistance and the like.
The embodiments of this application may involve Artificial Intelligence (AI). AI is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence: to perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, giving machines the capabilities of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, autonomous driving, and intelligent transportation.
This application involves Computer Vision (CV) technology. Computer vision is the science of how to make machines "see": using cameras and computers instead of human eyes to identify and measure targets, and performing further image processing so that the result is better suited to human observation or to transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and the like. For example, this application may identify actions of entity objects in video data through computer vision techniques.
It should be understood that, in specific implementations of this application, data such as shape parameters may relate to user information. When the above embodiments are applied to specific products or technologies, individual permission or consent of the user must be obtained, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Fig. 1 is a schematic diagram of a video processing scene according to an embodiment of the present application, and in fig. 1, a terminal device 100 is taken as an example of the foregoing computer device for explanation. The camera 200 is used to capture video data of the display area 300, and the terminal device 100 acquires the captured video data through the camera 200. A game environment image of the target game is displayed in the display area 300.
In the process of capturing the video data by the camera 200, the user a is in the display area 300 as an entity object, and since the game environment image displayed in the display area is used as the background of the user a, the user a in the video data is just in the scene of the target game, which plays a role of telepresence.
The terminal device 100 may determine a shape parameter of user a based on the video data, e.g., a height of 1.8 m. According to the height parameter, a matching target game object, such as a tall male fighter, is determined among the game objects of the target game, and the virtual item corresponding to that fighter, such as a boxing glove, is rendered on user a in the video data to obtain a target video, as shown in the display interface 20 of the terminal device 100.
Because the male fighter determined from the height parameter is similar in stature to user a, the user a with the rendered boxing glove in the target video more closely resembles the fighter in the target game. This gives the impression that user a is playing that fighter within the target game scene, which not only brings a better immersive experience but also adds fun and entices users to find out which game object they would be matched with.
Fig. 2 is a flowchart of a method of a video processing method according to an embodiment of the present application, and in this embodiment, a terminal device is taken as an example of the foregoing computer device, where the method includes:
s201: video data collected for a display area is acquired.
The display area is used for displaying a game environment image of a target game. When the video data is captured, an entity object is present in the display area; in the video capture direction, the entity object stands between the capture device and the display area, so that in the captured video data the entity object appears against the background of the game environment image, which can create the sense of being inside the target game scene.
The game environment image may be a static image or a dynamic image, which is not limited in this application.
The entity object may be any object with a physical form, such as a human body, an animal, or various items; this application does not limit it.
To improve the sense of presence in the display area, in one possible implementation the display area is a three-sided screen whose three display screens are arranged perpendicular to one another.
Arranged perpendicularly, the three display screens form a right-angled enclosure. Such a three-dimensional display area produces a stereoscopic display effect and improves realism: when an entity object stands inside the three-sided screen's display area, vivid game environment images surround it on the left, right, top, or bottom, strengthening the sense of being inside the target game scene.
The number and types of the entity objects in the video data are not limited in the present application, and there may be a plurality of entity objects or one entity object, and the entity objects may all be of the same type, for example, all be human bodies, or may include different types, for example, human bodies and animals.
S202: determining a shape parameter of the physical object in the video data.
The shape parameters are used for identifying the shape characteristics of the entity object, and the shape characteristics can be embodied through the body type and also can be embodied through the appearance. Therefore, the terminal equipment can effectively describe the physical characteristics of the entity object through the appearance characteristics, and is convenient for subsequently identifying the target game object matched with the entity object in the appearance.
When the physical object is a human body, S202 includes: and determining at least one of the human face characteristics, the height parameters and the body type parameters of the human body in the video data as the appearance parameters.
It should be noted that although the determined shape parameters include at least one of the face features, height parameter, and body-shape parameter, these parameters are used only for temporary game-object matching; once matching ends, they are neither saved nor used for any other purpose. The determined shape parameters are also relatively coarse-grained, just enough to reflect shape characteristics, and do not involve the user's private information.
When the entity object is an animal, the appearance parameters can also identify the physical and physical characteristics, the species characteristics and the like of the animal.
When the entity object is an object, the shape parameters can also identify the structural characteristics, physical forms and the like of the object.
S203: and determining a target game object in a plurality of game objects of the target game according to the shape parameters.
A plurality of game objects in the target game have corresponding avatars, which may be embodied, for example, by virtual models. The image characteristics of the game object can be determined based on the virtual image, and the appearance parameters can reflect the appearance characteristics of the entity object, so that the terminal equipment can determine the target game object from a plurality of game objects according to the appearance parameters.
The determined target game object correlates in appearance with the entity object in the video data; this correlation may take the form of a closely matching appearance or a strongly contrasting one. A matching appearance lets the entity object in the target video appear to play the target game object, while a strongly contrasting appearance creates a sense of contrast once the virtual prop is rendered, adding fun to the offline experience.
The present application does not limit the type of game object, and may be a game character of various forms and types, and the specific image and type of game object may be related to the target game.
In one possible implementation, S203 includes: and determining a target game object matched with the entity object in a plurality of game objects of the target game according to the shape parameters.
In this manner, the target game object obtained by matching is similar to the physical object in appearance, for example, the target game object matched based on the tall physical object may be a tall game character, the target game object matched by the small physical object may be a small game character, and the target game object matched by the puppy as the physical object may be a puppy avatar in the target game.
Because the target game object is similar to the entity object in appearance, the entity object with the virtual prop in the target video is more similar to the target game object, and the effect that the entity object plays the impression of the target game object in the target game scene is achieved.
S204: rendering the virtual item corresponding to the target game object on the entity object in the video data to obtain a target video.
The virtual prop corresponding to the target game object can be used for identifying the appearance characteristics of the target game object and plays a role of being different from other game objects to a certain extent. The virtual item may be a virtual item used in the target game to decorate or wear on the target game object, and may be at least one of a virtual ornament, clothing, skin, hair, or a holding item, for example. The virtual items may also include additional virtual items associated with the target game object, such as virtual pets, halo special effect materials, and the like, which are not limited in this application.
Because the matched target game object is similar in appearance to the entity object, rendering its virtual prop onto the entity object in the video data makes the entity object appear, in the target video, to be playing the target game object within the target game scene. Playing the target video can therefore deliver a better immersive experience and attract more participants to try out which target game object the terminal device will match them with, adding further interest to the offline experience.
As described above, the target game object determined by the shape parameters may still differ in shape from the entity object in the video data (there may be some differences even when the shapes match, and larger ones when the shapes contrast). Therefore, to render the virtual item better and improve realism, in one possible implementation the method further includes:
s41: and determining the appearance difference between the entity object and the target game object in the video data according to the game image parameters and the appearance parameters of the target game object.
S42: and adjusting the game image parameters of the target game object according to the shape difference to obtain adjusted image parameters matched with the shape parameters.
The corresponding S204 includes: and rendering the virtual prop corresponding to the target game object on the entity object in the video data according to the adjusted image parameter to obtain a target video.
The shape difference identifies how the target game object's shape deviates from the entity object in the video data. The game image parameters of the target game object are adjusted according to this difference to obtain adjusted image parameters, which identify the mapping between the virtual prop and the entity object in the video data. As a result, when the virtual prop of the target game object is rendered onto the entity object, a prop that originally differed in shape can be rendered so that it fits the entity object and stays in proportion with it, making the picture more attractive and more realistic.
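As an illustration only (the patent's own implementation is not shown), here is a minimal TypeScript sketch of this adjustment, assuming a simple height ratio stands in for the shape difference and that the image parameters carry a prop scale:

```typescript
// Assumed shapes: measured entity-object parameters and the game object's image parameters.
interface ShapeParams { heightCm: number; }
interface GameImageParams { heightCm: number; propScale: number; }

// Fold the shape difference (here, a height ratio) into the image parameters so
// the virtual prop is rendered in proportion to the entity object.
function adjustImageParams(shape: ShapeParams, image: GameImageParams): GameImageParams {
  const ratio = shape.heightCm / image.heightCm;
  return { heightCm: shape.heightCm, propScale: image.propScale * ratio };
}
```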
Thus, the game environment image of the target game can be displayed in the configured display area. When an entity object enters the display area, video data can be captured for that area; since the displayed game environment image serves as the background of the entity object, the entity object in the video data appears to be right inside the target game scene, creating a sense of presence. By identifying shape parameters of the entity object in the video data, a target game object can be determined from the game objects of the target game based on those parameters, and the virtual item corresponding to the target game object is rendered onto the entity object in the video data to obtain the target video. Because the target game object is determined from the shape parameters and correlates with the entity object in shape, the entity object appears to play the target game object within the target game scene, which brings a better immersive experience as well as an element of fun.
In one possible implementation, S203 includes:
s2031: and acquiring game image parameters corresponding to the plurality of game objects respectively.
S2032: and comparing the similarity according to the shape parameters and the game image parameters, and determining the target game object from the game objects with the similarity meeting a threshold value.
The game image parameters can identify image characteristics of the corresponding game objects, and the shape parameters can embody the shape characteristics of the entity objects, so that similarity comparison can be performed on the basis of the shape parameters and the game image parameters, and the target game object is determined from the game objects with the similarity meeting a threshold value. The threshold may be set to a relatively high value.
There may be one or more game objects whose similarity satisfies the threshold; when there are several, one may be chosen at random, or the game object with the highest similarity may be picked. A minimal sketch of this matching step follows.
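A hedged TypeScript sketch of the similarity comparison; the parameter fields, the weighting, and the 0.8 threshold are assumptions made for the example, not values from the patent:

```typescript
// Assumed shape parameters of the entity object and image parameters of a game object.
interface Shape { heightCm: number; build: number; }          // build: normalized body-type score in [0, 1]
interface GameObjectImage { id: string; heightCm: number; build: number; }

// Similarity in (0, 1]: 1 when the shapes coincide, smaller as they diverge.
function similarity(s: Shape, g: GameObjectImage): number {
  const heightDiff = Math.abs(s.heightCm - g.heightCm) / 50;  // 50 cm as a rough normalizer
  const buildDiff = Math.abs(s.build - g.build);
  return 1 / (1 + heightDiff + buildDiff);
}

// Keep the game objects whose similarity satisfies the threshold and pick the
// highest-scoring one (random choice among them would also fit the description).
function pickTargetGameObject(
  shape: Shape,
  candidates: GameObjectImage[],
  threshold = 0.8,
): GameObjectImage | undefined {
  return candidates
    .filter(g => similarity(shape, g) >= threshold)
    .sort((a, b) => similarity(shape, b) - similarity(shape, a))[0];
}
```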
In order to further improve the similarity between the entity object and the target game object, when the entity object is a human body, the gender parameter of the human body can be introduced as a matching basis. The gender parameter may identify a male or a female.
In one possible implementation, the method further includes: and acquiring the sex parameters of the human body.
Accordingly, S203 includes:
s2033: determining a game object to be determined that meets the gender parameter from the plurality of game objects.
S2034: and determining a target game object matched with the entity object in the undetermined game objects according to the shape parameters.
This scenario mainly concerns target games whose game objects are gender-differentiated. Pending game objects of the same gender as that identified by the entity object's gender parameter can first be selected from the game objects; the target game object is then determined among the pending game objects using the entity object's shape parameters. The resulting target game object is therefore consistent with the entity object's gender, improving the user experience. A small extension of the earlier matching sketch is shown below.
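Reusing the types and functions from the previous sketch (still illustrative, not the patent's code), the gender filter simply narrows the candidate pool before shape matching:

```typescript
type Gender = 'male' | 'female';
interface GenderedGameObjectImage extends GameObjectImage { gender: Gender; }

// Restrict to pending game objects of the matching gender, then match by shape.
function pickByGender(
  gender: Gender,
  shape: Shape,
  candidates: GenderedGameObjectImage[],
): GameObjectImage | undefined {
  const pending = candidates.filter(g => g.gender === gender);
  return pickTargetGameObject(shape, pending);
}
```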
In some application scenarios, the number of physical objects included in the video data is not limited, and one or more physical objects may be included in the display area. Different processing modes can be provided for different numbers of entity objects.
In one possible implementation, the method further includes:
s11: determining the number of the objects of the entity objects in the video data, if the number of the objects is a designated number, executing S12, and if the number of the objects is not the designated number, executing S14.
The specific numerical value of the specified number is not limited in the present application, and may be, for example, 1 or an integer greater than 1. The number of objects may be the number of physical objects of a certain type (e.g. including several human bodies) or the total number of physical objects of all types (e.g. how many human bodies and animals together).
S12: an action made by the physical object in the video data is identified.
S13: and in response to the action being the designated action, executing the operation of rendering the virtual item corresponding to the target game object on the entity object in the video data to obtain the target video.
S14: and executing the operation of S204, namely rendering the virtual prop corresponding to the target game object on the entity object in the video data to obtain the target video.
When the number of the objects of the entity object is a specified number, it is necessary to determine whether to perform an operation of obtaining the target video based on whether the action made by the entity object meets the requirement. The mechanism can bring extra interactive experience for offline experience, and improves the participation sense of the entity object by guiding the entity object to make a specified action.
The designated action is not limited in this application and may depend on the type of entity object, the designated number, or the target game. For example, when the entity object is a human body and the designated number is 1, the designated action may be a single-person action such as opening both hands or making a heart gesture. If the designated number is an integer greater than 1, the designated action may be a combined action or several single-person actions. A combined action is one that requires the designated number of entity objects to cooperate, such as multiple people joining hands or forming a heart together.
For a human body as an entity object, a motion made by the entity object in video data may be identified by identifying joint key point positions of the human body.
In one possible implementation, S12 includes:
s121: and determining the positions of the joint key points of the specified number of human bodies in the video data.
S122: and identifying the actions of the specified number of human bodies in the video data according to the positions of the joint key points.
The joint key points of the human body are used for identifying joints of the human body, such as joints of shoulders, necks, elbows, ankles, pelvis bones and the like which can generate displacement and rotation, the positions of the joint key points are used for identifying the positions of the corresponding joint key points, and actions and postures made by the human body can be identified based on the positions of the joint key points.
The joint key-point positions of each human body can be determined frame by frame from the video data, and the actions of the designated number of human bodies are then determined from the temporal relationship between those key-point positions across video frames.
Which joint key points are examined may be chosen according to the designated action to be recognized, or a default set of joint key points may be used; this application does not limit the choice. A hedged sketch of frame-wise action recognition follows.
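This TypeScript sketch assumes a pose-estimation model (the multi-person discussion later mentions TensorFlow) has already produced named joint key points for each video frame; the temporal rule of requiring a minimum run of consecutive frames is an assumption of the example:

```typescript
interface Keypoint { name: string; x: number; y: number; score: number; }
type FramePose = Map<string, Keypoint>;  // joint name -> key point, one human in one frame

// An action predicate inspects the joint key-point positions of a single frame.
type ActionPredicate = (pose: FramePose) => boolean;

// Recognize an action over the video by requiring the posture predicate to hold
// on a minimum number of consecutive frames.
function recognizeAction(frames: FramePose[], isPosture: ActionPredicate, minFrames = 5): boolean {
  let run = 0;
  for (const pose of frames) {
    run = isPosture(pose) ? run + 1 : 0;
    if (run >= minFrames) return true;
  }
  return false;
}
```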
In the embodiment of the application, in addition to determining the corresponding target game object based on the appearance characteristics of the entity object, richer interaction modes can be provided, for example, rendering of virtual items can be performed on the entity object based on the game object selected by the user. That is, in addition to providing an AR rendering of a game object that matches the physical object appearance, an AR rendering based on a specified object may be performed based on a user's selection.
In a possible implementation, therefore, the method further includes:
s21: a game object selection instruction for a specified object, which is one of the plurality of game objects, is acquired.
S22: rendering the virtual prop corresponding to the specified object on the entity object in the video data to obtain the specified video.
In this way, the user can select any game object for the entity object to play, improving the diversity and selectivity of virtual prop rendering. For example, when a user serving as the entity object wants to play a game object that does not match their own appearance characteristics, a game object selection instruction can be generated by selecting that game object, and the terminal device then renders the virtual item corresponding to the specified object on the entity object in the video data to obtain the specified video. This satisfies users who want to play game objects of a different gender or appearance.
Next, a specific scene in which the designated number is 1 and the designated movement is "open both hands" will be described as an example.
The terminal equipment can recognize the designated action of opening two hands through a recognition model, and the recognition model can recognize a plurality of positions of key points of joints of a human body. For example, in the scenario shown in fig. 3, for the right hand portion of the human body, by identifying the positions of the joint key points 6 (right shoulder), 8 (right elbow), and 10 (right wrist), it can be determined whether the human body has made an action of opening the right hand, and similarly, by identifying the positions of the joint key points 5 (left shoulder), 7 (left elbow), and 9 (left wrist), it can be determined whether the human body has made an action of opening the left hand. Therefore, the human body can be recognized to open the two hands after the conditions are met.
For example, by calculating the angles of the wrist and elbow and the angles of the shoulder and elbow, it is judged as "open both hands" when the angles of the left and right shoulders and elbow are less than 180 degrees and the angles of the wrist and elbow are greater than 25 degrees.
The code may be as follows:
[The patent's code listing at this point is reproduced only as image figures and is not recoverable.]
wherein A, B refer to left and right, respectively.
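Because the original listing survives only as figures, the following TypeScript reconstruction is a guess at its logic from the surrounding description: segment angles are measured against the horizontal, and "open both hands" requires the shoulder-elbow angle below 180 degrees and the wrist-elbow angle above 25 degrees on both sides (A = left, B = right). The exact geometric definition used in the patent may differ.

```typescript
interface Point { x: number; y: number; }

// Angle in degrees of the vector from `from` to `to`, normalized to [0, 360).
function segmentAngle(from: Point, to: Point): number {
  const deg = Math.atan2(to.y - from.y, to.x - from.x) * 180 / Math.PI;
  return (deg + 360) % 360;
}

interface ArmPoints { shoulder: Point; elbow: Point; wrist: Point; }

// One arm counts as "open" when the shoulder-to-elbow angle is under 180 degrees
// and the elbow-to-wrist angle exceeds 25 degrees, per the thresholds above.
function armOpen(arm: ArmPoints): boolean {
  return segmentAngle(arm.shoulder, arm.elbow) < 180
      && segmentAngle(arm.elbow, arm.wrist) > 25;
}

// "Open both hands" holds when both the left (A) and right (B) arms are open.
function openBothHands(left: ArmPoints, right: ArmPoints): boolean {
  return armOpen(left) && armOpen(right);
}
```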
After the human body is determined to have opened both hands, the positions and sizes of the virtual prop and virtual special effect can be computed from the identified positions of joint key points 6 (right shoulder), 12 (right hip), 16 (right ankle), and 5 (left shoulder), so that props and special effects of suitable style can be matched to human bodies at different positions and heights.
On top of matching props to human bodies at different positions and heights, the terminal device can also vary the accompanying special effects.
In a possible implementation, therefore, the method further includes: determining a video position of the entity object in the video data.
Accordingly, S204 includes: determining a display form corresponding to the virtual prop according to the video position; rendering the virtual prop on the entity object in the video data based on the display form to obtain a target video.
The physical object may be at an arbitrary position in the display area, which is denoted as video position in the video data. Correspondingly, in the collected video data, the sizes of the same entity object in the video data are different when the same entity object is in different video positions.
Therefore, in order to better enable the virtual item to be rendered on the entity object of the video data, at least two different display forms can be set for the virtual item corresponding to different game objects based on the relation between the video position of the entity object and the distance and offset between the entity object and the video acquisition device. Before rendering, selecting a display form corresponding to the virtual prop based on the video position of the entity object in the video data, and rendering the virtual prop on the entity object in the video data according to the display form to obtain a target video, thereby improving the rendering reality of the virtual prop.
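A minimal sketch of form selection, assuming just two pre-built display forms per prop and using the entity object's apparent (bounding-box) height as a stand-in for its distance from the camera; the 0.5 threshold is an arbitrary example value:

```typescript
type DisplayForm = 'near' | 'far';  // assumed: two pre-built variants of each virtual prop

// Choose a display form from the entity object's video position: a larger
// apparent height means the object is closer to the camera.
function chooseDisplayForm(bboxHeightPx: number, frameHeightPx: number): DisplayForm {
  return bboxHeightPx / frameHeightPx >= 0.5 ? 'near' : 'far';
}
```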
As shown in fig. 4, the left and right images correspond to different human bodies (entity objects): the person in the left image is taller than the person in the right image and stands further back. The special effect in the left image is accordingly larger than in the right image; in both, the virtual prop of the matched target game object is rendered on the human body, with the dragon in the special effect flying to the person's chest and flashing.
The code may be as follows:
[The patent's code listing at this point is reproduced only as image figures and is not recoverable.]
where x and y denote the horizontal and vertical coordinates, respectively, of the joint key-point positions.
The code takes the midpoint of the human body's left and right shoulders and crotch as the center point for rendering the virtual prop, and computes the prop's scale from the distance between the person's shoulders and feet; a hedged reconstruction follows.
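A TypeScript reconstruction of that logic (the original is image-only, so the names, the hip reading of "crotch", and the reference span are assumptions):

```typescript
interface Point { x: number; y: number; }
interface BodyPoints {
  leftShoulder: Point; rightShoulder: Point;
  leftHip: Point; rightHip: Point;      // "crotch" read as the hip key points
  leftAnkle: Point; rightAnkle: Point;  // feet
}

const REFERENCE_SPAN_PX = 400;  // assumed shoulder-to-foot span that maps to scale 1.0

// Center the prop on the midpoint of the shoulders and hips, and derive its
// scale from the shoulder-to-foot distance, as the description above states.
function propPlacement(b: BodyPoints): { x: number; y: number; scale: number } {
  const x = (b.leftShoulder.x + b.rightShoulder.x + b.leftHip.x + b.rightHip.x) / 4;
  const y = (b.leftShoulder.y + b.rightShoulder.y + b.leftHip.y + b.rightHip.y) / 4;
  const span = (b.leftAnkle.y + b.rightAnkle.y) / 2
             - (b.leftShoulder.y + b.rightShoulder.y) / 2;
  return { x, y, scale: span / REFERENCE_SPAN_PX };
}
```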
The resulting target video may be an Augmented Reality (AR) video without music. AR technology skillfully fuses virtual information with the real world: computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then applied to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world.
An interface for producing videos can be called to generate the target video. The interface records the dynamic content of a canvas into which both the real content shot by the camera and the virtual props (or virtual special effects) are drawn: the human body and the three screens are the real content captured by the camera, while the virtual props are transparent AR videos drawn at specified positions on the canvas. The output is an AR video in WEBM format (i.e., the target video) showing the interaction between the human body and the virtual props. Specifically, the recorded information can be buffered in a data array and converted into a file when recording finishes, yielding the target video.
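The browser APIs that match this description are canvas.captureStream and MediaRecorder; a minimal sketch follows (the frame rate, duration handling, and MIME type are example choices, not values from the patent):

```typescript
// Record a canvas (camera frames plus AR props drawn onto it) into a WEBM blob.
function recordCanvas(canvas: HTMLCanvasElement, durationMs: number): Promise<Blob> {
  const stream = canvas.captureStream(30);                  // capture at 30 fps
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks: Blob[] = [];                                // the "data array" of recorded segments
  recorder.ondataavailable = e => chunks.push(e.data);
  return new Promise(resolve => {
    recorder.onstop = () =>                                 // convert the data into a file
      resolve(new Blob(chunks, { type: 'video/webm' }));
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```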
Since the aforementioned target video may be silent in some scenarios, audio-video fusion is needed to blend background music into it. The ffmpeg tool (free, open-source software that can record, convert, and stream audio and video in many formats) can perform this fusion: in this solution, the silent WEBM video is merged with the MP3 background audio on the server via an ffmpeg command, producing a target video with background sound, which may be an MP4 file.
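One way the fusion could be invoked from a Node.js server; the file names are hypothetical, and the flags are standard ffmpeg options (re-encoding the video is needed because WEBM's VP8/VP9 streams are not conventionally carried in an MP4 container):

```typescript
import { execFile } from 'node:child_process';

// Mux the silent WEBM with the MP3 background track into an MP4 with sound.
execFile('ffmpeg', [
  '-i', 'target-silent.webm',  // hypothetical input: the recorded silent AR video
  '-i', 'background.mp3',      // hypothetical input: the background audio
  '-c:v', 'libx264',           // re-encode the video for the MP4 container
  '-c:a', 'aac',               // encode the audio track as AAC
  '-shortest',                 // end at the shorter of the two inputs
  'target.mp4',
], (err) => {
  if (err) throw err;          // fusion failed; the silent video remains usable
});
```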
After the target video is generated, an acquisition path of the target video can be provided for the user. Therefore, the user can conveniently store the target video on the terminal of the user and repeatedly watch or share the target video when needed.
In one possible implementation, the method further includes:
s31: and storing the target video in a storage position.
S32: and generating a corresponding address code based on the storage position, and displaying the address code.
S33: and if the video downloading request sent by the terminal equipment is acquired based on the address code, returning the target video to the terminal equipment from the storage position.
After the target video is generated, the target video can be stored, and the storage position for storing the target video can be local to the terminal device or can be a server at the cloud end.
After the target video is stored, an address code, which may be in the form of a two-dimensional code, a barcode, or the like, may be generated based on the storage location.
The user can generate and send a video downloading request by scanning the address code through own terminal equipment such as a mobile phone, and the terminal equipment serving as the computer equipment can return the target video to the terminal equipment of the user from the storage position after acquiring the video downloading request.
For example, after the user scans the two-dimensional code serving as the address code through an applet, the terminal device obtains the relevant parameters and issues a network request accordingly, then checks whether the storage address of the target video (which may take the form of a link) is valid. If the link is valid, the target video is accessed and the user is asked whether to save it; if the link is invalid, the request is retried until the cloud-side WEBM-format target video has been successfully converted to MP4, after which it can be accessed.
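The retry-until-converted behavior could look like this sketch on the requesting side (the URL handling, retry count, and delay are assumptions):

```typescript
// Poll the stored video's address until the cloud-side WEBM-to-MP4 conversion
// has finished (i.e. the link becomes valid), then download the target video.
async function fetchWhenReady(url: string, retries = 15, delayMs = 2000): Promise<Blob> {
  for (let attempt = 0; attempt < retries; attempt++) {
    const res = await fetch(url);
    if (res.ok) return res.blob();                   // link valid: the MP4 is ready
    await new Promise(r => setTimeout(r, delayMs));  // link invalid: wait and retry
  }
  throw new Error('target video was not ready in time');
}
```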
The video processing method provided by this application is further described with reference to figs. 5a and 5b. As shown in fig. 5b, the offline scene includes the terminal device 100, the camera 200, the three-sided screen 300, and the console 400; in this example the designated number is 1 and the designated action is opening both hands.
That is, the setup comprises two hosts and a three-sided screen. The two hosts are the console and the terminal device: the console, which also acts as a server, is used to select the interactive experience mode and to confirm or delete the finally generated video, while the terminal device records the user's immersive AR interaction video with the hero special effects.
The three screens are respectively connected with the operation end and the display end, and the external large screen is synchronized with the display end in real time.
The user can be used as an entity object, and the camera 200 can acquire video data when the user is in the display area 300 identified by three screens, so as to obtain the video data. The terminal device 100 renders the virtual item of the target game object on the user in the video data based on the identification and matching of the video data.
Before beginning to implement embodiments of the present application, the display interface 10 of the console 400 may include steps one, two, three, and single and multiple person options to participate in an offline experience, as shown in fig. 6 a.
Step one instructs the user to enter the video capture area (i.e., the display area), step two indicates rendering of the virtual props, and step three indicates scanning the code to obtain the target video. The user can also choose single-person or multi-person mode according to the number of participants.
As shown in fig. 5a, a single-person session is started by clicking the "single" control in fig. 6a, and a multi-person session by clicking the "multiple" control.
After the start is clicked, the console 400 displays the content shown in fig. 6b and directs the user into the display area 300 with the prompt "please go to the group photo area for interaction"; the camera 200 then captures the corresponding video data, and the content displayed in the display area 300 may be as shown in fig. 7.
The console 400 may also present the content of fig. 6c, instructing the user to make a specified action by means of a pattern and text "please lift open arms to call dragon".
The terminal device 100 determines whether the user makes an action of opening both hands based on the video data, and if the identification is successful, determines whether to click a special effect, that is, whether to render a virtual item.
When determining the click special effect, the terminal device 100 renders the corresponding virtual item on the user in the video data based on the determined target game object, and generates a target video, where the target video may be displayed through the display interface 20 of the terminal device 100 as shown in fig. 8, where fig. 8 shows a case where two users are in the display area 300 at the same time.
In fig. 8, the virtual item of the target game object matched by the left user is a garment and a headwear, and the virtual item of the target game object matched by the right user is a machine gun hung on a shoulder. Accordingly, the user in this case may trigger the "multi-person" control in FIG. 6 a.
The terminal device 100 stores the processed target video in a storage location, and generates an address code based on the storage location, where the address code may be displayed on the terminal device 100 in the scene shown in fig. 5b, or may be displayed on the console 400.
The user scans the address code through the applet; if authorization is granted, the terminal device 100 returns the target video to the user and saves it to the album on the user's phone, and if authorization is not granted, the target video cannot be obtained.
The flow of the single-person experience is shown in fig. 9a. The user clicks the single-person shooting button on the console, then stands in the optimal shooting area within the display area and opens both hands; the current environment shot by the camera is shown on the terminal device's large screen based on the transmitted video data. Once the open-hands action is recognized, the terminal device adapts the hero's virtual-prop special effect to the user's position and height and renders it together with the camera footage. The user can then interact with the hero special effect; when the interaction ends, a target video of the whole AR interactive experience is generated, and audio is fused in after shooting completes. The target video is displayed on the console; after the user clicks to save it, the terminal device stores the target video, generates the corresponding address code, and sends it to the display terminal, where the user can scan the code with the applet to save the video.
The flow of the multi-person experience is shown in fig. 9b. The user clicks the multi-person shooting button on the console. In some conditions, when several people open their hands at the same time, they may interfere with one another during recognition with TensorFlow. The interaction after the hands are opened is the same as in the single-person experience and is not repeated here.
The embodiments of this application can be implemented on the web side, creating an immersive game-character experience through the combination of technology and an offline environment.
On the basis of the foregoing fig. 1-9 b, fig. 10 is a device structure diagram of a video processing device according to an embodiment of the present application, where the video processing device 1000 includes an obtaining unit 1001, a determining unit 1002, and a rendering unit 1003:
the acquiring unit 1001 is configured to acquire video data acquired for a display area, where the display area is used to display a game environment image of a target game and has a physical object in the display area;
the determining unit 1002 is configured to determine an outline parameter of the entity object in the video data;
the determining unit 1002 is further configured to determine a target game object in a plurality of game objects of the target game according to the shape parameter;
the rendering unit 1003 is configured to render the virtual item corresponding to the target game object on the entity object in the video data to obtain a target video.
In a possible implementation manner, the determining unit is further configured to determine, according to the shape parameter, a target game object that matches the physical object from among a plurality of game objects of the target game.
In a possible implementation manner, the determining unit is further configured to:
determining the shape difference between the entity object and the target game object in the video data according to the game image parameters and the shape parameters of the target game object;
adjusting the game image parameters of the target game object according to the appearance difference to obtain adjusted image parameters matched with the appearance parameters;
and the rendering unit is further used for rendering the virtual prop corresponding to the target game object on the entity object in the video data according to the adjusted image parameter to obtain a target video.
In a possible implementation manner, if the entity object is a human body, the determining unit is further configured to determine at least one of a face feature, a height parameter, and a body shape parameter of the human body in the video data as the shape parameter.
In a possible implementation manner, the determining unit is further configured to:
obtaining game image parameters corresponding to the plurality of game objects respectively;
and comparing the similarity according to the shape parameters and the game image parameters, and determining the target game object from the game objects with the similarity meeting a threshold value.
In a possible implementation manner, the obtaining unit is further configured to obtain a gender parameter of the human body;
the determination unit is further configured to:
determining a game object to be determined which meets the gender parameter from the plurality of game objects;
and determining a target game object matched with the entity object in the pending game objects according to the shape parameters.
In one possible implementation, the apparatus further includes an identification unit:
the determining unit is further configured to determine an object number of the entity object in the video data;
the identification unit is used for identifying the action of the entity object in the video data if the number of the objects is a specified number;
the identification unit is also used for responding to the action as a specified action and triggering the rendering unit;
the determining unit is further configured to trigger the rendering unit if the number of objects is not the specified number.
In a possible implementation manner, the physical object is a human body, and the identification unit is further configured to:
determining the positions of the key points of the joints of the specified number of human bodies in the video data;
and identifying the actions of the specified number of human bodies in the video data according to the positions of the joint key points.
In a possible implementation manner, the obtaining unit is further configured to:
acquiring a game object selection instruction for a specified object, wherein the specified object is one of the plurality of game objects;
rendering the virtual prop corresponding to the specified object on the entity object in the video data to obtain the specified video.
In a possible implementation manner, the determining unit is further configured to determine a video position in the video data where the entity object is located;
the rendering unit is further configured to:
determine a display form corresponding to the virtual prop according to the video position;
and render the virtual prop on the physical object in the video data based on the display form to obtain a target video.
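One way to read "display form" is as a set of render attributes derived from where the object stands in the frame; the hedged sketch below scales the prop with apparent depth and faces it toward the frame center. The normalized position encoding and both attributes are illustrative assumptions.

def display_form_for_position(x_norm: float, y_norm: float) -> dict:
    # x_norm, y_norm: the physical object's position normalized to [0, 1].
    return {
        "scale": 0.5 + 0.5 * y_norm,                    # lower in frame = closer = larger
        "facing": "left" if x_norm > 0.5 else "right",  # face toward the center
    }

print(display_form_for_position(0.8, 0.9))  # -> {'scale': 0.95, 'facing': 'left'}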
In one possible implementation manner, the apparatus further includes a storage unit, and the storage unit is configured to:
store the target video in a storage location;
generate a corresponding address code based on the storage location, and display the address code;
and if a video download request sent by a terminal device based on the address code is received, return the target video from the storage location to the terminal device.
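The address code could, for example, be a QR code encoding a download URL. The following is a minimal sketch assuming a Flask server and the third-party qrcode package; the paths, URL scheme, and route are illustrative only and not part of the application.

import qrcode
from flask import Flask, send_file

app = Flask(__name__)
STORAGE_DIR = "/data/videos"  # assumed storage location

def publish_address_code(video_id: str) -> None:
    # Generate and save a QR image encoding the download address.
    url = f"https://example.com/videos/{video_id}"
    qrcode.make(url).save(f"/data/codes/{video_id}.png")

@app.route("/videos/<video_id>")
def download(video_id):
    # In response to a download request made via the address code,
    # return the target video from its storage location.
    return send_file(f"{STORAGE_DIR}/{video_id}.mp4", as_attachment=True)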
In one possible implementation manner, the display area is a three-panel display, and three display screens of the three-panel display are arranged perpendicular to each other.
Therefore, the game environment image of the target game can be displayed in the configured display area, and when a physical object enters the display area, video data can be collected for the display area. Because the game environment image displayed in the display area serves as the background of the physical object, the physical object in the video data appears to be inside the scene of the target game, producing a sense of presence. By identifying the shape parameters of the physical object in the video data, a target game object matching the physical object can be determined from the plurality of game objects of the target game based on the shape parameters, and the virtual prop corresponding to the target game object can be rendered on the physical object in the video data to obtain the target video. Because the target game object resembles the physical object in appearance, the physical object with the rendered virtual prop in the target video looks even more like the target game object, achieving the effect of the physical object playing the role of the target game object in the target game scene, which brings a better immersive experience and adds interest.
An embodiment of the present application further provides a computer device. The computer device may be a terminal device or a server, and the video processing apparatus described above may be configured in the computer device. The computer device is described below with reference to the drawings.
If the computer device is a terminal device, please refer to fig. 11; an embodiment of the present application provides a terminal device, taking a mobile phone as an example:
Fig. 11 is a block diagram illustrating a partial structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 11, the mobile phone includes: a radio frequency (RF) circuit 1410, a memory 1420, an input unit 1430, a display unit 1440, a sensor 1450, an audio circuit 1460, a wireless fidelity (WiFi) module 1470, a processor 1480, and a power supply 1490. Those skilled in the art will appreciate that the structure shown in fig. 11 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 11:
The RF circuit 1410 may be used for receiving and transmitting signals during message transmission or a call; in particular, it delivers received downlink information from a base station to the processor 1480 for processing, and transmits uplink data to the base station.
The memory 1420 may be used to store software programs and modules, and the processor 1480 executes various functional applications and data processing of the cellular phone by operating the software programs and modules stored in the memory 1420.
The input unit 1430 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. In particular, the input unit 1430 may include a touch panel 1431 and other input devices 1432. The touch panel 1431, also referred to as a touch screen, can collect touch operations performed by a user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Optionally, the touch panel 1431 may include two parts: a touch detection device and a touch controller.
The display unit 1440 may be used to display information input by or provided to the user and various menus of the mobile phone. The display unit 1440 may include a display panel 1441.
The handset may also include at least one sensor 1450, such as light sensors, motion sensors, and other sensors.
The audio circuit 1460, a speaker 1461, and a microphone 1462 may provide an audio interface between a user and the mobile phone. The audio circuit 1460 can transmit an electrical signal converted from received audio data to the speaker 1461, which converts the electrical signal into a sound signal and outputs it; conversely, the microphone 1462 converts collected sound signals into electrical signals, which the audio circuit 1460 receives and converts into audio data. The audio data is then output to the processor 1480 for processing, after which it may be transmitted through the RF circuit 1410 to, for example, another mobile phone, or output to the memory 1420 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1470, the mobile phone can help a user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband internet access. Although fig. 11 shows the WiFi module 1470, it is understood that the module is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 1480 is the control center of the mobile phone, connects the various parts of the entire mobile phone by various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1420 and calling data stored in the memory 1420.
In this embodiment, the processor 1480 included in the terminal device also has the following functions:
acquiring video data collected for a display area, where the display area is used for displaying a game environment image of a target game, and a physical object is located in the display area;
determining shape parameters of the physical object in the video data;
determining, according to the shape parameters, a target game object that matches the physical object among a plurality of game objects of the target game;
rendering the virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video.
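Taken together, the four functions form a simple pipeline. The Python skeleton below is a non-authoritative sketch of that data flow; every function body is a stand-in stub, and all names, fields, and the closest-height match are illustrative assumptions.

def acquire_video_data() -> list:
    return [{"frame": 0, "person": {"height_cm": 182.0}}]           # step 1

def determine_shape_params(video_data: list) -> dict:
    return video_data[0]["person"]                                  # step 2

def match_game_object(shape: dict, game_objects: list) -> dict:
    return min(game_objects,                                        # step 3
               key=lambda g: abs(g["height_cm"] - shape["height_cm"]))

def render_target_video(video_data: list, target: dict) -> list:
    return [dict(frame, prop=target["prop"]) for frame in video_data]  # step 4

game_objects = [{"name": "knight", "height_cm": 180.0, "prop": "sword"},
                {"name": "mage", "height_cm": 165.0, "prop": "staff"}]
video = acquire_video_data()
shape = determine_shape_params(video)
print(render_target_video(video, match_game_object(shape, game_objects)))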
If the computer device is a server, an embodiment of the present application further provides a server. Please refer to fig. 12, which is a structural diagram of the server 1500 provided in an embodiment of the present application. The server 1500 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 1522 (e.g., one or more processors), a memory 1532, and one or more storage media 1530 (e.g., one or more mass storage devices) storing an application program 1542 or data 1544. The memory 1532 and the storage media 1530 may provide transient or persistent storage. The program stored in a storage medium 1530 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Further, the central processing unit 1522 may be configured to communicate with the storage medium 1530 and execute, on the server 1500, the series of instruction operations stored in the storage medium 1530.
The server 1500 may also include one or more power supplies 1526, one or more wired or wireless network interfaces 1550, one or more input/output interfaces 1558, and/or one or more operating systems 1541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 12.
In addition, an embodiment of the present application provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to perform the method provided in the foregoing embodiments.
An embodiment of the present application also provides a computer program product including instructions which, when run on a computer, cause the computer to perform the method provided in the foregoing embodiments.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be completed by hardware related to program instructions. The program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the method embodiments are performed. The aforementioned storage medium may be at least one of the following media capable of storing program code: a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
It should be noted that, in the present specification, all the embodiments are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described embodiments of the apparatus and system are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only one specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Moreover, the implementations provided by the above aspects may be further combined to provide more implementations. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method of video processing, the method comprising:
acquiring video data collected for a display area, wherein the display area is used for displaying a game environment image of a target game, and a physical object is located in the display area;
determining shape parameters of the physical object in the video data;
determining a target game object among a plurality of game objects of the target game according to the shape parameters;
rendering a virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video.
2. The method of claim 1, wherein the determining a target game object among a plurality of game objects of the target game according to the shape parameters comprises:
determining, among the plurality of game objects of the target game, a target game object that matches the physical object according to the shape parameters.
3. The method of claim 1, further comprising:
determining the shape difference between the physical object in the video data and the target game object according to the game image parameters of the target game object and the shape parameters;
adjusting the game image parameters of the target game object according to the shape difference to obtain adjusted image parameters that match the shape parameters;
wherein the rendering the virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video comprises:
rendering the virtual prop corresponding to the target game object on the physical object in the video data according to the adjusted image parameters to obtain the target video.
4. The method of claim 1, wherein if the physical object is a human body, the determining the shape parameter of the physical object in the video data comprises:
determining at least one of face features, a height parameter, and a body shape parameter of the human body in the video data as the shape parameters.
5. The method of claim 2, wherein the determining, among the plurality of game objects of the target game, a target game object that matches the physical object according to the shape parameters comprises:
obtaining game image parameters corresponding to each of the plurality of game objects;
and comparing the similarity between the shape parameters and the game image parameters, and determining the target game object from the game objects whose similarity meets a threshold.
6. The method of claim 2, wherein if the physical object is a human body, the method further comprises:
acquiring gender parameters of the human body;
wherein the determining, among the plurality of game objects of the target game, a target game object that matches the physical object according to the shape parameters comprises:
determining, from the plurality of game objects, candidate game objects that meet the gender parameter;
and determining, from the candidate game objects, a target game object that matches the physical object according to the shape parameters.
7. The method of claim 1, further comprising:
determining the number of physical objects in the video data;
if the number of objects is a specified number, identifying an action made by the physical objects in the video data;
in response to the action being a specified action, performing the operation of rendering the virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video;
and if the number of objects is not the specified number, performing the operation of rendering the virtual prop corresponding to the target game object on the physical object in the video data to obtain the target video.
8. The method of claim 7, wherein the physical object is a human body, and wherein the identifying the action made by the physical object in the video data comprises:
determining the joint key point positions of the specified number of human bodies in the video data;
and identifying the actions made by the specified number of human bodies in the video data according to the joint key point positions.
9. The method of claim 1, further comprising:
acquiring a game object selection instruction for a specified object, wherein the specified object is one of the plurality of game objects;
and rendering the virtual prop corresponding to the specified object on the physical object in the video data to obtain a specified video.
10. The method of claim 1, further comprising:
determining the video position of the physical object in the video data;
wherein the rendering the virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video comprises:
determining a display form corresponding to the virtual prop according to the video position;
and rendering the virtual prop on the physical object in the video data based on the display form to obtain the target video.
11. The method according to any one of claims 1-10, further comprising:
storing the target video in a storage location;
generating a corresponding address code based on the storage location, and displaying the address code;
and if a video download request sent by a terminal device based on the address code is received, returning the target video from the storage location to the terminal device.
12. The method of any one of claims 1-10, wherein the display area is a three-panel display, and three display screens of the three-panel display are arranged perpendicular to each other.
13. A video processing apparatus, characterized in that the apparatus comprises an acquisition unit, a determination unit, and a rendering unit:
the obtaining unit is configured to acquire video data collected for a display area, wherein the display area is used for displaying a game environment image of a target game, and a physical object is located in the display area;
the determining unit is configured to determine shape parameters of the physical object in the video data;
the determining unit is further configured to determine, among a plurality of game objects of the target game, a target game object that matches the physical object according to the shape parameters;
and the rendering unit is configured to render a virtual prop corresponding to the target game object on the physical object in the video data to obtain a target video.
14. A computer device, the computer device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of any of claims 1-12 according to instructions in the program code.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program for performing the method of any of claims 1-12.
16. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 12.
CN202210130431.4A 2022-02-11 2022-02-11 Video processing method and related device Pending CN114425162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210130431.4A CN114425162A (en) 2022-02-11 2022-02-11 Video processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210130431.4A CN114425162A (en) 2022-02-11 2022-02-11 Video processing method and related device

Publications (1)

Publication Number Publication Date
CN114425162A true CN114425162A (en) 2022-05-03

Family

ID=81312389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210130431.4A Pending CN114425162A (en) 2022-02-11 2022-02-11 Video processing method and related device

Country Status (1)

Country Link
CN (1) CN114425162A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117085315A (en) * 2023-07-25 2023-11-21 北京维艾狄尔信息科技有限公司 AR interactive game method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40070373
Country of ref document: HK