CN112822555A - Shooting method, shooting device, electronic equipment and storage medium - Google Patents

Shooting method, shooting device, electronic equipment and storage medium

Info

Publication number
CN112822555A
CN112822555A
Authority
CN
China
Prior art keywords
shooting
current
animation
data
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011623855.1A
Other languages
Chinese (zh)
Inventor
胡婷婷
赵男
包炎
刘超
施一东
李鑫培
师锐
董一夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mihoyo Tianming Technology Co Ltd
Original Assignee
Shanghai Mihoyo Tianming Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mihoyo Tianming Technology Co Ltd filed Critical Shanghai Mihoyo Tianming Technology Co Ltd
Priority to CN202011623855.1A
Publication of CN112822555A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4781: Games
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention disclose a shooting method, a shooting apparatus, an electronic device and a storage medium. The method comprises: acquiring current animation data corresponding to a current user at a current moment; and triggering the shooting of at least one target object in the current animation data based on the animation history shooting data of the current user. In the technical solution of the embodiments, because the shooting of the current animation data is triggered based on the animation history shooting data of the current user, a personalized analysis of the current user is performed and the shooting can be triggered automatically. The current animation data is thus shot efficiently, and frames the user is likely to be interested in are captured automatically, so that the user's highlight moments are recorded in time, the user's personalized needs are met, and the user experience is improved.

Description

Shooting method, shooting device, electronic equipment and storage medium
Technical Field
The embodiments of the invention relate to the technical field of game development, and in particular to a shooting method, a shooting apparatus, an electronic device and a storage medium.
Background
To record animations such as storylines and player interactions in an online game, existing games provide a function for recording video or taking screenshots. When a game player wants to record a particular animation or frame, the player can trigger a shooting button to do so.
At present, if a game player wants to capture a game frame during play, the frame is usually captured through the player's manual operation. With such manual screenshots, because the player's screenshot operation is late, or the player forgets the screenshot altogether, key frames cannot be obtained in time, for example an interaction frame with a Non-Player Character (NPC), a battle frame against a boss, or a special-effect frame of a skill released during battle, and it is difficult for the player to capture frames that are fleeting during the game. Manual screenshots may also be mistimed because of network delay on the player's device or machine stutter, so that a suitable game frame is missed.
Disclosure of Invention
The embodiments of the invention provide a shooting method, a shooting apparatus, an electronic device and a storage medium, with the aim of shooting current animation data automatically.
In a first aspect, an embodiment of the present invention provides a shooting method, where the method includes:
acquiring current animation data corresponding to a current user at a current moment;
and triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
In a second aspect, an embodiment of the present invention further provides a shooting apparatus, where the shooting apparatus includes:
the animation acquisition module is used for acquiring current animation data corresponding to a current user at the current moment;
and the shooting triggering module is used for triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the shooting method according to any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the shooting method according to any one of the embodiments of the present invention.
In the technical solution of the embodiments of the invention, because the shooting of the current animation data is triggered based on the animation history shooting data of the current user, a personalized analysis of the current user is performed and the shooting can be triggered automatically. The current animation data is thus shot efficiently, and frames the user is likely to be interested in are captured automatically, so that the user's highlight moments are recorded in time, the user's personalized needs are met, and the user experience is improved.
Drawings
To illustrate the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the described drawings illustrate only some of the embodiments of the invention, not all of them, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a shooting method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a shooting method according to a second embodiment of the present invention;
Fig. 3 is a schematic flowchart of a shooting method according to a third embodiment of the present invention;
fig. 4 is a schematic flowchart of a shooting method according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a shooting device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should also be noted that, for convenience of description, the drawings show only the structures related to the present invention rather than all structures.
Before exemplary embodiments are discussed in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Before the embodiments of the present invention are described, an application scenario is introduced. The shooting method provided by the embodiments of the invention is applicable to scenarios in which animation data in multimedia resources is shot automatically, and is particularly suitable for automatically shooting animation data in game animation. For ease of understanding, the embodiments of the invention describe the shooting method by taking a game scenario as the example application scenario. Animation data in a game animation may include environments, characters, monsters, and the like.
Example one
Fig. 1 is a flowchart of a shooting method according to an embodiment of the present invention. This embodiment is applicable to the case where current animation data is shot automatically based on animation history shooting data. The method may be executed by a shooting apparatus, which may be deployed in a terminal or a server; the terminal and the server may execute the shooting method of this embodiment independently or in cooperation.
As shown in fig. 1, the shooting method in this embodiment may specifically include:
and S110, acquiring current animation data corresponding to the current user at the current moment.
Note that an animation is often composed of many frames of images. Animations usually have a theme and a storyline, and as the storyline progresses, the content of each frame changes. While the animation is playing, the user continuously sees new animation data, and the playback progress of the same animation may differ between users. Therefore, in the embodiment of the present invention, the current animation data corresponding to the current user at the current time may be obtained.
The current animation data may be the data corresponding to the animation frame currently being played. Taking a game animation as an example, the current animation data may be the data of a scene frame containing one or more objects such as a character, a monster, weather, a tree or a building. In the animation, these objects are generally realized through their corresponding information groups. For example, the information group corresponding to each object in the current animation data may be rendered from a preset game-player viewing angle to obtain the current animation frame. That is, the current animation data is determined by the information groups of the objects contained in the current animation frame.
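As a concrete illustration of the information groups described above, the current animation data might be organized as follows. This is a minimal Python sketch; all class and field names are assumptions for illustration, not structures defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectInfoGroup:
    """Hypothetical information group for one object in the current frame."""
    object_id: str
    object_type: str                      # e.g. "character", "monster", "building"
    position: Tuple[float, float, float]  # scene coordinates
    attributes: dict = field(default_factory=dict)

@dataclass
class AnimationData:
    """Hypothetical container for the current animation data of one user."""
    user_id: str
    timestamp: float                      # the current moment
    objects: List[ObjectInfoGroup]        # information groups to be rendered

# One frame of current animation data for the current user:
current_frame = AnimationData(
    user_id="player-1",
    timestamp=1234.5,
    objects=[ObjectInfoGroup("npc-7", "character", (0.0, 1.0, 2.0))],
)
```

Rendering these information groups from the preset player viewing angle would then produce the current animation frame.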
S120: Trigger the shooting of at least one target object in the current animation data based on the animation history shooting data of the current user.
Specifically, whether to shoot the current animation data may be determined based on the animation history shooting data of the current user; if so, the shooting of at least one target object in the current animation data is triggered. In other words, the animation history shooting data of the current user may be analyzed to decide whether the current animation data should be shot.
The animation history shooting data may include, but is not limited to, at least one of the following: historical shooting animation data, historical shooting parameters, user operation information corresponding to the historical shooting animation data, and the like. The user operation information may be understood as information about the operations the user performed on the historical shooting animation data. Illustratively, it may include, but is not limited to, at least one of the following operations on the historical shooting animation data: deleting, saving, sharing and manually re-shooting.
Note that the historical shooting animation data may be understood as animation data that has been shot before, and it may or may not include the current animation data. For example, if the historical shooting animation data includes the current user's shot of a preset animation event, the current picture data may be a frame of that preset animation event played again by the current user. Of course, the current picture data may instead be new picture data different from the historical shooting animation data.
Optionally, triggering the shooting of at least one target object in the current animation data based on the animation history shooting data of the current user includes: determining whether the current animation data is target shooting animation data or not based on animation historical shooting data of a current user; if yes, shooting of at least one target object in the current animation data is triggered based on the animation historical shooting data of the current user.
For example, whether the current animation data is the target shooting animation data may be determined based on the picture type of the historical shooting animation data in the animation history shooting data of the current user. Specifically, this may include: determining the historical shooting picture type of the historical shooting animation data in the animation history shooting data of the current user; determining the current picture type of the current animation data; and, if the historical shooting picture type is the same as the current picture type, determining the current animation data to be the target shooting animation data.
The picture type may be determined based on the scene type of the current animation data, for example, a battle scene, a re-equipping scene or an upgrading scene. The picture type may also be determined based on the picture style, for example, a fierce style, an aesthetic style or a fresh style. The picture type may also be determined based on the picture colors: for example, it may be determined from the number of colors in the picture and a preset color-count threshold, dividing pictures into gorgeous or simple colors; or it may be determined from the tone of the picture colors, specifically from the proportion of a preset color in the picture. It should be noted that the picture type may be determined in many ways, and the specific basis of division may be set according to actual requirements and is not specifically limited herein.
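The type-matching decision described above can be sketched as a hypothetical helper; the function name and the type labels are illustrative assumptions, not terms fixed by the disclosure.

```python
def is_target_animation(current_type: str, history_types: list) -> bool:
    """Decide whether the current frame counts as target shooting animation
    data: it qualifies when the historical shooting picture types contain
    the same type as the current picture."""
    return current_type in history_types
```

For example, a user who has previously captured battle scenes would have the shooting of a new battle frame triggered, while a scene type never captured before would not trigger it.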
Alternatively, this operation can also be realized by artificial intelligence. For example, a preset machine learning model may be trained based on the animation history shooting data of the current user to obtain an animation shooting prediction model, and the shooting of at least one target object in the current animation data may then be triggered based on the prediction result of the animation shooting prediction model for the current animation data.
Optionally, triggering the shooting of at least one target object in the current animation data based on the animation history shooting data of the current user includes: determining whether the current animation data is target animation shooting data or not based on historical shooting data which corresponds to at least one historical user and corresponds to the current animation data in animation historical shooting data of at least one historical user; if yes, shooting of at least one target object in the current animation data is triggered based on the animation historical shooting data of the current user.
It is understood that the historical shooting data of at least one historical user that corresponds to the current animation data (for example, the historical animation data captured when the historical user was at the same game stage as the current user) tends to have a high similarity to the current animation data. In a game animation, the animation data includes game characters, the game scene, and the like. The game scene usually includes fixed objects that do not change from player to player, and such data can be treated as static data, i.e. the inherent object data in the animation data: for example, inanimate objects such as buildings, plants and small items in the game scene, or atmosphere data such as time, weather, wind and tide in the environment.
Optionally, the historical static data in the historical shooting animation data of at least one historical user and the current static data of the current animation data of the current user are obtained respectively, and the historical shooting data corresponding to the current animation data is determined based on the similarity between the current static data and the historical static data. Specifically, if the similarity between the current static data and the historical static data is greater than a preset static-similarity threshold, the historical shooting animation data may be determined to be the historical shooting data of the at least one historical user that corresponds to the current animation data.
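A minimal sketch of this static-data comparison follows. The disclosure does not fix a similarity metric or a threshold value, so the Jaccard overlap of fixed scene objects and the 0.8 threshold used here are both assumptions for illustration.

```python
STATIC_SIMILARITY_THRESHOLD = 0.8  # illustrative preset static-similarity threshold

def static_similarity(current_static: set, history_static: set) -> float:
    """Overlap of inherent scene objects (buildings, plants, weather, ...)
    between the current frame and a historically shot frame."""
    if not current_static and not history_static:
        return 1.0
    return len(current_static & history_static) / len(current_static | history_static)

def corresponds_to_current(current_static: set, history_static: set) -> bool:
    # A historical shot corresponds to the current animation data when the
    # static-data similarity exceeds the preset threshold.
    return static_similarity(current_static, history_static) > STATIC_SIMILARITY_THRESHOLD
```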
Of course, the history shooting data corresponding to the at least one history user and corresponding to the current animation data may be determined based on the history shooting animation data in the animation history shooting data of the at least one history user, the picture type of the current animation data of the current user, and the like. For a specific implementation manner, reference may be made to the explanation of the picture type in the embodiment of the present invention, which is not described herein again.
As described above, the current animation data may include one or more objects such as characters, monsters, weather, trees or buildings. The target object may be understood as a key object to be photographed among the objects contained in the current animation data. The number of target objects may be one, two, or more than two. Target objects include, but are not limited to, player-manipulated characters, game monsters, game NPCs and scene buildings. It is noted that player-manipulated characters include, but are not limited to, humans and animals, and scene buildings include, but are not limited to, natural scenery such as mountains, sky and grass, and man-made structures such as churches and arenas.
In one embodiment, the at least two target objects to be photographed may be determined based on the picture type of the current animation data. The association between each picture type and its corresponding target objects to be photographed may be stored in advance; after the picture type of the current animation data is identified, the target objects to be photographed can be determined from this pre-stored association. For example, when the current animation data is a battle scene, the corresponding at least two target objects include at least one player-manipulated character and the game monster battling each manipulated character, or player-manipulated characters battling one another. That is, the at least two target objects to be photographed are determined by recognizing the picture type of the current animation data of the target player.
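The pre-stored association between picture types and target objects could be sketched as a simple lookup table; both the table keys and the object names are illustrative assumptions.

```python
# Hypothetical pre-stored association: picture type -> target objects to photograph.
TYPE_TO_TARGETS = {
    "battle": ["player_character", "game_monster"],
    "upgrade": ["player_character"],
}

def targets_to_photograph(picture_type: str) -> list:
    """After the picture type of the current animation data is identified,
    look up the corresponding target objects to be photographed."""
    return TYPE_TO_TARGETS.get(picture_type, [])
```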
In another embodiment, the at least two target objects to be photographed include at least the target player's manipulated character, and the remaining target objects to be photographed may be the interaction objects of that character in the current animation data. Specifically, the target player's manipulated character in the current animation data may be detected in real time, so that when it engages in an interactive behavior, such as a battle, each of its interaction objects is determined as one of the remaining target objects to be photographed.
In another embodiment, the at least two target objects to be photographed may also be determined based on the attribute information of each object. The attribute information of an object includes, but is not limited to, its object type, such as player character, monster, NPC or building. Specifically, when it is detected that the attribute information of the objects in the current picture data includes a preset object type, the objects conforming to the preset object type are selected from among them as the at least two target objects to be photographed. The preset object type may be, for example, player character plus monster, or player character plus building.
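Selecting targets by attribute information might look like the following sketch; the object representation and the preset types are assumptions for illustration.

```python
# Hypothetical preset object types (e.g. player character plus monster).
PRESET_OBJECT_TYPES = frozenset({"player_character", "monster"})

def select_target_objects(objects, preset_types=PRESET_OBJECT_TYPES):
    """Keep the objects in the current picture data whose attribute
    information (here just an object type) matches a preset object type."""
    return [obj for obj in objects if obj["type"] in preset_types]
```

Usage: given the objects of the current frame, the function returns the subset to be treated as target objects to be photographed.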
Optionally, after the shooting of at least one target object in the current animation data is triggered, the method further includes: shooting the at least one target object in the current animation data. Shooting the at least one target object may mean taking a picture of it, or recording a video of it.
It is to be understood that the target object is the main subject, not a limitation on the content of the shot. When at least one target object in the current animation data is shot, objects other than the target object may also appear in the shot data.
In the technical solution of this embodiment, because the shooting of the current animation data is triggered based on the animation history shooting data of the current user, a personalized analysis of the current user is performed and the shooting can be triggered automatically. The current animation data is thus shot efficiently, and frames the user is likely to be interested in are captured automatically, so that the user's highlight moments are recorded in time, the user's personalized needs are met, and the user experience is improved.
Example two
Fig. 2 is a schematic flowchart of a shooting method according to a second embodiment of the present invention. This embodiment further refines the optional technical solutions described above. Optionally, triggering the shooting of at least one target object in the current animation data based on the animation history shooting data of the current user includes: determining the current shooting preference of the current user for the current animation data based on the animation history shooting data of the current user; and, if the current shooting preference reaches a preset preference threshold, triggering the shooting of at least one target object in the current animation data. Terms identical or corresponding to those in the above embodiments are not explained in detail here again.
As shown in fig. 2, the shooting method in this embodiment may specifically include:
S210: Acquire current animation data corresponding to the current user at the current moment.
S220: Determine the current shooting preference of the current user for the current animation data based on the animation history shooting data of the current user.
Optionally, historical captured animation data in animation historical captured data of a current user is determined, and a current capture preference of the current user for the current animation data is determined based on the historical captured animation data. For example, scene information of historical photographing animation data and current animation data may be respectively determined, and a current photographing preference of the current user for the current animation data may be determined based on the scene information.
Optionally, determining the current shooting preference of the current user for the current animation data based on the scene information includes: determining whether historical shooting animation data consistent with the scene information of the current animation data exists among the historical shooting animation data, and determining the current shooting preference of the current user for the current animation data based on the determination result. For example, if such data exists, the current shooting preference is determined to be 1; if not, it is determined to be 0.
Optionally, determining the current shooting preference of the current user for the current animation data based on the determination result includes: if historical shooting animation data consistent with the scene information of the current animation data exists, determining the proportion of such data in the total historical shooting animation data, and determining the current shooting preference from that proportion. For example, the higher the proportion, the higher the current shooting preference; conversely, the lower the proportion, the lower the current shooting preference.
Alternatively, if there is history photographed animation data that coincides with the scene information of the current animation data among the history photographed animation data, a history photographing time of the history photographed animation data that coincides with the scene information of the current animation data is determined, and a current photographing preference of the current animation data is determined based on the history photographing time. For example, the current shooting preference may be higher as the history shooting time is closer to the current time, whereas the current shooting preference may be lower as the history shooting time is farther from the current time.
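Combining the two signals above (the proportion of matching historical shots and how recently the latest matching shot was taken) might look like this sketch; the exponential decay and its half-life are assumed parameters, not values specified by the disclosure.

```python
def current_shooting_preference(n_matching: int, n_total: int,
                                hours_since_last_match: float,
                                half_life_hours: float = 24.0) -> float:
    """Illustrative preference score in [0, 1] for the current animation data."""
    if n_total == 0 or n_matching == 0:
        return 0.0
    ratio = n_matching / n_total  # higher proportion -> higher preference
    # Closer historical shooting time -> higher preference (halves per half-life).
    recency = 0.5 ** (hours_since_last_match / half_life_hours)
    return ratio * recency
```

With this sketch, a user whose matching shots are both frequent and recent gets the highest preference, matching the two heuristics described above.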
In addition, the current shooting preference of the current user for the current animation data can be determined based on the historical shooting animation data, the picture type of the current animation data, the story line and/or the attribute information of the target object in the picture, and the like.
S230: If the current shooting preference reaches a preset preference threshold, trigger the shooting of at least one target object in the current animation data.
It is understood that the preset preference threshold may be set according to actual requirements, and specific values thereof are not specifically limited herein.
In the technical solution above, the current shooting preference of the current user for the current animation data is determined by analyzing the animation history shooting data of the current user; the current shooting preference can be understood as the degree to which the current user wants to shoot the current animation data. If the current shooting preference reaches the preset preference threshold, it can be concluded that the current user wants to shoot the current animation data, and the shooting of at least one target object in the current animation data is triggered. In this way, the user's preferences are derived from the animation history shooting data, producing a shooting trigger mechanism personalized for the current user and further improving the user experience.
Example three
Fig. 3 is a schematic flowchart of a shooting method according to a third embodiment of the present invention. This embodiment further refines the optional technical solutions described above. Optionally, determining the shooting preference of the current user for the current animation data based on the animation history shooting data of the current user includes: training an original machine learning model based on the animation history shooting data of the current user to obtain a preference evaluation model; and determining the current shooting preference of the current user for the current animation data based on the trained preference evaluation model. Terms identical or corresponding to those in the above embodiments are not explained in detail here again.
As shown in fig. 3, the shooting method in this embodiment may specifically include:
S310: Acquire current animation data corresponding to the current user at the current moment.
S320: Train an original machine learning model based on the animation history shooting data of the current user to obtain a preference evaluation model.
The animation historical shooting data comprises historical shooting animation data and historical shooting preference degrees corresponding to the historical shooting animation data.
Specifically, historical shooting animation data and a historical shooting preference corresponding to the historical shooting animation data may be input into a pre-established original machine learning model, and a predicted shooting preference corresponding to the historical shooting animation data may be output; and adjusting an original machine learning model based on the predicted shooting preference and the historical shooting preference corresponding to the historical shooting animation data to obtain a preference evaluation model.
Specifically, adjusting the original machine learning model based on the predicted shooting preference degree and the historical shooting preference degree corresponding to the historical shooting animation data may include constructing a loss function from the predicted shooting preference degree and the historical shooting preference degree, and adjusting the model parameters of the original machine learning model so that the loss function converges. When the loss function converges, the trained original machine learning model is used as the preference degree evaluation model. A model parameter may be, for example, the weight of each feature of the historical shooting animation data.
It can be understood that the original machine learning model generally needs to be trained iteratively on multiple sets of historical shooting animation data to obtain the preference degree evaluation model. Optionally, after training is completed, the trained preference degree evaluation model may be verified based on verification sample data taken from the historical shooting animation data, and whether the preference degree evaluation model requires further training is determined based on the verification result.
Illustratively, the original machine learning model may comprise a neural network model. Further, the neural network model may include a deep learning model.
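As an illustration of S320 and S330, the training-and-prediction loop described above can be sketched as follows. For simplicity, this sketch uses a plain linear model trained by gradient descent rather than a neural network, and the feature extraction, learning rate, and convergence tolerance are all assumptions for illustration, not part of the disclosed scheme:

```python
# Minimal sketch (an assumption, not the disclosed model): the shooting
# preference degree is predicted as a weighted sum of numeric features
# extracted from historical shooting animation data; the weights are the
# model parameters, adjusted until the mean-squared-error loss converges.

def train_preference_model(samples, lr=0.05, epochs=2000, tol=1e-12):
    """samples: list of (feature_vector, historical_shooting_preference)."""
    n_features = len(samples[0][0])
    weights = [0.0] * n_features
    prev_loss = float("inf")
    for _ in range(epochs):
        loss = 0.0
        grads = [0.0] * n_features
        for features, target in samples:
            pred = sum(w * x for w, x in zip(weights, features))
            err = pred - target
            loss += err * err / len(samples)
            for i, x in enumerate(features):
                grads[i] += 2.0 * err * x / len(samples)
        weights = [w - lr * g for w, g in zip(weights, grads)]
        if abs(prev_loss - loss) < tol:  # the loss function has converged
            break
        prev_loss = loss
    return weights


def predict_preference(weights, features):
    """Evaluate the current shooting preference degree of current animation
    data from its features, using the trained model (corresponds to S330)."""
    return sum(w * x for w, x in zip(weights, features))
```

Here `train_preference_model` plays the role of S320 (the loop stops once the change in the loss falls below the tolerance, i.e. the loss function has converged), and `predict_preference` plays the role of S330.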
Optionally, before training the original machine learning model based on the historical shooting animation data, the method further includes: determining the historical shooting preference degree of the current user for the historical shooting animation data. Specifically, the historical shooting animation data in the animation historical shooting data of the current user and the user operation information corresponding to the historical shooting animation data may be determined first; the historical shooting preference degree of the current user for the historical shooting animation data is then determined based on the user operation information.
The user operation information corresponding to the historical shooting animation data can be understood as the operations that the current user performed on the historical shooting animation data. The user operation information may include one, two, or more items. Specifically, the user operation information may include, but is not limited to, at least one of the following operations: clipping, beautifying, deleting, saving, sharing, and/or manually re-shooting. In this embodiment of the present invention, the preference information of the current user for the historical shooting animation data may be determined from the user operation information.
Illustratively, the historical shooting preference degree when the current user both shares and saves the historical shooting animation data is higher than that when the current user only saves it; the preference degree when the current user saves the data is higher than that when the current user only beautifies it; the preference degree when the current user beautifies the data is higher than that when the current user deletes it and manually re-shoots; and the preference degree when the current user deletes the data and manually re-shoots is higher than that when the current user only deletes it.
It should be noted that the above ranking of preference degrees, and the specific user operation information on which it depends, are only an exemplary illustration of how the historical shooting preference degree may be determined, and are not a limitation.
For example, the historical shooting preference degree for the historical shooting animation data may also be determined by the number of times the historical shooting animation data has been shared, or the like.
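The ranking of operations described above can be encoded as a simple scoring rule. A minimal sketch follows, in which the operation names, the numeric scores, and the per-share bonus are all illustrative assumptions chosen only to reproduce the stated ordering:

```python
# Hypothetical operation-to-score mapping (scores are assumed, not
# disclosed) reproducing the ranking in the text:
# share+save > save > beautify > delete+re-shoot > delete.
OPERATION_SCORES = {
    "share": 3.0,
    "save": 2.0,
    "beautify": 1.0,
    "retake": 0.5,   # a manual re-shoot still signals some interest
    "delete": -1.0,
}


def historical_preference(operations, share_count=1):
    """Score one piece of historical shooting animation data from the user
    operations performed on it; unknown operations contribute nothing."""
    score = sum(OPERATION_SCORES.get(op, 0.0) for op in operations)
    if "share" in operations:  # more shares -> higher preference degree
        score += 0.5 * (share_count - 1)
    return score
```

Any mapping that preserves the ordering share+save > save > beautify > delete+re-shoot > delete would serve equally well.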
S330, determining the current shooting preference of the current user to the current animation data based on the trained preference evaluation model.
Specifically, the current animation data is input into the trained preference degree evaluation model, which outputs the current shooting preference degree of the current user for the current animation data.
S340, if the current shooting preference degree reaches a preset preference degree threshold value, triggering the shooting of at least one target object in the current animation data.
According to the technical scheme of this embodiment, the original machine learning model is trained on the animation historical shooting data of the current user to obtain a preference degree evaluation model, and the current shooting preference degree of the current user for the current animation data is then evaluated by the preference degree evaluation model. Through the automatic decision of the machine learning model, the shooting preferences reflected in the animation historical shooting data of the current user can be fully analyzed, and the current shooting preference degree for the current animation data can be evaluated quickly and effectively. Moreover, the model can be continuously and automatically optimized as the animation historical shooting data grows; compared with a fixed statistical rule, the machine learning approach evaluates the preference of the current user more flexibly and dynamically, which improves data processing efficiency while further improving the user experience.
Example four
Fig. 4 is a schematic flow chart of a shooting method according to a fourth embodiment of the present invention. This embodiment further refines the foregoing optional technical solutions. Optionally, after triggering the shooting of at least one target object in the current animation data, the method further includes: determining current shooting parameters based on historical shooting parameters in the animation historical shooting data; and shooting at least one target object in the current animation data based on the current shooting parameters. Terms that are the same as or correspond to those in the above embodiments are not explained in detail herein.
As shown in fig. 4, the shooting method in this embodiment may specifically include:
and S410, acquiring current animation data corresponding to the current user at the current moment.
And S420, triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
And S430, determining current shooting parameters based on the historical shooting parameters in the animation historical shooting data.
The historical shooting parameters can be understood as the camera shooting parameters, camera attribute parameters and/or scene parameters used when the historical shooting animation data was shot. Illustratively, the camera shooting parameters are the parameters of the camera used when shooting the historical shooting animation data, and may include, but are not limited to, at least one of the following: the shooting angle, the shooting position of the camera, the wide-angle parameter of the camera, and the like. Specifically, the camera attribute parameters are the parameters used during the shooting process itself, for example, the shutter, aperture, field of view, exposure amount, and whether the flash is on; they may also include the shooting light and/or shooting angle corresponding to the current user. The scene parameters may be the light-source parameters in the current scene, for example, the number of light sources and the intensity of the light sources. Correspondingly, the current shooting parameters can be understood as the camera shooting parameters, camera attribute parameters and scene parameters to be used for shooting the current animation data.
For example, the historical shooting parameters used in the shooting closest to the current time in the animation historical shooting data of the current user may be acquired as the current shooting parameters.
Optionally, the most frequently used historical shooting parameters among the historical shooting parameters in the animation historical shooting data of the current user may be acquired as the current shooting parameters.
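The two selection strategies above — the most recently used and the most frequently used historical shooting parameters — could be sketched as follows; the record structure (a timestamp plus a hashable tuple of shooting parameters) is an assumption for illustration:

```python
from collections import Counter


def most_recent_parameters(history):
    """history: list of dicts with 'timestamp' and 'shooting_params' keys.
    Returns the parameters used in the shooting closest to the current time."""
    latest = max(history, key=lambda record: record["timestamp"])
    return latest["shooting_params"]


def most_frequent_parameters(history):
    """Returns the most frequently used parameter set in the history.
    'shooting_params' must be hashable (e.g. a tuple) to be counted."""
    counts = Counter(record["shooting_params"] for record in history)
    return counts.most_common(1)[0][0]
```

Either function's return value can then serve directly as the current shooting parameters in S440.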
Optionally, determining the current shooting parameters based on the historical shooting parameters in the animation historical shooting data includes: determining at least one set of target historical shooting animation data corresponding to the current animation data in the animation historical shooting data; and determining the current shooting parameters based on the historical shooting parameters corresponding to the at least one set of target historical shooting animation data.
For example, at least one set of target historical shooting animation data corresponding to the current animation data may be determined from the animation historical shooting data according to the picture type of the historical shooting animation data and the picture type of the current animation data. Specifically, historical shooting animation data whose picture type coincides with that of the current animation data may be taken as the target historical shooting animation data.
For example, at least one set of target historical shooting animation data corresponding to the current animation data may also be determined according to the similarity between each piece of historical shooting animation data and the current animation data. Specifically, if the similarity of a piece of historical shooting animation data to the current animation data exceeds a preset similarity threshold, that historical shooting animation data is determined as target historical shooting animation data.
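Both ways of selecting target historical shooting animation data — matching the picture type, or exceeding a preset similarity threshold — amount to simple filters over the history. A sketch follows, with illustrative field names and an externally supplied similarity function (both assumptions, not part of the disclosure):

```python
def targets_by_picture_type(history, current_picture_type):
    """Select historical records whose picture type coincides with that
    of the current animation data."""
    return [r for r in history if r["picture_type"] == current_picture_type]


def targets_by_similarity(history, current, similarity_fn, threshold=0.8):
    """Select historical records whose similarity to the current animation
    data exceeds a preset similarity threshold; similarity_fn is any
    function scoring a (record, current) pair."""
    return [r for r in history if similarity_fn(r, current) > threshold]
```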
Optionally, determining the current shooting parameters based on the historical shooting parameters corresponding to the at least one set of target historical shooting animation data includes: respectively acquiring the historical shooting parameters corresponding to each set of target historical shooting animation data, and taking each set of historical shooting parameters as a set of current shooting parameters; or determining target historical shooting parameters from the historical shooting parameters corresponding to the at least one set of target historical shooting animation data, and taking the target historical shooting parameters as the current shooting parameters.
For example, determining the target historical shooting parameters from the historical shooting parameters corresponding to the at least one set of target historical shooting animation data may include: determining the picture similarity between each set of target historical shooting animation data and the current animation data, and determining reference animation shooting data from the at least one set of target historical shooting animation data based on the picture similarity; and taking the historical shooting parameters corresponding to the reference animation shooting data as the target historical shooting parameters. For example, the target historical shooting animation data with the highest picture similarity to the current animation data may be used as the reference animation shooting data.
For example, determining the picture similarity between the target historical shooting animation data and the current animation data may specifically be calculating the picture similarity based on one or more preset similarity parameters. The preset similarity parameters may include, but are not limited to, at least one of the following: picture color, number of picture objects and/or picture type, and the like.
The number of preset similarity parameters may be one, two, or more. When the picture similarity between the target historical shooting animation data and the current animation data is calculated based on two or more preset similarity parameters, a similarity score may first be calculated for each similarity parameter, and the overall picture similarity is then obtained by summing or weighted summing of these scores.
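The combination step above — a per-parameter similarity score for each preset similarity parameter, merged by summing or weighted summing — can be sketched as follows (the parameter names and weights are illustrative assumptions):

```python
def picture_similarity(per_parameter_similarities, weights=None):
    """Combine per-parameter similarity scores (e.g. for picture color,
    number of picture objects, picture type) into one picture similarity.
    per_parameter_similarities: dict mapping parameter name -> score.
    With no weights, the scores are simply summed; with weights, a
    normalized weighted sum is returned."""
    if weights is None:
        return sum(per_parameter_similarities.values())
    total_weight = sum(weights.values())
    return sum(weights[name] * score
               for name, score in per_parameter_similarities.items()) / total_weight
```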
S440, shooting at least one target object in the current animation data based on the current shooting parameters.
Optionally, if one set of current shooting parameters is determined, at least one target object in the current animation data may be shot based on that set of current shooting parameters.
Alternatively, if two or more sets of current shooting parameters are determined, at least one target object in the current animation data may be shot based on each set of current shooting parameters respectively. On this basis, the resulting two or more sets of shot images and/or video data can be displayed. Furthermore, operation information of the current user on the images and/or video data can be received and responded to, where the operation information includes clipping, beautifying, deleting, saving and/or sharing, and the like.
According to the technical scheme of this embodiment, the current shooting parameters are determined from the historical shooting parameters in the animation historical shooting data, which fully accounts for the differences in shooting habits between users and allows the personal shooting habits or shooting style of the current user to be determined. At least one target object in the current animation data is then shot based on the current shooting parameters, so that the shot data better matches the effect the current user would achieve by shooting manually. This fully meets the personalized requirements of the user and greatly improves the user experience.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a shooting apparatus according to a fifth embodiment of the present invention. The apparatus can execute the shooting method according to any embodiment of the present invention and can be implemented by software and/or hardware. The shooting apparatus of this embodiment of the present invention may include: an animation acquisition module 510 and a shooting trigger module 520.
The animation obtaining module 510 is configured to obtain current animation data corresponding to a current user at a current time; a shooting triggering module 520, configured to trigger shooting of at least one target object in the current animation data based on the animation history shooting data of the current user.
According to the technical scheme of this embodiment of the present invention, shooting of the current animation data is triggered based on the animation historical shooting data of the current user. By analyzing the current user individually, shooting of the current animation data can be triggered automatically and efficiently, and pictures that may interest the user can be shot automatically, so that the user's highlight moments are recorded in time. This meets the personalized requirements of the user and achieves the technical effect of improving the user experience.
On the basis of the technical solutions of the embodiments of the present invention, optionally, the shooting triggering module 520 includes:
the preference degree determining unit is used for determining the current shooting preference degree of the current user to the current animation data based on the animation historical shooting data of the current user;
and the shooting triggering unit is used for triggering the shooting of at least one target object in the current animation data if the current shooting preference degree reaches a preset preference degree threshold value.
On the basis of the technical solutions of the embodiments of the present invention, the preference degree determining unit may be configured to:
training an original machine learning model based on animation historical shooting data of the current user to obtain a preference degree evaluation model, wherein the animation historical shooting data comprises historical shooting animation data and historical shooting preference degrees corresponding to the historical shooting animation data;
and determining the current shooting preference of the current user to the current animation data based on the trained preference evaluation model.
On the basis of the technical solutions of the embodiments of the present invention, the shooting apparatus may further include: an operation information determination module and a historical preference determination module.
The operation information determining module is used for determining historical shooting animation data in animation historical shooting data of the current user and user operation information corresponding to the historical shooting animation data; and the historical preference determining module is used for determining the historical shooting preference degree of the current user on the historical shooting animation data based on the user operation information.
On the basis of the technical solutions of the embodiments of the present invention, the shooting apparatus may further include: a shooting parameter determining module and a shooting module.
The shooting parameter determining module is used for determining current shooting parameters based on historical shooting parameters in animation historical shooting data after the shooting of at least one target object in the current animation data is triggered; and the shooting module is used for shooting at least one target object in the current animation data based on the current shooting parameters.
On the basis of the technical solutions of the embodiments of the present invention, the shooting parameter determining module may be configured to:
determining at least one group of target historical shooting animation data corresponding to the current animation data in the animation historical shooting data;
determining current photographing parameters based on the history photographing parameters corresponding to the at least one set of target history photographing animation data.
On the basis of the technical solutions of the embodiments of the present invention, the shooting trigger module 520 may be configured to:
determining whether the current animation data is target animation shooting data or not based on historical shooting data which corresponds to at least one historical user and corresponds to the current animation data in animation historical shooting data of at least one historical user;
if yes, shooting of at least one target object in the current animation data is triggered based on the animation historical shooting data of the current user.
The shooting apparatus provided by this embodiment of the present invention can execute the shooting method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing the method.
It should be noted that the units and modules included in the above shooting apparatus are divided merely according to functional logic; the division is not limited thereto as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only for convenience of distinguishing them from each other, and are not intended to limit the protection scope of the embodiments of the present invention.
EXAMPLE six
Fig. 6 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement a photographing method provided by the present embodiment.
EXAMPLE seven
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a photographing method, including:
acquiring current animation data corresponding to a current user at a current moment;
and triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A photographing method, characterized by comprising:
acquiring current animation data corresponding to a current user at a current moment;
and triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
2. The method of claim 1, wherein triggering the capture of at least one target object in the current animation data based on the current user's animation history capture data comprises:
determining the current shooting preference degree of the current user to the current animation data based on the animation historical shooting data of the current user;
and if the current shooting preference reaches a preset preference threshold, triggering the shooting of at least one target object in the current animation data.
3. The method of claim 2, wherein the determining of the shooting preference of the current user for the current animation data based on the animation history shooting data of the current user comprises:
training an original machine learning model based on animation historical shooting data of the current user to obtain a preference degree evaluation model, wherein the animation historical shooting data comprises historical shooting animation data and historical shooting preference degrees corresponding to the historical shooting animation data;
and determining the current shooting preference of the current user to the current animation data based on the trained preference evaluation model.
4. The method of claim 3, further comprising:
determining historical shooting animation data in animation historical shooting data of the current user and user operation information corresponding to the historical shooting animation data;
and determining the historical shooting preference degree of the current user on the historical shooting animation data based on the user operation information.
5. The method of claim 1, further comprising, after the triggering the capturing of the at least one target object in the current animation data:
determining current shooting parameters based on historical shooting parameters in the animation historical shooting data;
and shooting at least one target object in the current animation data based on the current shooting parameters.
6. The method of claim 5, wherein determining current shooting parameters based on historical shooting parameters in the animation historical shooting data comprises:
determining at least one group of target historical shooting animation data corresponding to the current animation data in the animation historical shooting data;
determining current photographing parameters based on the history photographing parameters corresponding to the at least one set of target history photographing animation data.
7. The method of claim 1, wherein triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user comprises:
determining, based on historical shooting data corresponding to the current animation data in animation historical shooting data of at least one historical user, whether the current animation data is target animation shooting data;
and if so, triggering the shooting of the at least one target object in the current animation data based on the animation historical shooting data of the current user.
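The decision in claim 7 amounts to a popularity test across historical users. A hedged sketch, where the threshold of two users is an illustrative assumption:

```python
# Sketch of claim 7: the current animation data counts as target
# animation shooting data when enough historical users have shot it.
# The min_users threshold is an assumption for illustration.
def is_target_animation(animation_id: str,
                        user_histories: dict[str, set[str]],
                        min_users: int = 2) -> bool:
    """True if at least min_users historical users shot this animation."""
    shooters = sum(1 for shots in user_histories.values()
                   if animation_id in shots)
    return shooters >= min_users

histories = {"user_a": {"anim_1", "anim_2"},
             "user_b": {"anim_1"},
             "user_c": {"anim_3"}}
# Only anim_1 was shot by two historical users, so only it would trigger
# the per-user shooting logic of claim 1.
```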
8. A shooting device, comprising:
the animation acquisition module is used for acquiring current animation data corresponding to a current user at the current moment;
and the shooting triggering module is used for triggering the shooting of at least one target object in the current animation data based on the animation historical shooting data of the current user.
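The two-module structure of claim 8 can be sketched as a pair of classes, one acquiring the current animation data and one deciding whether to trigger shooting. The data sources and the trigger rule below are placeholders, not the patent's implementation:

```python
# Structural sketch of the device in claim 8: an animation acquisition
# module paired with a shooting triggering module.
class AnimationAcquisitionModule:
    def acquire(self, user_id: str, animations: dict[str, str]) -> str:
        """Return the current animation data for the given user."""
        return animations[user_id]

class ShootingTriggerModule:
    def __init__(self, shot_history: dict[str, list[str]]) -> None:
        self.shot_history = shot_history  # per-user animation shooting history

    def should_trigger(self, user_id: str, animation: str) -> bool:
        """Trigger when the user's history contains this animation data."""
        return animation in self.shot_history.get(user_id, [])

acquirer = AnimationAcquisitionModule()
trigger = ShootingTriggerModule({"u1": ["battle_intro"]})
animation = acquirer.acquire("u1", {"u1": "battle_intro"})
fire = trigger.should_trigger("u1", animation)
```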
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the shooting method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the shooting method of any one of claims 1-7.
CN202011623855.1A 2020-12-31 2020-12-31 Shooting method, shooting device, electronic equipment and storage medium Pending CN112822555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011623855.1A CN112822555A (en) 2020-12-31 2020-12-31 Shooting method, shooting device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112822555A true CN112822555A (en) 2021-05-18

Family

ID=75854785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011623855.1A Pending CN112822555A (en) 2020-12-31 2020-12-31 Shooting method, shooting device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112822555A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010274070A (en) * 2009-06-01 2010-12-09 Sony Computer Entertainment Inc Game control program, game device, and game control method
CN106843897A (en) * 2017-02-09 2017-06-13 腾讯科技(深圳)有限公司 A kind of method and apparatus for intercepting game picture
CN108229369A (en) * 2017-12-28 2018-06-29 广东欧珀移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN109165074A (en) * 2018-08-30 2019-01-08 努比亚技术有限公司 Game screenshot sharing method, mobile terminal and computer readable storage medium
CN109976634A (en) * 2019-03-18 2019-07-05 北京智明星通科技股份有限公司 A kind of game APP screenshot method and equipment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113318446A (en) * 2021-06-30 2021-08-31 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN113318446B (en) * 2021-06-30 2023-11-21 北京字跳网络技术有限公司 Interaction method, interaction device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
US11620800B2 (en) Three dimensional reconstruction of objects based on geolocation and image data
CN112827172B (en) Shooting method, shooting device, electronic equipment and storage medium
CN106375674A (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
CN112843735B (en) Game picture shooting method, device, equipment and storage medium
WO2022095516A1 (en) Livestreaming interaction method and apparatus
CN113301358A (en) Content providing and displaying method and device, electronic equipment and storage medium
CN112423143A (en) Live broadcast message interaction method and device and storage medium
CN112422844A (en) Method, device and equipment for adding special effect in video and readable storage medium
CN112866562B (en) Picture processing method and device, electronic equipment and storage medium
CN112822555A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112843693B (en) Method and device for shooting image, electronic equipment and storage medium
CN112843733A (en) Method and device for shooting image, electronic equipment and storage medium
CN112791401B (en) Shooting method, shooting device, electronic equipment and storage medium
CN116966557A (en) Game video stream sharing method and device, storage medium and electronic equipment
CN109819271A (en) The method and device of game direct broadcasting room displaying, storage medium, electronic equipment
CN112843691B (en) Method and device for shooting image, electronic equipment and storage medium
CN112791402A (en) Shooting method, shooting device, electronic equipment and storage medium
CN114125552A (en) Video data generation method and device, storage medium and electronic device
CN114344920A (en) Data recording method, device, equipment and storage medium based on virtual scene
CN113676734A (en) Image compression method and image compression device
CN112774199A (en) Target scene picture restoration method and device, electronic equipment and storage medium
CN112449249A (en) Video stream processing method and device, electronic equipment and storage medium
CN112861612A (en) Method and device for shooting image, electronic equipment and storage medium
CN112860372B (en) Method and device for shooting image, electronic equipment and storage medium
CN112843678B (en) Method and device for shooting image, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210518