CN110650367A - Video processing method, electronic device, and medium - Google Patents
- Publication number
- CN110650367A (publication number); CN201910818124.3A (application number)
- Authority
- CN
- China
- Prior art keywords
- target
- video
- image frame
- processing
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The embodiments of the invention disclose a video processing method, an electronic device, and a medium. The video processing method includes: acquiring, from a target video to be played, a target image frame containing a preset target object; and processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video. With the embodiments of the invention, a target object contained in the target video can be given preset processing according to the user's requirements.
Description
Technical Field
Embodiments of the present invention relate to the field of video processing technologies, and in particular, to a video processing method, an electronic device, and a medium.
Background
At present, when an electronic device plays a video, the same video presents the same content to all users. If the video contains content that the user is especially interested in, or content that the user does not want to see, the user can only drag the video's progress bar manually to change the playing progress, replaying the content the user wants to see and skipping the content the user does not. The adjustment modes available for a video are therefore limited, and the experience of watching the video is poor.
Disclosure of Invention
Embodiments of the present invention provide a video processing method, an electronic device, and a medium, so as to solve the prior-art problem that the available adjustment modes of a video are limited.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a video processing method, including:
acquiring a target image frame containing a preset target object in a target video to be played;
and processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
the image acquisition module is used for acquiring a target image frame containing a preset target object in a target video to be played;
and the video processing module is used for processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video processing method according to the first aspect.
In the embodiments of the invention, the target image frame containing the preset target object in the target video to be played can first be identified; the target image frame is then processed using the preset processing mode associated with the target object to obtain a processed target video, and the processed target video is played. The electronic device can thus automatically process the target image frames in the target video according to the user's requirements before playing it, obtaining a processed target video that meets those requirements. Different target objects can be adjusted in more varied ways by applying their corresponding processing modes, so that video playback meets the user's personalized requirements.
Drawings
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present invention;
fig. 2 is a flow chart illustrating a video playing process according to an embodiment of the present invention;
FIG. 3 is an interface diagram of an object processing setting interface according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating a video playing process according to another embodiment of the present invention;
FIG. 5 is an interface diagram of an object processing prompt interface according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When an existing electronic device plays a video, the same video presents the same content to all users. In particular, if a video contains a specific object that the user is interested in, or does not want to see, the electronic device cannot identify and process that specific object in the video. The adjustment modes available for a video are therefore limited, and the experience of watching the video is poor.
In order to solve the problems of the prior art, embodiments of the present invention provide a video processing method, an electronic device, and a medium. First, a video processing method provided by an embodiment of the present invention is described below.
Fig. 1 is a flowchart illustrating a video processing method according to an embodiment of the present invention. As shown in fig. 1, the video processing method includes:
step 110, acquiring a target image frame containing a preset target object in a target video to be played;
the preset target object may be an object designated by a user, may also be an object preset in the electronic device, and may also be an object detected in an object preset in the electronic device based on an object type or an object feature selected by the user.
And step 120, processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
In the embodiments of the invention, the target image frame containing the preset target object in the target video to be played can first be identified; the target image frame is then processed using the preset processing mode associated with the target object to obtain a processed target video, and the processed target video is played. The electronic device can thus automatically process the target image frames in the target video according to the user's requirements before playback, so that different target objects are adjusted in more varied ways through their corresponding processing modes, and the video playing process meets the user's personalized requirements.
In the embodiment of the present invention, the target video to be played may be a network video or a local video played by using video playing application software.
In step 110 of some embodiments of the present invention, a specific method for acquiring a target image frame containing a preset target object in a target video to be played may include:
acquiring a video playing request, wherein the video playing request comprises a target video to be played;
and responding to the video playing request, carrying out image recognition on the image frame of the target video, and acquiring the target image frame containing the target object.
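The frame-scanning step above can be sketched as follows. The patent does not fix a particular recognizer, so `contains_target` here is a hypothetical placeholder for any per-frame detector (a CNN classifier, template matcher, etc.); this is only an illustrative skeleton.

```python
def find_target_frames(frames, contains_target):
    """Scan decoded frames and return the indices of target image frames.

    `contains_target` is a placeholder for whatever per-frame image
    recognition is used; the method itself does not prescribe one.
    """
    return [i for i, frame in enumerate(frames) if contains_target(frame)]

# Toy example: frames are stand-in labels, the "detector" checks for "dog"
frames = ["sky", "dog", "tree", "dog"]
print(find_target_frames(frames, lambda f: f == "dog"))  # [1, 3]
```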
In this way, after acquiring a video playing request for the target video to be played, the electronic device directly responds to the request, automatically performs image recognition on the image frames of the target video, and acquires the target image frames containing the target object, so that the acquired target image frames can be processed automatically. No further operation on the target video is required from the user, which improves the convenience of user operation and makes the electronic device's processing of the target video more intelligent.
In step 110 of other embodiments of the present invention, a specific method for obtaining a target image frame containing a preset target object in a target video to be played may also include:
acquiring a video processing request input by a user, wherein the video processing request comprises a target video to be played;
and responding to the video processing request, performing image recognition on the image frame of the target video, and acquiring a target image frame containing the target object.
In this case, the electronic device can carry out video playing and video processing as two independent processes, so as to meet users' diverse requirements for video processing.
In the embodiment of the present invention, the preset target object and the target processing mode associated with the target object may be set by the video processing service provider in a unified manner, or may be set by the user. The preset target object and the target processing mode associated with the target object can be stored in the electronic device in an associated manner after being set.
In the case that the preset target object and its associated target processing mode are set by the user, they may be preset before the electronic device processes the target video. For example, the user may set them before inputting the video playing request or video processing request; alternatively, the user may set them after inputting the request but before the electronic device processes the target video.
Therefore, in some embodiments of the present invention, step 110 may further include:
displaying an object processing setting interface;
receiving a second input on the object processing setting interface;
and determining the target object and the target processing mode associated with the target object according to the second input.
The object processing setting interface is used for receiving a target object input by a user and a target processing mode corresponding to the target object. After receiving the target object and the target processing manner corresponding to the target object, the electronic device may store the target object and the target processing manner corresponding to the target object in an associated manner.
In some cases, the user may input a preset trigger operation before inputting a video playing request or a video processing request, so that the electronic device displays the object processing setting interface after receiving the trigger operation. In other cases, after the user inputs a video playing request or a video processing request, the electronic device may automatically display the object processing setting interface in response to that request.
Therefore, in the embodiment of the invention, the user can set the target object and the target processing mode associated with the target object through the object processing setting interface according to the own requirements, thereby meeting the personalized requirements of video processing.
In the embodiment of the present invention, the target object may be a specific object set by the user, for example, at least one of a certain article, animal, plant, person, and the like. The target processing mode may likewise be a specific mode set by the user, for example, at least one of: replacement with an Augmented Reality (AR) image, overlaying with a specific image, blurring, cropping, and direct deletion of the corresponding image frame. The blurring may be mosaic processing. Specific content can thus be modified or deleted according to the user's requirements when the video is played.
In the embodiment of the present invention, the second input of the user for the object processing setting interface may be text, an image, video, or audio, and the electronic device may determine the target object input by the user according to the received text, image, video, or audio.
Taking the user input image as an example, the user may directly input a local image of the electronic device or an internet image downloaded in the internet on the object processing setting interface, or may select a photographing function on the object processing setting interface to obtain a photographed image in a photographing manner. After the user inputs the image, the electronic device may perform image recognition on the image, thereby determining the target object input by the user.
It should be noted that the second input of the user to the object processing setting interface may also be a selection operation based on a plurality of displayed selectable objects, which is not limited herein.
In some embodiments of the present invention, the audio input by the user may be a recording of the name of the target object spoken by the user, or may be a recording of a specific sound corresponding to the target object. For example, if the user wants to set "dog" as the target object, the user may record a recording of reading "dog" and input the recording to the object processing setting interface, or the user may record a dog call and input the dog call to the object processing setting interface.
Therefore, the embodiment of the invention can receive the target object input by the user and the target processing mode associated with the target object in a flexible mode, thereby providing a processing basis for personalized video processing.
When the user inputs the target object as text or audio, the electronic device may first search for a reference image corresponding to the target object, and then use the image similarity between each image frame in the target video and the retrieved reference image to determine the target image frames whose similarity meets a predetermined similarity threshold. When the user inputs the target object as an image or a video, the electronic device may directly use any one of the input images, or a frame of the input video, as the reference image, and then determine the target image frames whose similarity to that reference image meets the predetermined similarity threshold.
Or, after the user inputs text, images, videos or audios, the electronic device may identify a target object input by the user and search for an object identification model corresponding to the target object, so as to identify a target image frame of the target object preset in the target video by using the object identification model.
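One minimal way to realize the similarity comparison described above — a sketch under assumed details, not the algorithm the patent prescribes — is a grayscale-histogram intersection between each frame and the reference image, with a tunable threshold:

```python
import numpy as np

def frame_similarity(frame, reference, bins=32):
    """Grayscale-histogram similarity in [0, 1]; 1.0 means identical histograms."""
    h1, _ = np.histogram(frame, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(reference, bins=bins, range=(0, 255), density=True)
    # Normalized histogram intersection
    return np.minimum(h1, h2).sum() / np.maximum(h1, h2).sum()

def match_frames(frames, reference, threshold=0.8):
    """Indices of frames whose similarity to the reference meets the threshold."""
    return [i for i, f in enumerate(frames)
            if frame_similarity(f, reference) >= threshold]
```

The threshold value and histogram bin count are assumptions; a real system would more likely use a learned object-recognition model, as the surrounding text notes.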
In the embodiment of the present invention, the target image frame acquired in step 110 may be one frame, two frames, or more than two frames, which is not limited herein.
In the embodiment of the invention, the electronic device can perform image recognition on all image frames of the target video based on the target object before playing the target video so as to recognize all the target image frames in the target video. The electronic device may also perform image recognition on a plurality of image frames to be played in the target video based on the target object in the process of playing the target video, so as to recognize the target image frame in the plurality of image frames to be played.
First case
When the electronic device identifies all the target image frames in the target video before playing it, there may be one or more target objects. When there are multiple target objects, each target object may be associated with one target processing mode or with multiple target processing modes, and multiple target objects may also share the same target processing mode.
In the case that the number of the target objects is multiple, if the identified target image frame contains multiple target objects, the target image frame corresponding to some or all of the identified multiple target objects may be processed according to the selection of the user.
In the case that each target object is provided with a plurality of associated target processing modes, one target processing mode corresponding to each target object can be selected from the plurality of target processing modes according to the selection of the user, and the target processing mode is used for processing the target image frame.
In these embodiments, step 120 may specifically include:
displaying an object processing prompt interface, wherein the object processing prompt interface is used for displaying at least one target object contained in the target image frame and at least one target processing mode related to the target object;
receiving a first input on the object processing prompt interface;
in response to a first input, determining a target object to be processed in at least one target object, and determining a processing mode to be executed in at least one target processing mode;
and processing the target image frame according to the target object to be processed and the processing mode to be executed to obtain the processed target video.
Specifically, when it is recognized that the target video contains target image frames, the electronic device may automatically display an object processing prompt interface showing at least one target object contained in the identified target image frames and at least one target processing mode associated with each target object. The user may then make a selection among the target objects and target processing modes, and the selection result serves as the first input on the object processing prompt interface. In response to the first input, the electronic device determines the to-be-processed target object selected by the user and the to-be-executed processing mode corresponding to it, and processes the target image frames accordingly to obtain the processed target video.
Take the preset target objects "snake", "mouse", and "spider", with target image frame deletion as the target processing mode, as an example. When the electronic device detects that the target video contains target image frames of at least one of the target objects "snake" and "mouse", it generates an object processing prompt interface, which may include the options "delete snake" and "delete mouse" for the user to choose from. If the user selects "delete snake", the target image frames containing "snake" are deleted; if the user selects "delete mouse", the target image frames containing "mouse" are deleted.
Again take the preset target objects "snake", "mouse", and "spider" as an example, this time with both target image frame deletion and blurring associated with each target object as target processing modes. When the electronic device detects that the target video contains target image frames of at least one of the target objects "snake" and "mouse", it generates an object processing prompt interface, which may include options for the identified target objects "snake" and "mouse", options "delete" and "blur" for the target processing modes associated with "snake", and options "delete" and "blur" for the target processing modes associated with "mouse", for the user to choose from.
In the above example, the image corresponding to the identified target object in the target video may be displayed on the object processing prompt interface, so that the user may select according to the degree of acceptance of the image corresponding to the identified target object.
Although the target image frame of the target video is identified before the target video is played, the target video may be processed during the playing process or before the target video is played, which is not limited herein.
Therefore, in the embodiment of the invention, once the target image frames are identified, the user's selection determines the to-be-processed target object among the at least one target object contained in the target image frames, and determines which associated target processing mode becomes the to-be-executed processing mode, further improving the intelligence and flexibility of video processing.
Second case
When the electronic device identifies target image frames among a plurality of image frames to be played while the target video is playing, there may be one or more target objects; each target object may have an associated target processing mode, and multiple target objects may share the same target processing mode.
Specifically, when target image frames are identified, regardless of how many target objects a target image frame contains, some or all of the target image frames may be processed directly according to the target objects and their associated target processing modes.
Because the video processing is carried out while the target video is playing, playing and processing proceed together, reducing the time the user waits for the video to play.
In embodiments of the present invention, step 120 may be implemented in a variety of ways.
First mode
In the first way, the target object can be replaced or covered with a substitute object, enabling the user to modify the video content based on their own preferences.
The specific method of step 120 may include:
acquiring a substitute object associated with the target object;
and replacing or covering the target object in the target image frame by using the substitute object to obtain a processed target video.
The substitute object may be an augmented reality (AR) object, a 2D object, or a 3D object corresponding to the target object.
In some embodiments, the substitute object is an AR object corresponding to the target object. AR technology embeds virtual things in the real world, allowing people to perceive and experience content that otherwise exists only on a network or a computer. The user may then view the video with the AR object by attaching a sticker for viewing AR objects to the display screen of the electronic device or by wearing glasses for viewing AR objects.
In the embodiment of the present invention, the substitute object may be an AR object preset in the electronic device for the user to select, stored in association with the target object after selection. The substitute object may also be a 2D or 3D image that the user downloads from the Internet, shoots, or makes. The substitute object may also be an AR object, a 2D object, or a 3D object that the electronic device generates from the target object; for example, a 3D target object may be generated from the 2D target object, or a cartoon object may be generated from the original object.
In the embodiment of the present invention, the substitute object may be a static object or a dynamic object. When the substitute object is a dynamic object, the pose of the substitute object in the dynamic object may be matched to the pose of the target object in the corresponding target image frame.
In the embodiment of the present invention, a specific method for replacing or covering a target object in a target image frame with a substitute object may be: image recognition is performed on the target image frame to identify a first image area containing the target object and a second image area not containing the target object, and then the first image area is replaced or covered with the substitute object.
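The replace-or-cover step can be sketched with plain array operations. The region coordinates and the assumption that the substitute has already been resized to the first image area are illustrative; the patent leaves those details open:

```python
import numpy as np

def cover_region(frame, substitute, top, left):
    """Cover the first image area (the region containing the target object)
    with a substitute image, leaving the second image area untouched.

    Assumes `substitute` has already been resized to the region's size.
    """
    out = frame.copy()  # keep the original target image frame intact
    h, w = substitute.shape[:2]
    out[top:top + h, left:left + w] = substitute
    return out
```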
In some embodiments of the present invention, replacing or covering the target object with the substitute object can easily lead to problems such as incomplete replacement or coverage and unnatural edge connection and transition. Therefore, after replacing or covering the target object in the target image frame with the substitute object, in order to ensure the image quality of the target image frame containing the substitute object, step 120 may further include:
determining a first region to be fused in the substitute object and a second region to be fused in the target image frame;
and carrying out image fusion processing on the first region to be fused and the second region to be fused to obtain a processed target video.
Specifically, the first region to be fused in the substitute object and the second region to be fused in the target image frame may be determined by an image fusion technique from digital image processing. The first region to be fused is appropriately scaled, cropped, and so on, and the processed first region and the second region are then smoothed, blurred, and so on, so that the transition between the substitute object and the second image area is smoother and more natural, and the substitute object blends better into the target image frame.
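A very simple instance of such smoothing is a feathered alpha blend across the seam. This is one possible fusion technique under assumed inputs (same-sized regions, a caller-supplied weight map), not the specific method the patent mandates; Poisson blending or similar would be an alternative:

```python
import numpy as np

def feather_blend(substitute_region, frame_region, alpha):
    """Alpha-blend the first region to be fused (from the substitute) with
    the second region to be fused (from the target image frame).

    `alpha` is a per-pixel weight ramping from 1 in the substitute's
    interior to 0 at the seam, so the transition is gradual rather than
    a hard edge. All arrays must share the same shape.
    """
    a = substitute_region.astype(np.float32)
    b = frame_region.astype(np.float32)
    return (alpha * a + (1.0 - alpha) * b).astype(np.uint8)
```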
Second mode
In the second way, the audio of the target video can be processed by using the audio data corresponding to the target object, so that the user can modify the video content based on the preference of the user.
The specific method of step 120 may include:
acquiring audio data associated with a target object;
determining a target audio frame corresponding to a target image frame in a target video;
and adding audio data in the target audio frame to obtain a processed target video.
Taking the target object as a "dog" as an example, when the electronic device detects a target image frame containing the target object "dog" in the target video, audio data corresponding to the target object "dog" may be acquired, for example, "dog cry", and then "dog cry" is added to the target audio frame corresponding to each target image frame in the target video, so as to obtain the processed target video.
In the embodiment of the present invention, the second method may be used alone, or may be used in combination with the first method.
Continuing to take the target object as the "dog" as an example, when the electronic device detects a target image frame containing the target object "dog" in the target video, firstly, the target object "dog" in the target image frame may be replaced by an AR object of the "dog", then, the audio data "dog beep" corresponding to the target object "dog" may be obtained, and the target video "dog beep" is added to the target audio frame corresponding to each target image frame in the target video, so as to obtain the processed target video.
In this case, when watching the video, the user sees a virtual dog on the phone screen and hears it bark. The visual and auditory effects of AR create a scene in which the user can closely interact with the target object "dog", as if the dog were right beside them.
Third mode
In a third approach, the target object may be blurred to modify the video content so that the user avoids viewing an object that the user does not like.
The specific method of step 120 may include:
and carrying out fuzzy processing on the target object in the target image frame to obtain a processed target video.
For example, when the target objects are "snake", "spider", and "mouse", the user can blur these target objects so as to avoid seeing objects they dislike. The blurring may be mosaic processing or blur-filter processing.
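Mosaic processing of a target region can be sketched as averaging over fixed-size tiles; the block size here is an illustrative choice:

```python
import numpy as np

def mosaic(region, block=8):
    """Mosaic (pixelate) a region by replacing each block x block tile
    with its mean value, hiding the target object's details."""
    h, w = region.shape[:2]
    out = region.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = region[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1)).astype(region.dtype)
    return out
```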
Fourth mode
In the fourth mode, the target image frame containing the target object can be directly deleted to modify the video content, so that the picture containing the object disliked by the user can be automatically skipped when the video is played.
The specific method of step 120 may include:
and deleting the target image frame in the target video to obtain the processed target video.
Fifth mode
In the fifth way, the image area corresponding to the target object can be deleted to modify the video content, so that the user can avoid viewing the object that the user does not like, but still does not miss other contents in the target image frame.
The specific method of step 120 may include:
and removing an image area corresponding to the target object in the target image frame to obtain a processed target video.
If the target object is located at the left edge of the target image frame, the image area corresponding to the target object may be the entire left edge strip containing the target object; if the target object is located in the upper left corner, the image area may be the upper-left corner region containing the target object, which may specifically be a triangle or a triangle-like shape.
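For the edge-strip case described above, removing the image area is a crop. The strip width and the restriction to left/right edges are simplifying assumptions for illustration:

```python
import numpy as np

def remove_edge_region(frame, side, width):
    """Remove an edge strip containing the target object by cropping it
    away, keeping the rest of the target image frame visible.

    `side` is 'left' or 'right'; `width` is the strip width in pixels.
    """
    if side == "left":
        return frame[:, width:]
    if side == "right":
        return frame[:, :-width]
    raise ValueError(f"unsupported side: {side}")
```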
In the embodiment of the present invention, when the number of the target image frames is at least two frames, all or part of the at least two frames of the target image frames may be processed according to a target processing mode associated with the target object, so as to obtain a processed target video.
In some embodiments, a timestamp corresponding to each target image frame may be obtained; then, among the at least two target image frames, the target image frames whose timestamps fall within a preset playing time range are determined, and the determined target image frames are processed according to the target processing mode associated with the target object to obtain the processed target video.
For example, the user may set the preset playing time range to be the first 30 minutes of the video. In this case, the electronic device selects only the target image frames within the first 30 minutes of the target video as the target image frames to be processed, processes them to obtain the processed target video, and leaves the target image frames in other playing time ranges unprocessed.
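The timestamp filter above reduces to a one-line selection once detected frames carry their timestamps. A minimal sketch, assuming timestamps in seconds and a half-open range (the function name and frame representation are hypothetical):

```python
def frames_in_time_range(target_frames, t_start, t_end):
    """Keep only the detected target frames whose timestamp falls
    inside the preset playing-time range [t_start, t_end)."""
    return [f for f in target_frames if t_start <= f["ts"] < t_end]

# "First 30 minutes of the video" = [0, 1800) seconds.
detected = [{"ts": 12.0}, {"ts": 900.5}, {"ts": 1799.9}, {"ts": 2400.0}]
to_process = frames_in_time_range(detected, 0.0, 30 * 60)
```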
In other embodiments, a timestamp corresponding to each target image frame may be obtained; then, among the at least two target image frames, a preset number of target image frames are taken starting from the first timestamp in timestamp order, and the obtained target image frames are processed according to the target processing mode associated with the target object to obtain the processed target video.
For example, the user may set the preset number to 10. In this case, the electronic device selects only the first 10 target image frames that appear as the target image frames to be processed, processes them to obtain the processed target video, and leaves the other target image frames unprocessed.
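The "first N in timestamp order" variant is a sort followed by a slice. A minimal sketch under the same assumed frame representation (names are hypothetical):

```python
def first_n_target_frames(target_frames, n):
    """Take the first n detected target frames in timestamp order;
    only these are processed, the rest are played unmodified."""
    return sorted(target_frames, key=lambda f: f["ts"])[:n]

detected = [{"ts": 40.0}, {"ts": 5.0}, {"ts": 22.0}, {"ts": 61.0}]
to_process = first_n_target_frames(detected, 2)
```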
In this way, the user can process part or all of the target image frames in the target video according to personal preference, making the video processing more flexible and diversified and the processed target video more personalized.
Fig. 2 is a flow chart illustrating a video playing process according to an embodiment of the present invention. As shown in fig. 2, a specific process of playing a video by an electronic device may include the following steps:
step 203, receiving a video playing request, wherein the video playing request comprises a target video to be played selected by a user;
step 204, detecting whether a target video has a target image frame containing a kangaroo, if so, executing step 205, and if not, executing step 207;
and step 207, normally playing the target video.
Therefore, by detecting the target object set by the user in the target video to be played and processing the target image frames containing the target object according to the target processing mode the user associated with that object, the embodiment of the invention can give the user a better viewing experience. For example, some animals are adorable and well liked but may be difficult to encounter in daily life, and embodiments of the present invention can give the user more and closer access to such animals.
Fig. 4 is a flow chart illustrating a video playing process according to another embodiment of the present invention. As shown in fig. 4, a specific process of playing a video by an electronic device may include the following steps:
and step 309, normally playing the target video.
Therefore, according to the embodiment of the invention, video content that contains the target object and that the user does not want to see can be deleted according to the user's preference. For example, some people fear certain animals, such as snakes, mice, and spiders, and do not want to see pictures or video content of these animals; embodiments of the present invention enable such users to avoid viewing pictures of these animals in the video.
Fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 6, the electronic apparatus includes:
the target identification module 410 is configured to acquire a target image frame containing a preset target object in a target video to be played;
and the target processing module 420 is configured to process the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
In the embodiment of the invention, the target image frames containing the preset target object in the target video to be played can first be identified, the target image frames can then be processed in the preset processing mode associated with the target object to obtain the processed target video, and the processed target video can be played. In this way, the electronic device can automatically process the target image frames according to the user's requirements before playing the target video, obtaining a processed target video that meets those requirements. By adopting a corresponding processing mode for each different target object, more diversified video adjustment is realized, and the video playing process meets the personalized requirements of the user.
In the embodiment of the present invention, the target video to be played may be a network video or a local video played by using video playing application software.
In some embodiments of the present invention, the target identification module 410 is specifically configured to: acquiring a video playing request, wherein the video playing request comprises a target video to be played; and responding to the video playing request, carrying out image recognition on the image frame of the target video, and acquiring the target image frame containing the target object.
Therefore, after acquiring the video playing request for the target video to be played, the electronic device directly responds to the request, automatically performs image recognition on the image frames of the target video, and acquires the target image frames containing the target object. The acquired target image frames can then be processed automatically without requiring any other user operation on the target video, which improves the convenience of user operation and enhances the intelligence with which the electronic device processes the target video.
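The scan performed by the target identification module can be sketched as a single pass over the decoded frames with a detector callback. This is an illustrative skeleton only: `find_target_frames` and the string-matching stand-in detector are assumptions, where a real system would plug in an image-recognition model.

```python
def find_target_frames(frames, detect):
    """Scan every frame of the target video with a detector callback and
    return (index, frame) pairs for frames containing the target object."""
    return [(i, f) for i, f in enumerate(frames) if detect(f)]

# Strings stand in for decoded frames; detect() stands in for a model.
video = ["sky", "dog", "sky", "dog and tree"]
hits = find_target_frames(video, lambda f: "dog" in f)
```

Keeping the frame index alongside the frame is what later lets the processing step map each hit back to its position (and timestamp) in the stream.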
In some embodiments of the present invention, the electronic device may further include an object setting module configured to: displaying an object processing setting interface; receiving a second input at the object handling setting interface; and determining the target object and the target processing mode associated with the target object according to the second input.
In the embodiment of the present invention, the second input of the user for the object processing setting interface may be text, an image, video, or audio, and the electronic device may determine the target object input by the user according to the received text, image, video, or audio.
Therefore, the embodiment of the invention can receive the target object input by the user and the target processing mode associated with the target object in a flexible mode, thereby providing a processing basis for personalized video processing.
In the embodiment of the invention, the electronic device can perform image recognition on all image frames of the target video based on the target object before playing the target video so as to recognize all the target image frames in the target video. The electronic device may also perform image recognition on a plurality of image frames to be played in the target video based on the target object in the process of playing the target video, so as to recognize the target image frame in the plurality of image frames to be played.
In the case that the electronic device identifies all target image frames in the target video before playing the target video, the target processing module 420 may be specifically configured to: display an object processing prompt interface, wherein the object processing prompt interface is used for displaying at least one target object contained in the target image frame and at least one target processing mode associated with the target object; receive a first input in the processing prompt interface; in response to the first input, determine a target object to be processed among the at least one target object, and determine a processing mode to be executed among the at least one target processing mode; and process the target image frame according to the target object to be processed and the processing mode to be executed to obtain the processed target video.
Therefore, in the embodiment of the invention, after the target image frames are identified, the target object to be processed is selected, according to the user's choice, from the at least one target object contained in the target image frames, and a target processing mode associated with that object is selected as the processing mode to be executed, further improving the intelligence and flexibility of video processing.
In this embodiment of the present invention, the target processing module 420 may implement processing the target image frame according to a target processing manner associated with the target object in a variety of ways.
First mode
In the first way, the preset target object can be replaced or covered by the substitute object, so that the user can modify the video content based on the preference of the user.
The target processing module 420 may obtain a substitute object associated with the target object, and replace or cover the target object in the target image frame with the substitute object to obtain a processed target video. The substitute object is an augmented reality AR object, a 2D object or a 3D object corresponding to the target object.
In some embodiments, the substitute object is an AR object corresponding to the target object. AR technology embodies virtual things in the real world, allowing people to perceive and experience content that previously existed only on a network or computer. In this case, the user may view the video with the AR object by attaching an AR-viewing sticker to the display screen of the electronic device or by wearing AR-viewing glasses.
In some embodiments of the invention, the target processing module 420 may be further configured to: after the target object in the target image frame is replaced or covered by the substitute object, a first region to be fused in the substitute object and a second region to be fused in the target image frame are determined, and the first region to be fused and the second region to be fused are subjected to image fusion processing to obtain a processed target video, so that the substitute object is better fused into the target image frame.
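A minimal pure-NumPy sketch of this cover-then-fuse step: the substitute object is pasted over the target region, and per-pixel alpha blending along the boundary plays the role of the image fusion between the two regions to be fused. The function name, the synthetic data, and the use of alpha blending as the fusion method are all illustrative assumptions.

```python
import numpy as np

def paste_with_alpha(frame, substitute, alpha, top_left):
    """Cover the target region with a substitute object, blending it
    into the frame with per-pixel alpha weights so the edges fuse
    smoothly instead of showing a hard rectangular seam."""
    y, x = top_left
    h, w = substitute.shape[:2]
    out = frame.astype(np.float64)
    region = out[y:y + h, x:x + w]
    a = alpha[..., None]                     # H x W x 1 weights in [0, 1]
    out[y:y + h, x:x + w] = a * substitute + (1 - a) * region
    return out.astype(frame.dtype)

frame = np.full((8, 8, 3), 10, dtype=np.uint8)
sub = np.full((4, 4, 3), 200, dtype=np.uint8)
alpha = np.ones((4, 4))
alpha[0, 0] = 0.5                            # soften one boundary pixel
out = paste_with_alpha(frame, sub, alpha, (2, 2))
```

In practice the alpha map would fall off gradually toward the substitute's edges (e.g. a feathered mask), which is what makes the substitute appear fused into the frame rather than pasted on.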
Second mode
In the second way, the audio of the target video can be processed by using the audio data corresponding to the target object, so that the user can modify the video content based on the preference of the user.
The target processing module 420 may obtain audio data associated with the target object, determine a target audio frame corresponding to the target image frame in the target video, and add audio data to the target audio frame to obtain a processed target video.
In the embodiment of the present invention, the second method may be used alone, or may be used in combination with the first method.
Taking the target object as a "dog" as an example, when the target identification module 410 detects a target image frame containing the target object "dog", the target processing module 420 may first replace the target object "dog" in the target image frame with an AR object of a dog, then obtain the audio data "dog bark" corresponding to the target object "dog", and add the audio data "dog bark" to the target audio frame corresponding to each target image frame in the target video, thereby obtaining the processed target video.
At this time, when watching the video, the user can see a virtual dog on the mobile phone screen and hear it barking. The visual and auditory effects of AR thus build a scene in which the user is in close contact with the target object "dog", making the user feel that the dog is nearby.
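Adding the associated audio data onto the target audio frames amounts to sample-wise mixing of a clip into the track at the right offset. A minimal sketch assuming 16-bit PCM samples in NumPy arrays (the function name and data are illustrative; a real player would work on decoded audio frames and re-encode afterwards):

```python
import numpy as np

def mix_audio(track, clip, start):
    """Add an associated audio clip (e.g. a dog bark) onto the target
    audio frames of the video's track, starting at sample `start`,
    clipping the sum back into the int16 range."""
    out = track.astype(np.int32)             # widen to avoid overflow
    end = min(start + len(clip), len(out))   # clip may run past the end
    out[start:end] += clip[: end - start].astype(np.int32)
    return np.clip(out, -32768, 32767).astype(np.int16)

track = np.zeros(10, dtype=np.int16)
bark = np.array([1000, -1000, 32767], dtype=np.int16)
mixed = mix_audio(track, bark, 8)            # runs past the track end
```

Widening to `int32` before summing and clipping afterwards is the standard way to avoid wrap-around distortion when two 16-bit signals are mixed.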
Third mode
In a third approach, the target object may be blurred to modify the video content so that the user avoids viewing an object that the user does not like.
The target processing module 420 may perform blurring processing on a target object in the target image frame to obtain a processed target video.
For example, when the target objects are "snake", "spider", and "mouse", the user can blur these target objects so as to avoid seeing objects that the user dislikes. The blurring process may be a mosaic process or a blurring filter process.
Fourth mode
In the fourth mode, the target image frame containing the target object can be directly deleted to modify the video content, so that the picture containing the object disliked by the user can be automatically skipped when the video is played.
The target processing module 420 may delete the target image frame in the target video to obtain the processed target video.
Fifth mode
In the fifth way, the image area corresponding to the target object can be deleted to modify the video content, so that the user can avoid viewing the object that the user does not like, but still does not miss other contents in the target image frame.
The target processing module 420 may remove an image area corresponding to a target object in the target image frame to obtain a processed target video.
In this embodiment of the present invention, when the number of the target image frames is at least two frames, the target processing module 420 may process all or part of the at least two frames of the target image frames according to a target processing mode associated with the target object to obtain a processed target video.
In some embodiments, the target processing module 420 may be specifically configured to: acquiring a timestamp corresponding to each frame of target image frame, then determining the target image frame of which the timestamp belongs to a preset playing time range in at least two frames of target image frames, and processing the determined target image frame according to a target processing mode associated with a target object to obtain a processed target video.
In other embodiments, the target processing module 420 may be specifically configured to: acquiring a time stamp corresponding to each frame of target image frame, then, in at least two frames of target image frames, obtaining a preset number of target image frames from a first time stamp according to the time stamp sequence, and processing the obtained target image frames according to a target processing mode associated with a target object to obtain a processed target video.
Because the user can process part or all of the target image frames in the target video according to personal preference, the video processing is more flexible and diversified, and the processed target video is more personalized.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 7, the electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 510 is configured to: acquiring a target image frame containing a preset target object in a target video to be played; and processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
In the embodiment of the invention, the target image frames containing the preset target object in the target video to be played can first be identified, the target image frames can then be processed in the preset processing mode associated with the target object to obtain the processed target video, and the processed target video can be played. In this way, the electronic device can automatically process the target image frames according to the user's requirements before playing the target video, obtaining a processed target video that meets those requirements. By adopting a corresponding processing mode for each different target object, more diversified video adjustment is realized, and the video playing process meets the personalized requirements of the user.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 510; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input Unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042, and the Graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphic processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and may be capable of processing such sounds into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 501 in case of the phone call mode.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 7, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When the computer program is executed by the processor 510, the processes of the above video processing method embodiments are implemented, and the same technical effect can be achieved; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (15)
1. A video processing method, comprising:
acquiring a target image frame containing a preset target object in a target video to be played;
and processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video.
2. The method according to claim 1, wherein the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video comprises:
acquiring a substitute object associated with the target object;
and replacing or covering the target object in the target image frame by using the substitute object to obtain the processed target video.
3. The method of claim 2, further comprising, after said replacing or overlaying said target object in said target image frame with said substitute object:
determining a first region to be fused in the substitute object and a second region to be fused in the target image frame;
and carrying out image fusion processing on the first region to be fused and the second region to be fused to obtain the processed target video.
4. The method of claim 2, wherein the substitute object is an Augmented Reality (AR) object, a 2D object, or a 3D object corresponding to the target object.
5. The method according to claim 1 or 2, wherein the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video further comprises:
acquiring audio data associated with the target object;
determining a target audio frame corresponding to the target image frame in the target video;
and adding the audio data in the target audio frame to obtain the processed target video.
6. The method according to claim 1, wherein the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video comprises:
and carrying out fuzzy processing on the target object in the target image frame to obtain the processed target video.
7. The method according to claim 1, wherein the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video comprises:
and removing an image area corresponding to the target object in the target image frame to obtain the processed target video.
8. The method according to claim 1, wherein the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video comprises:
and deleting the target image frame in the target video to obtain the processed target video.
9. The method of claim 1, wherein the number of frames of the target image frame is at least two frames;
the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video includes:
acquiring a time stamp corresponding to each frame of the target image frame;
determining, among the at least two target image frames, each target image frame whose timestamp falls within a preset playing time range;
and processing the determined target image frame according to the target processing mode associated with the target object to obtain a processed target video.
10. The method of claim 1, wherein the target image frame comprises at least two frames;
the processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video includes:
acquiring a time stamp corresponding to each frame of the target image frame;
selecting, from the at least two target image frames and in timestamp order, a preset number of target image frames starting from the earliest timestamp;
and processing the obtained target image frame according to a target processing mode associated with the target object to obtain a processed target video.
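The two timestamp-based selection strategies of claims 9 and 10 are simple filters over `(timestamp, frame)` pairs; a sketch (the tuple representation and function names are assumptions):

```python
def frames_in_play_range(stamped_frames, start, end):
    """Claim 9: keep only target frames whose timestamp falls within
    the preset playing time range [start, end]."""
    return [(t, f) for t, f in stamped_frames if start <= t <= end]

def first_n_frames(stamped_frames, n):
    """Claim 10: take the preset number of target frames in timestamp
    order, starting from the earliest timestamp."""
    return sorted(stamped_frames, key=lambda tf: tf[0])[:n]
```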
11. The method according to claim 1, wherein processing the target image frame according to a target processing mode associated with the target object to obtain a processed target video further comprises:
displaying an object processing prompt interface, wherein the object processing prompt interface is used for displaying at least one target object contained in the target image frame and at least one target processing mode associated with the target object;
receiving a first input at the object processing prompt interface;
in response to the first input, determining a target object to be processed in the at least one target object, and determining a processing mode to be executed in the at least one target processing mode;
and processing the target image frame according to the target object to be processed and the processing mode to be executed to obtain the processed target video.
12. The method according to claim 1, before the obtaining a target image frame containing a preset target object in the target video to be played, further comprising:
displaying an object processing setting interface;
receiving a second input entered at the object processing setting interface;
and determining the target object and the target processing mode related to the target object according to the second input.
13. The method according to claim 1, wherein the obtaining a target image frame containing a preset target object in a target video to be played comprises:
acquiring a video playing request, wherein the video playing request comprises a target video to be played;
and in response to the video playing request, performing image recognition on the image frames of the target video to acquire the target image frame containing the target object.
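The acquisition step of claim 13 is a recognition pass over the decoded frames; a minimal sketch, where `detect` is a placeholder for any object detector (the patent does not prescribe a recognition method):

```python
def find_target_frames(frames, detect):
    """Scan each image frame of the requested video and return the
    indices of frames in which the preset target object is detected."""
    return [i for i, frame in enumerate(frames) if detect(frame)]
```

In practice `frames` would come from a video decoder and `detect` from a trained model; the indices then drive whichever target processing mode (fusion, blur, removal, deletion) is associated with the object.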
14. An electronic device, comprising:
the image acquisition module is used for acquiring a target image frame containing a preset target object in a target video to be played;
and the video processing module is used for processing the target image frame according to the target processing mode associated with the target object to obtain a processed target video.
15. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video processing method according to any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910818124.3A CN110650367A (en) | 2019-08-30 | 2019-08-30 | Video processing method, electronic device, and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110650367A true CN110650367A (en) | 2020-01-03 |
Family
ID=68991385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910818124.3A Pending CN110650367A (en) | 2019-08-30 | 2019-08-30 | Video processing method, electronic device, and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110650367A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150271619A1 (en) * | 2014-03-21 | 2015-09-24 | Dolby Laboratories Licensing Corporation | Processing Audio or Video Signals Captured by Multiple Devices |
CN107333071A (en) * | 2017-06-30 | 2017-11-07 | 北京金山安全软件有限公司 | Video processing method and device, electronic equipment and storage medium |
CN108765529A (en) * | 2018-05-04 | 2018-11-06 | 北京比特智学科技有限公司 | Video generation method and device |
CN109451349A (en) * | 2018-10-31 | 2019-03-08 | 维沃移动通信有限公司 | A kind of video broadcasting method, device and mobile terminal |
CN109640174A (en) * | 2019-01-28 | 2019-04-16 | Oppo广东移动通信有限公司 | Method for processing video frequency and relevant device |
CN109872297A (en) * | 2019-03-15 | 2019-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110177296A (en) * | 2019-06-27 | 2019-08-27 | 维沃移动通信有限公司 | A kind of video broadcasting method and mobile terminal |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111601063A (en) * | 2020-04-29 | 2020-08-28 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN111601063B (en) * | 2020-04-29 | 2021-12-14 | 维沃移动通信有限公司 | Video processing method and electronic equipment |
CN114026874A (en) * | 2020-10-27 | 2022-02-08 | 深圳市大疆创新科技有限公司 | Video processing method and device, mobile device and readable storage medium |
WO2022087826A1 (en) * | 2020-10-27 | 2022-05-05 | 深圳市大疆创新科技有限公司 | Video processing method and apparatus, mobile device, and readable storage medium |
CN113315691A (en) * | 2021-05-20 | 2021-08-27 | 维沃移动通信有限公司 | Video processing method and device and electronic equipment |
CN113315691B (en) * | 2021-05-20 | 2023-02-24 | 维沃移动通信有限公司 | Video processing method and device and electronic equipment |
CN115719468A (en) * | 2023-01-10 | 2023-02-28 | 清华大学 | Image processing method, device and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111093026B (en) | Video processing method, electronic device and computer-readable storage medium | |
CN110365907B (en) | Photographing method and device and electronic equipment | |
CN108495029B (en) | Photographing method and mobile terminal | |
CN110650367A (en) | Video processing method, electronic device, and medium | |
CN109862266B (en) | Image sharing method and terminal | |
CN110557683B (en) | Video playing control method and electronic equipment | |
CN111031398A (en) | Video control method and electronic equipment | |
CN110933306A (en) | Method for sharing shooting parameters and electronic equipment | |
CN108924412B (en) | Shooting method and terminal equipment | |
CN108459788B (en) | Picture display method and terminal | |
CN110602565A (en) | Image processing method and electronic equipment | |
CN111669503A (en) | Photographing method and device, electronic equipment and medium | |
CN109618218B (en) | Video processing method and mobile terminal | |
CN108174110B (en) | Photographing method and flexible screen terminal | |
CN110855921B (en) | Video recording control method and electronic equipment | |
CN108984143B (en) | Display control method and terminal equipment | |
CN108174109B (en) | Photographing method and mobile terminal | |
CN109922294B (en) | Video processing method and mobile terminal | |
CN109448069B (en) | Template generation method and mobile terminal | |
CN111597370A (en) | Shooting method and electronic equipment | |
CN109639981B (en) | Image shooting method and mobile terminal | |
CN111383175A (en) | Picture acquisition method and electronic equipment | |
CN111064888A (en) | Prompting method and electronic equipment | |
CN111246102A (en) | Shooting method, shooting device, electronic equipment and storage medium | |
CN108924413B (en) | Shooting method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200103 |