WO2022206582A1 - Video processing method, apparatus, electronic device and storage medium - Google Patents
Video processing method, apparatus, electronic device and storage medium
- Publication number
- WO2022206582A1 (application PCT/CN2022/082958)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- target object
- input
- marking
- processing
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Definitions
- the present application belongs to the technical field of video processing, and specifically relates to a video processing method, apparatus, electronic device and storage medium.
- With the shift of social interaction online and the rapid development of mobile-phone photography, more and more users like to use their phones to take photos and videos to record everyday life, such as a child's growth, their own study and work, tourist attractions they have visited, or the change of a certain thing over time. In recent years, with the popularity of short-video sharing applications (APPs) and video weblogs (vlogs), users record and share video more and more frequently.
- The camera on the user's device basically meets daily shooting needs, and various processing options exist for captured videos and photos, but these cannot be selected during the shooting process. Users who want to delete certain frames from a video or keep certain objects out of it have to manually cut and splice the recorded video in video-editing software, which is unfriendly to the majority of users who are not familiar with post-production software; some simply give up on editing. At present, the photo/video-recording function of user equipment cannot automatically edit and process video, such editing is relatively cumbersome, and users' needs for real-time shooting and sharing cannot be met.
- The embodiments of the present application provide a video processing method, which can solve the problem that the current photo/video-recording function of user equipment cannot automatically edit and process video, that such editing is cumbersome, and that users' needs for real-time shooting and sharing cannot be met.
- In a first aspect, an embodiment of the present application provides a video processing method, which includes: receiving a first input to a first target object in a video recording interface; and, in response to the first input, processing the first target object during the video recording process to obtain a first video; wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
- In a second aspect, an embodiment of the present application provides a video processing apparatus, which includes: a first receiving unit configured to receive a first input to a first target object in a video recording interface; and a first processing unit configured to, in response to the first input, process the first target object during the video recording process to obtain a first video; wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
- In a third aspect, embodiments of the present application provide an electronic device, the electronic device including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the video processing method according to the first aspect.
- In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or an instruction is stored, where the program or instruction, when executed by a processor, implements the steps of the video processing method according to the first aspect.
- In a fifth aspect, an embodiment of the present application provides a chip including a processor and a communication interface, where the communication interface is coupled to the processor and the processor is configured to run a program or an instruction to implement the video processing method according to the first aspect.
- In the embodiments of the present application, by receiving a first input to a first target object in the video recording interface and, in response to the first input, processing the first target object during the video recording process to obtain a first video, automatic editing and processing of the video is realized, which can meet users' needs for real-time shooting and sharing.
- FIG. 1 is one of the schematic flowcharts of a video processing method provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of setting the marking method of a target object according to an embodiment of the present application;
- FIG. 3 is a schematic diagram of a selected label being highlighted and its processing method being displayed, provided by an embodiment of the present application;
- FIG. 4 is a schematic diagram of performing different editing processes on different marked objects according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of different marking modes corresponding to different processing modes provided in an embodiment of the present application.
- FIG. 6 is a schematic diagram of displaying a thumbnail image corresponding to a marked object and a time period in which the marked object appears, according to an embodiment of the present application;
- FIG. 7 is a schematic diagram of displaying a marked object label and a position frame during video playback provided by an embodiment of the present application
- FIG. 8 is the second schematic flowchart of the video processing method provided by the embodiment of the present application.
- FIG. 9 is a third schematic flowchart of a video processing method provided by an embodiment of the present application.
- FIG. 10 is a fourth schematic flowchart of a video processing method provided by an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application.
- FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
- the term “plurality” refers to two or more than two, and other quantifiers are similar.
- For the video processing method provided by the embodiments of the present application, the execution body may be an electronic device; the electronic devices mentioned in the embodiments of the present application include, but are not limited to, a mobile phone, a tablet computer, a computer, a wearable device, and the like.
- FIG. 1 is one of the schematic flowcharts of a video processing method provided by an embodiment of the application, and the method includes:
- Step 100: receive a first input to a first target object in the video recording interface; wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
- The video recording interface refers to the video recording interface of the camera APP of the electronic device, and the target objects contained in it may be people, animals or objects appearing in the video recording preview screen. Before the user who has entered the camera APP enables the video recording function, the content displayed on the video recording interface is the video recording preview screen.
- the electronic device receives a first input to the first target object in the video recording interface, where the first input is an operation of setting a marking method and a label of the first target object.
- the marking method of the first target object refers to a specific method of marking the first target object.
- FIG. 2 is a schematic diagram of setting the marking method of the first target object according to an embodiment of the present application.
- For example, the first target object can be marked with an underline; or the user may tap somewhere on the video recording interface and the electronic device automatically recognizes the first target object using an AI algorithm; or the user may draw a rough object area frame, after which the electronic device completes detail recognition based on that area frame using the AI algorithm, thereby recognizing the first target object.
- The label of the first target object is a name used to identify the first target object; it can be a user-defined label or a label recognized by the electronic device using an AI algorithm, such as a birthday label after detecting a cake, a grass label after identifying grass, or a building or location label after identifying a building. A trivial lookup sketch of such default labels follows.
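- As a trivial illustration (the class names and suggested labels are the examples given in the text; the recognition step itself is assumed to exist elsewhere), the default AI labels could be produced by a lookup from the recognized object class, with a user-defined label always taking priority:

```python
# Suggested default labels for the recognized classes named in the text.
DEFAULT_LABELS = {
    "cake": "birthday",
    "grass": "grass",
    "building": "building/location",
}

def suggest_label(recognized_class, user_label=None):
    """Prefer a user-defined label; otherwise fall back to the AI suggestion."""
    if user_label:
        return user_label
    return DEFAULT_LABELS.get(recognized_class, recognized_class)
```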
- Optionally, after the setting of the marking method and label of the first target object is completed, the image feature information of the first target object is saved, so that during subsequent video recording the first target object can be tracked and matched based on this image feature information; a minimal tracking sketch follows.
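- The application does not prescribe a specific feature representation or tracking algorithm. As a minimal sketch only (assuming an OpenCV-based tracker from the opencv-contrib package; the function names `save_target_features` and `track_target` are hypothetical), saving the marked region's features and tracking it in later frames could look like this:

```python
import cv2

def save_target_features(frame, region):
    """Save image feature information for a marked target.

    `region` is the (x, y, w, h) box the user drew or that AI detection
    returned; here we keep a template crop plus a tracker seeded on it.
    """
    x, y, w, h = region
    template = frame[y:y + h, x:x + w].copy()
    tracker = cv2.TrackerCSRT_create()     # requires opencv-contrib-python
    tracker.init(frame, region)
    return {"template": template, "tracker": tracker, "box": region}

def track_target(features, frame):
    """Track and match the marked object in a new frame.

    Returns the updated (x, y, w, h) box, or None if the object is lost.
    """
    ok, box = features["tracker"].update(frame)
    if ok:
        features["box"] = tuple(int(v) for v in box)
        return features["box"]
    return None
```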
- Optionally, in another implementation, the electronic device receives a first input to the first target object in the video recording interface, where the first input is an operation of setting the marking method and label of the first target object as well as the processing method corresponding to each marking method.
- It can be understood that different marking methods correspond to different processing methods, and the user can choose a different marking method for each desired processing method. For example, a round frame represents a light effect, a square frame represents magnification, a strikethrough line represents elimination, and a heart-shaped frame represents smearing, so that the marking method of the first target object determines its processing method; a sketch of such a mapping follows.
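- As an illustration only (the shapes and effects below are the examples named in the text, but the effect functions themselves are hypothetical placeholders), the marking-method-to-processing-method preset can be represented as a simple lookup table consulted for each marked object:

```python
# Hypothetical placeholder effects; the text names the effects
# (light effect, magnification, elimination, smear) but not their algorithms.
def apply_light_effect(frame, box): return frame   # placeholder
def apply_magnify(frame, box):      return frame   # placeholder
def apply_eliminate(frame, box):    return frame   # placeholder
def apply_smear(frame, box):        return frame   # placeholder

# Preset chosen before recording: each marking shape maps to one processing method.
MARK_TO_PROCESSING = {
    "round_frame": apply_light_effect,   # round frame  -> light effect
    "square_frame": apply_magnify,       # square frame -> magnification
    "strike_line": apply_eliminate,      # drawn line   -> elimination
    "heart_frame": apply_smear,          # heart frame  -> smear
}

def process_marked_object(frame, marked_object):
    """Apply the processing method preset for this object's marking shape."""
    effect = MARK_TO_PROCESSING.get(marked_object["mark_shape"])
    return effect(frame, marked_object["box"]) if effect else frame
```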
- Step 101: in response to the first input, process the first target object during the video recording process to obtain a first video.
- It can be understood that, in response to the first input, the electronic device performs corresponding processing on the first target object according to the content of the first input during the video recording process, and obtains the edited first video after the video recording ends.
- With the video processing method provided by the embodiments of the present application, by receiving a first input to the first target object in the video recording interface and processing the first target object during the video recording process in response to that input, a first video is obtained; automatic editing and processing of the video is realized, which can meet users' needs for real-time shooting and sharing.
- Optionally, in the case where the first input is an operation of setting the marking method and label of the first target object, processing the first target object during the video recording process to obtain the first video includes: marking the first target object according to its marking method and label during the video recording process to obtain a marked object; and obtaining, after the video recording ends, a first video containing the marked object.
- It can be understood that, in this case, during the video recording process the electronic device can, according to the marking method and label of the first target object specified in the first input, track and match the first target object based on its image features, thereby completing the marking of the first target object, and obtain a first video containing the marked object after the video recording ends.
- the first target object becomes a marked object after being marked.
- In this implementation, since the user does not set a processing method for the first target object before video recording, the first target object is not edited during the video recording process.
- However, the marking of the first target object is realized during the video recording process and the obtained first video contains the marked object, so the user can choose a processing method for the different marked objects after the video recording is finished, conveniently realizing editing and processing and effectively improving the user experience. A sketch of this mark-only recording follows.
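- For this mark-only case (no processing method set before recording), a sketch under the same assumptions as the earlier tracking example would save only the per-frame boxes of each marked object while recording and leave all editing to a post-recording step (the camera/writer interface is hypothetical):

```python
def record_marks_only(camera, tracked_objects, writer):
    """Record normally, saving per-frame boxes of marked objects for later editing."""
    tracks = {obj["label"]: {} for obj in tracked_objects}
    index = 0
    while camera.is_recording():
        frame = camera.read_frame()
        for obj in tracked_objects:
            box = track_target(obj["features"], frame)   # from the earlier sketch
            if box is not None:
                tracks[obj["label"]][index] = box
        writer.write(frame)      # frames are written unmodified
        index += 1
    return tracks                # later passed to a post-recording editor (sketched further below)
```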
- Optionally, in the case where the first input is an operation of setting the marking method and label of the first target object, processing the first target object during the video recording process to obtain the first video includes: during the video recording process, marking the first target object according to its marking method and label to obtain a marked object; receiving a second input to the marked object, the second input including an operation of selecting the label of the marked object and setting a first processing method for the marked object; and, in response to the second input, processing the marked object according to the first processing method and obtaining a first video after the video recording ends.
- It can be understood that, in this optional implementation, the user can select the label of a marked object at any time during the video recording process and set a processing method for it, so that the processing method for the marked object takes effect on the next frame.
- That is, during the video recording process, the target object is marked according to the marking method and label of the first target object to obtain a marked object; an operation in which the user selects the label of the marked object and sets its processing method is received; in response to this second input, the selected label is highlighted and the processing method of the selected marked object is displayed (FIG. 3 is a schematic diagram of a selected label being highlighted with its processing method displayed, provided by an embodiment of the present application); the marked object is then processed according to the content of the second input from the next frame, and the first video is obtained after the video recording ends. A sketch of this next-frame behaviour follows.
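- The text specifies only that a processing method chosen mid-recording takes effect on the next frame. A minimal sketch of that behaviour (hypothetical camera/writer interface, not the application's own implementation) is a per-frame loop that re-reads the current settings before processing each new frame:

```python
def record_with_live_editing(camera, tracked_objects, settings, writer):
    """Per-frame recording loop with live, per-object processing.

    `settings` maps an object label to its currently selected processing
    function (or is missing/None for "no processing").  Because the mapping
    is consulted anew on every iteration, a selection the user makes while
    frame N is on screen is applied from frame N+1 onward -- i.e. it
    "takes effect on the next frame".
    """
    while camera.is_recording():
        frame = camera.read_frame()
        for obj in tracked_objects:
            box = track_target(obj["features"], frame)   # from the earlier sketch
            if box is None:
                continue
            effect = settings.get(obj["label"])           # may have just been changed
            if effect is not None:
                frame = effect(frame, box)
        writer.write(frame)
```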
- Optionally, in response to the second input, the camera on the electronic device is also controlled to turn on the wide-angle or telephoto (tele) camera as an auxiliary, so as to perform accurate real-time recording processing: the wide-angle camera is used to identify whether a marked object is about to enter the range of the main camera, and the tele camera is used to identify whether a marked object exists in the distant view and to process it accordingly. A sketch of the wide-angle check follows.
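- How exactly the auxiliary streams assist is not detailed; the sketch below only illustrates the stated idea of using a wide-angle frame to estimate whether a marked object is about to enter the main camera's field of view (the detection callback and the FOV mapping are assumptions):

```python
def about_to_enter_main_view(wide_frame, main_fov_box, detect_marked_object, margin=80):
    """Return True if the marked object found in the wide-angle frame lies just
    outside (within `margin` pixels of) the main camera's field of view, i.e.
    it is likely to enter the main view soon.

    `main_fov_box` is the main camera's field of view expressed in wide-angle
    pixel coordinates; `detect_marked_object` finds the object's (x, y, w, h)
    box in a frame.  Both are assumed to be provided by the camera pipeline.
    """
    box = detect_marked_object(wide_frame)
    if box is None:
        return False
    x, y, w, h = box
    fx, fy, fw, fh = main_fov_box
    inside = fx <= x and fy <= y and x + w <= fx + fw and y + h <= fy + fh
    near = (fx - margin <= x and fy - margin <= y and
            x + w <= fx + fw + margin and y + h <= fy + fh + margin)
    return (not inside) and near
```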
- Optionally, the second input further includes: an operation of selecting the label of the marked object and cancelling its processing method; or an operation of selecting the label of the marked object and updating its processing method.
- It can be understood that, during the video recording process, the user can select the label of a marked object and cancel or update the processing method of the selected marked object.
- In the embodiments of the present application, by receiving the user's selection of an object label and setting of a processing method during the video recording process, with the processing method taking effect on the next frame, real-time automatic editing and processing of the marked object can be realized, satisfying users' requirements for real-time shooting and sharing and effectively improving the user experience.
- the user can choose how to process different marked objects after the video recording ends.
- Optionally, after the video recording ends, the method further includes: receiving a third input to the marked object, the third input including an operation of setting a second processing method for the marked object; and, in response to the third input, processing the marked object in the first video according to the second processing method to obtain a second video.
- Optionally, after the video recording ends, the user chooses whether to edit the first video. If the user chooses to edit it, the electronic device receives a third input to the marked object, the third input being an operation of setting the processing method of the marked object; in response to the third input, the marked object in the first video is processed according to the content of the third input to obtain a second video.
- Optionally, on the basis of the first video, the user can apply different or identical editing to different marked objects according to their own needs. For example: different marked objects can be processed separately with elimination, graffiti, mosaic, blurring, smearing, etc., or given the same processing; or the entire video interval in which a marked object appears can be deleted; or, for the same object category, different editing can be applied in different time periods; or, from the listed marked-object labels, one marked object can be selected as the subject and the marked objects other than the subject blurred. A sketch of such post-recording editing follows the figure reference below.
- FIG. 4 is a schematic diagram of performing different editing processing on different marked objects according to an embodiment of the present application.
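- As a sketch only (the per-frame track format and the example effect are assumptions, not data structures defined by the application), post-recording editing could iterate over the saved boxes of each marked object and apply the processing the user chose after recording:

```python
import cv2

def edit_recorded_video(in_path, out_path, object_tracks, chosen_effects):
    """Apply per-object editing chosen after recording.

    `object_tracks` maps a label to {frame_index: (x, y, w, h)} collected
    while recording; `chosen_effects` maps a label to a processing function
    (e.g. a mosaic or blur) or to None for "leave unchanged".
    """
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for label, track in object_tracks.items():
            effect, box = chosen_effects.get(label), track.get(index)
            if effect is not None and box is not None:
                frame = effect(frame, box)
        writer.write(frame)
        index += 1
    cap.release()
    writer.release()

def mosaic(frame, box, block=16):
    """Simple mosaic effect used here as an example per-object edit."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)))
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame
```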
- Optionally, in the case where the first input is an operation of setting the marking method and label of the first target object and the processing method corresponding to each marking method, processing the first target object during the video recording process to obtain the first video includes: during the video recording process, marking the first target object according to its marking method and label to obtain a marked object; and processing the marked object according to the processing method corresponding to each marking method, obtaining a first video after the video recording ends.
- Optionally, the user taps the camera APP to enter the video recording preview screen, enables the object marking function, and marks target objects with different marking methods (oval box: light effect; square box: zoom in; strikethrough line: eliminate; heart-shaped box: smear; ...). Detail recognition is then completed according to the AI algorithm based on the mark and the object's detailed features are saved, and the user defines object labels or uses the camera's default AI labels (such as a birthday label recommended after detecting a cake, a grass label after identifying grass, or a building or location label after identifying a building).
- Then, different processing methods are preset according to the different marking methods.
- FIG. 5 is a schematic diagram of different marking methods corresponding to different processing methods provided by the embodiment of the present application.
- Further, the user can mark targets with different marking methods and save the objects' image features, thereby presetting different processing methods. The processing method corresponding to each marking method can be user-defined, for example the camera APP provides the processing-method names and the user assigns a preferred marking method to each name; alternatively, the correspondence can be provided by the camera APP itself, for example the oval box represents a light effect, the square box represents enlargement, the line represents elimination, the heart-shaped box represents smearing, and so on.
- Optionally, after the user completes the marking of objects and the preset processing methods and starts recording, the electronic device controls the camera to turn on the wide-angle or telephoto (tele) camera as an auxiliary for advance estimation: the wide-angle camera is used to identify whether a marked object is about to enter the range of the main camera, and the tele camera is used to identify whether a marked object exists in the distant view and to process it accordingly.
- It can be understood that, in the case where the first input sets the marking method and label of the first target object and the processing method corresponding to each marking method, the electronic device, during the video recording process, marks the first target object using its marking method and label to obtain a marked object, then processes the marked object according to the processing method corresponding to each marking method, and obtains the first video after the video recording ends.
- In the embodiments of the present application, different marking methods correspond to different processing methods; marking and editing of target objects are realized during video recording, which can meet users' needs for real-time shooting and sharing and effectively improve the user's video-recording experience.
- Optionally, on the basis of the above embodiments, the method further includes: receiving a fourth input to the first video; and, in response to the fourth input, displaying a thumbnail image corresponding to the marked object and the time period in which the marked object appears.
- Optionally, the fourth input is an operation of entering the album to view the first video.
- FIG. 6 is a schematic diagram of displaying a thumbnail image corresponding to a marked object and a time period in which the marked object appears, according to an embodiment of the present application.
- Optionally, after the user finishes recording and enters the album, it is determined whether an object label exists in the video; if so, the thumbnail image corresponding to each marked object and the time period in which it appears are displayed, so that the video frames and time periods in which the marked objects appear are shown to the user intuitively and the user can see the marked objects without reviewing the whole video. A sketch of deriving the time periods and thumbnails follows.
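- The following sketch (the per-frame track format is the same assumption as above, not something the application specifies) derives appearance time ranges and one thumbnail per marked object from the boxes saved while recording:

```python
def appearance_info(object_tracks, fps, frames):
    """Compute appearance time ranges and a thumbnail for each marked object.

    `object_tracks` maps label -> {frame_index: (x, y, w, h)}; `frames`
    provides random access to recorded frames by index (e.g. a list of
    frames or a decoder wrapper).  Returns
    label -> {"ranges": [(start_s, end_s), ...], "thumbnail": image}.
    """
    info = {}
    for label, track in object_tracks.items():
        indices = sorted(track)
        ranges, start, prev = [], indices[0], indices[0]
        for i in indices[1:]:
            if i != prev + 1:                      # gap -> close the current range
                ranges.append((start / fps, prev / fps))
                start = i
            prev = i
        ranges.append((start / fps, prev / fps))
        x, y, w, h = track[indices[0]]
        thumbnail = frames[indices[0]][y:y + h, x:x + w]
        info[label] = {"ranges": ranges, "thumbnail": thumbnail}
    return info
```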
- Optionally, on the basis of the above embodiments, the method further includes: receiving a fifth input, the fifth input being an operation of adding a second target object and setting the marking method, label and processing method of the second target object; and, in response to the fifth input, identifying the second target object during the video recording process and processing the second target object.
- Optionally, in the case where the first input sets the marking method and label of the first target object and the processing method corresponding to each marking method, the user can also, at any time during the video recording process, add a new target object (i.e., the second target object) and its label, and set different processing methods according to different marking methods so that they take effect on the next frame.
- the electronic device controls the camera to turn on the wide-angle or tele as an auxiliary to perform real-time and accurate recording processing.
- the user can also select the object tag during the video recording process, and the selected tag will be highlighted and the processing method will be displayed.
- the user can also cancel or update the processing method of the selected target object.
- In the embodiments of the present application, by receiving the user's selection of an object label and setting of a processing method during the video recording process, with the processing method taking effect on the next frame, real-time automatic editing and processing of the marked object can be realized, satisfying users' requirements for real-time shooting and sharing and effectively improving the user experience.
- Optionally, in the video playback stage, the user watches the video after recording is completed; if a labeled object is found when viewing begins, the user can edit the label's display content or use the AI-recognized object label. For example, if the AI has not recognized a label for object 1, the user can enter the text "butterfly" and apply it, and the position where object 1 appears in the video is then annotated as "butterfly".
- Meanwhile, a prompt is given before viewing asking whether to display marked-object labels and position boxes; only if the user selects yes are the labels and position boxes displayed during video playback. FIG. 7 is a schematic diagram of displaying a marked-object label and position box during video playback provided by an embodiment of the present application; a sketch of such an overlay follows.
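- A minimal overlay sketch (standard OpenCV drawing calls; the track format is the same assumption as above) that draws each marked object's box and label while playing the recorded video back:

```python
import cv2

def play_with_overlays(video_path, object_tracks, show_labels=True):
    """Play the recorded video, drawing each marked object's box and label."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if show_labels:
            for label, track in object_tracks.items():
                box = track.get(index)
                if box is None:
                    continue
                x, y, w, h = box
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(frame, label, (x, max(0, y - 8)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        cv2.imshow("playback", frame)
        if cv2.waitKey(33) & 0xFF == ord("q"):   # ~30 fps playback; press q to quit
            break
        index += 1
    cap.release()
    cv2.destroyAllWindows()
```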
- FIG. 8 is a second schematic flowchart of a video processing method provided by an embodiment of the present application. As shown in Figure 8, the method includes the following steps:
- Step 801 The user clicks the camera APP to enter the recording video preview screen
- Step 802 Turn on the object marking function, and complete the presetting of the marking method of the object according to the prompt;
- Step 803 After the user completes the presetting of the marking mode of the object, the video recording starts after saving the image feature information of the object;
- Step 804 During the video recording process, the user can mark the object and save the image features of the object according to the method selected in step 802, and then complete the video recording action;
- Step 805 Enter the album to determine whether the video has been marked with a tag, if so, go to Step 806, otherwise go to Step 808;
- Step 806 Display the video and the thumbnails of the marked objects selected after the recording is completed, and the appearance time.
- Step 807 the user selects the processing method for the marked object, and determines whether to edit the marked object, if it is determined to edit the marked object, then go to step 809, otherwise go to step 808;
- Step 808 the user watches the unprocessed video
- Step 809 The user watches the recorded video.
- When viewing begins, if labeled objects are found, the user can edit the label's display content or use the AI-recognized object label; for example, if the AI has not recognized a label for object 1, the user can enter the text "butterfly" and apply it, and the position where object 1 appears in the video is then annotated as "butterfly". At the same time, a prompt is given before watching asking whether to display marked-object labels and position boxes; only if the user selects yes are they displayed during video playback.
- In this implementation, the user opens the camera APP to enter the video recording interface, enables the object marking function, and completes the setting of the object marking method according to the prompts. After the setting is completed, video recording starts; after recording, the user can enter the album to view all of the set object labels and set processing methods to edit the video.
- FIG. 9 is a third schematic flowchart of a video processing method provided by an embodiment of the present application. As shown in Figure 9, the method includes the following steps:
- Step 901 The user clicks the camera APP to enter the recording video preview screen
- Step 902 Turn on the object marking function, and complete the presetting of the marking method of the object according to the prompt;
- Step 903 After the user completes the marking method of the object, it starts to record the video, and the camera needs to turn on the wide-angle or tele as an auxiliary to estimate in advance;
- Step 904 During the video recording process, the user selects the object label and sets the processing method, and the selected label will be highlighted and the processing method will be displayed;
- Step 905 in the process of recording the video, the user can select the object label, and the processing of the selected object can be canceled or changed;
- Step 906 the user completes the video recording action
- Step 907 Enter the album to determine whether the video is marked with a tag, if so, only step 908 is required, otherwise, go to step 112;
- Step 908 Display the video and the thumbnails of the selected marked objects and their appearance time after the recording is completed.
- Step 909 The user selects whether to edit the original video, if it is determined to edit the marked object, then go to Step 910, otherwise go to Step 911;
- Step 910 the user selects the processing method for different marked objects, which can be secondary processing
- Step 911 Watch the video of the object processed in real time during the recording process.
- Step 912: Watch the video after recording is completed. If labeled objects are found when viewing begins, the user can edit the label's display content or use the AI-recognized object label; for example, if the AI has not recognized a label for object 1, the text can be edited as "butterfly" and applied, and the position where object 1 appears in the video is then annotated as "butterfly". At the same time, a prompt is given before watching asking whether to display marked-object labels and position boxes; only if the user selects yes are they displayed during video playback.
- In this implementation, the user opens the camera APP to enter the video recording interface, enables the object marking function, completes the preset of the object marking method according to the prompts, and starts recording after the preset is completed; during recording, the user selects an object label and sets its processing method, the marked object is identified and processed accordingly in subsequent video frames, and the processing method of an object can also be cancelled or changed.
- FIG. 10 is a fourth schematic flowchart of a video processing method provided by an embodiment of the present application. As shown in Figure 10, the method includes the following steps:
- Step 1001 The user clicks the camera APP to enter the recording video preview screen
- Step 1002: Turn on the object marking function and preset different marking methods (elliptical frame: light effect; square frame: zoom in; line: eliminate; heart-shaped frame: smear; ...) for marking target objects; the AI completes detail recognition according to the mark and saves the object's detailed features; the user defines object labels or uses the camera's default AI labels (a birthday label recommended after detecting a cake, a grass label after identifying grass, a building or location label after identifying a building, ...); and presets of different processing methods are realized according to the different marking methods.
- Step 1003 After the user completes the marking of the object and the preset processing method, the user starts to record the video, and the camera needs to turn on the wide-angle or tele as an auxiliary to estimate in advance.
- Step 1004 During the video recording process, the user can also add an object and its label at any time, and set the object processing method according to different marking methods to make it effective in the next frame.
- Step 1005 During the video recording process, the user can select the object tag, the processing mode of the selected object can be canceled or changed, and the selected tag will be highlighted and the processing mode will be displayed.
- Step 1006 The user completes the recording of the video processed in real time.
- In this implementation, the user opens the camera APP to enter the video recording interface, enables the object marking function, marks targets with different marking methods and saves their features, and realizes presets of different processing methods according to the different marking methods. After the presets are completed they take effect from the next frame and video recording starts; during recording the user can also add an object and its label and set its processing method according to a marking method, which takes effect on the next frame; in subsequent video frames the AI recognizes the marked objects and processes them according to the presets, and the user can also tap a label to cancel or change an object's processing method.
- It should be noted that, for the video processing method provided by the embodiments of the present application, the execution body may be a video processing apparatus or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, the video processing apparatus provided by the embodiments of the present application is described by taking a video processing apparatus executing the video processing method as an example.
- FIG. 11 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application. As shown in FIG. 11 , the apparatus includes:
- The first receiving unit 1110 is configured to receive a first input from the user to a first target object in the video recording interface;
- The first processing unit 1120 is configured to, in response to the first input, process the first target object during the video recording process to obtain a first video;
- wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object, the first processing unit is configured to: mark the first target object according to its marking method and label during the video recording process to obtain a marked object; and obtain, after the video recording ends, a first video including the marked object.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object, the first processing unit is configured to: during the video recording process, mark the first target object according to its marking method and label to obtain a marked object; receive a second input to the marked object, the second input including an operation of selecting the label of the marked object and setting a first processing method for it; and, in response to the second input, process the marked object according to the first processing method and obtain a first video after the video recording ends.
- the device further includes:
- a second receiving unit configured to receive a third input from the user to the marked object, where the third input is an operation for setting a second processing mode of the marked object;
- the second processing unit is configured to, in response to the third input, process the marked object in the first video according to the second processing manner to obtain a second video.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object and the processing method corresponding to each marking method, the first processing unit is configured to: during the video recording process, mark the first target object according to its marking method and label to obtain a marked object; and process the marked object according to the processing method corresponding to each marking method, obtaining a first video after the video recording ends.
- the device further includes:
- a third receiving unit configured to receive a fourth input from the user to the first video
- a third processing unit configured to display, in response to the fourth input, a thumbnail image corresponding to the marked object and a time period during which the marked object appears.
- the device further includes:
- a fourth receiving unit configured to receive a fifth input, where the fifth input is an operation of adding a second target object and setting a marking method, a label and a processing method of the second target object;
- a fourth processing unit configured to identify the second target object and process the second target object during the video recording process in response to the fifth input.
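- Purely as an organizational sketch (the unit names mirror the apparatus description above; everything else, including the data shapes, is assumed), the apparatus could be arranged as an object that wires the receiving units to the corresponding processing units:

```python
from dataclasses import dataclass, field

@dataclass
class VideoProcessingApparatus:
    """Toy arrangement of the receiving/processing units described above."""
    marked_objects: dict = field(default_factory=dict)   # label -> mark shape / features
    settings: dict = field(default_factory=dict)          # label -> effect function or None

    # first receiving unit + first processing unit
    def on_first_input(self, label, mark_shape, features, effect=None):
        self.marked_objects[label] = {"mark_shape": mark_shape, "features": features}
        if effect is not None:          # marking method already bound to a processing method
            self.settings[label] = effect

    # second receiving unit + second processing unit (post-recording second processing method)
    def on_third_input(self, label, second_effect):
        self.settings[label] = second_effect

    # fourth receiving unit + fourth processing unit (add a second target object mid-recording)
    def on_fifth_input(self, label, mark_shape, features, effect):
        self.on_first_input(label, mark_shape, features, effect)
```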
- the video processing apparatus in this embodiment of the present application may be an apparatus or electronic device having an operating system, or may be a component, an integrated circuit, or a chip in a terminal.
- the operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
- the electronic device may be a mobile electronic device or a non-mobile electronic device.
- Exemplarily, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine or the like, which is not specifically limited in the embodiments of the present application.
- the video processing apparatus provided by the embodiments of the present application can implement each process implemented by the video processing apparatus in the method embodiments of FIG. 1 to FIG. 10 , and to avoid repetition, details are not described here.
- With the video processing apparatus provided by the embodiments of the present application, by receiving a first input to the first target object in the video recording interface and, in response to the first input, processing the first target object during the video recording process, a first video is obtained; automatic editing and processing of the video is realized, which can meet users' needs for real-time shooting and sharing.
- Optionally, an embodiment of the present application further provides an electronic device 1200, as shown in FIG. 12. The electronic device includes a processor 1201, a memory 1202, and a program or instruction stored in the memory 1202 and executable on the processor 1201; when the program or instruction is executed by the processor 1201, each process of the above video processing method embodiments is implemented and the same technical effect can be achieved, which is not repeated here to avoid repetition.
- the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
- FIG. 13 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
- The electronic device 1300 includes, but is not limited to, at least some of the following components: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309 and a processor 1310.
- Those skilled in the art can understand that the electronic device 1300 may also include a power source (such as a battery) for supplying power to the various components; the power source may be logically connected to the processor 1310 through a power management system, so that functions such as charging, discharging and power-consumption management are implemented through the power management system.
- The structure of the electronic device shown in FIG. 13 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components, which is not repeated here.
- It should be understood that, in this embodiment of the present application, the input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042, where the graphics processing unit 13041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the display unit 1306 may include a display panel 13061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
- the user input unit 1307 includes a touch panel 13071 and other input devices 13072 .
- the touch panel 13071 is also called a touch screen.
- the touch panel 13071 may include two parts, a touch detection device and a touch controller.
- Other input devices 13072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be described herein again.
- In this embodiment of the present application, the radio frequency unit 1301 acquires information and then sends it to the processor 1310 for processing.
- the radio frequency unit 1301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- Memory 1309 may be used to store software programs or instructions as well as various data.
- the memory 1309 may mainly include a stored program or instruction area and a storage data area, wherein the stored program or instruction area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.) and the like.
- In addition, the memory 1309 may include a high-speed random access memory and may also include a non-volatile memory, where the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or a flash memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
- The processor 1310 may include one or more processing units; optionally, the processor 1310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs or instructions, and the modem processor mainly handles wireless communication, such as a baseband processor. It can be understood that the modem processor may also not be integrated into the processor 1310.
- The user input unit 1307 is configured to receive a first input to a first target object in the video recording interface;
- The processor 1310 is configured to, in response to the first input, process the first target object during the video recording process to obtain a first video;
- wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
- With the electronic device provided by the embodiments of the present application, by receiving a first input to the first target object in the video recording interface and, in response to the first input, processing the first target object during the video recording process, a first video is obtained; automatic editing and processing of the video is realized, which can meet users' needs for real-time shooting and sharing.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object, the processor 1310 is configured to: mark the first target object according to its marking method and label during the video recording process to obtain a marked object; and obtain, after the video recording ends, a first video including the marked object.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object, the processor 1310 is configured to: during the video recording process, mark the first target object according to its marking method and label to obtain a marked object; receive a second input to the marked object, the second input including an operation of selecting the label of the marked object and setting a first processing method for it; and, in response to the second input, process the marked object according to the first processing method and obtain a first video after the video recording ends.
- Optionally, the user input unit 1307 is further configured to receive a third input from the user to the marked object, the third input being an operation of setting a second processing method for the marked object;
- The processor 1310 is further configured to, in response to the third input, process the marked object in the first video according to the second processing method to obtain a second video.
- Optionally, when the first input is an operation of setting the marking method and label of the first target object and the processing method corresponding to each marking method, the processor 1310 is configured to: during the video recording process, mark the first target object according to its marking method and label to obtain a marked object; and process the marked object according to the processing method corresponding to each marking method, obtaining a first video after the video recording ends.
- Optionally, the user input unit 1307 is further configured to receive a fourth input to the first video;
- The processor 1310 is further configured to, in response to the fourth input, display a thumbnail image corresponding to the marked object and the time period in which the marked object appears.
- Optionally, the user input unit 1307 is further configured to receive a fifth input, the fifth input being an operation of adding a second target object and setting the marking method, label and processing method of the second target object;
- The processor 1310 is further configured to, in response to the fifth input, identify the second target object during the video recording process and process the second target object.
- Embodiments of the present application further provide a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, each process of the above video processing method embodiments is implemented and the same technical effect can be achieved, which is not repeated here to avoid repetition.
- The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
- An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiments and achieve the same technical effect, which is not repeated here to avoid repetition.
- It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system or a system-on-chip, or the like.
- Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions to make a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) execute the methods described in the various embodiments of this application.
Abstract
The present application discloses a video processing method and apparatus, an electronic device and a storage medium, belonging to the field of computer technology. The method includes: receiving a first input to a first target object in a video recording interface; and, in response to the first input, processing the first target object during the video recording process to obtain a first video; wherein the first input includes: an operation of setting a marking method and a label of the first target object; or an operation of setting the marking method and label of the first target object and a processing method corresponding to each marking method.
Description
Cross-reference to related applications
This application claims priority to Chinese patent application No. 2021103352981, filed on March 29, 2021 and entitled "Video processing method, apparatus, electronic device and storage medium", which is incorporated herein by reference in its entirety.
本申请属于视频处理技术领域,具体涉及一种视频处理方法、装置、电子设备和存储介质。
随着社交方式的网络化和手机拍照技术的迅速发展,越来越多的用户喜欢使用手机拍照、录像来记录生活的点点滴滴,如孩子的成长过程,自己的学习工作,以及自己去过的旅游景点,某一事物的变化等等。同时近几年随着小视频分享应用程序(Application,APP)、视频博客(video weblog,vlog)的流行,用户进行视频录制以及分享的频率越来越高。
用户设备上的相机的拍照录像功能基本满足了用户的日常需求,对于拍下来的视频和照片也有多种处理方式,但是不能够在拍摄过程中进行选择处理,对于一些想删除视频中某些帧画面或规避视频中某些特定物体的用户,需要手动将之前拍的视频通过视频软件进行剪辑拼接等,对于大部分不熟悉后期编辑视频软件的用户来说并不友好,甚至直接放弃了视频编辑。目前用户设备的拍照录像功能无法自动实现对视频的编辑处理,对视频的编辑处理较为繁琐,无法满足用户实时拍摄并分享的需求。
发明内容
本申请实施例提供一种视频处理方法,能够解决目前用户设备的拍照录像功能无法自动实现对视频的编辑处理,对视频的编辑处理较为繁琐,无法满足用户实时拍摄并分享的需求的缺陷。
第一方面,本申请实施例提供了一种视频处理方法,该方法包括:
接收对视频录制界面中第一目标对象的第一输入;
响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频;
其中,所述第一输入包括:
对所述第一目标对象的标记方式和标签进行设置的操作;或者,
对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作。
第二方面,本申请实施例提供了一种视频处理装置,该装置包括:
第一接收单元,用于接收对视频录制界面中第一目标对象的第一输入;
第一处理单元,用于响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频;
其中,所述第一输入包括:
对所述第一目标对象的标记方式和标签进行设置的操作;或者,
对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作。
第三方面,本申请实施例提供了一种电子设备,该电子设备包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的程序或指令,所述程序或指令被所述处理器执行时实现如第一方面所述的视频处理方法的步骤。
第四方面,本申请实施例提供了一种可读存储介质,所述可读存储介质上存储程序或指令,所述程序或指令被处理器执行时实现如第一方面所述的视频处理方法的步骤。
第五方面,本申请实施例提供了一种芯片,所述芯片包括处理器和通信接口,所述通信接口和所述处理器耦合,所述处理器用于运行程序或指令,实现如第一方面所述的视频处理方法。
在本申请实施例中,通过接收对视频录制界面中第一目标对象的第一输入,响应于所述第一输入,在视频录制过程中对第一目标对象进行处理,得到第一视频,实现了对视频的自动编辑处理,可满足用户实时拍摄并分享的需求。
图1为本申请实施例提供的视频处理方法的流程示意图之一;
图2为本申请实施例提供的对目标对象的标记方式进行设置的示意图;
图3为本申请实施例提供的被选中标签高量并显示处理方式的示意图;
图4为本申请实施例提供的对不同的标记对象进行不同的编辑处理的示意图;
图5为本申请实施例提供的不同的标记方式对应不同处理方式的示意图;
图6为本申请实施例提供的显示标记对象对应的缩略图和标记对象出现的时间段的示意图;
图7为本申请实施例提供的视频播放时显示标记对象标签和位置框的示意图;
图8为本申请实施例提供的视频处理方法的流程示意图之二;
图9为本申请实施例提供的视频处理方法的流程示意图之三;
图10为本申请实施例提供的视频处理方法的流程示意图之四;
图11为本请实施例提供的视频处理装置的结构示意图;
图12为本请实施例提供的电子设备的结构示意图;
图13为实现本申请实施例的一种电子设备的硬件结构示意图。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书中的术语“第一”、“第二”等是用于区别类似的对象,而不用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,说明书以及权利要求中“和/或”表示所连接对象的至少其中之一,字符“/”,一般表示前后关联对象是一种“或” 的关系。
应理解,说明书中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本发明的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。
本申请实施例中术语“多个”是指两个或两个以上,其它量词与之类似。
下面结合附图,通过一些实施例及其应用场景对本申请实施例提供的视频处理方法进行详细地说明。本申请实施例提供的视频处理方法,执行主体可以是电子设备,本申请实施例提及的电子设备包括但不限于手机、平板电脑、电脑、可穿戴设备等。
图1为本申请实施例提供的视频处理方法的流程示意图之一,该方法包括:
步骤100、接收对视频录制界面中第一目标对象的第一输入;
其中,所述第一输入包括:
对所述第一目标对象的标记方式和标签进行设置的操作;或者,
对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作。
其中,视频录制界面是指电子设备的相机APP的录制视频的界面,视频录制界面中包含的目标对象可以是出现在视频录制预览画面中的人、动物或物体。其中,在用户进入相机APP未开启录制视频功能之前,在视频录制界面上显示的内容为视频录制预览画面。
一种实施方式中,电子设备接收对视频录制界面中第一目标对象的第一输入,其中,所述第一输入为对第一目标对象的标记方式和标签进行设置的操作。
第一目标对象的标记方式是指对第一目标对象进行标记的具体方式,图2为本申请实施例提供的对第一目标对象的标记方式进行设置的示意图。例如,可以采用下划线的方式对第一目标对象进行标记;也可以是用户点击视频录制界面的某处,然后电子设备根据AI算法自动识别出第一目标对象;或者,用户自定义大致对象区域框后,然后电子设备根据AI算法 基于此区域框完成细节识别处理,从而识别出第一目标对象。
第一目标对象的标签是指用一个名称来标识第一目标对象,可以是用户自定义标签,也可以是电子设备根据AI算法识别的标签,例如,检测到蛋糕后的生日标签、识别草地后的草地标签、识别建筑后的建筑或地点标签。
可选地,完成对第一目标对象的标记方式和标签的设置后,保存第一目标对象的图像特征信息,以便于后续在视频录制过程中基于第一目标对象的图像特征信息对第一目标对象进行跟踪匹配。
可以理解,采用不同的标记方式和标签来实现对不同的第一目标对象的标记并保存各目标对象的图像特征,其中,标记方式和标签可以是用户自定义的,也可以是电子设备根据AI算法自动生产的。
可选地,另一种实施方式中,电子设备接收对视频录制界面中第一目标对象的第一输入,其中,第一输入为对第一目标对象的标记方式和标签,以及各标记方式对应的处理方式进行设置的操作。
可以理解的是,不同的标记方式对应不同的处理方式,用户可以根据处理方式的不同设定不同的标记方式。例如,圆框代表光效,方框代表放大,划线代表消除,心形框代表拖影等,实现了第一目标对象的标记方式对应处理方式。
步骤101、响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频;
可以理解的是,电子设备响应于所述第一输入,在视频录制过程中根据所述第一输入的内容对第一目标对象进行相应处理,在视频录制结束后得到经过编辑处理的第一视频。
本申请实施例提供的视频处理方法,通过接收对视频录制界面中第一目标对象的第一输入,响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频,实现了对视频的自动编辑处理,可满足用户实时拍摄并分享的需求。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签进行设置的操作的情况下,所述在视频录制过程中对所述第一目标对象进行处理,得到第一视频,包括:
在视频录制过程中按照所述第一目标对象的标记方式和标签对所述目标对象进行标记,得到标记对象;
在视频录制结束后得到包含标记对象的第一视频。
可以理解的是,在所述第一输入为对所述第一目标对象的标记方式和标签进行设置的操作的情况下,电子设备在视频录制过程中可按照所述第一输入中的第一目标对象的标记方式和标签,基于第一目标对象的图像特征,对第一目标对象进行跟踪和匹配,从而完成对第一目标对象的标记,并在视频录制结束后得到包含标记对象的第一视频。
其中,第一目标对象被标记后成为标记对象。
在此实施方式中,由于用户并未在视频录制之前,对第一目标对象的处理方式进行设置,因此,在视频录制过程中并不会对第一目标对象进行编辑处理。但是,在视频录制过程中实现了对第一目标对象的标记,得到的第一视频中包含标记对象,从而使得用户可以在视频录制结束后,选择对不同标记对象的处理方式,从而便捷地实现编辑处理,可以有效提升用户体验。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签进行设置的操作的情况下,所述在视频录制过程中对所述第一目标对象进行处理,得到第一视频,包括:
在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;
接收对所述标记对象的第二输入,所述第二输入包括对所述标记对象的标签进行选中和对所述标记对象的第一处理方式进行设置的操作;
响应于所述第二输入,根据所述第一处理方式对所述标记对象进行处理,在视频录制结束后得到第一视频。
可以理解的是,一种可选的实施方式中,用户可以在视频录制过程中的任意时刻选中标记对象的标签并设定对所述标记对象的处理方式,使得对所述标记对象的处理方式在下一帧生效。
即在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述目标对象进行标记,得到标记对象;接收用户对标记对象的标签进行选中和对所述标记对象的处理方式进行设置的操作;响应于所述第二输入, 被选中的标签高亮并显示该被选中的标记对象的处理方式,图3为本申请实施例提供的被选中标签高亮并显示处理方式的示意图。在下一帧根据所述第二输入的内容对所述标记对象进行处理,在视频录制结束后得到第一视频。
可选地,响应于所述第二输入时,还需要控制电子设备上的摄像头打开广角或者远望tele做辅助以进行准确实时地录制处理,其中,广角用于识别标记对象是否将要进入主摄范围,tele用于识别远景是否存在标记对象并做相应处理。
可选地,所述第二输入还包括:
对所述标记对象的标签进行选中和对所述标记对象的处理方式进行设置的操作,以及对所述标记对象的处理方式进行取消的操作;
对所述标记对象的标签进行选中和对所述标记对象的处理方式进行设置的操作,以及对所述标记对象的处理方式进行更新的操作。
可以理解的是,在视频录制过程中,用户可以选中标记对象的标签,对选中的标记对象的处理方式可以进行取消或者更新操作。
在本申请实施例中,通过在视频录制过程中接收用户对对象标签的选中和处理方式的设置,处理方式在下一帧生效,可以实现对标记对象的实时自动编辑处理,满足用户实时拍摄并分享的需求,有效提升用户体验。
可选地,用户可以在视频录制结束后,选择对不同标记对象的处理方式。
可选地,在视频录制结束后,所述方法还包括:
接收对所述标记对象的第三输入,所述第三输入包括对所述标记对象的第二处理方式进行设置的操作;
响应于所述第三输入,根据所述第二处理方式对所述第一视频中的标记对象进行处理,得到第二视频。
可选地,在视频录制结束后,用户选择是否对第一视频进行编辑处理。若用户选择对第一视频进行编辑处理,则电子设备接收对所述标记对象的第三输入,所述第三输入为对所述标记对象的处理方式进行设置的操作;响应于所述第三输入,根据所述第三输入的内容,对所述第一视频中的标记对象进行处理,得到第二视频。
可选地,用户可在第一视频的基础上,按照自己的需求对不同的标记对象进行不同或相同的编辑处理。例如:可以对不同的标记对象分别进行消除、涂鸦、马赛克、虚化、拖影等处理或者进行相同的处理;或者,删除标记对象出现的整个视频区间;或者,对于相同的对象类别根据时间段的不同做不同的编辑处理;或者,根据所列的标记对象标签,从中选择一个作为主体,主体以外的标记对象作虚化处理等。图4为本申请实施例提供的对不同的标记对象进行不同的编辑处理的示意图。
在本申请实施例中,通过视频录制结束后,接收用户对标记对象的处理方式的设置,可以实现对标记对象的自动编辑处理,满足用户实时拍摄并分享的需求,有效提升用户体验。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作的情况下,所述在视频录制过程中对所述第一目标对象进行处理,得到第一视频,包括:
在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;
按照所述各标记方式对应的处理方式对所述标记对象进行处理,在视频录制结束后得到第一视频。
可选地,用户点击相机APP,进入视频录制预览画面,用户开启对象标记功能,使用不同的标记方式(椭圆框:光效,方框:放大,划线:消除,心形框:拖影…)标记目标对象,然后根据AI算法标记完成细节识别并保存其细节特征,用户自定义对象标签或使用相机默认的AI标签(例如检测到蛋糕推荐生日的标签、识别草地后的草地标签、识别建筑后的建筑或地点标签等),然后根据不同的标记方式实现不同处理方式的预设,图5为本申请实施例提供的不同的标记方式对应不同处理方式的示意图。
进一步地,用户可以使用不同的标记方式标记目标并保存对象图像特征,实现不同处理方式的预设,其中,不同标记方式对应的处理方式可以是用户自定义的,例如,相机APP提供处理方式名称,用户根据处理方式名称设定自己喜欢的标记方式;不同标记方式对应的处理方式也可以是相机APP提供的:比如椭圆框代表光效,方框代表放大,划线代表消除,心形框代表拖影等。
可选地,用户完成对象的标记及预设处理方式后,开始录制视频,电子设备控制摄像头打开广角或者远景tele作为辅助以提前进行预估,其中,广角用于识别标记对象是否将要进入主摄范围,tele用于识别远景是否存在标记对象并做相应处理。
可以理解的是,在所述第一输入为对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作的情况下,电子设备在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;然后,按照所述各标记方式对应的处理方式对所述标记对象进行处理,在视频录制结束后得到第一视频。
在本申请实施例中,不同的标记方式对应不同的处理方式,在视频录制过程中,实现了对目标对象的标记和编辑处理,可满足用户实时拍摄并分享的需求,有效提升用户录制视频的体验。
可选地,在上述各实施例的基础上,所述方法还包括:
接收用户对所述第一视频的第四输入;
响应于所述第四输入,显示所述标记对象对应的缩略图和所述标记对象出现的时间段。
可选地,所述第四输入为进入相册查看第一视频的操作。
图6为本申请实施例提供的显示标记对象对应的缩略图和标记对象出现的时间段的示意图。可选地,用户完成录制视频动作后,进入相册判断视频中是否存在对象标签,若视频中存在对象标签,则显示标记对象对应的缩略图和标记对象出现的时间段,以便于向用户直观地展示标记对象出现的视频帧和时间段,用户无需再次查看视频即可看到标记对象。
可选地,在上述实施例的基础上,所述方法还包括:
接收第五输入,所述第五输入为添加第二目标对象以及对所述第二目标对象的标记方式、标签和处理方式进行设置的操作;
响应于所述第五输入,在视频录制过程中识别所述第二目标对象并对所述第二目标对象进行处理。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作的情况下,在视频录制过程 中,用户也可以在任意时间添加新的目标对象(即第二目标对象)及其标签,并根据不同的标记方式设置不同的处理方式使其在下一帧生效。
可选地,电子设备控制摄像头打开广角或者tele做辅助以进行实时准确地录制处理。
可选地,在录制的视频过程中用户也可以选中对象标签,被选中标签会高亮并显示处理方式。用户还可以对被选中的目标对象的处理方式进行取消或者更新。
在本申请实施例中,通过在视频录制过程中接收用户对对象标签的选中和处理方式的设置,处理方式在下一帧生效,可以实现对标记对象的实时自动编辑处理,满足用户实时拍摄并分享的需求,有效提升用户体验。
可选地,在视频播放阶段,用户观看录制完成后的视频,观看开始时若发现存在标签对象则用户可自行编辑标签显示内容或使用AI识别的对象标签,例如AI未识别对象1的标签,则可以编辑文字为“蝴蝶”并应用,则视频中出现对象1的位置附件标注为蝴蝶。同时观看前会提示:是否显示标记对象标签和位置框,若选择是才会在视频播放时显示标记对象标签和位置框,图7为本申请实施例提供的视频播放时显示标记对象标签和位置框的示意图。
图8为本申请实施例提供的视频处理方法的流程示意图之二。如图8所示,该方法包括以下步骤:
步骤801:用户点击相机APP,进入录制视频预览画面;
步骤802:开启对象标记功能,根据提示完成对象的标记方式的预设;
步骤803:用户完成对象的标记方式的预设后,保存对象图像特征信息后开始录制视频;
步骤804:用户在视频录制过程中可按照步骤802选定的方式进行对象的标记并保存对象图像特征,然后完成录制视频动作;
步骤805:进入相册判断视频是否有被标记标签,若是,则执行步骤806,否则进入步骤808;
步骤806:显示该视频以及录制完成后所选定的标记对象缩略图以及出现时间。
步骤807:用户选择对标记对象的处理方式,并确定是否对标记对象 进行编辑处理,若确定对标记对象进行编辑处理,则进入步骤809,否则进入步骤808;
步骤808:用户观看未处理的视频;
步骤809:用户观看录制完成后的视频。
观看开始时若发现存在标签对象则用户可自行编辑标签显示内容或使用AI识别的对象标签,例如AI未识别对象1标签,则可以编辑文字为“蝴蝶”并应用,则视频中出现对象1的位置附件标注为蝴蝶。同时观看前会提示:是否显示标记对象标签和位置框,若选择是才会在视频播放时显示标记对象标签和位置框。
本实施方式中,用户开启相机APP进入录制视频界面,用户开启对象标记功能,根据提示完成标记对象的设定方式,设定完成后开始录制视频,录制完视频后进入相册可查看到设定的所有对象标签并设定处理方式进行视频编辑。
图9为本申请实施例提供的视频处理方法的流程示意图之三。如图9所示,该方法包括以下步骤:
步骤901:用户点击相机APP,进入录制视频预览画面;
步骤902:开启对象标记功能,根据提示完成对象的标记方式的预设;
步骤903:用户完成对象的标记方式后,开始录制视频,摄像头需打开广角或者tele作为辅助以提前进行预估;
步骤904:在视频的录制过程中,用户选定对象标签并设定处理方式,被选择标签会高亮并显示处理方式;
步骤905:在录制视频的过程中,用户可以选择对象标签,对选中对象的处理可以进行取消或者变化操作;
步骤906:用户完成录制视频动作;
步骤907:进入相册判断视频是否有被标记标签,若是则只需步骤908,否则进入步骤112;
步骤908:显示该视频以及录制完成后所选定的标记对象缩略图以及出现时间。
步骤909:用户选择是否对原视频进行编辑处理,若确定对标记对象进行编辑处理,则进入步骤910,否则进入步骤911;
步骤910:用户选择对不同标记对象的处理方式,可以为二次处理;
步骤911:观看录制过程中对对象实时处理后的视频。
步骤912:观看录制完成后的视频,观看开始时若发现存在标签对象则用户可自行编辑标签显示内容或使用AI识别的对象标签,例如AI未识别对象1标签,则可以编辑文字为“蝴蝶”并应用,则视频中出现对象1的位置附件标注为蝴蝶。同时观看前会提示:是否显示标记对象标签和位置框,若选择是才会在视频播放时显示标记对象标签和位置框。
本实施方式中,用户开启相机APP进入视频录制界面,用户开启对象标记功能,根据提示完成对象标记方式的预设,预设完成后开始录制视频,录制过程中选定对象标签并设置处理方式,在后续视频画面中识别标记对象并进行相应处理,也可以进行对象处理方式的取消或者变化设定。
图10为本申请实施例提供的视频处理方法的流程示意图之四。如图10所示,该方法包括以下步骤:
步骤1001:用户点击相机APP,进入录制视频预览画面;
步骤1002:开启对象标记功能,预设不同的标记方式(椭圆框:光效,方框:放大,划线:消除,心形框:拖影…)用于标记目标对象,AI根据标记完成细节识别并保存其细节特征,用户自定义对象标签或使用相机默认的AI标签(检测到蛋糕推荐生日的标签、识别草地后的草地标签、识别建筑后的建筑或地点标签…),并根据不同的标记方式实现不同处理方式的预设。
步骤1003:用户完成对象的标记及预设处理方式后,开始录制视频,摄像头需打开广角或者tele作为辅助以提前进行预估。
步骤1004:在视频录制过程中,用户也可以在任意时间添加某一对象及其标签,并根据不同的标记方式设置对象处理方式使其在下一帧生效。
本步骤1005:在录制的视频过程中用户可以选中对象标签,选中对象的处理方式可以进行取消或者变化操作,被选择标签会高亮并显示处理方式。
步骤1006:用户完成经过实时处理的视频的录制。
本实施方式中,用户开启相机APP进入录制视频界面,用户开启对象标记功能,使用不同的标记方式标记目标保存其特征,并根据不同的标记 方式实现不同处理方式的预设,预设完成后下一帧生效开始录制视频,录制过程中也可以添加某一对象及其标签并根据不同的标记方式设置对象处理方式后在下一帧生效,在后续视频画面中AI识别标记对象并根据预设进行相应处理,也可以点击标签进行对象处理方式的取消或者变化。
需要说明的是,本申请实施例提供的视频处理方法,执行主体可以为视频处理装置,或者,该视频处理装置中的用于执行加载视频处理方法的控制模块。本申请实施例中以视频处理装置执行视频处理方法为例,说明本申请实施例提供的视频处理装置。
图11为本请实施例提供的视频处理装置的结构示意图,如图11所示,该装置包括:
第一接收单元1110,用于接收用户对视频录制界面中第一目标对象的第一输入;
第一处理单元1120,用于响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频;
其中,所述第一输入包括:
对所述第一目标对象的标记方式和标签进行设置的操作;或者,
对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签进行设置的操作的情况下,所述第一处理单元用于:
在视频录制过程中按照所述目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;
在视频录制结束后得到包含所述标记对象的第一视频。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签进行设置的操作的情况下,所述第一处理单元用于:
在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;
接收对所述标记对象的第二输入,所述第二输入包括对所述标记对象的标签进行选中和对所述标记对象的第一处理方式进行设置的操作;
响应于所述第二输入,根据所述第一处理方式对所述标记对象进行处 理,在视频录制结束后得到第一视频。
可选地,所述装置还包括:
第二接收单元,用于接收用户对所述标记对象的第三输入,所述第三输入为对所述标记对象的第二处理方式进行设置的操作;
第二处理单元,用于响应于所述第三输入,根据所述第二处理方式对所述第一视频中的标记对象进行处理,得到第二视频。
可选地,在所述第一输入为对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作的情况下,所述第一处理单元用于:
在视频录制过程中,按照所述第一目标对象的标记方式和标签对所述第一目标对象进行标记,得到标记对象;
按照所述各标记方式对应的处理方式对所述标记对象进行处理,在视频录制结束后得到第一视频。
可选地,所述装置还包括:
第三接收单元,用于接收用户对所述第一视频的第四输入;
第三处理单元,用于响应于所述第四输入,显示所述标记对象对应的缩略图和所述标记对象出现的时间段。
可选地,所述装置还包括:
第四接收单元,用于接收第五输入,所述第五输入为添加第二目标对象以及对所述第二目标对象的标记方式、标签和处理方式进行设置的操作;
第四处理单元,用于响应于所述第五输入,在视频录制过程中识别所述第二目标对象并对所述第二目标对象进行处理。
本申请实施例中的视频处理装置可以是具有操作系统的装置或电子设备,也可以是终端中的部件、集成电路、或芯片。该操作系统可以为安卓(Android)操作系统,可以为ios操作系统,还可以为其他可能的操作系统,本申请实施例不作具体限定。该电子设备可以是移动电子设备,也可以为非移动电子设备。示例性的,移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等,非移动电子设备可以为服务器、网 络附属存储器(Network Attached Storage,NAS)、个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等,本申请实施例不作具体限定。
本申请实施例提供的视频处理装置能够实现图1至图10的方法实施例中视频处理装置实现的各个过程,为避免重复,这里不再赘述。
本申请实施例提供的视频处理装置,通过接收对视频录制界面中第一目标对象的第一输入,响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频,实现了对视频的自动编辑处理,可满足用户实时拍摄并分享的需求。
可选的,本申请实施例还提供一种电子设备1200,如图12所示,该电子设备包括处理器1201,存储器1202,存储在存储器1202上并可在所述处理器1201上运行的程序或指令,该程序或指令被处理器1201执行时实现上述视频处理方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
需要注意的是,本申请实施例中的电子设备包括上述所述的移动电子设备和非移动电子设备。
图13为实现本申请实施例的一种电子设备的硬件结构示意图。
该电子设备1300包括但不限于:射频单元1301、网络模块1302、音频输出单元1303、输入单元1304、传感器1305、显示单元1306、用户输入单元1307、接口单元1308、存储器1309、以及处理器1310等中的至少部分部件。
本领域技术人员可以理解,电子设备1300还可以包括给各个部件供电的电源(比如电池),电源可以通过电源管理系统与处理器1310逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。图13中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置,在此不再赘述。
应理解的是,本申请实施例中,输入单元1304可以包括图形处理器(Graphics Processing Unit,GPU)13041和麦克风13042,图形处理器13041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的 静态图片或视频的图像数据进行处理。显示单元1306可包括显示面板13061,可以采用液晶显示器、有机发光二极管等形式来配置显示面板13061。用户输入单元1307包括触控面板13071以及其他输入设备13072。触控面板13071,也称为触摸屏。触控面板13071可包括触摸检测装置和触摸控制器两个部分。其他输入设备13072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
本申请实施例中,射频单元1301获取信息后给处理器1310处理。通常,射频单元1301包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。
存储器1309可用于存储软件程序或指令以及各种数据。存储器1309可主要包括存储程序或指令区和存储数据区,其中,存储程序或指令区可存储操作系统、至少一个功能所需的应用程序或指令(比如声音播放功能、图像播放功能等)等。此外,存储器1309可以包括高速随机存取存储器,还可以包括非易失性存储器,其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
处理器1310可包括一个或多个处理单元;可选的,处理器1310可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序或指令等,调制解调处理器主要处理无线通信,如基带处理器。可以理解的是,上述调制解调处理器也可以不集成到处理器1310中。
其中,用户输入单元1307用于接收对视频录制界面中第一目标对象的第一输入;
其中,处理器1310,用于响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频;
其中,所述第一输入包括:
对所述第一目标对象的标记方式和标签进行设置的操作;或者,
对所述第一目标对象的标记方式和标签以及各标记方式对应的处理方式进行设置的操作。
本申请实施例提供的电子设备,通过接收对视频录制界面中第一目标对象的第一输入,响应于所述第一输入,在视频录制过程中对所述第一目标对象进行处理,得到第一视频,实现了对视频的自动编辑处理,可满足用户实时拍摄并分享的需求。
Optionally, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the processor 1310 is configured to:
mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and
obtain, after video recording ends, a first video containing the marked object.
Optionally, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the processor 1310 is configured to:
mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object;
receive a second input on the marked object, the second input including an operation of selecting the label of the marked object and setting a first processing mode for the marked object; and
process the marked object according to the first processing mode in response to the second input, and obtain the first video after video recording ends.
Optionally, the user input unit 1307 is further configured to:
receive a user's third input on the marked object, the third input being an operation of setting a second processing mode for the marked object;
and the processor 1310 is further configured to:
process, in response to the third input, the marked object in the first video according to the second processing mode to obtain a second video.
Optionally, in a case where the first input is an operation of setting the marking mode and the label of the first target object as well as the processing mode corresponding to each marking mode, the processor 1310 is configured to:
mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and
process the marked object according to the processing mode corresponding to each marking mode, and obtain the first video after video recording ends.
Optionally, the user input unit 1307 is further configured to:
receive a fourth input on the first video;
and the processor 1310 is further configured to:
display, in response to the fourth input, a thumbnail corresponding to the marked object and the time period in which the marked object appears.
Optionally, the user input unit 1307 is further configured to:
receive a fifth input, the fifth input being an operation of adding a second target object and setting a marking mode, a label and a processing mode for the second target object;
and the processor 1310 is further configured to:
recognize, in response to the fifth input, the second target object during video recording and process the second target object.
An embodiment of the present application further provides a readable storage medium. The readable storage medium stores a program or instructions which, when executed by a processor, implement each process of the foregoing video processing method embodiments and can achieve the same technical effect. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the foregoing embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the foregoing video processing method embodiments and can achieve the same technical effect. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system or a system-on-chip.
It should be noted that, as used herein, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed substantially simultaneously or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. Moreover, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, a network device or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above. The specific implementations described above are merely illustrative rather than restrictive. Under the inspiration of the present application, those of ordinary skill in the art can devise many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
Claims (17)
- A video processing method, comprising: receiving a first input on a first target object in a video recording interface; and processing, in response to the first input, the first target object during video recording to obtain a first video; wherein the first input comprises: an operation of setting a marking mode and a label for the first target object; or an operation of setting a marking mode and a label for the first target object as well as a processing mode corresponding to each marking mode.
- The video processing method according to claim 1, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the processing the first target object during video recording to obtain a first video comprises: marking the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and obtaining, after video recording ends, a first video containing the marked object.
- The video processing method according to claim 1, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the processing the first target object during video recording to obtain a first video comprises: marking the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; receiving a second input on the marked object, the second input comprising an operation of selecting the label of the marked object and setting a first processing mode for the marked object; and processing, in response to the second input, the marked object according to the first processing mode, and obtaining the first video after video recording ends.
- The video processing method according to claim 2 or 3, wherein, after video recording ends, the method further comprises: receiving a third input on the marked object, the third input comprising an operation of setting a second processing mode for the marked object; and processing, in response to the third input, the marked object in the first video according to the second processing mode to obtain a second video.
- The video processing method according to claim 1, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object as well as the processing mode corresponding to each marking mode, the processing the first target object during video recording to obtain a first video comprises: marking the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and processing the marked object according to the processing mode corresponding to each marking mode, and obtaining the first video after video recording ends.
- The video processing method according to claim 2, 3 or 5, wherein the method further comprises: receiving a fourth input on the first video; and displaying, in response to the fourth input, a thumbnail corresponding to the marked object and the time period in which the marked object appears.
- The video processing method according to claim 5, wherein, during video recording, the method further comprises: receiving a fifth input, the fifth input being an operation of adding a second target object and setting a marking mode, a label and a processing mode for the second target object; and recognizing, in response to the fifth input, the second target object during video recording and processing the second target object.
- A video processing apparatus, comprising: a first receiving unit, configured to receive a first input on a first target object in a video recording interface; and a first processing unit, configured to, in response to the first input, process the first target object during video recording to obtain a first video; wherein the first input comprises: an operation of setting a marking mode and a label for the first target object; or an operation of setting a marking mode and a label for the first target object as well as a processing mode corresponding to each marking mode.
- The video processing apparatus according to claim 8, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the first processing unit is configured to: mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and obtain, after video recording ends, a first video containing the marked object.
- The video processing apparatus according to claim 8, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object, the first processing unit is configured to: mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; receive a second input on the marked object, the second input comprising an operation of selecting the label of the marked object and setting a first processing mode for the marked object; and process, in response to the second input, the marked object according to the first processing mode, and obtain the first video after video recording ends.
- The video processing apparatus according to claim 9 or 10, wherein the apparatus further comprises: a second receiving unit, configured to receive a user's third input on the marked object, the third input being an operation of setting a second processing mode for the marked object; and a second processing unit, configured to, in response to the third input, process the marked object in the first video according to the second processing mode to obtain a second video.
- The video processing apparatus according to claim 8, wherein, in a case where the first input is an operation of setting the marking mode and the label of the first target object as well as the processing mode corresponding to each marking mode, the first processing unit is configured to: mark the first target object during video recording according to the marking mode and the label of the first target object to obtain a marked object; and process the marked object according to the processing mode corresponding to each marking mode, and obtain the first video after video recording ends.
- The video processing apparatus according to claim 9, 10 or 12, wherein the apparatus further comprises: a third receiving unit, configured to receive a fourth input on the first video; and a third processing unit, configured to, in response to the fourth input, display a thumbnail corresponding to the marked object and the time period in which the marked object appears.
- The video processing apparatus according to claim 12, wherein the apparatus further comprises: a fourth receiving unit, configured to receive a fifth input, the fifth input being an operation of adding a second target object and setting a marking mode, a label and a processing mode for the second target object; and a fourth processing unit, configured to, in response to the fifth input, recognize the second target object during video recording and process the second target object.
- An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 7.
- A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instructions which, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 7.
- A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110335298.1A CN113067983B (zh) | 2021-03-29 | 2021-03-29 | Video processing method and apparatus, electronic device, and storage medium |
CN202110335298.1 | | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022206582A1 (zh) | 2022-10-06 |
Family
ID=76564560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/082958 WO2022206582A1 (zh) | Video processing method and apparatus, electronic device, and storage medium | 2021-03-29 | 2022-03-25 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113067983B (zh) |
WO (1) | WO2022206582A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113067983B (zh) * | 2021-03-29 | 2022-11-15 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Video processing method and apparatus, electronic device, and storage medium |
CN114363694A (zh) * | 2021-09-08 | 2022-04-15 | Tencent Technology (Shenzhen) Co., Ltd. | Video processing method and apparatus, computer device, and storage medium |
CN114598819B (zh) * | 2022-03-16 | 2024-08-02 | Vivo Mobile Communication Co., Ltd. | Video recording method and apparatus, and electronic device |
CN116112782B (zh) * | 2022-05-25 | 2024-04-02 | Honor Device Co., Ltd. | Video recording method and related apparatus |
CN116193198A (zh) * | 2023-03-02 | 2023-05-30 | Vivo Mobile Communication Co., Ltd. | Video processing method and apparatus, electronic device, storage medium, and product |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7016532B2 (en) * | 2000-11-06 | 2006-03-21 | Evryx Technologies | Image capture and identification system and process |
EP1612794A1 (en) * | 2003-04-04 | 2006-01-04 | Sony Corporation | Video editor and editing method, recording medium, and program |
JP2004312511A (ja) * | 2003-04-09 | 2004-11-04 | Nippon Telegr & Teleph Corp <Ntt> | Video editing system, kindergarten child monitoring system, welfare facility monitoring system, commemorative video creation system, behavior monitoring system, and video editing method |
JP2006303595A (ja) * | 2005-04-15 | 2006-11-02 | Sony Corp | Material recording apparatus and material recording method |
US20120148216A1 (en) * | 2010-12-14 | 2012-06-14 | Qualcomm Incorporated | Self-editing video recording |
JP2012175281A (ja) * | 2011-02-18 | 2012-09-10 | Sharp Corp | Recording apparatus and television receiver |
KR102356448B1 (ko) * | 2014-05-05 | 2022-01-27 | Samsung Electronics Co., Ltd. | Image composition method and electronic device therefor |
EP3029676A1 (en) * | 2014-12-02 | 2016-06-08 | Bellevue Investments GmbH & Co. KGaA | System and method for theme based video creation with real-time effects |
US10070063B2 (en) * | 2015-02-20 | 2018-09-04 | Grideo Technologies Inc. | Integrated video capturing and sharing application on handheld device |
US10629166B2 (en) * | 2016-04-01 | 2020-04-21 | Intel Corporation | Video with selectable tag overlay auxiliary pictures |
CN109922252B (zh) * | 2017-12-12 | 2021-11-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Short video generation method and apparatus, and electronic device |
CN108924418A (zh) * | 2018-07-02 | 2018-11-30 | Zhuhai Meizu Technology Co., Ltd. | Preview image processing method and apparatus, terminal, and readable storage medium |
CN111383638A (zh) * | 2018-12-28 | 2020-07-07 | Shanghai Cambricon Information Technology Co., Ltd. | Signal processing apparatus, signal processing method, and related product |
CN111866404B (zh) * | 2019-04-25 | 2022-04-29 | Huawei Technologies Co., Ltd. | Video editing method and electronic device |
CN110290425B (zh) * | 2019-07-29 | 2023-04-07 | Tencent Technology (Shenzhen) Co., Ltd. | Video processing method and apparatus, and storage medium |
CN110909776A (zh) * | 2019-11-11 | 2020-03-24 | Vivo Mobile Communication Co., Ltd. | Image recognition method and electronic device |
CN111209435A (zh) * | 2020-01-10 | 2020-05-29 | Shanghai Moxiang Network Technology Co., Ltd. | Method and apparatus for generating video data, electronic device, and computer storage medium |
CN111209438A (zh) * | 2020-01-14 | 2020-05-29 | Shanghai Moxiang Network Technology Co., Ltd. | Video processing method, apparatus and device, and computer storage medium |
CN112118395B (zh) * | 2020-04-23 | 2022-04-22 | ZTE Corporation | Video processing method, terminal, and computer-readable storage medium |
CN111752450A (zh) * | 2020-05-28 | 2020-10-09 | Vivo Mobile Communication Co., Ltd. | Display method and apparatus, and electronic device |
CN111722775A (zh) * | 2020-06-24 | 2020-09-29 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Image processing method and apparatus, device, and readable storage medium |
- 2021-03-29: Application CN202110335298.1A filed in China; granted as CN113067983B (legal status: active)
- 2022-03-25: International application PCT/CN2022/082958 filed; published as WO2022206582A1 (legal status: unknown)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170062014A1 (en) * | 2015-08-24 | 2017-03-02 | Vivotek Inc. | Method, device, and computer-readable medium for tagging an object in a video |
CN108040265A (zh) * | 2017-12-13 | 2018-05-15 | 北京奇虎科技有限公司 | 一种对视频进行处理的方法和装置 |
CN109961453A (zh) * | 2018-10-15 | 2019-07-02 | 华为技术有限公司 | 一种图像处理方法、装置与设备 |
US20200349188A1 (en) * | 2019-05-02 | 2020-11-05 | Oath Inc. | Tagging an object within an image and/or a video |
CN111601039A (zh) * | 2020-05-28 | 2020-08-28 | 维沃移动通信有限公司 | 视频拍摄方法、装置及电子设备 |
CN112261218A (zh) * | 2020-10-21 | 2021-01-22 | 维沃移动通信有限公司 | 视频控制方法、视频控制装置、电子设备和可读存储介质 |
CN113067983A (zh) * | 2021-03-29 | 2021-07-02 | 维沃移动通信(杭州)有限公司 | 视频处理方法、装置、电子设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113067983B (zh) | 2022-11-15 |
CN113067983A (zh) | 2021-07-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22778765; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/02/2024) |