WO2021190428A1 - Image capturing method and electronic device - Google Patents

Image capturing method and electronic device

Info

Publication number
WO2021190428A1
WO2021190428A1 (PCT/CN2021/081982)
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
processed
target object
image
processing
Prior art date
Application number
PCT/CN2021/081982
Other languages
English (en)
French (fr)
Inventor
卢培锐
蔡眉眉
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Priority to KR1020227036922A (published as KR20220158101A)
Priority to JP2022558320A (published as JP7495517B2)
Priority to EP21775913.3A (published as EP4131931A4)
Publication of WO2021190428A1
Priority to US17/949,486 (published as US20230013753A1)

Classifications

    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G06V20/46: Extracting features or characteristics from video content, e.g. video fingerprints, representative shots or key frames
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/20: Image analysis; analysis of motion
    • G06T7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06V10/26: Segmentation of patterns in the image field; detection of occlusion
    • G06V10/60: Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • H04N23/80: Camera processing pipelines; components thereof
    • H04N23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N5/265: Studio circuits; mixing
    • H04N7/0127: Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
    • G06T2207/20221: Image fusion; image merging
    • G06V2201/07: Target detection

Definitions

  • the present invention relates to the field of communication technology, and in particular to an image shooting method and electronic equipment.
  • processing such as editing and adding simple special effects can be performed directly on the electronic device, but more complex video effects are inconvenient for the user to produce in this way.
  • the present invention provides an image shooting method and electronic equipment, which can solve the prior-art problems of complicated operation and high resource consumption when an electronic device performs video special-effect processing.
  • the present invention is implemented as follows:
  • an embodiment of the present invention provides an image capturing method applied to an electronic device, including:
  • the frame rates of the first intermediate video stream and the second intermediate video stream are different.
  • an electronic device including:
  • a receiving module for receiving the first input
  • the acquiring module is configured to acquire the first video stream and the second video stream of the same shooting content collected by the camera module in response to the first input;
  • the first processing module is configured to extract the target object in the shooting content of the first video stream to obtain the first intermediate video stream of the target object;
  • the second processing module is configured to cut out the target object in the shooting content of the second video stream, and perform image registration compensation on the area where the target object is located, to obtain the second intermediate video stream;
  • the synthesis module is used to generate the target video according to the first intermediate video stream and the second intermediate video stream;
  • the frame rates of the first intermediate video stream and the second intermediate video stream are different.
  • an embodiment of the present invention provides an electronic device including a processor, a memory, and a computer program stored in the memory and running on the processor.
  • when the computer program is executed by the processor, the steps of the image capturing method described above are implemented.
  • an embodiment of the present invention provides a computer-readable storage medium with a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image capturing method described above are implemented.
  • an embodiment of the present invention provides a computer software product, the computer software product is stored in a non-volatile storage medium, and the software product is configured to be executed by at least one processor to implement the steps of the above-mentioned image capturing method.
  • an embodiment of the present invention provides an electronic device configured to execute the above-mentioned image capturing method.
  • in the embodiments of the present invention, by receiving and responding to the first input, the first video stream and the second video stream of the same shooting content collected by the camera module are acquired; the target object in the shooting content of the first video stream is extracted and the target object in the shooting content of the second video stream is cut out, obtaining the first intermediate video stream and the second intermediate video stream; finally, the first intermediate video stream and the second intermediate video stream, which have different frame rates, are synthesized to generate the target video. This enables quick processing of video special effects during video shooting, realizes personalized video shooting, and improves the user's shooting experience.
  • FIG. 1 shows a schematic flowchart of an image shooting method provided by an embodiment of the present invention
  • Figure 2 shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention
  • FIG. 3 shows a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present invention.
  • FIG. 1 shows a schematic flowchart of an image capturing method provided by an embodiment of the present invention.
  • An embodiment of the present invention provides an image capturing method, which is applied to an electronic device with an image capturing function.
  • the shooting method may include the following steps:
  • Step 101 Receive a first input.
  • the user can make the electronic device receive the first input by performing the first input operation, and then control the electronic device to start the camera module to perform the shooting action, and trigger the electronic device to realize the processing operation of the preset video shooting mode.
  • the first input is used to trigger a shooting instruction for a preset video shooting mode
  • the first input may include voice input, body motion input, touch input acting on the electronic device, and motion input acting on the electronic device.
  • the body motion input may include but is not limited to at least one of gesture motion input, head motion input, and facial motion input.
  • the touch input acting on the electronic device may include, but is not limited to, touch input acting on the screen or the casing; the motion input acting on the electronic device may include, but is not limited to, at least one of a shaking motion input acting on the electronic device, a turning motion input, and a bending/folding input acting on a flexible screen.
  • the preset video shooting mode may be a fun video shooting mode or a special effect video shooting mode.
  • Step 102 In response to the first input, obtain a first video stream and a second video stream of the same shooting content collected by the camera module.
  • in response to the first input received in step 101, the electronic device obtains the first video stream and the second video stream collected while the camera module shoots the same content; the shooting content of the first video stream and the second video stream is the same.
  • the first video stream and the second video stream may be collected by the same camera module at the same time.
  • the camera includes a first analog-to-digital converter (ADC) and a second ADC. When the camera collects the shooting content, the same photons are converted into two electrical signals by the first ADC and the second ADC respectively and output simultaneously, forming two video streams: the first video stream and the second video stream. The two video streams formed by the conversion of the first ADC and the second ADC have different brightness levels for the same shooting content.
  • the brightness of the shooting content of the first video stream and the second video stream is different; for example, the shooting content of the first video stream is brighter and that of the second video stream is darker, i.e., the first intermediate video stream containing the target object is brighter, while the second intermediate video stream containing the remaining shooting content after the target object is extracted is darker.
  • a camera including the first ADC and the second ADC may be used to simultaneously collect the first video stream and the second video stream.
  • the first video stream and the second video stream can be captured by the same camera, so that the shooting positions and exposure times corresponding to their shooting content are identical; this helps reduce the difference between the two video streams and achieve a better video effect.
  • when the camera including the first ADC and the second ADC collects the first video stream and the second video stream at the same time, the shooting position and exposure time of the two video streams are the same and only the exposure intensity differs, so the target video can still retain a good contrast between light and dark.
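As a rough illustration (not part of the patent disclosure), the dual-ADC readout can be modeled as one sensor exposure converted at two different gains; the function name and the gain values below are illustrative assumptions:

```python
import numpy as np

def dual_adc_readout(photon_frame, gain_bright=2.0, gain_dark=0.5):
    """Model one sensor exposure read out through two ADC gains,
    yielding a brighter and a darker frame of the same content."""
    bright = np.clip(photon_frame * gain_bright, 0, 255).astype(np.uint8)
    dark = np.clip(photon_frame * gain_dark, 0, 255).astype(np.uint8)
    return bright, dark

# Same "photons", two brightness levels, identical shooting content:
frame = np.full((4, 4), 100.0)
bright, dark = dual_adc_readout(frame)
```

In a real sensor the two conversions happen in hardware on the same charge, which is what guarantees identical shooting position and exposure time for both streams.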
  • the electronic device can output at least one of the first video stream and the second video stream for display as a preview screen; for example, during shooting, the electronic device may output whichever of the first video stream and the second video stream has the brighter shooting content for display as the preview image.
  • the video frame rate and video resolution of the video stream collected and output by the camera module, i.e., of the video stream acquired by the electronic device, are not limited; they can be set according to the actual configuration parameters of the camera module and the user's configuration requirements. For example, the electronic device may by default acquire a video stream with a frame rate of 120 fps and a resolution of 1080p.
  • Step 103 Extract the target object in the shooting content of the first video stream, and obtain the first intermediate video stream of the target object.
  • the electronic device may extract the target object in the shooting content of the first video stream obtained in step 102 from the shooting content, and obtain the first intermediate video stream including the target object.
  • the first intermediate video stream may contain only the target object.
  • the electronic device may extract the target object in the first video stream frame by frame.
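A minimal sketch of the per-frame extraction in step 103, assuming a binary segmentation mask for the target object is already available (how the mask is produced is outside this snippet):

```python
import numpy as np

def extract_target(frame, mask):
    """Return a frame keeping only target-object pixels (mask True);
    all other pixels are zeroed out."""
    fg = np.zeros_like(frame)
    fg[mask] = frame[mask]
    return fg

frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = frame > 7          # toy "target object" mask
target_only = extract_target(frame, mask)
```

Applying this frame by frame to the first video stream yields a first intermediate video stream containing only the target object.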
  • Step 104 Cut out the target object in the shooting content of the second video stream, and perform image registration compensation on the area where the target object is located to obtain a second intermediate video stream.
  • the electronic device may remove the target object in the shooting content of the second video stream obtained in step 102; because the second video stream will then lack the target object in the area where it was located, the electronic device performs image registration compensation on that area to fill the resulting transparent region, obtaining a second intermediate video stream that does not include the target object.
  • the electronic device may extract the target object in the second video stream frame by frame.
  • the electronic device can perform object analysis on the shooting content of the video stream, identify the moving objects in the shooting content, and then separate the moving objects or background objects other than the moving objects in the shooting content.
  • the electronic device can separate the target object according to the shape and color information of the moving object.
  • the separation referred to here means that the electronic device cuts out the recognized target object from the shooting content; the target object can be a moving object or a background object.
  • after the moving objects are recognized, they can be detected in sequence according to a preset object priority; for example, human moving objects can be detected first, then animal moving objects, and then other moving objects. In addition, if the electronic device recognizes multiple moving objects in the shooting content of the video stream at the same time, the separation action is performed on each moving object.
  • the target object can be set as a moving object by default.
  • after the electronic device has automatically recognized and separated the moving objects, the user can select among the multiple moving objects according to their needs to determine the main object in the moving objects of the video stream, realizing secondary editing and confirmation of the target object; the main object is the final target object, yielding a first intermediate video stream that includes the main object and a second intermediate video stream that does not include the main object.
  • the electronic device may mark the identified and separated moving objects in the preview screen.
  • the electronic device outputs the brighter of the first video stream and the second video stream for display as the preview screen.
  • the electronic device can replace the background object in the preview screen with the background object in the darker video stream, and keep the moving objects in the preview screen using the brighter video stream.
  • the moving objects can be highlighted to facilitate the user's selection. When the electronic device receives a touch input on the main object among the multiple moving objects on the preview screen, in response to the touch input it cuts out (mattes) the main object in the video stream and cancels the matting of the moving objects other than the main object. For example, if the electronic device recognizes three human moving objects, the user can perform a touch input on one of them, so that the electronic device separates that human moving object and automatically cancels the separation of the other two.
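The tap-to-select behavior can be sketched as a simple hit test over the bounding boxes of the separated moving objects (the box format and object ids below are illustrative assumptions):

```python
def pick_main_object(tap_xy, objects):
    """Return the id of the first moving object whose bounding box
    (x0, y0, x1, y1) contains the tap position; None if no hit."""
    x, y = tap_xy
    for obj_id, (x0, y0, x1, y1) in objects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return obj_id
    return None

# Three recognized human moving objects; tapping one selects it as the
# main object (in the full flow, matting of the other two is cancelled).
boxes = {
    "person_a": (0, 0, 50, 100),
    "person_b": (60, 0, 110, 100),
    "person_c": (120, 0, 170, 100),
}
```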
  • the electronic device may likewise mark the separated moving objects in the images of the first video stream and the second video stream.
  • in specific implementation, step 103 and step 104 can be executed simultaneously, or step 103 first and then step 104, or step 104 first and then step 103; the sequence of steps illustrated in FIG. 1 shows only one implementation, for ease of intuitive understanding.
  • Step 105 Generate a target video according to the first intermediate video stream and the second intermediate video stream.
  • the frame rates of the first intermediate video stream and the second intermediate video stream are different; here, the electronic device can synthesize the first intermediate video stream and the second intermediate video stream, which have different frame rates, through a registration algorithm to generate the target video.
  • the video special effects can be quickly processed during video shooting, personalized video shooting can be realized, and the user shooting experience can be improved.
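One way to read the synthesis of two streams with different frame rates: repeat each low-rate background frame so both streams share one output timeline, then paste the foreground onto it per frame. The sketch below assumes an integer rate ratio and omits the real registration and blending:

```python
def composite_streams(fg_frames, bg_frames):
    """Pair each foreground frame with the background frame covering the
    same moment, assuming len(fg) is an integer multiple of len(bg)."""
    ratio = len(fg_frames) // len(bg_frames)
    timeline = []
    for i, fg in enumerate(fg_frames):
        bg = bg_frames[min(i // ratio, len(bg_frames) - 1)]
        timeline.append((bg, fg))  # real pipeline: blend fg pixels onto bg
    return timeline

# 120 fps foreground over a 30 fps background -> 120 composited frames,
# so the target object moves at normal speed while the background "slows".
out = composite_streams(list(range(120)), list(range(30)))
```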
  • in the embodiment of the present invention, by receiving and responding to the first input, the first video stream and the second video stream of the same shooting content collected by the camera module are acquired; the target object in the shooting content of the first video stream is extracted and the target object in the shooting content of the second video stream is cut out, obtaining the first intermediate video stream and the second intermediate video stream; finally, the first intermediate video stream and the second intermediate video stream, which have different frame rates, are synthesized to generate the target video, enabling quick processing of video special effects during video shooting, personalized video shooting, and an improved user shooting experience.
  • performing image registration compensation on the area where the target object is located may include: in the target video frame of the second video stream from which the target object is cut out, performing image registration compensation on that area using adjacent frame images of the target video frame.
  • the electronic device can obtain the previous frame image and the next frame image adjacent to the target video frame and, based on the regions of those two frames corresponding to the area where the target object of the target video frame is located, perform image registration compensation on that area.
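A crude stand-in for the adjacent-frame compensation: real image registration would warp the neighboring frames into alignment first, but here they are simply averaged into the hole, and the hole mask is assumed given:

```python
import numpy as np

def compensate_hole(frame, prev_frame, next_frame, hole_mask):
    """Fill the region where the target object was cut out (hole_mask True)
    with the average of the temporally adjacent frames."""
    filled = frame.copy()
    avg = (prev_frame.astype(np.uint16) + next_frame.astype(np.uint16)) // 2
    filled[hole_mask] = avg[hole_mask].astype(frame.dtype)
    return filled

prev_f = np.full((4, 4), 10, dtype=np.uint8)
next_f = np.full((4, 4), 30, dtype=np.uint8)
cur = np.full((4, 4), 99, dtype=np.uint8)
hole = np.zeros((4, 4), dtype=bool)
hole[1:3, 1:3] = True  # region the target object occupied
result = compensate_hole(cur, prev_f, next_f, hole)
```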
  • the electronic device may first determine which of the first video stream and the second video stream requires frame rate adjustment; for example, this may be determined according to the required display-speed effects, in the target video to be generated, for the target object and for objects other than the target object.
  • the first input can include the requirement information for displaying fast or slow effects.
  • if the first video stream is the video stream that requires frame rate adjustment, then in step 103, before the first intermediate video stream of the target object is obtained, frame rate adjustment is performed on the first video stream; if the second video stream is the video stream that requires frame rate adjustment, then in step 104, before the second intermediate video stream is obtained, frame rate adjustment is performed on the second video stream.
  • alternatively, the frame rate adjustment may be performed on the first intermediate video stream and the second intermediate video stream.
  • step 105, before generating the target video according to the first intermediate video stream and the second intermediate video stream, may further include the following step: performing frame rate adjustment on the video stream to be processed to obtain the processed video stream. In this way, a first intermediate video stream and a second intermediate video stream with different frame rates are obtained, which prepares for a personalized target video and improves the user's shooting experience.
  • the electronic device may adjust the frame rate before performing the target-object matting (or extraction) processing on the video frames. If the video stream to be processed is the first video stream, then step 103 of extracting the target object in the shooting content of the first video stream to obtain the first intermediate video stream of the target object specifically includes: extracting the target object in the shooting content of the processed video stream to obtain the first intermediate video stream of the target object. Alternatively, if the video stream to be processed is the second video stream, then step 104 of cutting out the target object in the shooting content of the second video stream and performing image registration compensation on the area where the target object is located specifically includes: cutting out the target object in the shooting content of the processed video stream, and performing image registration compensation on the area where the target object is located, to obtain the second intermediate video stream.
  • alternatively, the electronic device may perform frame rate adjustment after matting (or extracting) the target object from the video frames; in that case, the video stream to be processed is at least one of the first intermediate video stream and the second intermediate video stream, and the target video is generated based on the processed video stream.
  • performing frame rate adjustment processing on the video stream to be processed to obtain the processed video stream may include one of the following:
  • Method 1: when the ambient brightness value while the camera module collects images is higher than a first preset brightness value and the gain value is lower than a first preset gain value, one to-be-processed image is extracted from every adjacent first-preset-number of frames of the video stream to be processed, and the extracted images are combined to obtain the processed video stream. Method 1 considers that when the ambient brightness is above the preset brightness value and the gain is below the preset gain value, the image noise in the video stream is relatively low and the image quality is good, so the electronic device can use frame extraction to reduce the frame rate of the video stream to be processed; this preserves picture quality while saving synthesis time and reducing power consumption.
  • the first preset brightness value and the first preset gain value may be set according to historical experimental data or operating experience, or may be set by the user; for example, the first preset gain value may be a 2x gain;
  • the first preset number of frames can be determined according to the frame rate adjustment requirements. For example, if the frame rate of the to-be-processed video stream is 120 fps and it needs to be reduced to 30 fps, the first preset number of frames is 4.
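Method 1 amounts to keeping one frame out of every N adjacent frames, e.g. 120 fps to 30 fps with N = 4 (a sketch; the frames here are stand-in objects):

```python
def decimate(frames, keep_every=4):
    """Reduce frame rate by keeping one frame per group of
    `keep_every` adjacent frames (bright scene / low gain case)."""
    return frames[::keep_every]

# 120 frames (one second at 120 fps) -> 30 frames (30 fps)
reduced = decimate(list(range(120)), keep_every=4)
```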
  • Method 2: when the ambient brightness value while the camera module collects images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, every adjacent second-preset-number of frames of the video stream to be processed are averaged into one image, and the averaged images form the processed video stream. Method 2 considers that when the ambient brightness is below the preset brightness value and the gain is above the preset gain value, the image noise in the video stream is relatively high and the image quality is poor, so the electronic device can use average synthesis to reduce the frame rate of the video stream to be processed, preserving picture quality in low light.
  • the second preset brightness value and the second preset gain value may be set according to historical experimental data or operating experience, or may be set by the user; for example, the second preset gain value may be a 2x gain;
  • the second preset number of frames can be determined according to the frame rate adjustment requirements. For example, if the frame rate of the to-be-processed video stream is 120 fps and it needs to be reduced to 30 fps, the second preset number of frames is 4.
  • the second preset brightness value may be the same as or different from the first preset brightness value, and the second preset gain value may be the same as or different from the first preset gain value; both can be set according to actual design requirements.
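Method 2 instead averages each group of N adjacent frames into one output frame, trading frames for noise reduction (a sketch, assuming the stream length is a multiple of the group size):

```python
import numpy as np

def average_groups(frames, group=4):
    """Collapse every `group` adjacent frames into their mean
    (dark scene / high gain case: averaging suppresses noise)."""
    arr = np.asarray(frames, dtype=np.float32)
    n = (len(arr) // group) * group
    return arr[:n].reshape(-1, group, *arr.shape[1:]).mean(axis=1)

# Eight scalar "frames" -> two averaged frames (frame rate divided by 4)
avg = average_groups([0, 2, 4, 6, 8, 10, 12, 14], group=4)
```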
  • Method 3: according to a preset correspondence between moving speed and frame rate, the frame rate of the video stream to be processed is reduced based on the moving speed of the moving object in it, obtaining the processed video stream. Method 3 considers the actual scene captured in the video stream to be processed: the moving speed of the moving object can be judged and a different frame rate selected according to that speed, so that the frame-rate reduction adapts to the actual scene and the image quality after the reduction is preserved. The frame rate of the to-be-processed video stream containing the moving object can be adjusted to be lower than its original frame rate; for example, if the frame rate of the video stream to be processed is 120 fps and the moving speed of the moving object is greater than a preset value, the electronic device can adjust the frame rate to below 120 fps, such as 60 fps or 40 fps.
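Method 3 can be sketched as a lookup from the measured moving speed into a preset speed-to-frame-rate table; the thresholds and rates below are illustrative assumptions, not values from the disclosure:

```python
def target_frame_rate(speed, table=((5.0, 120), (2.0, 60), (0.0, 30))):
    """Pick an output frame rate from a descending (min_speed, fps) table:
    faster motion keeps a higher frame rate."""
    for min_speed, fps in table:
        if speed >= min_speed:
            return fps
    return table[-1][1]

# A fast-moving subject keeps 120 fps; slow motion drops toward 30 fps.
```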
  • Method 4: a new frame is interpolated from the adjacent images for every interval of a third preset number of frames of the video stream to be processed, obtaining the processed video stream. In Method 4, the electronic device uses frame insertion to adjust (increase) the frame rate of the video stream to be processed.
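Method 4 increases the frame rate by synthesizing new frames from adjacent ones; a naive linear blend stands in here for real motion-compensated interpolation:

```python
import numpy as np

def interpolate_midframes(frames):
    """Insert one blended frame between each adjacent pair,
    roughly doubling the frame rate."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        mid = (np.asarray(a, np.float32) + np.asarray(b, np.float32)) / 2
        out.append(mid)
    out.append(frames[-1])
    return out

# Three scalar "frames" -> five frames after interpolation
boosted = interpolate_midframes([0.0, 10.0, 20.0])
```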
  • the image shooting method provided by the embodiment of the present invention receives and responds to the first input to obtain the first video stream and the second video stream of the same shooting content collected by the camera module, respectively extracts the target object in the shooting content of the first video stream and cuts out the target object in the shooting content of the second video stream to obtain the first intermediate video stream and the second intermediate video stream, and finally synthesizes the first intermediate video stream and the second intermediate video stream, which have different frame rates, to generate the target video; this enables quick processing of video special effects during video shooting, realizes personalized video shooting, and improves the user's shooting experience.
  • an embodiment of the present invention provides an electronic device for implementing the above method.
  • FIG. 2 shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • An embodiment of the present invention provides an electronic device 200, which may include: a receiving module 210, an acquiring module 220, a first processing module 230, a second processing module 240, and a synthesis module 250.
  • the receiving module 210 is configured to receive the first input
  • the obtaining module 220 is configured to obtain the first video stream and the second video stream of the same shooting content collected by the camera module in response to the first input;
  • the first processing module 230 is configured to extract the target object in the shooting content of the first video stream to obtain the first intermediate video stream of the target object;
  • the second processing module 240 is configured to remove the target object from the shooting content of the second video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream;
  • the synthesis module 250 is configured to generate a target video according to the first intermediate video stream and the second intermediate video stream;
  • the frame rates of the first intermediate video stream and the second intermediate video stream are different.
  • the second processing module 240 may include a registration compensation unit.
  • the registration compensation unit is configured to, in the target video frame of the second video stream from which the target object has been removed, perform image registration compensation on the area where the target object was located using the adjacent frame images of the target video frame.
  • the electronic device 200 may further include a third processing module. The video stream to be processed may be the first video stream, in which case the first processing module 230 may include a first processing unit; or the video stream to be processed may be the second video stream, in which case the second processing module 240 may include a second processing unit; or the video stream to be processed is at least one of the first intermediate video stream and the second intermediate video stream, in which case the target video is generated based on the processed video stream.
  • the third processing module is used to perform frame rate adjustment processing on the video stream to be processed to obtain the processed video stream;
  • the first processing unit is configured to extract the target object in the shooting content of the processed video stream to obtain the first intermediate video stream of the target object;
  • the second processing unit is configured to remove the target object from the shooting content of the processed video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream.
  • the third processing module may include one of the following: a third processing unit, a fourth processing unit, a fifth processing unit, and a sixth processing unit.
  • the third processing unit is configured to, when the ambient brightness value at the time the camera module collects images is higher than a first preset brightness value and the gain value is lower than a first preset gain value, extract one frame to be processed from every adjacent first preset number of frames of the video stream to be processed, and synthesize the extracted frames to obtain the processed video stream;
  • the fourth processing unit is configured to, when the ambient brightness value at the time the camera module collects images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, average every adjacent second preset number of frames of the video stream to be processed into one image, and synthesize the averaged images to obtain the processed video stream;
  • the fifth processing unit is configured to perform frame reduction processing on the video stream to be processed according to a preset correspondence between moving speed and frame rate and the moving speed of the moving object in the video stream to be processed, to obtain the processed video stream;
  • the sixth processing unit is configured to, at every interval of a third preset number of frames in the video stream to be processed, perform frame insertion processing based on adjacent images to obtain the processed video stream.
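The first two frame-rate reduction strategies above (frame extraction for bright, low-gain scenes; frame averaging for dark, high-gain scenes) can be sketched in a few lines. This is an illustrative toy model, not the patent's implementation: frames are stand-in numbers (think per-frame mean values), and the 4:1 group size assumes the 120Fps-to-30Fps example given in the description.

```python
# Toy sketch of the two frame-rate reduction strategies. Frames are plain
# numbers standing in for decoded images; group_size = 4 models 120 -> 30 fps.

def decimate(frames, group_size):
    """Extraction style (bright/low-gain): keep one frame per group."""
    return frames[::group_size]

def average_groups(frames, group_size):
    """Averaging style (dark/high-gain): merge each group into its mean,
    which suppresses noise at the cost of extra synthesis work."""
    out = []
    for i in range(0, len(frames) - group_size + 1, group_size):
        group = frames[i:i + group_size]
        out.append(sum(group) / group_size)
    return out

frames = list(range(8))            # pretend pixel means of 8 frames
print(decimate(frames, 4))         # [0, 4]
print(average_groups(frames, 4))   # [1.5, 5.5]
```

Decimation is the cheaper of the two; averaging trades synthesis time for noise suppression in low light, matching the split the units above describe.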
  • the brightness of the shooting content of the first video stream and the second video stream is different, so as to better highlight the target object in the generated target video; the shooting position and exposure time corresponding to the shooting content of the first video stream and the second video stream are the same, so as to better reduce the difference between the two video streams and achieve a better video effect.
  • the electronic device 200 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of FIG. 1, which will not be repeated here to avoid repetition.
  • the electronic device receives and responds to the first input through the receiving module and the acquiring module to acquire the first video stream and the second video stream of the same shooting content collected by the camera module; extracts the target object from the shooting content of the first video stream and removes the target object from the shooting content of the second video stream through the first processing module and the second processing module, respectively, to obtain the first intermediate video stream and the second intermediate video stream; and finally synthesizes the first intermediate video stream and the second intermediate video stream, which have different frame rates, through the synthesis module to generate the target video, which enables quick processing of video special effects during video shooting, realizes personalized video shooting, and improves the user's shooting experience.
  • Fig. 3 is a schematic diagram of the hardware structure of an electronic device implementing various embodiments of the present invention.
  • the electronic device 300 includes but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, a power supply 311, and other components.
  • Those skilled in the art can understand that the structure of the electronic device shown in FIG. 3 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, or combine certain components, or use a different arrangement of components.
  • electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle terminals, wearable devices, pedometers, and the like.
  • the user input unit 307 is configured to receive a first input; the processor 310 is configured to, in response to the first input, acquire a first video stream and a second video stream of the same shooting content collected by the camera module; extract the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object; remove the target object from the shooting content of the second video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream; and generate a target video according to the first intermediate video stream and the second intermediate video stream, where the frame rates of the first intermediate video stream and the second intermediate video stream are different. In this way, personalized video shooting can be realized and the user's shooting experience improved.
  • the radio frequency unit 301 can be used to receive and send signals during the process of sending and receiving information or during a call. Specifically, downlink data from the base station is received and sent to the processor 310 for processing; in addition, uplink data is sent to the base station.
  • the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 301 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides users with wireless broadband Internet access through the network module 302, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 303 may convert the audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output it as sound. Moreover, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic device 300 (e.g., call signal reception sound, message reception sound, etc.).
  • the audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 304 is used to receive audio or video signals.
  • the input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042.
  • the graphics processing unit 3041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frame may be displayed on the display unit 306.
  • the image frame processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or sent via the radio frequency unit 301 or the network module 302.
  • the microphone 3042 can receive sound, and can process such sound into audio data.
  • in the case of a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 301 for output.
  • the electronic device 300 also includes at least one sensor 305, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear.
  • as a type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal/vertical screen switching, related games, magnetometer attitude calibration) and for vibration-recognition related functions (such as pedometer, tapping); the sensor 305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
  • the display unit 306 is used to display information input by the user or information provided to the user.
  • the display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 307 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 307 includes a touch panel 3071 and other input devices 3072.
  • the touch panel 3071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 3071 using a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 3071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 310, and receives and executes commands sent by the processor 310.
  • the touch panel 3071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 307 may also include other input devices 3072.
  • other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 3071 can be overlaid on the display panel 3061.
  • when the touch panel 3071 detects a touch operation on or near it, it transmits the operation to the processor 310 to determine the type of touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of touch event.
  • although in FIG. 3 the touch panel 3071 and the display panel 3061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 3071 and the display panel 3061 can be integrated to implement the input and output functions of the electronic device, which is not specifically limited here.
  • the interface unit 308 is an interface for connecting an external device and the electronic device 300.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (Input/Output, I/O) port, video I/O port, headphone port, etc.
  • the interface unit 308 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the electronic device 300, or can be used to transfer data between the electronic device 300 and an external device.
  • the memory 309 can be used to store software programs and various data.
  • the memory 309 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • the memory 309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the processor 310 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, runs or executes the software programs and/or modules stored in the memory 309 and calls the data stored in the memory 309 to perform the various functions of the electronic device and process data, thereby monitoring the electronic device as a whole.
  • the processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may also not be integrated into the processor 310.
  • the electronic device 300 may also include a power source 311 (such as a battery) for supplying power to the various components.
  • preferably, the power source 311 may be logically connected to the processor 310 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
  • in addition, the electronic device 300 may include some functional modules not shown, which will not be repeated here.
  • the embodiment of the present invention also provides an electronic device, including a processor 310, a memory 309, and a computer program stored in the memory 309 and executable on the processor 310. When the computer program is executed by the processor 310, each process of the above image shooting method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, it will not be repeated here.
  • the embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above image shooting method embodiment is implemented and the same technical effect can be achieved; to avoid repetition, it will not be repeated here.
  • the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk, or optical disk, etc.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the technical solution of the present invention, in essence or in the part contributing to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to make a terminal (which can be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) execute the method described in each embodiment of the present invention.
  • the program can be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • modules, units, and sub-units can be implemented in one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in the present disclosure, or a combination thereof.
  • the technology described in the embodiments of the present disclosure can be implemented by modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure.
  • the software codes can be stored in the memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an image shooting method and an electronic device. The image shooting method includes: receiving a first input; in response to the first input, acquiring a first video stream and a second video stream of the same shooting content collected by a camera module; extracting the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object; removing the target object from the shooting content of the second video stream, and performing image registration compensation on the area where the target object was located, to obtain a second intermediate video stream; and generating a target video according to the first intermediate video stream and the second intermediate video stream, where the frame rates of the first intermediate video stream and the second intermediate video stream are different.

Description

Image shooting method and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202010228177.2, filed in China on March 27, 2020, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of communications technology, and in particular to an image shooting method and an electronic device.
Background
At present, videos shot on an electronic device can be edited and given simple special effects directly on the device, but for more complex video effects this kind of operation is not convenient for the user.
For example, special effects involving video synthesis usually require the original video to be copied first and then adjusted manually step by step in software; the operation is cumbersome and inefficient, occupies considerable computing resources, and is not convenient for quick processing by the user.
Summary
The present invention provides an image shooting method and an electronic device, which can solve the problem in the prior art that processing video special effects on an electronic device is cumbersome and occupies considerable resources.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image shooting method applied to an electronic device, including:
receiving a first input;
in response to the first input, acquiring a first video stream and a second video stream of the same shooting content collected by a camera module;
extracting the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object;
removing the target object from the shooting content of the second video stream, and performing image registration compensation on the area where the target object was located, to obtain a second intermediate video stream;
generating a target video according to the first intermediate video stream and the second intermediate video stream;
where the frame rates of the first intermediate video stream and the second intermediate video stream are different.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
a receiving module, configured to receive a first input;
an acquiring module, configured to, in response to the first input, acquire a first video stream and a second video stream of the same shooting content collected by a camera module;
a first processing module, configured to extract the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object;
a second processing module, configured to remove the target object from the shooting content of the second video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream;
a synthesis module, configured to generate a target video according to the first intermediate video stream and the second intermediate video stream;
where the frame rates of the first intermediate video stream and the second intermediate video stream are different.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above image shooting method.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above image shooting method.
In a fifth aspect, an embodiment of the present invention provides a computer software product stored in a non-volatile storage medium, where the software product is configured to be executed by at least one processor to implement the steps of the above image shooting method.
In a sixth aspect, an embodiment of the present invention provides an electronic device configured to perform the above image shooting method.
In the embodiments of the present invention, by receiving and responding to the first input, the first video stream and the second video stream of the same shooting content collected by the camera module are acquired; the target object is extracted from the shooting content of the first video stream and removed from the shooting content of the second video stream to obtain the first intermediate video stream and the second intermediate video stream; and finally the first intermediate video stream and the second intermediate video stream, which have different frame rates, are synthesized to generate the target video. This enables quick processing of video special effects during shooting, realizes personalized video shooting, and improves the user's shooting experience.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the following briefly introduces the drawings needed in the description of the embodiments. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image shooting method provided by an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present invention.
Detailed description
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, the following provides a detailed description with reference to the drawings and specific embodiments.
Referring to FIG. 1, which shows a schematic flowchart of an image shooting method provided by an embodiment of the present invention: an embodiment of the present invention provides an image shooting method applied to an electronic device with an image shooting function, and the image shooting method of the embodiment of the present invention may include the following steps:
Step 101: receive a first input.
In the embodiment of the present invention, the user can perform a first input operation so that the electronic device receives the first input, thereby controlling the electronic device to start the camera module to perform a shooting action and triggering the electronic device to carry out the processing operations of a preset video shooting mode.
Optionally, the first input is used to trigger a shooting instruction of the preset video shooting mode. The first input may include at least one of a voice input, a body movement input, a touch input acting on the electronic device, and a motion input acting on the electronic device; the body movement input may include, but is not limited to, at least one of a gesture input, a head movement input, a facial movement input, and the like; the touch input acting on the electronic device may include, but is not limited to, a touch input acting on a screen or a housing; and the motion input acting on the electronic device may include, but is not limited to, at least one of a flicking input, a flipping input, and a bending/folding input acting on a flexible screen. For example, the preset video shooting mode may be a fun recording mode or a special-effect video shooting mode.
Step 102: in response to the first input, acquire a first video stream and a second video stream of the same shooting content collected by the camera module.
In the embodiment of the present invention, in response to the first input received in step 101, the electronic device acquires the first video stream and the second video stream collected by the camera module while shooting the same shooting content; here, the shooting content of the first video stream and the second video stream is the same.
Optionally, the first video stream and the second video stream may be collected simultaneously by the camera module, and may be collected by the same camera. Here, the camera includes a first analog-to-digital converter (ADC) and a second ADC. When the camera captures the shooting content, the first ADC and the second ADC respectively convert the same photons into two electrical signals and simultaneously output two video streams, namely the first video stream and the second video stream; the brightness levels of the shooting content of the two video streams converted by the first ADC and the second ADC are different.
Preferably, to better highlight the target object in the generated target video, the brightness of the shooting content of the first video stream and the second video stream is different; for example, the shooting content of the first video stream is brighter and that of the second video stream is darker, that is, the first intermediate video stream including the target object is brighter, and the second intermediate video stream including the other shooting content after the target object is extracted is darker. Here, to ensure the output efficiency and shooting effect of the two video streams, a camera including the first ADC and the second ADC can be used to collect the first video stream and the second video stream simultaneously.
Preferably, the first video stream and the second video stream may be collected by the same camera, and the shooting position and exposure time corresponding to the shooting content of the first video stream and the second video stream are the same, which helps reduce the difference between the two video streams and achieve a better video effect. For example, when a camera including the first ADC and the second ADC is used to collect the first video stream and the second video stream simultaneously, the shooting position and exposure time of the two video streams are the same and only the exposure intensity differs. In this way, when extracting and removing the target object, recognition only needs to be performed in one video stream, and the other video stream can be processed at the recognized position; since the exposure intensities of the two differ, the finally generated target video can still guarantee a good light-dark contrast effect.
In addition, in the embodiment of the present invention, while acquiring the streams, the electronic device may display at least one of the first video stream and the second video stream as a preview picture; for example, when the brightness of the shooting content of the first video stream and the second video stream differs, the electronic device may display the brighter of the two video streams as the preview picture.
It can be understood that, in the embodiment of the present invention, the video frame rate and video resolution of the video stream collected and output by the camera module, that is, the video stream acquired by the electronic device, are not limited, and can be set according to the actual configuration parameters of the camera module of the electronic device and user configuration requirements; for example, by default the video frame rate of the video stream acquired by the electronic device is 120Fps and the video resolution is 1080P.
Step 103: extract the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object.
In the embodiment of the present invention, the electronic device can extract the target object from the shooting content of the first video stream obtained in step 102 to obtain the first intermediate video stream including the target object; here, the first intermediate video stream may include only the target object. Optionally, the electronic device may extract the target object from the first video stream frame by frame.
Step 104: remove the target object from the shooting content of the second video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream.
In the embodiment of the present invention, the electronic device can remove the target object from the shooting content of the second video stream obtained in step 102. Since the second video stream after removal will contain a transparent area lacking the target object, after the removal the electronic device performs image registration compensation on the area where the target object was located to compensate the transparent area, thereby obtaining a second intermediate video stream that does not include the target object. This ensures the integrity of the images in the second intermediate video stream and guarantees that the synthesized target video has a good video effect. Optionally, the electronic device may remove the target object from the second video stream frame by frame.
In some optional embodiments of the present invention, the electronic device may perform object analysis on the shooting content of the video stream, recognize moving objects in the shooting content, and then separate the moving objects or the background objects other than the moving objects, so as to facilitate the extraction (or removal) of the target object. For example, the electronic device may separate the target object according to the shape and color information of the moving object; separation here means that the electronic device performs matting on the recognized target object, and the target object may be a moving object or a background object. In the process of recognizing moving objects in the shooting content, the electronic device may detect the recognized moving objects in order of a preset object priority, for example detecting person moving objects first, followed by animal moving objects and other moving objects; in addition, if the electronic device recognizes multiple moving objects in the shooting content of the video stream at the same time, a separation action is performed on each moving object.
Optionally, the target object may be set to a moving object by default. When there are multiple moving objects, to improve the human-computer interaction experience, after the moving objects in the shooting content have been separated, the user can select among the multiple moving objects automatically recognized by the electronic device according to his or her own needs and determine the main object among the moving objects of the video stream, realizing a secondary editing confirmation of the target object; the main object serves as the final target object, so as to obtain the first intermediate video stream including the main object and the second intermediate video stream not including the main object. For example, to facilitate user selection, the electronic device may mark the recognized and separated moving objects in the preview picture. For instance, when the electronic device displays the brighter of the first video stream and the second video stream as the preview picture, it may replace the background objects in the preview picture with the background objects of the darker video stream while keeping the moving objects of the brighter video stream, so as to highlight the moving objects for user selection. When the electronic device receives a touch input on the preview picture selecting the main object among the multiple moving objects, in response to the touch input, it extracts (or removes) the main object in the video stream and cancels the extraction (or removal) of the objects other than the main object; for example, if the electronic device recognizes three person moving objects, the user can perform a touch input on one of them so that the electronic device separates that person moving object and automatically cancels the separation of the other two.
In the embodiment of the present invention, considering that there is no positional difference between the first video stream and the second video stream, and that steps 103 and 104 perform the same matting action on the target object in the respective streams, in order to simplify the recognition of the target object and improve processing efficiency, target object recognition and extraction (or removal) can be performed on one of the first video stream and the second video stream, and then the other of the two streams can be processed by removing (or extracting) at the same matting position. For example, to facilitate recognition of the target object, when the brightness of the shooting content of the first video stream and the second video stream differs, the electronic device may perform target object recognition and extraction (or removal) on the brighter of the two video streams, and then remove (or extract) at the same matting position in the darker video stream. It can be understood that, in a specific implementation, steps 103 and 104 may be executed simultaneously, or step 103 may be executed before step 104, or step 104 before step 103; the order of the steps illustrated in FIG. 1 is only one implementation shown for intuitive understanding.
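A minimal sketch of the shared-matting idea just described: because the two streams are positionally identical, the target mask found once in the brighter stream can be reused directly on the darker stream. All names, pixel values, and the threshold below are hypothetical; frames are tiny 1-D "images" rather than real video frames.

```python
# Hypothetical sketch: segment once in the bright stream, apply the same
# mask to both streams. Frames are 1-D lists of pixel intensities.

def find_target_mask(bright_frame, threshold):
    """Detect the target in the brighter stream (easier to segment)."""
    return [pixel > threshold for pixel in bright_frame]

def extract(frame, mask):
    """Keep only masked pixels (extracting the target, as in step 103)."""
    return [p if m else 0 for p, m in zip(frame, mask)]

def remove(frame, mask):
    """Blank the masked pixels (removing the target, as in step 104),
    leaving holes to be filled later by registration compensation."""
    return [0 if m else p for p, m in zip(frame, mask)]

bright = [10, 200, 220, 15]   # brighter stream frame
dark   = [2, 60, 70, 3]       # darker stream frame, same scene position
mask = find_target_mask(bright, 100)
print(extract(bright, mask))  # [0, 200, 220, 0] -> first intermediate stream
print(remove(dark, mask))     # [2, 0, 0, 3]     -> second, before compensation
```

Running the detector on only one stream is what buys the efficiency gain described above: the second stream is processed purely by mask lookup.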
Step 105: generate the target video according to the first intermediate video stream and the second intermediate video stream.
In the embodiment of the present invention, the frame rates of the first intermediate video stream and the second intermediate video stream are different; here, the electronic device can use a registration algorithm to synthesize the first intermediate video stream and the second intermediate video stream, which have different frame rates, to generate the target video. In this way, quick processing of video special effects can be realized during video shooting, personalized video shooting can be achieved, and the user's shooting experience can be improved.
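One way step 105 could be realized (an assumption for illustration, since the patent leaves the registration algorithm open) is to pair each frame of the higher-rate intermediate stream with the temporally nearest frame of the lower-rate stream before overlaying the target:

```python
# Sketch of mixed-frame-rate synthesis: the foreground (target object)
# stream runs at a higher frame rate than the background stream, so each
# output frame pairs a foreground frame with the background frame that
# covers the same timestamp. Integer arithmetic avoids float rounding
# at frame boundaries.

def compose(fg_frames, fg_fps, bg_frames, bg_fps):
    out = []
    for i, fg in enumerate(fg_frames):
        j = min(i * bg_fps // fg_fps, len(bg_frames) - 1)
        out.append((fg, bg_frames[j]))  # overlay fg onto bg_frames[j]
    return out

fg = ["F0", "F1", "F2", "F3", "F4", "F5"]  # 120 fps target-object stream
bg = ["B0", "B1"]                          # 30 fps background stream
print(compose(fg, 120, bg, 30))
# [('F0','B0'), ('F1','B0'), ('F2','B0'), ('F3','B0'), ('F4','B1'), ('F5','B1')]
```

Holding each background frame for four foreground frames is what produces the slow-background/fast-subject contrast the method is after; swapping which stream carries the higher rate inverts the effect.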
In the embodiment of the present invention, by receiving and responding to the first input, the first video stream and the second video stream of the same shooting content collected by the camera module are acquired; the target object is extracted from the shooting content of the first video stream and removed from the shooting content of the second video stream to obtain the first intermediate video stream and the second intermediate video stream; and finally the two intermediate video streams, which have different frame rates, are synthesized to generate the target video, enabling quick processing of video special effects during shooting, personalized video shooting, and an improved user shooting experience.
Optionally, in some embodiments of the present invention, in step 104, performing image registration compensation on the area where the target object is located may include: in the target video frame of the second video stream from which the target object has been removed, performing image registration compensation on the area where the target object was located using the adjacent frame images of the target video frame. This can better ensure the integrity of the images in the second intermediate video stream and guarantee a good video effect for the synthesized target video. For example, the electronic device can acquire the previous frame image and the next frame image adjacent to the target video frame, and perform image registration compensation on the area where the target object was located according to the regions of the previous and next frame images corresponding to that area in the target video frame.
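A simplified sketch of the adjacent-frame compensation idea: pixels vacated by the removed target are re-filled from the previous and next frames. Real registration compensation would first align (warp) the neighbouring frames; here they are assumed already aligned, and the averaging fill rule is a hypothetical choice for illustration.

```python
# Toy adjacent-frame compensation: fill removed (masked) pixels with the
# average of the previous and next frames, assuming the frames are
# already registered/aligned. Frames are 1-D lists of intensities.

def compensate(prev_frame, hole_frame, next_frame, mask):
    """Fill masked pixels from the temporal neighbours."""
    out = []
    for p, h, n, m in zip(prev_frame, hole_frame, next_frame, mask):
        out.append((p + n) / 2 if m else h)
    return out

prev_f = [10, 12, 14, 16]
cur_f  = [10, 0, 0, 16]              # target removed at positions 1 and 2
next_f = [10, 14, 18, 16]
mask   = [False, True, True, False]  # where the target used to be
print(compensate(prev_f, cur_f, next_f, mask))  # [10, 13.0, 16.0, 16]
```

The background revealed behind a moving subject is usually visible in the neighbouring frames, which is why this style of fill leaves no transparent hole in the second intermediate stream.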
In the embodiment of the present invention, to obtain a first intermediate video stream and a second intermediate video stream with different frame rates, the electronic device may first determine which of the first video stream and the second video stream requires frame rate adjustment, for example based on requirement information about the display speed effects of the target object and of the objects other than the target object in the target video to be generated; in one example, to simplify user operations, the first input may include the display speed effect requirement information. Then, if the first video stream requires frame rate adjustment, in step 103 the frame rate adjustment is performed on the first video stream before the first intermediate video stream of the target object is obtained; if the second video stream requires frame rate adjustment, in step 104 the frame rate adjustment is performed on the second video stream before the second intermediate video stream is obtained. Alternatively, the frame rate adjustment may be performed on the first intermediate video stream and the second intermediate video stream after they have been obtained.
For example, in some embodiments of the present invention, before step 105 of generating the target video according to the first intermediate video stream and the second intermediate video stream, the method may further include: performing frame rate adjustment processing on the video stream to be processed to obtain a processed video stream. In this way, a first intermediate video stream and a second intermediate video stream with different frame rates are obtained, preparing for the user to obtain a personalized target video and improving the shooting experience. Here, the electronic device may perform the frame rate adjustment before extracting (or removing) the target object from the video frames. In that case the video stream to be processed is the first video stream, and step 103 of extracting the target object from the shooting content of the first video stream to obtain the first intermediate video stream of the target object specifically includes: extracting the target object from the shooting content of the processed video stream to obtain the first intermediate video stream of the target object; or the video stream to be processed is the second video stream, and step 104 of removing the target object from the shooting content of the second video stream and performing image registration compensation on the area where the target object is located to obtain the second intermediate video stream specifically includes: removing the target object from the shooting content of the processed video stream and performing image registration compensation on the area where the target object is located to obtain the second intermediate video stream. Alternatively, the electronic device may perform the frame rate adjustment after extracting (or removing) the target object from the video frames; in that case the video stream to be processed is at least one of the first intermediate video stream and the second intermediate video stream, and the target video is generated according to the processed video stream.
Preferably, in some embodiments of the present invention, performing frame rate adjustment processing on the video stream to be processed to obtain the processed video stream may include one of the following:
Method 1: when the ambient brightness value at the time the camera module collects images is higher than a first preset brightness value and the gain value is lower than a first preset gain value, for the video stream to be processed, extract one frame to be processed from every adjacent first preset number of frames, and synthesize the extracted frames to obtain the processed video stream. In Method 1, considering that when the ambient brightness value is higher than the preset brightness value and the gain value is lower than the preset gain value, the image noise in the video stream is relatively low and the image quality is good, the electronic device can use frame extraction and synthesis to reduce the frame rate of the video stream to be processed, which guarantees the image quality of the video stream, saves synthesis time, and reduces power consumption. Here, the first preset brightness value and the first preset gain value can be set according to historical experimental data or operating experience, or by the user; for example, the first preset gain value may be a 2x gain value. The first preset number of frames can be determined according to the frame rate adjustment requirement; for example, if the frame rate of the video stream to be processed is 120Fps and needs to be reduced to 30Fps, the first preset number of frames is 4.
Method 2: when the ambient brightness value at the time the camera module collects images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, for the video stream to be processed, average every adjacent second preset number of frames into one image, and synthesize the averaged images to obtain the processed video stream. In Method 2, considering that when the ambient brightness value is lower than the preset brightness value and the gain value is higher than the preset gain value, the image noise in the video stream is relatively high and the image quality is poor, the electronic device can use average synthesis to reduce the frame rate of the video stream to be processed, which guarantees the image quality in low light. Here, the second preset brightness value and the second preset gain value can be set according to historical experimental data or operating experience, or by the user; for example, the second preset gain value may be a 2x gain value. The second preset number of frames can be determined according to the frame rate adjustment requirement; for example, if the frame rate of the video stream to be processed is 120Fps and needs to be reduced to 30Fps, the second preset number of frames is 4. It can be understood that the second preset brightness value may be the same as or different from the first preset brightness value, and likewise the second preset gain value may be the same as or different from the first preset gain value, which can be set according to actual design requirements.
Method 3: perform frame reduction processing on the video stream to be processed according to a preset correspondence between moving speed and frame rate and the moving speed of the moving object in the video stream to be processed, to obtain the processed video stream. In Method 3, considering the actual scene captured in the video stream to be processed, different frame rates can be selected by judging the moving speed of the moving object, so that the frame rate adjustment operation is adapted to the actual scene of the video stream and the image effect after frame reduction is ensured. In addition, if the moving speed of the moving object in the video stream to be processed is greater than a preset value, to avoid smearing and an excessively fast-moving foreground, the frame rate of the video stream including the moving object can be adjusted to be lower than the original frame rate of the video stream; for example, if the frame rate of the video stream to be processed is 120Fps and the moving speed of the moving object is greater than the preset value, the electronic device can adjust the frame rate of the video stream to be processed to less than 120Fps, such as 60Fps or 40Fps.
Method 4: for the video stream to be processed, at every interval of a third preset number of frames, perform frame insertion processing based on adjacent images to obtain the processed video stream. In Method 4, the electronic device can also use frame insertion to increase the frame rate of the video stream to be processed.
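Methods 3 and 4 can be sketched as follows. The speed thresholds in the correspondence table and the interval-based averaged insertion are assumptions for illustration; the description above only requires that faster motion maps to a frame rate below the original (e.g. below 120Fps) and that inserted frames are derived from adjacent images.

```python
# Toy sketches of Method 3 (speed-based frame reduction) and Method 4
# (frame insertion). SPEED_TO_FPS is an assumed correspondence table.

SPEED_TO_FPS = [(50, 120), (100, 60), (float("inf"), 40)]  # assumed table

def pick_fps(speed):
    """Method 3: choose a reduced frame rate from the object's speed."""
    for limit, fps in SPEED_TO_FPS:
        if speed <= limit:
            return fps

def interpolate(frames, every):
    """Method 4: after every `every` frames, insert one frame averaged
    from its two neighbours, raising the frame rate."""
    out = []
    for i, f in enumerate(frames):
        out.append(f)
        if (i + 1) % every == 0 and i + 1 < len(frames):
            out.append((f + frames[i + 1]) / 2)
    return out

print(pick_fps(120))                # 40 (fast object -> lower frame rate)
print(interpolate([0, 10, 20], 1))  # [0, 5.0, 10, 15.0, 20]
```

Real interpolation would use motion-compensated blending rather than a plain average, but the bookkeeping (insert after every interval of frames, derive from neighbours) is the same.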
The image shooting method provided by the embodiment of the present invention receives and responds to the first input to acquire the first video stream and the second video stream of the same shooting content collected by the camera module, extracts the target object from the shooting content of the first video stream and removes it from the shooting content of the second video stream to obtain the first intermediate video stream and the second intermediate video stream, and finally synthesizes the two intermediate video streams, which have different frame rates, to generate the target video, enabling quick processing of video special effects during shooting, personalized video shooting, and an improved user shooting experience.
Based on the above method, an embodiment of the present invention provides an electronic device for implementing the above method.
Referring to FIG. 2, which shows a schematic structural diagram of an electronic device provided by an embodiment of the present invention: an embodiment of the present invention provides an electronic device 200, which may include a receiving module 210, an acquiring module 220, a first processing module 230, a second processing module 240, and a synthesis module 250.
The receiving module 210 is configured to receive a first input;
the acquiring module 220 is configured to, in response to the first input, acquire a first video stream and a second video stream of the same shooting content collected by the camera module;
the first processing module 230 is configured to extract the target object from the shooting content of the first video stream to obtain a first intermediate video stream of the target object;
the second processing module 240 is configured to remove the target object from the shooting content of the second video stream, and perform image registration compensation on the area where the target object was located, to obtain a second intermediate video stream;
the synthesis module 250 is configured to generate a target video according to the first intermediate video stream and the second intermediate video stream;
where the frame rates of the first intermediate video stream and the second intermediate video stream are different.
Optionally, in some embodiments of the present invention, the second processing module 240 may include a registration compensation unit.
The registration compensation unit is configured to, in the target video frame of the second video stream from which the target object has been removed, perform image registration compensation on the area where the target object was located using the adjacent frame images of the target video frame.
Optionally, in some embodiments of the present invention, the electronic device 200 may further include a third processing module. The video stream to be processed may be the first video stream, in which case the first processing module 230 may include a first processing unit; or the video stream to be processed may be the second video stream, in which case the second processing module 240 may include a second processing unit; or the video stream to be processed is at least one of the first intermediate video stream and the second intermediate video stream, in which case the target video is generated based on the processed video stream.
The third processing module is configured to perform frame rate adjustment processing on the video stream to be processed to obtain a processed video stream;
the first processing unit is configured to extract the target object from the shooting content of the processed video stream to obtain the first intermediate video stream of the target object;
the second processing unit is configured to remove the target object from the shooting content of the processed video stream, and perform image registration compensation on the area where the target object was located, to obtain the second intermediate video stream.
Optionally, in some embodiments of the present invention, the third processing module may include one of the following: a third processing unit, a fourth processing unit, a fifth processing unit, and a sixth processing unit.
The third processing unit is configured to, when the ambient brightness value at the time the camera module collects images is higher than a first preset brightness value and the gain value is lower than a first preset gain value, extract one frame to be processed from every adjacent first preset number of frames of the video stream to be processed, and synthesize the extracted frames to obtain the processed video stream;
the fourth processing unit is configured to, when the ambient brightness value at the time the camera module collects images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, average every adjacent second preset number of frames of the video stream to be processed into one image, and synthesize the averaged images to obtain the processed video stream;
the fifth processing unit is configured to perform frame reduction processing on the video stream to be processed according to a preset correspondence between moving speed and frame rate and the moving speed of the moving object in the video stream to be processed, to obtain the processed video stream;
the sixth processing unit is configured to, at every interval of a third preset number of frames in the video stream to be processed, perform frame insertion processing based on adjacent images to obtain the processed video stream.
Preferably, in some embodiments of the present invention, the brightness of the shooting content of the first video stream and the second video stream is different, so as to better highlight the target object in the generated target video; the shooting position and exposure time corresponding to the shooting content of the first video stream and the second video stream are the same, so as to better reduce the difference between the two video streams and achieve a better video effect.
The electronic device 200 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of FIG. 1, which will not be repeated here to avoid repetition.
In the electronic device provided by the embodiment of the present invention, the receiving module and the acquiring module receive and respond to the first input to acquire the first video stream and the second video stream of the same shooting content collected by the camera module; the first processing module and the second processing module respectively extract the target object from the shooting content of the first video stream and remove it from the shooting content of the second video stream to obtain the first intermediate video stream and the second intermediate video stream; and finally the synthesis module synthesizes the two intermediate video streams, which have different frame rates, to generate the target video, enabling quick processing of video special effects during shooting, personalized video shooting, and an improved user shooting experience.
图3为实现本发明各个实施例的一种电子设备的硬件结构示意图。
该电子设备300包括但不限于:射频单元301、网络模块302、音频输出单元303、输入单元304、传感器305、显示单元306、用户输入单元307、接口单元308、存储器309、处理器310、以及电源311等部件。本领域技术人员可以理解,图3中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本发明实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,用户输入单元307,用于接收第一输入;处理器310,用于响应于第一输入,获取相机模组采集的同一拍摄内容的第一视频流和第二视频流;抠取第一视频流的拍摄内容中的目标对象,获得目标对象的第一中间视频流;抠除第二视频流的拍摄内容中的目标对象,且对目标对象所在区域进行图像配准补偿,获得第二中间视频流;根据第一中间视频流和第二中间视频流,生成目标视频;其中,第一中间视频流和第二中间视频流的帧率不同。这样,能够实现个性化视频拍摄,提升用户拍摄体验。
It should be understood that, in the embodiments of the present invention, the radio frequency unit 301 may be used to receive and send signals while sending and receiving information or during a call. Specifically, it receives downlink data from a base station and delivers it to the processor 310 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides the user with wireless broadband Internet access through the network module 302, for example, helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302, or stored in the memory 309, into an audio signal and output it as sound. Moreover, the audio output unit 303 may also provide audio output related to a specific function performed by the electronic device 300 (for example, a call signal reception sound or a message reception sound). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is configured to receive audio or video signals. The input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042. The graphics processing unit 3041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processing unit 3041 may be stored in the memory 309 (or another storage medium) or sent via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 301 for output.
The electronic device 300 further includes at least one sensor 305, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here again.
The display unit 306 is configured to display information input by the user or information provided to the user. The display unit 306 may include a display panel 3061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 307 may be configured to receive input numeric or character information and to generate key signal input related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touchscreen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 3071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 3071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 3071, the user input unit 307 may also include other input devices 3072. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 3071 may cover the display panel 3061. When the touch panel 3071 detects a touch operation on or near it, it transmits the operation to the processor 310 to determine the type of the touch event; the processor 310 then provides corresponding visual output on the display panel 3061 according to the type of the touch event. Although in FIG. 3 the touch panel 3071 and the display panel 3061 are implemented as two separate components to realize the input and output functions of the electronic device, in some embodiments the touch panel 3071 and the display panel 3061 may be integrated to realize the input and output functions of the electronic device, which is not specifically limited here.
The interface unit 308 is an interface for connecting an external device to the electronic device 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the electronic device 300, or may be used to transmit data between the electronic device 300 and an external device.
The memory 309 may be used to store software programs and various data. The memory 309 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 310 is the control center of the electronic device. It connects all parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 309 and invoking the data stored in the memory 309, thereby monitoring the electronic device as a whole. The processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 310.
The electronic device 300 may further include a power supply 311 (such as a battery) for supplying power to each component. Preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.
In addition, the electronic device 300 includes some functional modules that are not shown, which are not described here again.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 310, a memory 309, and a computer program stored in the memory 309 and executable on the processor 310. When executed by the processor 310, the computer program implements each process of the foregoing image shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the foregoing image shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, as used herein, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "including a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
A person of ordinary skill in the art can understand that all or part of the processes for implementing the methods of the foregoing embodiments may be completed by controlling relevant hardware through a computer program; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the foregoing methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
It can be understood that the embodiments described in the embodiments of the present disclosure may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the modules, units, and sub-units may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in the present disclosure, or combinations thereof.
For software implementation, the techniques described in the embodiments of the present disclosure may be implemented by modules (for example, procedures and functions) that perform the functions described in the embodiments of the present disclosure. Software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the foregoing specific implementations. The foregoing specific implementations are merely illustrative rather than restrictive; under the inspiration of the present invention, a person of ordinary skill in the art may derive many other forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (14)

  1. An image shooting method, applied to an electronic device, comprising:
    receiving a first input;
    in response to the first input, acquiring a first video stream and a second video stream of the same shot content captured by a camera module;
    extracting a target object from the shot content of the first video stream to obtain a first intermediate video stream of the target object;
    removing the target object from the shot content of the second video stream and performing image registration compensation on a region where the target object is located, to obtain a second intermediate video stream; and
    generating a target video according to the first intermediate video stream and the second intermediate video stream;
    wherein the first intermediate video stream and the second intermediate video stream have different frame rates.
  2. The method according to claim 1, wherein the performing image registration compensation on the region where the target object is located comprises:
    in a target video frame of the second video stream from which the target object has been removed, performing image registration compensation on the region where the target object is located by using frame images adjacent to the target video frame.
  3. The method according to claim 1, wherein before the generating a target video according to the first intermediate video stream and the second intermediate video stream, the method further comprises:
    performing frame rate adjustment processing on a to-be-processed video stream to obtain a processed video stream;
    wherein the to-be-processed video stream is the first video stream, and the extracting a target object from the shot content of the first video stream to obtain a first intermediate video stream of the target object specifically comprises:
    extracting the target object from the shot content of the processed video stream to obtain the first intermediate video stream of the target object;
    or, the to-be-processed video stream is the second video stream, and the removing the target object from the shot content of the second video stream and performing image registration compensation on the region where the target object is located, to obtain a second intermediate video stream specifically comprises:
    removing the target object from the shot content of the processed video stream and performing image registration compensation on the region where the target object is located, to obtain the second intermediate video stream;
    or, the to-be-processed video stream is at least one of the first intermediate video stream and the second intermediate video stream, and the target video is generated according to the processed video stream.
  4. The method according to claim 3, wherein the performing frame rate adjustment processing on the to-be-processed video stream to obtain a processed video stream comprises one of the following:
    when an ambient brightness value at the time the camera module captures images is higher than a first preset brightness value and a gain value is lower than a first preset gain value, extracting one to-be-processed frame from every first preset number of adjacent frames of the to-be-processed video stream, and synthesizing the to-be-processed frames to obtain the processed video stream;
    when the ambient brightness value at the time the camera module captures images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, averaging every second preset number of adjacent frames of the to-be-processed video stream into one image, and synthesizing the averaged images to obtain the processed video stream;
    performing frame reduction processing on the to-be-processed video stream according to a preset correspondence between movement speed and frame rate and the movement speed of a moving object in the to-be-processed video stream, to obtain the processed video stream; and
    performing frame-increase processing on the to-be-processed video stream, generating additional frames from adjacent images at intervals of a third preset number of frames, to obtain the processed video stream.
  5. The method according to any one of claims 1 to 4, wherein the brightness of the shot content of the first video stream differs from that of the second video stream, and the shooting positions and exposure times corresponding to the shot content of the first video stream and the second video stream are the same.
  6. An electronic device, comprising:
    a receiving module, configured to receive a first input;
    an acquiring module, configured to acquire, in response to the first input, a first video stream and a second video stream of the same shot content captured by a camera module;
    a first processing module, configured to extract a target object from the shot content of the first video stream to obtain a first intermediate video stream of the target object;
    a second processing module, configured to remove the target object from the shot content of the second video stream and perform image registration compensation on a region where the target object is located, to obtain a second intermediate video stream; and
    a synthesis module, configured to generate a target video according to the first intermediate video stream and the second intermediate video stream;
    wherein the first intermediate video stream and the second intermediate video stream have different frame rates.
  7. The electronic device according to claim 6, wherein the second processing module comprises:
    a registration compensation unit, configured to perform, in a target video frame of the second video stream from which the target object has been removed, image registration compensation on the region where the target object is located by using frame images adjacent to the target video frame.
  8. The electronic device according to claim 6, further comprising:
    a third processing module, configured to perform frame rate adjustment processing on a to-be-processed video stream to obtain a processed video stream;
    wherein the to-be-processed video stream is the first video stream, and the first processing module specifically comprises:
    a first processing unit, configured to extract the target object from the shot content of the processed video stream to obtain the first intermediate video stream of the target object;
    or, the to-be-processed video stream is the second video stream, and the second processing module specifically comprises:
    a second processing unit, configured to remove the target object from the shot content of the processed video stream and perform image registration compensation on the region where the target object is located, to obtain the second intermediate video stream;
    or, the to-be-processed video stream is at least one of the first intermediate video stream and the second intermediate video stream, and the target video is generated according to the processed video stream.
  9. The electronic device according to claim 8, wherein the third processing module comprises one of the following:
    a third processing unit, configured to: when an ambient brightness value at the time the camera module captures images is higher than a first preset brightness value and a gain value is lower than a first preset gain value, extract one to-be-processed frame from every first preset number of adjacent frames of the to-be-processed video stream, and synthesize the to-be-processed frames to obtain the processed video stream;
    a fourth processing unit, configured to: when the ambient brightness value at the time the camera module captures images is lower than a second preset brightness value and the gain value is higher than a second preset gain value, average every second preset number of adjacent frames of the to-be-processed video stream into one image, and synthesize the averaged images to obtain the processed video stream;
    a fifth processing unit, configured to perform frame reduction processing on the to-be-processed video stream according to a preset correspondence between movement speed and frame rate and the movement speed of a moving object in the to-be-processed video stream, to obtain the processed video stream; and
    a sixth processing unit, configured to perform frame-increase processing on the to-be-processed video stream, generating additional frames from adjacent images at intervals of a third preset number of frames, to obtain the processed video stream.
  10. The electronic device according to any one of claims 6 to 9, wherein the brightness of the shot content of the first video stream differs from that of the second video stream, and the shooting positions and exposure times corresponding to the shot content of the first video stream and the second video stream are the same.
  11. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image shooting method according to any one of claims 1 to 5.
  12. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the image shooting method according to any one of claims 1 to 5.
  13. A computer software product, stored in a non-volatile storage medium, wherein the software product is configured to be executed by at least one processor to implement the steps of the image shooting method according to any one of claims 1 to 5.
  14. An electronic device, configured to perform the image shooting method according to any one of claims 1 to 5.
PCT/CN2021/081982 2020-03-27 2021-03-22 Image shooting method and electronic device WO2021190428A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227036922A KR20220158101A (ko) 2020-03-27 2021-03-22 Image shooting method and electronic equipment
JP2022558320A JP7495517B2 (ja) 2020-03-27 2021-03-22 Image shooting method and electronic device
EP21775913.3A EP4131931A4 (en) 2020-03-27 2021-03-22 IMAGE CAPTURE METHOD AND ELECTRONIC DEVICE
US17/949,486 US20230013753A1 (en) 2020-03-27 2022-09-21 Image shooting method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010228177.2A CN111405199B (zh) 2020-03-27 2020-03-27 Image shooting method and electronic device
CN202010228177.2 2020-03-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/949,486 Continuation US20230013753A1 (en) 2020-03-27 2022-09-21 Image shooting method and electronic device

Publications (1)

Publication Number Publication Date
WO2021190428A1 true WO2021190428A1 (zh) 2021-09-30

Family

ID=71436710

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081982 WO2021190428A1 (zh) 2020-03-27 2021-03-22 Image shooting method and electronic device

Country Status (6)

Country Link
US (1) US20230013753A1 (zh)
EP (1) EP4131931A4 (zh)
JP (1) JP7495517B2 (zh)
KR (1) KR20220158101A (zh)
CN (1) CN111405199B (zh)
WO (1) WO2021190428A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114554114A (zh) * 2022-04-24 2022-05-27 浙江华眼视觉科技有限公司 Method and device for retaining pickup evidence for an express-parcel code recognition machine

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN111405199B (zh) * 2020-03-27 2022-11-01 维沃移动通信(杭州)有限公司 Image shooting method and electronic device
CN112261218B (zh) * 2020-10-21 2022-09-20 维沃移动通信有限公司 Video control method, video control apparatus, electronic device, and readable storage medium
CN113810725A (zh) * 2021-10-12 2021-12-17 深圳市华胜软件技术有限公司 Video processing method, apparatus, storage medium, and video communication terminal
CN114125297B (zh) * 2021-11-26 2024-04-09 维沃移动通信有限公司 Video shooting method, apparatus, electronic device, and storage medium
CN116156250A (zh) * 2023-02-21 2023-05-23 维沃移动通信有限公司 Video processing method and apparatus
CN116256908B (zh) * 2023-05-10 2023-07-21 深圳市瀚达美电子有限公司 Calibration method based on a miniLED backlight module

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1756312A (zh) * 2004-09-30 2006-04-05 中国科学院计算技术研究所 Video synthesis method with a moving foreground
US20120314104A1 (en) * 2011-06-08 2012-12-13 Canon Kabushiki Kaisha Image processing method, image processing device, and recording medium
CN105554361A (zh) * 2014-10-28 2016-05-04 中兴通讯股份有限公司 Processing method and system for dynamic video shooting
CN106791416A (zh) * 2016-12-29 2017-05-31 努比亚技术有限公司 Shooting method and terminal with background blurring
CN107018331A (zh) * 2017-04-19 2017-08-04 努比亚技术有限公司 Dual-camera-based imaging method and mobile terminal
CN111405199A (zh) * 2020-03-27 2020-07-10 维沃移动通信(杭州)有限公司 Image shooting method and electronic device

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
JP3809788B2 (ja) 2001-10-19 2006-08-16 日本電気株式会社 Video transmitting/receiving apparatus and video transmitting/receiving program
KR100586883B1 (ko) * 2004-03-04 2006-06-08 삼성전자주식회사 Video coding method, pre-decoding method, and video decoding method for a video streaming service, apparatus therefor, and image filtering method
EP2131583A1 (en) * 2007-03-29 2009-12-09 Sharp Kabushiki Kaisha Video transmitter, video receiver, video recorder, video reproducer, and video display
JP2009049979A (ja) 2007-07-20 2009-03-05 Fujifilm Corp Image processing apparatus, image processing method, image processing system, and program
JP2009049946A (ja) * 2007-08-23 2009-03-05 Sharp Corp Imaging apparatus and door intercom system
JP5028225B2 (ja) * 2007-11-06 2012-09-19 オリンパスイメージング株式会社 Image synthesis apparatus, image synthesis method, and program
JP5604916B2 (ja) 2010-03-12 2014-10-15 カシオ計算機株式会社 Image processing apparatus and program
JP2012137511A (ja) * 2010-12-24 2012-07-19 Kyocera Corp Camera device, mobile terminal, frame rate control program, and frame rate control method
JP5704957B2 (ja) * 2011-02-22 2015-04-22 キヤノン株式会社 Moving-image shooting apparatus and control method therefor
CN102724503B (zh) 2012-06-13 2015-04-29 广东威创视讯科技股份有限公司 Video compression method and system
JP2014030095A (ja) 2012-07-31 2014-02-13 Xacti Corp Electronic camera
CN104754200A (zh) * 2013-12-31 2015-07-01 联芯科技有限公司 Method and system for adjusting automatic exposure
JP6516302B2 (ja) * 2014-09-22 2019-05-22 Necディスプレイソリューションズ株式会社 Image display device and light source dimming method
EP3229457B1 (en) * 2014-12-03 2021-06-09 Nikon Corporation Image pickup device, electronic apparatus, and program
CN104954689B (zh) * 2015-06-30 2018-06-26 努比亚技术有限公司 Method for obtaining a photograph using dual cameras, and shooting apparatus
CN105847636B (zh) * 2016-06-08 2018-10-16 维沃移动通信有限公司 Video shooting method and mobile terminal
CN106131449B (zh) * 2016-07-27 2019-11-29 维沃移动通信有限公司 Photographing method and mobile terminal
CN107592488A (zh) * 2017-09-30 2018-01-16 联想(北京)有限公司 Video data processing method and electronic device
JP6653845B1 (ja) * 2018-11-08 2020-02-26 オリンパス株式会社 Imaging apparatus, imaging method, and program
CN109361879A (zh) * 2018-11-29 2019-02-19 北京小米移动软件有限公司 Image processing method and apparatus
CN109819161A (zh) * 2019-01-21 2019-05-28 北京中竞鸽体育文化发展有限公司 Frame rate adjustment method, apparatus, terminal, and readable storage medium
CN110675420B (zh) * 2019-08-22 2023-03-24 华为技术有限公司 Image processing method and electronic device


Non-Patent Citations (1)

Title
See also references of EP4131931A4 *


Also Published As

Publication number Publication date
KR20220158101A (ko) 2022-11-29
JP7495517B2 (ja) 2024-06-04
CN111405199B (zh) 2022-11-01
US20230013753A1 (en) 2023-01-19
CN111405199A (zh) 2020-07-10
JP2023518895A (ja) 2023-05-08
EP4131931A1 (en) 2023-02-08
EP4131931A4 (en) 2023-08-16

Similar Documents

Publication Publication Date Title
WO2021190428A1 (zh) Image shooting method and electronic device
CN110740259B (zh) Video processing method and electronic device
CN107995429B (zh) Shooting method and mobile terminal
WO2021036536A1 (zh) Video shooting method and electronic device
WO2021104197A1 (zh) Object tracking method and electronic device
WO2021036542A1 (zh) Screen recording method and mobile terminal
US11451706B2 (en) Photographing method and mobile terminal
CN110365907B (zh) Photographing method, apparatus, and electronic device
CN108989672B (zh) Shooting method and mobile terminal
CN108307109B (zh) High-dynamic-range image preview method and terminal device
CN109743504B (zh) Auxiliary photographing method, mobile terminal, and storage medium
CN109361867B (zh) Filter processing method and mobile terminal
CN110602401A (zh) Photographing method and terminal
WO2021104227A1 (zh) Photographing method and electronic device
CN107592459A (zh) Photographing method and mobile terminal
CN110062171B (zh) Shooting method and terminal
CN109102555B (zh) Image editing method and terminal
JP7467667B2 (ja) Detection result output method, electronic device, and medium
WO2021036623A1 (zh) Display method and electronic device
CN109618218B (zh) Video processing method and mobile terminal
CN108881721B (zh) Display method and terminal
CN111464746B (zh) Photographing method and electronic device
CN107959755B (zh) Photographing method, mobile terminal, and computer-readable storage medium
CN110908517B (zh) Image editing method, apparatus, electronic device, and medium
CN107817963B (zh) Image display method, mobile terminal, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 21775913; country of ref document: EP; kind code of ref document: A1)
ENP Entry into the national phase (ref document number: 2022558320; country of ref document: JP; kind code of ref document: A)
ENP Entry into the national phase (ref document number: 20227036922; country of ref document: KR; kind code of ref document: A)
WWE Wipo information: entry into national phase (ref document number: 2021775913; country of ref document: EP)
ENP Entry into the national phase (ref document number: 2021775913; country of ref document: EP; effective date: 20221027)
NENP Non-entry into the national phase (ref country code: DE)