CN110996030A - Video generation method and device, storage medium and terminal equipment - Google Patents


Info

Publication number
CN110996030A
Authority
CN
China
Prior art keywords
image
video
shooting
shot
auxiliary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911325071.8A
Other languages
Chinese (zh)
Inventor
黄树伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JRD Communication Shenzhen Ltd
TCL Mobile Communication Technology Ningbo Ltd
Original Assignee
JRD Communication Shenzhen Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JRD Communication Shenzhen Ltd
Priority to CN201911325071.8A
Publication of CN110996030A
Legal status: Pending

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 5/00: Details of television systems
                    • H04N 5/76: Television signal recording
                        • H04N 5/91: Television signal processing therefor
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
                        • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The video generation method obtains a scanned image of an auxiliary shooting object and, while video of the object to be shot is being captured, generates the scanned image of the auxiliary shooting object on each captured image in the video, thereby obtaining a composite video of the auxiliary shooting object and the object to be shot. The mobile terminal therefore does not need to record separately the auxiliary shooting object that explains the object to be shot, which shortens the shooting time and reduces the power consumption of the terminal.

Description

Video generation method and device, storage medium and terminal equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a video generation method and apparatus, a storage medium, and a terminal device.
Background
When recording video with a mobile terminal, a user often needs to shoot other objects related to a selected object in order to convey information that explains the selected object and thereby complete the video of the selected object.
In the prior art, when a terminal user records video of a selected object together with the other objects that explain it, the selected object and the other objects must be recorded separately, which lengthens the shooting time and increases the power consumption of the terminal.
Disclosure of Invention
The application provides a video generation method, a video generation device, a storage medium and a terminal device, which effectively solve the problems of long shooting time and high terminal power consumption caused by separately recording full videos of the selected object and of the objects that explain it.
In order to solve the above problem, an embodiment of the present application provides a video generation method, where the video generation method includes:
carrying out image scanning on an auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
carrying out video shooting on an object to be shot;
and in the video shooting process, generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video.
In the video generating method provided by the present application, the step of generating the composite video of the auxiliary photographic object and the object to be photographed according to the at least one scanned image and the photographic video specifically includes:
extracting a partial image containing the auxiliary shooting object from each of the at least one scanning image to obtain at least one partial image;
determining a photographic effect score of the at least one partial image;
selecting the local image with the highest shooting effect score from the at least one local image as a target local image;
and generating the target local image on each shot image in the shot video to obtain a composite video of the auxiliary shot object and the object to be shot.
In the video generation method provided by the present application, the step of determining the shooting effect score of the at least one local image specifically includes:
determining the definition and the object integrity of each local image in the at least one local image;
and calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
In the video generation method provided by the present application, the step of generating the target partial image on each captured image in the captured video specifically includes:
determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area;
and generating the target local image in the image generation area.
In the video generating method provided by the present application, the step of generating the target local image on each captured image in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured specifically includes:
generating the target local image on each shooting image in the shooting video to obtain a plurality of composite images;
acquiring audio information corresponding to the plurality of composite images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
In order to solve the above problem, an embodiment of the present application further provides a video generating apparatus, including:
the scanning module is used for scanning images of the auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
the shooting module is used for carrying out video shooting on an object to be shot;
and the generating module is used for generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video in the video shooting process.
In the video generating apparatus provided by the present application, the generating module specifically includes:
an extraction unit, configured to extract a partial image including the auxiliary photographic object from each of the at least one scanned image, so as to obtain at least one partial image;
a determination unit, configured to determine a capture effect score of the at least one local image;
a selection unit, configured to select the local image with the highest shooting effect score from the at least one local image as a target local image;
a generating unit, configured to generate the target local image on each captured image in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured.
In the video generating apparatus provided by the present application, the determining unit specifically includes:
the determining subunit is used for determining the definition and the object integrity of each local image in the at least one local image;
and the calculating subunit is used for calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
In the video generating apparatus provided by the present application, the video generating apparatus further includes a first generating subunit configured to:
determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area;
and generating the target local image in the image generation area.
In the video generating apparatus provided by the present application, the video generating apparatus further includes a second generating subunit configured to:
generating the target local image on each shooting image in the shooting video to obtain a plurality of composite images;
acquiring audio information corresponding to the plurality of composite images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
In order to solve the above problem, an embodiment of the present application further provides a computer-readable storage medium, where a plurality of instructions are stored, and the instructions are adapted to be loaded by a processor to execute any one of the video generation methods described above.
In order to solve the above problem, an embodiment of the present application further provides a terminal device, which includes a processor and a memory, where the processor is electrically connected to the memory, the memory is used to store instructions and data, and the processor is used to execute the steps in the video generation method according to any one of the above descriptions.
The beneficial effects of this application are as follows: the video generation method obtains a scanned image of the auxiliary shooting object and, while video of the object to be shot is being captured, generates the scanned image of the auxiliary shooting object on each captured image in the video to obtain a composite video of the auxiliary shooting object and the object to be shot. When shooting the object to be shot, the mobile terminal therefore does not need to record separately the auxiliary shooting object that explains it, which shortens the shooting time and reduces the power consumption of the terminal.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a video generation method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of a video generation method according to an embodiment of the present application.
Fig. 3 is a schematic view of an application scenario of a video generation method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of a video generating apparatus according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The embodiment of the application provides a video generation method, a video generation device, a storage medium and terminal equipment.
Referring to fig. 1, fig. 1 is a schematic flowchart of a video generation method provided in an embodiment of the present application, where the video generation method is applied to a mobile terminal, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The specific flow of the video generation method provided by this embodiment may be as follows:
s101, scanning the auxiliary shooting object to obtain at least one scanned image of the auxiliary shooting object.
And S102, carrying out video shooting on the object to be shot.
And S103, in the video shooting process, generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanned image and the captured video.
Further, the step S103 may specifically include:
extracting a local image containing an auxiliary shooting object from each of at least one scanned image to obtain at least one local image;
determining a shooting effect score of at least one local image;
selecting a local image with the highest shooting effect score from at least one local image as a target local image;
and generating a target local image on each shot image in the shot video to obtain a composite video of the auxiliary shot object and the object to be shot.
The local image containing the auxiliary shooting object may be extracted from the scanned image as follows: a point (a, b) is selected at an arbitrary position on the scanned image; horizontal reference lines are laid out at preset intervals from the horizontal line through (a, b), n lines in total, and vertical reference lines at preset intervals from the vertical line through (a, b), m lines in total, so that the scanned image is divided into n × m sub-images. When extracting the local image, the processor extracts from the scanned image the sub-images that contain the image of the auxiliary shooting object, thereby obtaining the required image.
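For illustration only, the grid-based extraction above might be sketched as follows; the function name, the detector predicate `contains_object`, and the cell size (standing in for the preset distance between reference lines) are assumptions, not part of the application:

```python
import numpy as np

def extract_partial_image(scan, contains_object, cell_h, cell_w):
    """Divide a scanned image into grid cells delimited by evenly spaced
    reference lines, then crop the bounding box of the cells in which the
    auxiliary shooting object was detected."""
    h, w = scan.shape[:2]
    hits = []
    for top in range(0, h, cell_h):
        for left in range(0, w, cell_w):
            cell = scan[top:top + cell_h, left:left + cell_w]
            if contains_object(cell):
                hits.append((top, left))
    if not hits:
        return None  # the auxiliary object was not found in any cell
    r0 = min(t for t, _ in hits)
    c0 = min(l for _, l in hits)
    r1 = min(max(t for t, _ in hits) + cell_h, h)
    c1 = min(max(l for _, l in hits) + cell_w, w)
    return scan[r0:r1, c0:c1]
```

A real implementation would substitute an object detector for the predicate; the sketch only shows the reference-line grid and the crop.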
The step of "determining the shooting effect score of at least one local image" may specifically include:
determining the definition and the object integrity of each local image in at least one local image;
and calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
The processor determines the sharpness and object-integrity scores of each local image through learning models to obtain the corresponding values, and either sums these values directly or combines them by weight to obtain the shooting effect score of each local image. For example, if the sharpness score of a local image is x1 and its object-integrity score is x2, the corresponding shooting effect score may be x1 + x2, or x1 × P% + x2 × Q%, where P + Q = 100.
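A minimal sketch of the weighted scoring and the subsequent selection of the target local image; the 60/40 default split is an illustrative assumption, not a weighting prescribed by the application:

```python
def shooting_effect_score(sharpness, integrity, p=60, q=40):
    """Combine a sharpness score x1 and an object-integrity score x2 into
    x1 * P% + x2 * Q%, with P + Q = 100 as required above."""
    if p + q != 100:
        raise ValueError("weights must satisfy P + Q = 100")
    return sharpness * p / 100 + integrity * q / 100

def select_target_partial(partials):
    """Pick the local image with the highest shooting effect score.
    `partials` maps an image id to its (sharpness, integrity) pair."""
    return max(partials, key=lambda k: shooting_effect_score(*partials[k]))
```

With p = q = 50 the weighted form is equivalent, up to a factor of two, to the direct sum x1 + x2 mentioned above.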
The step of "generating a target local image on each captured image in the captured video" may specifically include:
determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area;
and generating a target local image in the image generation area.
After determining the image generation area, the processor selects, according to the size of the local image of the auxiliary shooting object, the sub-area of the image generation area best suited to hold that local image, and generates the local image of the auxiliary shooting object in that sub-area so that the local image remains complete.
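One plausible reading of that selection step, sketched under the assumption that candidate sub-areas are axis-aligned rectangles (x, y, w, h) already known not to contain the object to be shot:

```python
def pick_generation_area(free_areas, partial_w, partial_h):
    """From rectangles (x, y, w, h) that do not contain the object to be
    shot, return the smallest one the partial image still fits into, so
    the partial image stays complete without covering the subject."""
    fitting = [r for r in free_areas
               if r[2] >= partial_w and r[3] >= partial_h]
    if not fitting:
        return None  # no free area is large enough for a complete overlay
    return min(fitting, key=lambda r: r[2] * r[3])
```

Choosing the tightest fitting rectangle is one heuristic for "most suitable"; the application does not specify the criterion.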
The step of generating a target local image on each captured image in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured may specifically include:
generating a target local image on each shot image in the shot video to obtain a plurality of composite images;
acquiring audio information corresponding to the plurality of composite images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
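The per-frame compositing step above can be sketched as below; muxing the resulting frames with the audio information into a video file would typically be handed to an external tool such as ffmpeg, which is not shown:

```python
import numpy as np

def composite_frames(frames, target_partial, area_xy):
    """Overlay the target local image onto each captured frame at the
    top-left corner `area_xy` of the chosen image generation area."""
    x, y = area_xy
    ph, pw = target_partial.shape[:2]
    out = []
    for frame in frames:
        f = frame.copy()  # leave the captured frame untouched
        f[y:y + ph, x:x + pw] = target_partial
        out.append(f)
    return out
```

The area coordinates would come from the image-generation-area step; here they are simply passed in.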
Referring to fig. 2, fig. 2 is another schematic flow chart of a video generation method according to an embodiment of the present disclosure, where the video generation method is applied to a mobile terminal, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The specific flow of the video generation method provided by this embodiment may be as follows:
s201, scanning the auxiliary shooting object to obtain at least one scanned image of the auxiliary shooting object.
S202, video shooting is conducted on the object to be shot.
S203, in the video shooting process, extracting a local image containing an auxiliary shooting object from each of at least one scanned image to obtain at least one local image.
The local image containing the auxiliary shooting object may be extracted from the scanned image as follows: a point (a, b) is selected at an arbitrary position on the scanned image; horizontal reference lines are laid out at preset intervals from the horizontal line through (a, b), n lines in total, and vertical reference lines at preset intervals from the vertical line through (a, b), m lines in total, so that the scanned image is divided into n × m sub-images. When extracting the local image, the processor extracts from the scanned image the sub-images that contain the image of the auxiliary shooting object, thereby obtaining the required image.
And S204, determining the definition and the object integrity of each local image in at least one local image.
And S205, calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
The processor determines the sharpness and object-integrity scores of each local image through learning models to obtain the corresponding values, and either sums these values directly or combines them by weight to obtain the shooting effect score of each local image. For example, if the sharpness score of a local image is x1 and its object-integrity score is x2, the corresponding shooting effect score may be x1 + x2, or x1 × P% + x2 × Q%, where P + Q = 100.
And S206, selecting the local image with the highest shooting effect score from the at least one local image as the target local image.
And S207, determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area.
After determining the image generation area, the processor selects, according to the size of the local image of the auxiliary shooting object, the sub-area of the image generation area best suited to hold that local image, and generates the local image of the auxiliary shooting object in that sub-area so that the local image remains complete.
And S208, generating a target local image in the image generation area to obtain a composite video of the auxiliary shooting object and the object to be shot.
The step of generating a target local image in the image generation area to obtain a composite video of the auxiliary shooting object and the object to be shot may specifically include:
generating a target local image in the image generation area to obtain a plurality of composite images;
acquiring audio information corresponding to the plurality of composite images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
Further, the auxiliary photographing object is taken as an object a, and the object to be photographed is taken as an object B.
For example, referring to fig. 3, fig. 3 is a schematic view of an application scenario of the video generation method provided in this embodiment of the present application. The mobile terminal scans object A to obtain three scanned images, and extracts from each of them a local image containing the auxiliary shooting object, obtaining local image 1, local image 2 and local image 3. It then calculates shooting effect scores x, y and z for the local images from the sharpness and object integrity of object A in each (z being the largest), selects local image 3 as the target local image, and generates the local image of object A on each image of the captured video to obtain a composite video of object A and object B.
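The Fig. 3 flow can be tied together as a hypothetical pipeline; `detect` and `score` stand in for the cropping step and the sharpness/integrity model, and all names and values are illustrative rather than the application's API:

```python
import numpy as np

def build_compositor(scans, detect, score, area_xy):
    """Sketch of the Fig. 3 flow: crop one partial image per scan with
    `detect`, score each with `score`, keep the best-scoring one, and
    return it together with a function that overlays it on a frame at
    the top-left corner `area_xy` of the image generation area."""
    partials = [detect(s) for s in scans]  # e.g. local images 1..3
    best = max(partials, key=score)        # e.g. local image 3
    x, y = area_xy
    ph, pw = best.shape[:2]

    def composite(frame):
        out = frame.copy()
        out[y:y + ph, x:x + pw] = best
        return out

    return best, composite
```

Applying `composite` to every captured frame of object B and muxing with the recorded audio would yield the composite video of objects A and B.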
Therefore, unlike the prior art, this application provides a video generation method, device, storage medium and terminal device. The video generation method obtains a scanned image of the auxiliary shooting object and, while video of the object to be shot is being captured, generates the scanned image of the auxiliary shooting object on each captured image in the video, obtaining a composite video of the two. When shooting the object to be shot, the mobile terminal therefore does not need to record separately the auxiliary shooting object that explains it, which shortens the shooting time and reduces the power consumption of the terminal.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present disclosure, which is applied to a mobile terminal, where the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The video generation apparatus provided in this embodiment may include: a scanning module 10, a photographing module 20, and a generating module 30, wherein:
(1) scanning module 10
And the scanning module 10 is configured to perform image scanning on the auxiliary shooting object to obtain at least one scanned image of the auxiliary shooting object.
(2) Shooting module 20
And the shooting module 20 is used for carrying out video shooting on the object to be shot.
(3) Generation module 30
And the generating module 30 is configured to generate a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanned image and the shooting video in the video shooting process.
Further, referring to fig. 5, fig. 5 is another schematic structural diagram of a video generating device according to an embodiment of the present application, where the generating module 30 specifically includes:
an extracting unit 31, configured to extract a partial image including an auxiliary photographic object from each of the at least one scanned image, to obtain at least one partial image;
the method for extracting the local image containing the auxiliary shooting object from the scanned image can be as follows: selecting a point (a, b) at an arbitrary position on the scanned image; and arranging one horizontal reference line at intervals of a preset distance on the horizontal lines of the points (a, b) for a total of n, and arranging one vertical reference line at intervals of a preset distance on the vertical lines of the points (a, b) for a total of m, so that the scanned image is divided into n x m sub-scanned images, and when the processor extracts the local images, the a x b sub-scanned images containing the auxiliary shooting object images are extracted from the scanned images, and the required images can be obtained.
A determination unit 32 for determining a photographic effect score of the at least one partial image;
the processor determines the scores of the definition and the object integrity of each local image through various learning models to obtain corresponding numerical values, and directly adds or adds the numerical values according to weight to obtain the corresponding shooting effect score of each local image. For example, when the score of the sharpness of a certain local image is x1 and the score of the integrity of an object is x2, the corresponding score of the photographic effect may be x1+ x2, or x1 × P% + x2 × Q%, where P + Q is 100.
A selection unit 33 configured to select, as a target partial image, a partial image with a highest photographic effect score from the at least one partial image;
a generating unit 34 for generating a target partial image on each of the captured images in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured.
Further, referring to fig. 5, the determining unit 32 may specifically include:
a determining subunit 321, configured to determine a sharpness and an object integrity of each of the at least one partial image;
and the calculating subunit 322 is configured to calculate a shooting effect score of the corresponding local image according to the definition and the object integrity.
The processor determines the sharpness and object-integrity scores of each local image through learning models to obtain the corresponding values, and either sums these values directly or combines them by weight to obtain the shooting effect score of each local image. For example, if the sharpness score of a local image is x1 and its object-integrity score is x2, the corresponding shooting effect score may be x1 + x2, or x1 × P% + x2 × Q%, where P + Q = 100.
Furthermore, the video generation apparatus may further include a first generation subunit operable to:
determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area;
and generating a target local image in the image generation area.
After determining the image generation area, the processor selects, according to the size of the local image of the auxiliary shooting object, the sub-area of the image generation area best suited to hold that local image, and generates the local image of the auxiliary shooting object in that sub-area so that the local image remains complete.
Furthermore, the video generation apparatus may further include a second generation subunit operable to:
generating a target local image on each shot image in the shot video to obtain a plurality of composite images;
acquiring audio information corresponding to the plurality of composite images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
Therefore, unlike the prior art, the present application provides a video generation method, apparatus, storage medium and terminal device in which the scanning module 10 obtains a scanned image of the auxiliary shooting object and, while the shooting module 20 captures video of the object to be shot, the generation module 30 generates the scanned image of the auxiliary shooting object on each captured image in the video to obtain a composite video of the two. When shooting the object to be shot, the mobile terminal therefore does not need to record separately the auxiliary shooting object that explains it, which shortens the shooting time and reduces the power consumption of the terminal.
In addition, the embodiment of the application further provides a terminal device, and the terminal device can be a smart phone, a tablet computer and other devices. As shown in fig. 6, the terminal device 200 includes a processor 201 and a memory 202. The processor 201 is electrically connected to the memory 202.
The processor 201 is a control center of the terminal device 200, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or loading an application program stored in the memory 202 and calling data stored in the memory 202, thereby performing overall monitoring of the terminal device.
In this embodiment, the terminal device 200 is provided with a plurality of memory partitions, the plurality of memory partitions includes a system partition and a target partition, the processor 201 in the terminal device 200 loads instructions corresponding to processes of one or more application programs into the memory 202 according to the following steps, and the processor 201 runs the application programs stored in the memory 202, so as to implement various functions:
carrying out image scanning on the auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
carrying out video shooting on an object to be shot;
and in the video shooting process, generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video.
Fig. 7 is a block diagram showing a specific structure of a terminal device according to an embodiment of the present invention, where the terminal device may be used to implement the video generation method provided in the foregoing embodiment. The terminal device 300 may be a smart phone or a tablet computer.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and for converting between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuit 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 310 may communicate with various networks, such as the Internet, an intranet or a wireless network, or with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for short messages, and any other suitable communication protocol, and may even include protocols that have not yet been developed.
The memory 320 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the video generation method in the foregoing embodiments. The processor 380 executes various functional applications and data processing by running the software programs and modules stored in the memory 320, thereby implementing the video generation function. The memory 320 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 320 may further include memory located remotely from the processor 380, which may be connected to the terminal device 300 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch-sensitive surface 331 using a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 331 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented using resistive, capacitive, infrared, surface acoustic wave, and other technologies. Besides the touch-sensitive surface 331, the input unit 330 may comprise other input devices 332. In particular, the other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to the user, as well as various graphical user interfaces of the terminal device 300, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 340 may include a display panel 341; optionally, the display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like. Further, the touch-sensitive surface 331 may overlay the display panel 341; when the touch-sensitive surface 331 detects a touch operation on or near it, the operation is communicated to the processor 380 to determine the type of touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of touch event. Although in Fig. 7 the touch-sensitive surface 331 and the display panel 341 are shown as two separate components implementing the input and output functions, in some embodiments the touch-sensitive surface 331 and the display panel 341 may be integrated to implement both.
The terminal device 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 341 and/or the backlight when the terminal device 300 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tap detection). Other sensors that may be configured in the terminal device 300, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
The audio circuit 360, the speaker 361, and the microphone 362 may provide an audio interface between the user and the terminal device 300. On one hand, the audio circuit 360 may transmit the electrical signal converted from received audio data to the speaker 361, which converts it into a sound signal for output; on the other hand, the microphone 362 converts a collected sound signal into an electrical signal, which the audio circuit 360 receives and converts into audio data; the audio data is then output to the processor 380 for processing and transmitted via the RF circuit 310 to, for example, another terminal, or output to the memory 320 for further processing. The audio circuit 360 may also include an earphone jack to provide communication between peripheral earphones and the terminal device 300.
The terminal device 300 may assist the user with e-mail, web browsing, streaming media access, and the like through the transmission module 370 (e.g., a Wi-Fi module), which provides the user with wireless broadband Internet access. Although Fig. 7 shows the transmission module 370, it is understood that the module is not an essential component of the terminal device 300 and may be omitted as needed without changing the essence of the invention.
The processor 380 is a control center of the terminal device 300, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal device 300 and processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing cores; in some embodiments, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
The terminal device 300 also includes a power supply 390 (e.g., a battery) for powering the various components. In some embodiments, the power supply 390 may be logically coupled to the processor 380 via a power management system, which manages charging, discharging, and power consumption. The power supply 390 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal device 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the display unit of the terminal device is a touch screen display, the terminal device further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
carrying out image scanning on the auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
carrying out video shooting on an object to be shot;
and in the video shooting process, generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video.
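The three instruction steps above can be sketched as a minimal pipeline. This is an illustrative sketch only; the patent does not prescribe an implementation, and all names here (`Frame`, `scan_auxiliary_object`, `shoot_video`, `generate_composite_video`) are hypothetical stand-ins for the scanning, shooting, and compositing stages:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """A captured video frame; `overlays` holds composited partial images."""
    index: int
    overlays: List[str] = field(default_factory=list)

def scan_auxiliary_object(name: str, count: int = 3) -> List[str]:
    # Stand-in for image scanning: one label per scan pass of the auxiliary object.
    return [f"{name}_scan_{i}" for i in range(count)]

def shoot_video(num_frames: int) -> List[Frame]:
    # Stand-in for video capture of the object to be shot.
    return [Frame(index=i) for i in range(num_frames)]

def generate_composite_video(scans: List[str], frames: List[Frame]) -> List[Frame]:
    # During shooting, composite a chosen scan image onto every captured frame.
    chosen = scans[0]  # a trivial choice here; claim 2 refines the selection
    for frame in frames:
        frame.overlays.append(chosen)
    return frames

video = generate_composite_video(scan_auxiliary_object("pet"), shoot_video(4))
```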
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor. To this end, the present invention provides a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute the steps in any one of the video generation methods provided by the embodiments of the present invention.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any video generation method provided in the embodiments of the present invention, beneficial effects that can be achieved by any video generation method provided in the embodiments of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
In addition to the above embodiments, other embodiments are also possible. All technical solutions formed by using equivalents or equivalent substitutions fall within the protection scope of the claims of the present application.
In summary, although the present application has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, so that the scope of the present application shall be determined by the appended claims.

Claims (10)

1. A video generation method, characterized in that the video generation method comprises:
carrying out image scanning on an auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
carrying out video shooting on an object to be shot;
and in the video shooting process, generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video.
2. The video generation method according to claim 1, wherein the step of generating the composite video of the auxiliary photographic object and the object to be photographed from the at least one scan image and the photographic video specifically includes:
extracting a partial image containing the auxiliary shooting object from each of the at least one scanning image to obtain at least one partial image;
determining a photographic effect score of the at least one partial image;
selecting the local image with the highest shooting effect score from the at least one local image as a target local image;
and generating the target local image on each shot image in the shot video to obtain a composite video of the auxiliary shot object and the object to be shot.
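The selection step of claim 2 (keep the partial image whose shooting-effect score is highest) reduces to an argmax over the scored candidates. A minimal sketch, with hypothetical names and example scores not drawn from the patent:

```python
def select_target_partial(partial_scores: dict) -> str:
    """Return the id of the partial image with the highest shooting-effect score."""
    return max(partial_scores, key=partial_scores.get)

scores = {"partial_a": 0.62, "partial_b": 0.87, "partial_c": 0.55}
target = select_target_partial(scores)  # "partial_b"
```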
3. The video generation method according to claim 2, wherein the step of determining the capture effect score of the at least one partial image specifically comprises:
determining the definition and the object integrity of each local image in the at least one local image;
and calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
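The claim does not specify how definition (sharpness) and object integrity are combined into one score; a weighted sum is one plausible sketch. The equal weights and the assumption that both inputs are normalized to [0, 1] are illustrative, not from the patent:

```python
def shooting_effect_score(sharpness: float, integrity: float,
                          w_sharp: float = 0.5, w_int: float = 0.5) -> float:
    """Combine normalized sharpness and object-integrity measures into one score.

    Both inputs are assumed to lie in [0, 1]; the equal weights are an assumption.
    """
    return w_sharp * sharpness + w_int * integrity
```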
4. The video generation method according to claim 2, wherein the step of generating the target partial image on each captured image in the captured video specifically includes:
determining an area which does not contain the object to be shot on each shot image in the shot video as an image generation area;
and generating the target local image in the image generation area.
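One hypothetical way to realize "an area which does not contain the object to be shot" is to treat the regions outside the subject's bounding box as candidate placement strips. Names and geometry below are illustrative assumptions, not the patent's method:

```python
def placement_strips(frame_w: int, frame_h: int, subject_box: tuple) -> list:
    """Return the vertical strips left and right of the subject's bounding box
    as (x0, y0, x1, y1) rectangles; zero-width strips are dropped."""
    sx0, _, sx1, _ = subject_box
    strips = []
    if sx0 > 0:
        strips.append((0, 0, sx0, frame_h))        # strip left of the subject
    if sx1 < frame_w:
        strips.append((sx1, 0, frame_w, frame_h))  # strip right of the subject
    return strips

# Subject occupies x in [600, 1300] of a 1920x1080 frame, leaving two strips.
regions = placement_strips(1920, 1080, (600, 200, 1300, 900))
```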
5. The video generation method according to claim 2, wherein the step of generating the target partial image on each captured image in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured specifically includes:
generating the target local image on each shooting image in the shooting video to obtain a plurality of composite images;
acquiring audio information corresponding to the multiple synthetic images;
and generating a composite video of the auxiliary shooting object and the object to be shot according to the plurality of composite images and the audio information.
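Claim 5's final step, pairing the composite images with their audio information, can be sketched as muxing two equal-length streams. This is a deliberate simplification with hypothetical names; real muxing into a container format is far more involved:

```python
def mux_composite_video(composite_frames: list, audio_chunks: list) -> list:
    """Pair each composite frame with its per-frame audio chunk.

    Raises if the two streams disagree in length (no resampling is attempted).
    """
    if len(composite_frames) != len(audio_chunks):
        raise ValueError("frame and audio streams must have equal length")
    return list(zip(composite_frames, audio_chunks))

track = mux_composite_video(["img0", "img1"], ["aud0", "aud1"])
```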
6. A video generation apparatus, characterized in that the video generation apparatus comprises:
the scanning module is used for scanning images of the auxiliary shooting object to obtain at least one scanning image of the auxiliary shooting object;
the shooting module is used for carrying out video shooting on an object to be shot;
and the generating module is used for generating a composite video of the auxiliary shooting object and the object to be shot according to the at least one scanning image and the shooting video in the video shooting process.
7. The video generation apparatus according to claim 6, wherein the generation module specifically includes:
an extraction unit, configured to extract a partial image including the auxiliary photographic object from each of the at least one scanned image, so as to obtain at least one partial image;
a determination unit, configured to determine a capture effect score of the at least one local image;
a selection unit, configured to select the local image with the highest shooting effect score from the at least one local image as a target local image;
a generating unit, configured to generate the target local image on each captured image in the captured video to obtain a composite video of the auxiliary captured object and the object to be captured.
8. The video generation apparatus according to claim 7, wherein the determining unit specifically includes:
the determining subunit is used for determining the definition and the object integrity of each local image in the at least one local image;
and the calculating subunit is used for calculating the shooting effect score of the corresponding local image according to the definition and the object integrity.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the video generation method of any of claims 1 to 5.
10. A terminal device comprising a processor and a memory, the processor being electrically connected to the memory, the memory being configured to store instructions and data, the processor being configured to perform the steps of the video generation method of any one of claims 1 to 5.
CN201911325071.8A 2019-12-20 2019-12-20 Video generation method and device, storage medium and terminal equipment Pending CN110996030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911325071.8A CN110996030A (en) 2019-12-20 2019-12-20 Video generation method and device, storage medium and terminal equipment


Publications (1)

Publication Number Publication Date
CN110996030A true CN110996030A (en) 2020-04-10

Family

ID=70074316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911325071.8A Pending CN110996030A (en) 2019-12-20 2019-12-20 Video generation method and device, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN110996030A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104159000A (en) * 2014-08-21 2014-11-19 广东欧珀移动通信有限公司 Scanned image acquiring method and device
CN104581351A (en) * 2015-01-28 2015-04-29 上海与德通讯技术有限公司 Audio/video recording method, audio/video playing method and electronic device
US20170134666A1 (en) * 2014-07-02 2017-05-11 Nubia Technology Co., Ltd. Method and apparatus for shooting star trail video, and computer storage medium
CN108154091A (en) * 2017-12-11 2018-06-12 北京小米移动软件有限公司 Image presentation method, image processing method and device
CN108876782A (en) * 2018-06-27 2018-11-23 Oppo广东移动通信有限公司 Recall video creation method and relevant apparatus


Similar Documents

Publication Publication Date Title
CN107977144B (en) Screen capture processing method and mobile terminal
US11363196B2 (en) Image selection method and related product
CN108038825B (en) Image processing method and mobile terminal
CN107749046B (en) Image processing method and mobile terminal
CN107241552B (en) Image acquisition method, device, storage medium and terminal
CN105989572B (en) Picture processing method and device
CN109618218B (en) Video processing method and mobile terminal
CN111182236A (en) Image synthesis method and device, storage medium and terminal equipment
CN112488914A (en) Image splicing method, device, terminal and computer readable storage medium
CN111401463A (en) Method for outputting detection result, electronic device, and medium
CN107330867B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN109561255B (en) Terminal photographing method and device and storage medium
CN112489082A (en) Position detection method, position detection device, electronic equipment and readable storage medium
CN111026457B (en) Hardware configuration method and device, storage medium and terminal equipment
CN111355892B (en) Picture shooting method and device, storage medium and electronic terminal
CN111343335B (en) Image display processing method, system, storage medium and mobile terminal
CN111355991B (en) Video playing method and device, storage medium and mobile terminal
CN111064886B (en) Shooting method of terminal equipment, terminal equipment and storage medium
CN108829600B (en) Method and device for testing algorithm library, storage medium and electronic equipment
CN109379531B (en) Shooting method and mobile terminal
CN110996030A (en) Video generation method and device, storage medium and terminal equipment
CN110958392A (en) Shooting method of terminal equipment, terminal equipment and storage medium
CN111046215A (en) Image processing method and device, storage medium and mobile terminal
CN112468725B (en) Photo shooting method and device, storage medium and mobile terminal
CN110995996A (en) Image display method and device, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410