WO2022068479A1 - Image processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number: WO2022068479A1
Authority: WIPO (PCT)
Prior art keywords: video image, virtual object, video, display, image
Application number: PCT/CN2021/114717
Other languages: English (en), French (fr)
Inventors: 吴金远, 吴永文, 吕海涛
Original assignee: 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Application filed by 北京字节跳动网络技术有限公司
Priority to US 18/246,389 (published as US20230360184A1)
Publication of WO2022068479A1

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 9/451: Execution arrangements for user interfaces
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/20: Image analysis; analysis of motion
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • G06T 2207/20221: Image fusion; image merging

Definitions

  • The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
  • In a first aspect, an embodiment of the present disclosure provides an image processing method applied to a terminal, where the terminal includes a first photographing device and a second photographing device. The method includes: collecting a first video image through the first photographing device and displaying the first video image on the screen; and, when it is detected that the display object in the first video image satisfies a preset switching condition, switching to capturing a second video image through the second photographing device and displaying the second video image on the screen.
  • In a second aspect, an embodiment of the present disclosure provides an image processing apparatus applied to a terminal, where the terminal includes a first photographing device and a second photographing device. The apparatus includes: a video display module configured to collect a first video image through the first photographing device and display the first video image on the screen; and a switching display module configured to, when it is detected that the display object in the first video image satisfies the preset switching condition, switch to capturing a second video image through the second photographing device and display the second video image on the screen.
  • In a third aspect, an embodiment of the present disclosure provides an electronic device including: one or more processors; and a memory in which a computer program is stored. When the computer program is executed by the one or more processors, the electronic device is caused to execute the method described in the first aspect.
  • In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program causes the processor to execute the method described in the first aspect.
  • The image processing method and apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present disclosure are applied to a terminal with two shooting devices. A first video image is collected by the first shooting device and displayed on the screen of the terminal; then, when it is detected that the display object in the first video image satisfies the preset switching condition, capture switches to the second shooting device and the second video image is displayed on the screen. In this way, the embodiments of the present disclosure can automatically switch between different shooting devices on the terminal, based on the state of the display object in the first video image, during shooting: the first shooting device is switched to the second shooting device, and the screen switches from displaying the first video image collected by the first shooting device to displaying the second video image collected by the second shooting device. This provides users with more shooting possibilities and more fun, so that users can shoot more creative works based on the automatic switching of the shooting device, which enriches the shooting gameplay and enhances the user's shooting experience.
  • FIG. 1 shows a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • FIG. 2 shows a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of an interface provided by an exemplary embodiment of the present disclosure.
  • FIG. 4 shows another interface schematic diagram provided by an exemplary embodiment of the present disclosure.
  • FIG. 5 shows a detailed flowchart of step S220 in FIG. 2 provided by an exemplary embodiment of the present disclosure.
  • FIG. 6 shows yet another interface schematic diagram provided by an exemplary embodiment of the present disclosure.
  • FIG. 7 shows a schematic flowchart of a method for displaying a virtual object on a second video image provided by an exemplary embodiment of the present disclosure.
  • FIG. 8 shows yet another interface schematic diagram provided by an exemplary embodiment of the present disclosure.
  • FIG. 9 shows a schematic diagram of screens at four different moments provided by an exemplary embodiment of the present disclosure.
  • FIG. 10 shows another interface schematic diagram provided by an exemplary embodiment of the present disclosure.
  • FIG. 11 shows a schematic flowchart of an image processing method provided by yet another embodiment of the present disclosure.
  • FIG. 12 shows a block diagram of modules of an image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 13 shows a structural block diagram of an electronic device provided by an embodiment of the present disclosure.
  • The term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • The term "based on" means "based at least in part on".
  • The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
  • The image processing method provided by the embodiments of the present disclosure may be applied to a terminal that includes a first photographing device and a second photographing device, each of which may be fixedly or rotatably arranged on the terminal. The first photographing device and the second photographing device may be provided on different sides of the terminal, and each may be any device capable of capturing images, such as a camera, which is not limited herein.
  • The terminal can be any device provided with at least two shooting devices, such as a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a laptop, an Ultra-Mobile Personal Computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or a dedicated camera (such as a single-lens reflex camera or a compact camera).
  • the embodiment of the present disclosure does not limit the specific type of the terminal.
  • the terminal may run a client application, and the client application may include client application software corresponding to the photographing device or other client application software with photographing function, which is not limited in the present disclosure.
  • FIG. 1 shows a schematic flowchart of an image processing method provided by an embodiment of the present disclosure, which can be applied to the above-mentioned terminal provided with multiple (e.g., two) photographing apparatuses.
  • The flow shown in FIG. 1 is described in detail below; the image processing method may include the following steps:
  • S110: Capture the first video image by the first photographing device, and display the first video image on the screen.
  • In this embodiment, the terminal includes at least two photographing devices, denoted as a first photographing device and a second photographing device, which may be set on different sides of the terminal.
  • For example, the terminal includes four borders: upper, lower, left, and right. When the user holds the terminal facing its screen, the border on the left side of the screen is denoted as the left border, the border on the right side as the right border, the border on the upper side as the upper border, and the border on the lower side as the lower border. One of the first photographing device and the second photographing device can be set on the same side as the terminal screen, on any one of the upper, lower, left, or right borders; the other can be set on the same side as the back shell of the terminal, likewise on any one of the upper, lower, left, or right borders.
  • Alternatively, one of the first shooting device and the second shooting device can be set on the same side as the terminal screen, that is, as a front-facing camera, while the other can be set on the same side as the back shell of the terminal, that is, as a rear camera.
  • the present disclosure does not limit the specific installation positions of the first photographing device and the second photographing device.
  • the screen of the terminal may display an image captured in real time by the first photographing device, that is, the first video image.
  • The first video image may be the original image collected by the first photographing device, or an image adjusted on the basis of the original image, where the adjustment may include changing parameters such as contrast, brightness, focus, and aperture. The adjustment may also include operations such as adding filters, stickers, or special effects to the first video image, which is not limited in this disclosure.
  • The first photographing device may be the camera that is activated by default when the photographing function in the client application is activated. For example, when the terminal obtains an activation instruction for the photographing function of the client application and the photographing function is activated, the first shooting device can be activated by default, and the first video image captured by it in real time is displayed on the screen.
  • the first photographing device may be a front camera or a rear camera, which is not limited herein.
  • the first photographing device may also be a photographing device selected to be activated by the user.
  • If the second photographing device is activated by default when the photographing function is activated, the user can tap a control on the screen, such as a camera-flip control, to switch from the second shooting device to the first shooting device, use the first shooting device to capture images, and display the first video image captured by the first shooting device in real time on the screen.
  • Similarly, the second video image may include the original image collected by the second photographing device, or an image adjusted on the basis of the original image, where the adjustment may include changing parameters such as contrast, brightness, focus, and aperture, as well as operations such as adding filters, stickers, or special effects to the second video image, which is not limited in this disclosure.
  • the terminal when it detects that the display object in the first video image satisfies the preset switching condition, it can generate a camera switching control instruction, and call an application for controlling the camera based on the camera switching control instruction.
  • Interface Application Programming Interface, API
  • the display object in the first video image may include a target object in the first video image, or may include other objects superimposed on the first video image, which is not limited in this embodiment.
  • The terminal may detect the display object in the first video image while collecting and displaying the first video image, so as to determine whether the display object satisfies the preset switching condition; when it does, the terminal switches to capturing the second video image through the second photographing device and displays the second video image on the screen. In this way, the shooting device can be switched automatically during image shooting without manual operation by the user, the video images collected by the different shooting devices are displayed on the screen in turn as the device switches, and the first video image and second video image collected by the different shooting devices can be recorded and synthesized into one video. The user can thus shoot more creative works based on the automatic switching of the shooting device during the shooting process, which enriches the shooting gameplay and enhances the shooting experience. A minimal sketch of this loop is given below.
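  • As an illustration only, the following minimal sketch (Python, with an invented Camera class and detection helper standing in for the terminal's real camera API) shows the shape of this detect-and-switch loop; none of these names come from the disclosure.

        class Camera:
            """Stand-in for a photographing device; invented for illustration."""
            def __init__(self, name):
                self.name = name

            def capture_frame(self):
                # A real implementation would return an image; a dict suffices here.
                return {"source": self.name, "trigger_detected": False}

        def meets_switch_condition(frame):
            # Placeholder check: per the disclosure, this would test whether the
            # display object (a target person or a virtual object) has reached a
            # preset state.
            return frame.get("trigger_detected", False)

        def run(first_cam, second_cam, show, max_frames=100):
            active = first_cam
            for _ in range(max_frames):
                frame = active.capture_frame()   # first/second video image
                show(frame)                      # display on the terminal screen
                if active is first_cam and meets_switch_condition(frame):
                    active = second_cam          # automatic switch, no user action

        run(Camera("front"), Camera("rear"), show=print)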
  • In some embodiments, the terminal may execute the image processing method provided by the embodiments of the present disclosure as soon as previewing starts, before shooting, so that the user can preview the final effect in real time during the preview stage. The user can thus see, before shooting, the effect that can be obtained with the image processing method of the embodiments of the present disclosure, which stimulates interest in shooting with this method and encourages the user to create more works.
  • the terminal may also execute the image processing method provided by the embodiment of the present disclosure when shooting starts, which is not limited in the embodiment of the present disclosure.
  • To sum up, a first video image is collected by the first photographing device and displayed on the screen; then, when it is detected that the display object in the first video image satisfies the preset switching condition, capture switches to the second photographing device and the second video image is displayed on the screen. This embodiment can therefore automatically switch the shooting device, based on the display object in the first video image, during shooting: the first shooting device is switched to the second shooting device of the terminal, and the displayed image changes from the first video image collected by the first shooting device to the second video image collected by the second shooting device, so that the user can shoot more interesting and creative works based on the automatic switching of the shooting device, which enriches the shooting gameplay and improves the user's shooting experience.
  • In some embodiments, a virtual object may also be displayed on the first video image; in this case, the terminal may trigger the camera switch according to the virtual object.
  • FIG. 2 shows a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
  • the method may include:
  • S210: Capture the first video image by the first photographing device, and display the first video image on the screen.
  • S220: Display the virtual object on the first video image.
  • The virtual object may include any one of a virtual character, animal, plant, object, and the like, where the object may be, for example, a heart or a star; this embodiment is not limited in this respect.
  • the virtual object may include a three-dimensional solid model created based on animation technology; the virtual object may also include a two-dimensional virtual model.
  • Each virtual object can have its own shape and size.
  • the terminal when detecting the virtual object display request, may determine the virtual object to be displayed by parsing the request, and display the virtual object on the first video image.
  • the virtual object display request may be manually triggered by a touch operation acting on the terminal screen, or automatically triggered based on image recognition.
  • In one implementation, the virtual object display request is triggered manually by a touch operation: a virtual object display request control for requesting display of a virtual object may be shown on the screen of the terminal, and when a touch operation acting on that control is detected, it can be determined that a corresponding virtual object display request is detected.
  • Different controls may correspond to different virtual objects; touching different controls triggers different virtual object display requests, which carry different virtual object identifiers.
  • By parsing the request, the terminal can obtain the corresponding virtual object identifier and further determine the virtual object it refers to, so as to display the corresponding virtual object in the first video image.
  • In some embodiments, a special effect selection control can be displayed on the screen, where the special effect selection control is used to trigger the display of a special effect selection page; the special effect selection page can display one or more special effect selection controls, and different special effect selection controls may correspond to the same or different functions.
  • When the terminal detects a triggering operation on the special effect selection control corresponding to the image processing function of the embodiment of the present disclosure, it triggers the virtual object display request at the same time, so that the image processing method of the embodiment of the present disclosure can be executed directly and the virtual object is superimposed on the first video image.
  • the terminal when it detects a triggering operation of the special effect selection control corresponding to the image processing function of the embodiment of the present disclosure, it can also trigger the display of a corresponding virtual object selection page, and the virtual object selection page can display at least one virtual object selection page.
  • the virtual object corresponding to the object requests the display control.
  • the terminal selects the control according to the special effect touched by the user, can determine the function to be implemented, and then requests to display the control according to the touched virtual object, so as to determine the virtual object to be displayed, so that multiple virtual objects can be displayed. Choose and display.
  • the image processing method provided by this embodiment can be executed when it is detected that a specific special effect selection control is triggered, so as to realize the shooting function based on the automatic switching of the shooting device.
  • For example, a corresponding virtual object selection page can be displayed on the screen, as shown in FIG. 3, which illustrates an example of the present disclosure. The screen includes two display areas: one for displaying the shooting picture 310 and another for displaying a virtual object selection page 321, on which virtual object display request controls 3211 corresponding to a plurality of virtual objects (such as virtual objects A to O) are shown. The shooting picture 310 may display the first video image collected by the first photographing device.
  • In another implementation, the virtual object display request may also be triggered automatically based on image recognition.
  • the terminal may detect the first target object in the first video image, and display the virtual object on the first video image according to the detected triggering action of the first target object.
  • the specific implementation of step S220 may include: when it is detected that the first target object in the first video image performs a preset trigger action, displaying the virtual object on the first video image.
  • The first target object may be a target person in the first video image, and the preset trigger action may include at least one of a preset body posture, gesture, expression, body movement, and the like. That is, the first target object performing the preset trigger action may include at least one of the following: the first target object is in a preset body posture, such as hands on hips; the first target object performs a preset trigger gesture, such as an "OK" gesture, folded hands, or a finger heart; the first target object performs a preset expression, such as smiling or laughing; the first target object performs a preset action, such as blinking, waving, or pouting.
  • The preset trigger action may be determined according to actual needs, and may be preset by a program or user-defined, which is not limited herein. By detecting the target object in the first video image and displaying the virtual object when the first target object is detected performing a preset trigger action, the generation and display of the virtual object can be triggered automatically without manual operation by the user, which also enriches the fun of shooting and improves the user's shooting experience.
  • the first target object may also include an object such as an animal that can perform a preset trigger action, which is not limited herein.
  • In some embodiments, the first target object may be a preset object. The terminal may store a preset image of the preset object in advance, and when detecting a target object in the first video image, match the detected target object against the preset image; if the matching succeeds, it can be determined that the first target object is detected, after which the terminal further detects whether the first target object performs a preset trigger action and performs the subsequent operations. By responding only to the triggering action of the preset object, the consumption of computing resources can be reduced on the one hand; on the other hand, cluttered display of virtual objects can be avoided when multiple detected objects perform a preset trigger action at the same time, improving system stability and the user's shooting experience.
  • Alternatively, the first target object may be a non-preset object; that is, any object that appears within the shooting range of the first shooting device can serve as the first target object, and the display of the virtual object is triggered by detecting whether it performs the preset trigger action.
  • corresponding prompt information may also be displayed on the screen of the terminal, for prompting the user to trigger the function of the control and/or how to trigger the function.
  • the prompt information may include information in any one or more forms of images and text, and may also include information in the form of voice, which is not limited in this embodiment.
  • For example, as shown in FIG. 4, the terminal can display text prompt information 330 in the center of the screen, such as "Blink to send love to the recipient", prompting the user to blink to trigger the heart display and transmit the heart to the recipient (e.g., the person across from the user).
  • In some embodiments, the virtual object may be dynamically displayed on the first video image, for example along a first motion trajectory: by playing a first video sequence frame, the virtual object can be dynamically displayed along the first motion trajectory on the first video image.
  • The virtual objects in the video frame images superimposed on the first video image may all be the same, in which case the virtual object itself remains unchanged and only its display position varies along the first motion trajectory.
  • Alternatively, the virtual objects in the video frame images superimposed on the first video image may differ from frame to frame; that is, the video frame images containing the virtual object can be superimposed on the corresponding first video images in a specified sequence of changes of the virtual object. The change of the object itself may include a size change (for example, from big to small, or from small to big), a display angle change, a color change (for example, a color gradient), a style change (for example, from a cartoon style to a realistic style), and so on, which is not limited here.
  • the display effect of the virtual object can be made richer and more vivid, thereby improving the video shooting quality and the interestingness of the video.
  • multiple video frame images corresponding to the virtual object can be superimposed on the first video image in a specified order of the size of the virtual object from small to large.
  • dynamically displaying the virtual object on the first video image may be implemented based on a pre-configured first video sequence frame including the virtual object.
  • FIG. 5 shows a detailed flowchart of step S220 in FIG. 2 provided by an exemplary embodiment of the present disclosure.
  • Step S220 may include:
  • S221: Acquire a first video sequence frame including the virtual object.
  • S223: Play the first video sequence frame to dynamically display the virtual object on the first video image.
  • The first video sequence frame may be stored locally on the terminal, in which case the terminal obtains it locally; it may also be stored on a server, in which case the terminal obtains it from the server. This example does not limit this.
  • the terminal when detecting a virtual object display request, may acquire a first video sequence frame including the virtual object, superimpose and display the first video sequence frame on the first video image, and play the first video sequence frame, so that the virtual object can be dynamically displayed on the first video image.
  • the virtual object by playing the first video sequence frame, the virtual object can be moved from the starting position in the screen to the edge of the screen, and also can be moved from the edge of the screen to the center of the screen; and the virtual object can also be kept still.
  • the size of the virtual object may also change dynamically in the first video image. For example, the size of the virtual object may change from small to large, or from large to small, or from large to small and then larger, etc., which is not limited here.
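  • A minimal sketch of this superimposition, assuming each frame of the sequence is pre-rendered and blending is left to a caller-supplied function; all names here are illustrative, not from the disclosure.

        def play_sequence_over_video(video_frames, sequence_frames, blend):
            """Overlay the i-th sequence-frame image on the i-th video frame."""
            composited = []
            for i, frame in enumerate(video_frames):
                if i < len(sequence_frames):
                    # The virtual object's position and size are baked into each
                    # pre-configured sequence-frame image.
                    frame = blend(frame, sequence_frames[i])
                composited.append(frame)
            return composited

        # Toy usage with strings standing in for images:
        frames = play_sequence_over_video(
            ["v0", "v1", "v2"], ["s0", "s1"], blend=lambda v, s: v + "+" + s)
        assert frames == ["v0+s0", "v1+s1", "v2"]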
  • In some embodiments, the terminal may acquire position information of the virtual object in each video frame of the first video sequence frame, and determine the first motion trajectory of the virtual object in the first video image according to that position information; dynamically displaying the virtual object on the first video image may then be implemented as dynamically displaying the virtual object along the first motion trajectory.
  • The position information may be the coordinates of the virtual object in each video frame of the video sequence frame. For example, a point in the video frame may be used as the origin of coordinates, and the coordinates of the virtual object in the video frame, measured in pixels, constitute the position information.
  • The first motion trajectory of the virtual object in the first video sequence frame can be preset according to actual needs, so that the position information of the virtual object differs across the video frames of the first video sequence frame; when played, the video sequence frame then presents the dynamic effect of the virtual object moving on the first video image. The position information can be preset according to the motion trajectory to be presented: for example, if the required trajectory moves from the middle of the image to its edge, the position information of the virtual object in each video frame can be set in sequence from the inside outward.
  • The edge of the image may be the outline of a specific object in the image or the boundary of the image canvas, where the specific object may be any creature such as a person or an animal, or a non-living object such as a sculpture, clothes, scenery, or a building; the present disclosure is not limited herein.
  • the first motion trajectory of the virtual object may also be determined in real time according to the user's input. For example, when triggering a virtual object display request, the user may input a required motion trajectory, so that the virtual object is dynamically displayed on the first video image according to the motion trajectory input by the user.
  • The motion trajectory may be determined by detecting a touch operation on one of at least one optional motion trajectory displayed on the screen. For example, when the terminal detects a touch operation acting on the display control of the virtual object, it can display a request page on which at least one optional motion trajectory identifier is shown; when the user selects one, a corresponding virtual object display request carrying the selected motion trajectory identifier can be generated, and the terminal determines the corresponding motion trajectory according to the motion trajectory identifier carried in the virtual object display request.
  • the motion trajectory may also be determined based on the trajectory drawn by the user's air gesture.
  • the motion trajectory may also be determined based on the sliding trajectory of the user sliding on the screen, and the manner of determining the motion trajectory is not limited in this embodiment.
  • In some embodiments, the virtual object can be displayed from an initial position specified on the first video image, and by playing the first video sequence frame, the virtual object can be caused to move along the first motion trajectory from that initial position.
  • When the preset trigger action is detected, the terminal may determine the position where the trigger action occurs and take it as the specified initial position. The terminal then maps the image position of the virtual object in the first video frame of the first video sequence frame to this initial position and, based on that correspondence, sequentially determines the position information of the virtual object in each video frame of the first video sequence frame, thereby determining the corresponding first motion trajectory. When the virtual object is displayed on the first video image, it can thus be superimposed at the position where the first target object performs the preset trigger action and, from that position, dynamically displayed along the first motion trajectory. For example, when the terminal detects that the user blinks, it can start displaying the virtual object from the blink position and dynamically display it along the first motion trajectory on the first video image; a sketch of anchoring the trajectory in this way is given below.
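  • The following sketch illustrates one assumed way, not prescribed by the disclosure, of anchoring the authored per-frame positions to the detected trigger position, such as the blink location:

        def anchored_trajectory(base_positions, trigger_pos):
            """Shift authored (x, y) pixel positions so that the first frame's
            position coincides with where the trigger action occurred."""
            bx, by = base_positions[0]
            tx, ty = trigger_pos
            return [(x - bx + tx, y - by + ty) for x, y in base_positions]

        # Authored path drifts right and up; blink detected at pixel (120, 200):
        print(anchored_trajectory([(0, 0), (10, -5), (20, -10)], (120, 200)))
        # -> [(120, 200), (130, 195), (140, 190)]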
  • FIG. 6 shows another interface schematic diagram provided by an exemplary embodiment of the present disclosure.
  • As shown in FIG. 6, the shooting picture 310 on the screen corresponds to the first video image captured by the first photographing device. When the terminal detects that the first target object in the first video image performs a blinking action, it maps the image position of the virtual object in the first video frame of the first video sequence frame to the position 311 where the blinking action occurs, so that the virtual object is superimposed and displayed at the occurrence position 311 of the blinking action and, starting from position 311, is dynamically displayed along the first motion trajectory on the first video image.
  • S230: When the virtual object satisfies the preset state, switch to capturing the second video image through the second photographing device.
  • In this embodiment, the virtual object satisfying the preset state includes at least one of the following: the first video sequence frame has finished playing; the virtual object is displayed at a specified position of the first video image; the parameters of the virtual object meet preset parameters.
  • When the first video sequence frame has finished playing, it can be determined that the virtual object satisfies the preset state, so that after the virtual object has been dynamically displayed, capture automatically switches to the second shooting device, presenting the effect that the movement of the virtual object triggers the switching of the shooting picture.
  • the terminal may also detect the display position of the virtual object in the first video image, and when detecting that the virtual object is displayed at the designated position of the first video image, may determine that the virtual object satisfies the preset state.
  • The specified position can be set according to actual needs; for example, if the effect of moving the virtual object to an edge position L of the image is to be presented, position L can be set as the specified position. The specified position can be preset or user-defined: for example, it can be set when the user triggers the virtual object display request, or the end position of a motion trajectory input by the user can be determined as the specified position.
  • The specified position can also be determined by image recognition on the first video image. For example, a termination object can be preset, and when it is detected that the virtual object has moved to the image area where the termination object is located on the first video image, it can be determined that the virtual object satisfies the preset state. The termination object is used to determine the end position of the virtual object's movement on the first video image and can be set according to actual needs; it may include a designated body part of the first target object, a designated object in the first video image, and so on, which is not limited here.
  • For example, if the termination object is the finger of the first target object, the first video sequence frame is played so that the virtual object moves from its starting position on the screen to the finger of the first target object, whereupon capture switches to the second shooting device; this realizes the effect that the virtual object moving to the designated position triggers the switching of the shooting device.
  • the terminal may also detect the parameters of the virtual object, and when it is detected that the parameters of the virtual object meet the predetermined parameters, it may be determined that the virtual object meets the preset state.
  • the parameters of the virtual object may include shape, size, display angle, style, and the like.
  • For example, the virtual object may exhibit a dynamic change in size (e.g., from small to large) in the first video image, and when it reaches a predetermined size, it is determined that the virtual object satisfies the preset state; similarly, the virtual object may exhibit a dynamic change in shape, and when it reaches a predetermined shape, it is determined that the virtual object satisfies the preset state. This disclosure does not limit this.
  • the terminal may determine whether the virtual object satisfies the preset state by detecting the moving distance and moving time of the virtual object on the first video image.
  • For example, the terminal may calculate the moving distance of the virtual object from its starting position on the first video image, which may be measured in pixels. When the moving distance reaches a predetermined distance, it can be determined that the virtual object satisfies the preset state; the predetermined distance may be a predetermined number of pixels, for example 30, 60, or 100 pixels, which is not limited herein.
  • The terminal can also judge whether the virtual object satisfies the preset state according to its moving time, which can be determined from the frame count of the first video sequence frame. For example, the terminal can denote the frame in which the virtual object is superimposed on the first video image for the first time as the first frame; when the virtual object is superimposed on the first video image for the nth time, where n may be any positive integer greater than 1, it can be determined that the display object of the first video image satisfies the preset state, and after the nth frame of the virtual object has been superimposed on the first video image, the camera can be controlled to switch.
  • It should be noted that the preset states that can trigger the switching of the photographing device are not limited to the above examples, which are not exhaustive here for reasons of space; a sketch consolidating the checks described above is given below.
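  • Purely as a consolidated illustration of the checks enumerated above (the field names and thresholds are assumptions, not part of the disclosure):

        def meets_preset_state(state):
            """Any one of the enumerated conditions suffices to trigger the switch."""
            return (
                state["sequence_finished"]                        # playback done
                or state["position"] == state["target_position"]  # specified spot
                or state["moved_pixels"] >= state["distance_px"]  # e.g. 30/60/100 px
                or state["frames_shown"] >= state["n_frames"]     # nth overlay frame
            )

        print(meets_preset_state({
            "sequence_finished": False,
            "position": (40, 80), "target_position": (300, 80),
            "moved_pixels": 64, "distance_px": 60,   # distance threshold reached
            "frames_shown": 5, "n_frames": 24,
        }))  # -> True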
  • S240: Display the second video image on the screen.
  • After the switch, the terminal automatically captures the second video image through the second shooting device and displays the second video image on the screen.
  • the terminal may further display the virtual object on the second video image.
  • the virtual object displayed on the second video image corresponds to the virtual object displayed on the first video image, and the two virtual objects may be the same or different, which is not limited in this embodiment.
  • whether to display the virtual object in the second video image may be determined according to the special effect selection control triggered by the user, that is, determined by the function corresponding to the corresponding special effect selection control.
  • For example, if the special effect selection control corresponds to the transfer function of the virtual object, the effect achieved is to first superimpose and display the virtual object on the first video image, then switch to capturing the second video image through the second shooting device, and continue to superimpose and display the virtual object on the second video image, so as to present a continuous transfer effect of the virtual object across the first video image and the second video image, adding interest and richness to the video.
  • That is, when the terminal detects a trigger on the special effect selection control corresponding to the transfer function, the virtual object is superimposed on the first video image captured by the first shooting device; after switching to the second shooting device, the virtual object continues to be superimposed on the second video image. Visually, the virtual object is displayed continuously, moving out of the first video image captured by the first shooting device and into the second video image, presenting the effect that the virtual object is transferred from the target object in the first video image to another target object in the second video image. This embodiment thus provides an innovative shooting interaction based on automatic switching of the shooting device, which improves the shooting efficiency and the quality and interest of the work.
  • When the virtual object is displayed on the second video image, it may be displayed dynamically in the second video image, for example by playing a second video sequence frame.
  • FIG. 7 shows a schematic flowchart of a method for displaying a virtual object on a second video image provided by an exemplary embodiment of the present disclosure. The method may include:
  • S310: Acquire a second video sequence frame including the virtual object.
  • S320: Superimpose the second video sequence frame on the second video image.
  • S330: Play the second video sequence frame to dynamically display the virtual object on the second video image.
  • The implementations of steps S310 to S330 are similar to those of steps S221 to S223; for parts not described in detail here, reference may be made to steps S221 to S223, which will not be repeated.
  • By playing the second video sequence frame, the virtual object can be moved from any position on the screen to the end position; for example, it can be moved from the edge of the screen into the screen area, or the virtual object can be kept still.
  • The size of the virtual object can also change dynamically in the second video image. For example, the size can change from large to small, so that when the second video sequence frame is played, the virtual object appears on the screen to move from near to far, approaching the second target object; the size can also change from small to large, or from large to small and then larger, which is not limited here.
  • the second target object may be any creature such as a human being or an animal, or may be a non-living creature such as sculpture, clothes, scenery, building, etc., which is not limited in this embodiment.
  • the terminal when detecting a virtual object display request, can simultaneously acquire a second video sequence frame including the virtual object, and superimpose the second video sequence frame on the second video image, and play the second video by playing the second video sequence frame.
  • the terminal may preset the mapping relationship between the virtual object, the first video sequence frame, and the second video sequence frame, and when detecting a virtual object display request, may determine the virtual object to be displayed and the corresponding first video. sequence frame and second video sequence frame, and by playing the first video sequence frame and the second video sequence frame, respectively in the first video image captured by the first shooting device and the second video image captured by the second shooting device The same virtual object is displayed dynamically, thereby presenting the effect that the virtual object is transferred from a target object in the first video image to another target object in the second video image.
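  • A sketch of such a preset mapping, with invented identifiers and file names:

        SEQUENCE_MAP = {
            # virtual object id -> one sequence frame per photographing device
            "heart": {"first": "heart_send.seq", "second": "heart_receive.seq"},
        }

        def sequences_for(object_id):
            entry = SEQUENCE_MAP[object_id]
            return entry["first"], entry["second"]

        first_seq, second_seq = sequences_for("heart")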
  • Likewise, the terminal may acquire position information of the virtual object in each video frame of the second video sequence frame, and determine the second motion trajectory of the virtual object in the second video image according to that position information; dynamically displaying the virtual object on the second video image may then be implemented as dynamically displaying the virtual object along the second motion trajectory.
  • The virtual object may move from an initial position to a specified position in the second video image along the second motion trajectory; that is, the position of the virtual object in the last video frame of the second video sequence frame falls at the specified position.
  • the last displayed position of the virtual object in the second video image is denoted as the end position, that is, the designated position.
  • Any position can be set as the end position according to actual needs; alternatively, based on image recognition of the second video image, the position corresponding to an identified preset end-point object can be determined as the end position, where the preset end-point object is used to determine the end position of the virtual object's movement on the second video image.
  • The preset end-point object can be set according to actual needs and may include a specified object or specified body parts of the second target object, such as the face, lips, eyes, forehead, or heart, which is not limited here. It is also possible to obtain a trigger operation performed by the user on the second video image while it is displayed and determine the position corresponding to that trigger operation as the end position; that is, the end position is determined by the user's trigger operation.
  • The identifier or image of the preset end-point object indicating the end position can be stored in correspondence with the object identifier of the virtual object; after the virtual object is determined, the identifier or image of the preset end-point object indicating the end position can be looked up, and then, by performing image recognition on the second video image, the position corresponding to that object identifier or image is determined as the end position.
  • The preset receiving action can be set according to actual needs and may include, but is not limited to, pouting, making a finger heart, blinking, and so on.
  • When the terminal recognizes multiple target objects in the second video image at the same time, the second target object may also be determined according to the area each target object occupies in the second video image. For example, the target object with the largest occupied area may be selected as the second target object, so that the user closest to the second photographing device is chosen; a sketch of this rule is given below.
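  • A sketch of this largest-area selection rule (the detection fields are assumptions):

        def pick_second_target(detections):
            """Choose the detected object occupying the largest area in pixels,
            i.e. typically the person closest to the second photographing device."""
            return max(detections, key=lambda d: d["area"], default=None)

        print(pick_second_target([
            {"id": "person-1", "area": 5200},
            {"id": "person-2", "area": 18700},  # closest to the camera
        ]))  # -> {'id': 'person-2', 'area': 18700}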
  • In some embodiments, the terminal may output receiving prompt information, which is used to prompt the second target object to prepare to perform the receiving action for the virtual object.
  • For example, the receiving prompt information can be voice information such as "please prepare to receive"; when the preset switching condition is met, the terminal plays this voice, so that the cooperating user, namely the second target object, can hear the cue and start performing, ensuring the shooting effect. An interactive, well-coordinated video can thus be shot without repeated retakes, which improves the user experience.
  • To sum up, on the basis of the foregoing embodiment, a virtual object can be displayed on the first video image, and when the virtual object satisfies the preset state, it is determined that the preset switching condition is satisfied, so that capture switches to the second photographing device and the second video image is displayed on the screen.
  • Moreover, the terminal may detect the first target object in the first video image and, when detecting that the first target object performs a preset trigger action, dynamically display the virtual object along the first motion trajectory on the first video image. This can realize display effects such as a virtual object emitted from the eyes when the user blinks, or launched from the mouth when the user pouts, until the virtual object satisfies the preset state, whereupon capture switches to the second shooting device and the second video image is displayed on the screen.
  • the virtual object may also be dynamically displayed on the second video image, so that the virtual object continues to move based on the second video image captured by the second camera.
  • In this way, the user can trigger the display of the virtual object by performing a preset trigger action and finally present the visual effect of the virtual object moving across the video images collected by different shooting devices, which provides the user with more shooting possibilities and enhances the user's interest and experience in shooting videos.
  • the terminal when the virtual object is displayed at the end position of the second video image, the terminal may trigger a preset special effect, and the special effect may correspond to or be associated with the virtual object.
  • Specifically, the server or terminal can set at least one special effect in advance and establish the mapping relationship between the virtual object, each special effect, and the special effect trigger conditions. When it is detected that the special effect trigger condition corresponding to the virtual object is satisfied, the special effect corresponding to the virtual object is determined and played.
  • The terminal can trigger the playback of the special effect corresponding to the virtual object at different timings, and can trigger playback one or more times; a virtual object may correspond to or be associated with one or more special effects, and the special effect triggered each time may be the same or different, which is not limited in this embodiment. A sketch of such an object-to-effect mapping is given below.
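  • As an illustration of a mapping between a virtual object, its special effects, and their trigger conditions (all names invented for this sketch):

        EFFECTS = {
            "heart": [
                {"effect": "fullscreen_floating_hearts",
                 "trigger": "reached_end_position"},   # e.g. landing on a face
                {"effect": "chime_audio",
                 "trigger": "camera_switched"},
            ],
        }

        def play_effects(object_id, event, play):
            for cfg in EFFECTS.get(object_id, []):
                if cfg["trigger"] == event:
                    play(cfg["effect"])

        play_effects("heart", "reached_end_position", play=print)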
  • When the virtual object is displayed at the end position, the terminal can play the special effect corresponding to the virtual object; in one embodiment, the special effect of the virtual object can also be played when the shooting device is switched.
  • In some implementations, the special effect may include at least one of a visual special effect and an audio special effect, wherein the visual special effect can be superimposed and displayed on the second video image to present a dynamic display effect, and the audio special effect is a piece of audio; the present disclosure does not limit the specific type and content of the special effect.
  • In some implementations, the special effect corresponding to the virtual object may include a sequence frame composed of multiple frames of images in which a plurality of virtual objects are dynamically displayed on the screen.
  • Taking a heart as the virtual object as an example, the special effect corresponding to the virtual object may be a sequence frame whose display effect is multiple hearts moving on the screen, for example a sequence frame in which multiple hearts float upward.
  • As one approach, the multiple hearts can be set on a full-screen basis, so that a dreamy, love-filled atmosphere appears across the whole screen when the special effect is played, as shown in FIG. 8; as another approach, the multiple hearts may be set on only a partial area of the screen, which is not limited here.
  • This improves the presentation and richness of video production, which helps stimulate users' enthusiasm for shooting, enhances the fun of video shooting, and strengthens the social interactivity of shooting.
  • In a specific application scenario, FIG. 9 shows schematic diagrams of the screen at four different moments provided by an exemplary embodiment of the present disclosure. Taking a heart as the virtual object, FIG. 9(a) to FIG. 9(d) are interface diagrams of the screen at times T1, T2, T3, and T4, respectively. At times T1 and T2, the first video image 910 captured by the first photographing device is displayed on the screen; at times T3 and T4, after the photographing devices have been switched, the second video image 940 captured by the second photographing device is displayed on the screen.
  • The first photographing device recognizes at time T1 that user A blinks, and the blinking eye is determined as the start position 920 of the virtual object (displayed in correspondence with the image position of the virtual object in the first video frame of the first video sequence frame). The first video sequence frame is played, so that the heart 930 starts to move from the eye, and at time T2 the screen display is as shown in FIG. 9(b).
  • The terminal judges whether the first video sequence frame has finished playing; if so, it switches to the second photographing device, and the screen displays the second video image 940 captured by the second photographing device. The second video image captures someone opposite user A, for example user B, and the second video sequence frame is played, so that from time T3 to time T4 the heart 950 in the second video sequence frame moves from the edge of the screen toward user B's face, that is, to the end position 960, while at the same time shrinking from large to small.
  • This realizes a heart-passing effect in which the heart is launched from the blinking eye of user A in front of the camera on one side and delivered to the face of user B in front of the camera on the other side. A sketch of the T3-to-T4 motion follows this walkthrough.
  • In some examples, after the second video sequence frame finishes playing, a third video sequence frame may also be played to display the special effects associated with the heart virtual object, including visual and audio effects, on the screen after time T4; a schematic diagram of this may be as shown in FIG. 8, thereby realizing the effect of multiple hearts floating upward after the heart moves onto user B's face.
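  • The T3-to-T4 motion described above amounts to interpolating the heart's position from the screen edge to the detected face while its size shrinks. The sketch below is a minimal illustration of that idea; the coordinates, screen size, and frame count are invented for the example and do not come from the disclosure.

    def heart_track(start, end, start_scale, end_scale, n_frames):
        """Linearly interpolate position and scale across n_frames."""
        track = []
        for i in range(n_frames):
            t = i / (n_frames - 1)
            x = start[0] + t * (end[0] - start[0])
            y = start[1] + t * (end[1] - start[1])
            scale = start_scale + t * (end_scale - start_scale)
            track.append(((round(x), round(y)), round(scale, 2)))
        return track

    # From the edge of a 720x1280 screen to a face at (360, 500), shrinking:
    for pos, scale in heart_track((720, 200), (360, 500), 1.0, 0.4, n_frames=5):
        print(pos, scale)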
  • In addition, in some embodiments, after the terminal switches to capturing the second video image by the second photographing device, deformation processing may also be performed on the second target object captured by the second photographing device.
  • In some implementations, when the terminal switches to capturing the second video image by the second photographing device and displays the second video image on the screen, the terminal may perform deformation processing on the image of the second target object in the second video image; that is, the deformation processing is triggered right after the photographing devices are switched, so that the second target object is deformed on the second video image. In other implementations, the terminal may instead perform the deformation processing on the image of the second target object on the second video image when the virtual object is displayed at the end position of the second video image; that is, the deformation processing can be triggered when the virtual object moves to the end position.
  • An implementation of performing deformation processing on the image of the second target object on the second video image may include: acquiring a deformation processing configuration of the second target object, wherein the deformation processing configuration may include a deformation type; acquiring the key points to be deformed corresponding to the second target object; determining, according to the deformation type, the deformed positions corresponding to the key points to be deformed; and moving the key points to be deformed to the deformed positions, thereby obtaining the second video image in which the second target object is deformed. Displaying this deformed second video image presents the visual effect of the second target object deforming on the second video image.
  • In addition, the deformation processing configuration may also include the deformation degree corresponding to the deformation type; in that case, when determining the deformed positions of the key points to be deformed according to the deformation type, the deformed positions can be calculated from the deformation type together with its corresponding deformation degree.
  • The deformation type may be one or a combination of enlargement, reduction, translation, rotation, and dragging. Correspondingly, the deformation degree may include, for example, an enlargement/reduction factor, a translation distance, a rotation angle, a dragging distance, and the like. A sketch of this key-point computation is given below.
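  • The following sketch illustrates the key-point step under the configuration model just described (one deformation type plus its degree). It is an illustrative assumption, not the disclosure's implementation; in particular, applying the deformation about the centroid of the key points is a simplification chosen for the example.

    import math

    def deform_keypoints(keypoints, config):
        """Move (x, y) key points according to one deformation type and its
        degree, applied about the centroid of the key points."""
        cx = sum(x for x, _ in keypoints) / len(keypoints)
        cy = sum(y for _, y in keypoints) / len(keypoints)
        kind, degree = config["type"], config["degree"]
        moved = []
        for x, y in keypoints:
            if kind == "enlarge":        # degree: scale factor (>1 enlarges)
                x, y = cx + (x - cx) * degree, cy + (y - cy) * degree
            elif kind == "translate":    # degree: (dx, dy) in pixels
                x, y = x + degree[0], y + degree[1]
            elif kind == "rotate":       # degree: angle in radians
                dx, dy = x - cx, y - cy
                x = cx + dx * math.cos(degree) - dy * math.sin(degree)
                y = cy + dx * math.sin(degree) + dy * math.cos(degree)
            moved.append((x, y))
        return moved

    eye = [(100.0, 100.0), (120.0, 100.0), (110.0, 110.0)]
    print(deform_keypoints(eye, {"type": "enlarge", "degree": 1.5}))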
  • In some implementations, the deformation processing configuration may further include a deformation part; in that case, when acquiring the key points to be deformed corresponding to the second target object, the key points related to the deformation part of the second target object can be acquired as the key points to be deformed.
  • The deformation processing configuration can be set according to actual needs and may include one or more deformation parts, and one or more deformation types may be configured for each deformation part. If the deformation processing configuration includes multiple deformation parts and at least two deformation parts correspond to different deformation types, deformation processing corresponding to different deformation types can be performed on different deformation parts of the second target object, so that rich deformation effects can be achieved, according to actual needs, through the settings of the deformation processing configuration.
  • It should be noted that before being set, the deformation part may be a default part, which may be preset or user-defined; for example, the default part may be the face, eyes, nose, or lips of the second target object, which is not limited here.
  • In some implementations, the deformation processing configuration can be set according to the deformation effect to be presented and stored, in correspondence with the presentable visual effect, in a deformation database, wherein the deformation database may store the mapping relationships between one or more deformation processing configurations and the corresponding deformation effects, and may be stored locally on the terminal or on a server.
  • The deformation effect may be any of various expressions, such as a shy expression or an angry expression; it may also be an image deformation effect (for example, a face-stretching effect), or any other effect involving changes in the positions of key points, which is not limited here.
  • For example, for a deformation effect such as a shy expression, a large number of pictures of shy expressions can be learned to determine the positional relationships of the key points of a user's face when the user shows a shy expression, so as to determine the deformation type and deformation degree corresponding to each deformation part. The deformation processing configuration corresponding to the shy expression is thereby obtained, and a mapping relationship between the identifier of the shy expression and this deformation processing configuration is constructed and stored in the deformation database.
  • As one implementation, the user can select the desired deformation effect before or during shooting; after the corresponding selection operation is detected, the identifier of the deformation effect selected by the user can be obtained, and the corresponding deformation processing configuration can be looked up in the deformation database, thereby obtaining the deformation processing configuration of the second target object, as sketched below.
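  • A minimal sketch of that lookup is shown below. The effect identifiers and the configuration values are placeholders invented for the example; a real deformation database would be populated from learned configurations as described above.

    deformation_db = {
        "shy": {"part": "face", "type": "enlarge", "degree": 1.1},
        "face_stretch": {"part": "face", "type": "translate", "degree": (0, 30)},
    }

    def get_deformation_config(effect_id, default_part="face"):
        """Look up the deformation processing configuration for a selected effect."""
        config = dict(deformation_db[effect_id])
        config.setdefault("part", default_part)  # fall back to the default part
        return config

    print(get_deformation_config("shy"))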
  • In one example, taking the face as the deformation part, after the terminal switches to capturing the second video image by the second photographing device, deformation processing may be performed on the face of the second target object captured by the second photographing device.
  • Based on the example shown in FIG. 9, FIG. 10 shows a schematic interface diagram of the screen at time T5, after time T4 in FIG. 9. In the example of FIG. 10, upon detecting that the virtual object is displayed at the end position of the second video image, the terminal may perform deformation processing on the face of the second target object to present the corresponding deformation effect.
  • In addition, in some embodiments, the terminal may also detect the first target object in the first video image and trigger the switching of the photographing devices according to the first target object.
  • Specifically, FIG. 11 shows a schematic flowchart of an image processing method provided by yet another embodiment of the present disclosure. In this embodiment, the method may include:
  • S410: Capture the first video image by the first photographing device, and display the first video image on the screen.
  • S420: Detect the first target object in the first video image.
  • S430: When it is detected that the first target object performs a preset trigger action, switch to capturing the second video image by the second photographing device.
  • S440: Display the second video image on the screen.
  • For the implementation of steps S420 to S430, reference may be made to the description of the corresponding parts of step S220 in the foregoing embodiments; the implementations are largely the same. The difference is that, in one implementation of step S220, the first target object performing a preset trigger action is used to trigger the display of a virtual object on the first video image, whereas in this embodiment the first target object performing a preset trigger action serves as a preset switching condition for controlling the switching of the photographing devices. That is, through the trigger action of the first target object in the first video image, the first photographing device is switched to the second photographing device, thereby providing the user with a more flexible shooting manner and improving the operation experience.
  • In this embodiment, according to actual needs, a virtual object may be displayed on the first video image, and a virtual object may also be displayed on the second video image, which is not limited here; for the related implementations, reference may be made to the descriptions of the corresponding parts in the foregoing embodiments, which will not be repeated here.
  • Thus, with the image processing method provided by this embodiment, the terminal can detect the first target object in the first video image and, when it is detected that the first target object performs a preset trigger action, switch to capturing the second video image by the second photographing device and display the second video image on the screen.
  • Taking an eye-blinking action as the preset trigger action as an example, this realizes switching the photographing devices to capture and display a different video image as soon as it is detected that the user blinks, as in the short sketch below.
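  • In this variant the trigger action itself is the switching condition, with no virtual-object animation in between. The toy loop below illustrates only that control flow; the frame labels and the string-comparison stand-in for real blink detection are assumptions.

    frames = ["open", "open", "blink", "open"]   # stand-ins for analyzed frames
    device = "first"
    for f in frames:
        if device == "first" and f == "blink":   # preset trigger action detected
            device = "second"                    # switch photographing devices
        print(f, "->", device)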
  • Referring to FIG. 12, which is a block diagram of an image processing apparatus provided by an embodiment of the present disclosure, the image processing apparatus 1200 can be applied to a terminal that includes a first photographing device and a second photographing device, and may specifically include a video display module 1210 and a switching display module 1220, wherein:
  • the video display module 1210 is configured to capture the first video image by the first photographing device and display the first video image on the screen;
  • the switching display module 1220 is configured to, when it is detected that the display object in the first video image satisfies the preset switching condition, switch to capturing the second video image by the second photographing device and display the second video image on the screen. A structural sketch of these two modules follows.
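  • Purely as an illustration of how the two modules of FIG. 12 might cooperate, the sketch below mirrors the video display module 1210 and the switching display module 1220; the class and method names are invented for the example.

    class Camera:
        def __init__(self, name):
            self.name = name
        def capture(self):
            return f"frame:{self.name}"

    class VideoDisplayModule:                    # cf. module 1210
        def show(self, camera, screen):
            screen.append(camera.capture())

    class SwitchingDisplayModule:                # cf. module 1220
        def __init__(self, condition):
            self.condition = condition           # preset switching condition
        def maybe_switch(self, current, other, display_object):
            return other if self.condition(display_object) else current

    screen, first, second = [], Camera("first"), Camera("second")
    vdm = VideoDisplayModule()
    sdm = SwitchingDisplayModule(lambda obj: obj == "preset state reached")
    cam = first
    vdm.show(cam, screen)                                  # first video image
    cam = sdm.maybe_switch(cam, second, "preset state reached")
    vdm.show(cam, screen)                                  # second video image
    print(screen)  # ['frame:first', 'frame:second']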
  • the image processing apparatus 1200 further includes: a first virtual object display module, configured to display the virtual object on the first video image.
  • the switching display module 1220 may include: a first trigger switching sub-module for switching to capture the second video image by the second camera when the virtual object satisfies the preset state.
  • the image processing apparatus 1200 further includes: a first target object detection module, configured to detect the first target object in the first video image.
  • the first virtual object display module may include: a first virtual object display sub-module, configured to display the virtual object on the first video image when it is detected that the first target object in the first video image performs a preset trigger action.
  • the first virtual object display module may include: a first sequence frame acquisition submodule, a first sequence frame overlay submodule, and a first sequence frame playback submodule, wherein:
  • a first sequence frame acquisition submodule for acquiring a first video sequence frame including a virtual object
  • a first sequence frame superimposition sub-module for superimposing the first video sequence frame on the first video image
  • the first sequence frame playing submodule is used for playing the first video sequence frame to dynamically display the virtual object on the first video image.
  • the image processing apparatus 1200 further includes: a first position information acquisition module and a first motion trajectory determination module, wherein:
  • the first position information acquisition module is used to obtain the position information of the virtual object in each video frame in the first video sequence frame
  • the first motion trajectory determination module is used to determine the first motion trajectory of the virtual object in the first video image according to the position information
  • the first sequence frame playing submodule may include: a first sequence frame playing unit, configured to play the first video sequence frame, so as to dynamically display the virtual object on the first video image along the first motion track.
  • the virtual object satisfies a preset state, including: the virtual object is displayed at a specified position of the first video image.
  • the image processing apparatus 1200 further includes: a target object detection module for detecting the first target object in the first video image; in this case, the switching display module 1220 may include: a second trigger switching module for switching to capturing the second video image by the second photographing device when it is detected that the first target object performs a preset trigger action.
  • the image processing apparatus 1200 further includes: a second virtual object display module, configured to display the virtual object on the second video image.
  • the second virtual object display module includes: a second sequence frame acquisition submodule, a second sequence frame overlay submodule, and a second sequence frame playback submodule, wherein:
  • the second sequence frame acquisition submodule is used to acquire the second video sequence frame including the virtual object
  • the second sequence frame superimposing submodule is used to superimpose the second video sequence frame on the second video image
  • the second sequence frame playing submodule is configured to play the second video sequence frame to dynamically display the virtual object on the second video image.
  • the image processing apparatus 1200 further includes: a second position information acquisition module and a second motion trajectory determination module, wherein:
  • the second position information acquisition module is used to obtain the position information of the virtual object in each video frame in the second video sequence frame;
  • a second motion trajectory determination module configured to determine the second motion trajectory of the virtual object in the second video image according to the position information
  • the second sequence frame playing submodule may include: a second sequence frame playing unit, configured to play the second video sequence frame, so as to dynamically display the virtual object on the second video image along the second motion track.
  • The image processing apparatus in the embodiments of the present disclosure can execute the image processing methods provided in the embodiments of the present disclosure, and their implementation principles are similar. The actions performed by the modules of the image processing apparatus in the embodiments of the present disclosure correspond to the steps of the image processing methods in the respective embodiments; for a detailed functional description of each module of the image processing apparatus, reference may be made to the description of the corresponding image processing method shown above, which will not be repeated here.
  • FIG. 13 shows a structural block diagram of an electronic device 1300 suitable for implementing embodiments of the present disclosure.
  • the electronic devices in the embodiments of the present disclosure may include, but are not limited to, terminals such as computers and mobile phones.
  • the electronic device shown in FIG. 13 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 1300 includes a memory and a processor, where the memory is used to store a program for executing the methods described in the foregoing method embodiments, and the processor is configured to execute the program stored in the memory. The processor here may be referred to as the processing device 1301 below, and the memory may include at least one of a read-only memory (ROM) 1302, a random access memory (RAM) 1303, and a storage device 1308, as follows:
  • As shown in FIG. 13, the electronic device 1300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1301, which may execute various appropriate actions and processes according to a program stored in the read-only memory (ROM) 1302 or a program loaded from the storage device 1308 into the random access memory (RAM) 1303. The RAM 1303 also stores various programs and data necessary for the operation of the electronic device 1300.
  • the processing device 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304.
  • An input/output (I/O) interface 1305 is also connected to bus 1304 .
  • Generally, the following devices may be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 1307 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 1308 including, for example, magnetic tape, hard disk, etc.; and a communication device 1309. The communication device 1309 may allow the electronic device 1300 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 13 shows an electronic device 1300 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable storage medium, the computer program containing program code for performing the methods described in the various embodiments above.
  • In such embodiments, the computer program may be downloaded and installed from a network via the communication device 1309, or installed from the storage device 1308, or installed from the ROM 1302. When the computer program is executed by the processing device 1301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • It should be noted that the above-mentioned computer-readable storage medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on a computer-readable storage medium may be transmitted using any suitable medium, including but not limited to electrical wire, optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable storage medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • The above computer-readable storage medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to perform the following steps: capturing a first video image by the first photographing device, and displaying the first video image on the screen; when it is detected that the display object in the first video image satisfies the preset switching condition, switching to capturing a second video image by the second photographing device, and displaying the second video image on the screen.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • modules or units involved in the embodiments of the present disclosure may be implemented in software or hardware.
  • the name of the module or unit does not constitute a limitation of the unit itself under certain circumstances, for example, the video display module can also be described as "a module for displaying video images”.
  • The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that can be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chips (SOCs), complex programmable logic devices (CPLDs), and so on.
  • In the context of the present disclosure, a computer-readable storage medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • According to one or more embodiments of the present disclosure, an image processing method is provided, which is applied to a terminal including a first photographing device and a second photographing device. The method includes: capturing a first video image by the first photographing device, and displaying the first video image on a screen; and when it is detected that the display object in the first video image satisfies a preset switching condition, switching to capturing a second video image by the second photographing device, and displaying the second video image on the screen.
  • In an embodiment, the method further includes: displaying a virtual object on the first video image; and the switching, when it is detected that the display object in the first video image satisfies a preset switching condition, to capturing a second video image by the second photographing device includes: when the virtual object satisfies a preset state, switching to capturing the second video image by the second photographing device.
  • In an embodiment, the method further includes: detecting a first target object in the first video image; and the displaying a virtual object on the first video image includes: when it is detected that the first target object in the first video image performs a preset trigger action, displaying the virtual object on the first video image.
  • In an embodiment, the displaying a virtual object on the first video image includes: acquiring a first video sequence frame including the virtual object; superimposing the first video sequence frame on the first video image; and playing the first video sequence frame to dynamically display the virtual object on the first video image.
  • In an embodiment, the method further includes: acquiring position information of the virtual object in each video frame of the first video sequence frame; and determining a first motion track of the virtual object in the first video image according to the position information; wherein the dynamically displaying the virtual object on the first video image includes: dynamically displaying the virtual object on the first video image along the first motion track.
  • In an embodiment, the virtual object satisfying a preset state includes: the virtual object being displayed at a specified position of the first video image.
  • In an embodiment, the method further includes: detecting a first target object in the first video image; and the switching, when it is detected that the display object in the first video image satisfies a preset switching condition, to capturing a second video image by the second photographing device includes: when it is detected that the first target object performs a preset trigger action, switching to capturing the second video image by the second photographing device.
  • In an embodiment, the method further includes: displaying the virtual object on the second video image.
  • In an embodiment, the displaying the virtual object on the second video image includes: acquiring a second video sequence frame including the virtual object; superimposing the second video sequence frame on the second video image; and playing the second video sequence frame to dynamically display the virtual object on the second video image.
  • In an embodiment, the method further includes: acquiring position information of the virtual object in each video frame of the second video sequence frame; and determining a second motion track of the virtual object in the second video image according to the position information; wherein the dynamically displaying the virtual object on the second video image includes: dynamically displaying the virtual object on the second video image along the second motion track.
  • According to one or more embodiments of the present disclosure, an image processing apparatus is provided, which can be applied to a terminal, where the terminal includes a first photographing device and a second photographing device respectively disposed on different sides. The apparatus may include: a video display module and a switching display module, wherein: the video display module is configured to capture a first video image by the first photographing device and display the first video image on a screen; and the switching display module is configured to, when it is detected that the display object in the first video image satisfies a preset switching condition, switch to capturing a second video image by the second photographing device and display the second video image on the screen.
  • the image processing apparatus further includes: a first virtual object display module, configured to display a virtual object on the first video image.
  • the switching display module may include: a first trigger switching sub-module, configured to switch to capturing the second video image by the second photographing device when the virtual object satisfies a preset state.
  • the image processing apparatus further includes: a first target object detection module, configured to detect the first target object in the first video image.
  • In an embodiment, the first virtual object display module may include: a first virtual object display sub-module, configured to display the virtual object on the first video image when it is detected that the first target object in the first video image performs a preset trigger action.
  • In an embodiment, the first virtual object display module may include: a first sequence frame acquisition submodule, a first sequence frame superimposition submodule, and a first sequence frame playing submodule, wherein: the first sequence frame acquisition submodule is used for acquiring a first video sequence frame including the virtual object; the first sequence frame superimposition submodule is used for superimposing the first video sequence frame on the first video image; and the first sequence frame playing submodule is used for playing the first video sequence frame to dynamically display the virtual object on the first video image.
  • In an embodiment, the image processing apparatus further includes: a first position information acquisition module and a first motion trajectory determination module, wherein: the first position information acquisition module is used for acquiring position information of the virtual object in each video frame of the first video sequence frame; and the first motion trajectory determination module is used for determining a first motion trajectory of the virtual object in the first video image according to the position information. In this case, the first sequence frame playing submodule may include: a first sequence frame playing unit, configured to play the first video sequence frame so as to dynamically display the virtual object on the first video image along the first motion trajectory.
  • the virtual object satisfies a preset state, including: the virtual object is displayed at a specified position of the first video image.
  • In an embodiment, the image processing apparatus further includes: a target object detection module for detecting the first target object in the first video image. In this case, the switching display module may include: a second trigger switching module for switching to capturing a second video image by the second photographing device when it is detected that the first target object performs a preset trigger action.
  • the image processing apparatus further includes: a second virtual object display module, configured to display the virtual object on the second video image.
  • In an embodiment, the second virtual object display module includes: a second sequence frame acquisition submodule, a second sequence frame superimposition submodule, and a second sequence frame playing submodule, wherein: the second sequence frame acquisition submodule is used for acquiring a second video sequence frame including the virtual object; the second sequence frame superimposition submodule is used for superimposing the second video sequence frame on the second video image; and the second sequence frame playing submodule is used for playing the second video sequence frame to dynamically display the virtual object on the second video image.
  • In an embodiment, the image processing apparatus further includes: a second position information acquisition module and a second motion trajectory determination module, wherein: the second position information acquisition module is used for acquiring position information of the virtual object in each video frame of the second video sequence frame; and the second motion trajectory determination module is used for determining a second motion trajectory of the virtual object in the second video image according to the position information. In this case, the second sequence frame playing submodule may include: a second sequence frame playing unit, configured to play the second video sequence frame so as to dynamically display the virtual object on the second video image along the second motion trajectory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of image processing. The method is applied to a terminal that includes a first photographing device and a second photographing device, and includes: capturing a first video image by the first photographing device, and displaying the first video image on a screen; and when it is detected that a display object in the first video image satisfies a preset switching condition, switching to capturing a second video image by the second photographing device, and displaying the second video image on the screen. Implementations of the present disclosure can realize automatic switching of photographing devices based on a display object in the image captured by a photographing device.

Description

图像处理方法、装置、电子设备及计算机可读存储介质
相关申请的交叉引用
本申请要求于2020年09月30日提交的、申请号为202011065575.3、发明名称为“图像处理方法、装置、电子设备及计算机可读存储介质”的中国专利申请的优先权,该申请的全文通过引用结合在本申请中。
技术领域
本公开涉及图像处理技术领域,具体而言,本公开涉及一种图像处理方法、装置、电子设备及计算机可读存储介质。
背景技术
随着移动互联网的发展以及移动终端的普及,越来越多的用户开始自发制作内容,并上传社交平台与他人分享。通常,内容制作者利用移动终端上的拍摄设备拍摄自己喜欢的图像、视频并上传到社交平台上分享给其他用户。然而,现有的拍摄过程,用户或者使用前置摄像头自拍或者使用后置摄像头拍摄所见图像,拍摄效果和内容显得单一化。
发明内容
提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
第一方面,本公开实施例提供了一种图像处理方法,应用于终端,所述终端包括第一拍摄装置与第二拍摄装置,该方法包括:通过所述第一拍摄装置采集第一视频图像,并在屏幕中显示所述第一视频图像;当检测到所述第一视频图像中的显示对象满足预设切换条件时,切换为通过所述第二拍摄装置采集第二视频图像,并在所述屏幕中显示所述第二视频图像。
第二方面,本公开实施例提供了一种图像处理装置,应用于终端,所述终端包括第一拍摄装置与第二拍摄装置,该装置包括:视频显示模块,用于通过所述第一拍摄装置采集第一视频图像,在屏幕中显示所述第一视频图像;切换显示模块,用于当检测到所述第一视频图像中的显示对象满足预设切换条件,切换为通过所述第二拍摄装置采集第二视频图像,并在所述屏幕中显示所述第二视频图像。
第三方面,本公开实施例提供了一种电子设备,所述电子设备包括:一个或多个处理器;存储器,所述存储器存储有计算机程序,当所述计算机程序配置被所述一个或多个处理器执行时,使所述电子设备执行如上述第一方面所述的方法。
第四方面,本公开实施例提供了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时,使所述处理器执行如上述第一方面所述的方法。
本公开实施例提供的一种图像处理方法、装置、电子设备及计算机可读存储介质,应用于具有两个拍摄装置的终端,通过第一拍摄装置采集第一视频图像,并在终端的屏幕中显示第一视频图像,然后当检测到第一视频图像中的显示对象满足预设切换条件时,切换为通过第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。由此,本公开实施例可在拍摄时基于第一视频图像中的显示对象的状态来实现终端上的不同拍摄装置的自动切换,将第一拍摄装置切换为终端的第二拍摄装置,并使得终端的屏幕从显示第一拍摄装置采集的第一视频图像,切换为显示第二拍摄装置采集的第二视频图像,从而为用户提供更多拍摄可能性和乐趣,使得用户在拍摄过程中可基于拍摄装置的自动切换,拍摄更多有创意的作品,丰富了拍摄玩法,提升用户的拍摄体验。
附图说明
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。
图1示出了本公开一个实施例提供的图像处理方法的流程示意图。
图2示出了本公开另一个实施例提供的图像处理方法的流程示意图。
图3示出了本公开一个示例性实施例提供的一种界面示意图。
图4示出了本公开一个示例性实施例提供的另一种界面示意图。
图5示出了本公开一个示例性实施例提供的图2中步骤S220的详细流程示意图。
图6示出了本公开一个示例性实施例提供的又一种界面示意图。
图7示出了本公开一个示例性实施例提供的在第二视频图像上显示虚拟对象的方法的流程示意图。
图8示出了本公开一个示例性实施例提供的再一种界面示意图。
图9示出了本公开一个示例性实施例提供的四个不同时刻下的屏幕示意图。
图10示出了本公开一个示例性实施例提供的还一种界面示意图。
图11示出了本公开又一个实施例提供的图像处理方法的流程示意图。
图12示出了本公开一个实施例提供的图像处理装置的模块框图。
图13示出了本公开实施例提供的电子设备的结构框图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为 限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
应当理解,本公开的方法实施方式中记载的各个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。
需要注意,本公开中提及的“第一”、“第二”等概念仅用于对装置、模块或单元进行区分,并非用于限定这些装置、模块或单元一定为不同的装置、模块或单元,也并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。
需要注意,本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
下面以具体的实施例对本公开的技术方案以及本公开的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本公开的实施例进行描述。
下面将通过具体实施例对本公开实施例提供的图像处理方法、装置、电子设备及计算机可读存储介质进行详细说明。
本公开实施例提高的图像处理方法可适用于终端,终端包括第一拍摄装置和第二拍摄装置,第一拍摄装置和第二拍摄装置可以是固定于终端的、也可以是可旋转的,第一拍摄装置和第二拍摄装置可以设置与终端的不同侧;其中,第一拍摄装置和第二拍摄装置可以包括能够采集图像的任意装置如摄像头等,在此不作限定。其中,终端可以是任意设置有至少两个拍摄装置的智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio LayerⅢ,动态影像压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio LayerⅣ,动态影像压缩标准音频层面4)播放器、可穿戴设备、车载设备、增强现实(Augmented Reality,AR)/虚拟现实(Virtual Reality,VR)设备、笔记本电脑、超级移动个人计算机(Ultra-Mobile Personal Computer,UMPC)、上网本、个人数字助理(Personal digital assistant,PDA)或专门的照相机(例如单反相机、卡片式相机)等。本公开实施例对终端的具体类型不作限定。
其中,终端可运行有客户端应用,该客户端应用可以包括拍摄装置对应的客户端应用软件、也可以包括具有拍摄功能的其他客户端应用软件,本公开不对此进行限制。
请参阅图1,图1示出了本公开一个实施例提供的图像处理方法的流程示意图,可应用于上述设置有多个(例如,两个)拍摄装置的终端。下面针对图1所示的流程进行详细的阐述,该图像处理方法可以包括以下步骤:
S110:通过第一拍摄装置采集第一视频图像,并在屏幕中显示第一视频图像。
终端至少包括两个设置拍摄装置,记为第一拍摄装置和第二拍摄装置,第一拍摄装置和第二拍摄装置可以设置在终端的不同侧。在一些实施方式中,终端包括上下左右四个边框,在用户面对终端的屏幕握持终端时,位于屏幕左侧的边框记为左边框,位于屏幕右侧的边框记为右边框,位于屏幕上侧的边框记为上边框,位于屏幕下侧的边框记为下边框,则第一拍摄装置、第二拍摄装置中的其中一个可设置在终端屏幕的同侧且位于上边框、下边框、左边框或右边框的任意一个边框上,而另一个可设置在终端后壳的同侧且位于上边框、下边框、左边框或右边框的任意一个边框上。
例如,第一拍摄装置、第二拍摄装置中的其中一个可设置在终端屏幕的同侧,即其中一个摄像头为前置摄像头;另一个可设置在终端后壳的同侧,即另一个摄像头为后置摄像头。本公开不对第一拍摄装置和第二拍摄装置的具体设置位置进行限定。
其中,终端的屏幕可以显示第一拍摄装置实时采集的图像,即第一视频图像。其中,第一视频图像可以是通过第一拍摄装置采集的原始图像,也可以是在原始图像基础上进行调整后的图像,其中,调整操作可包括针对对比度、亮度、对焦、光圈等参数的调整操作,还可以包括对第一视频图像添加滤镜、贴纸、特效等操作,本公开在此不作限定。
在一些实施方式中,第一拍摄装置可以是客户端应用中的拍摄功能被启动时默认启动的摄像头,例如,终端获取到对客户端应用的拍摄功能的启动指令时,可启动拍摄功能,拍摄功能在启动时可以默认启动第一拍摄装置,并在屏幕中显示第一拍摄装置实时采集的第一视频图像。其中,第一拍摄装置可以是前置摄像头,也可以是后置摄像头,在此不作限定。
在另一些实施方式中,第一拍摄装置也可以是用户选择启动的拍摄装置,例如,若拍摄功能被启动时默认启动的是第二拍摄装置,则用户可以通过点击屏幕上的例如翻转拍摄装置的控件,将第二拍摄装置切换为第一拍摄装置,以使用第一拍摄装置采集图像,并在屏幕上显示第一拍摄装置实时采集的第一视频图像。
S120:当检测到第一视频图像中的显示对象满足预设切换条件时,自动切换为通过第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。
其中,第二视频图像可以包括通过第二拍摄装置采集的原始图像,也可以包括在原始图像基础上进行调整后的图像,其中,调整操作可包括针对对比度、亮度、对焦、光圈等参数的调整操作,还可以包括对第二视频图像添加滤镜、贴纸、特效等操作,本公开在此不作限定。
在一些实施方式中,终端检测到第一视频图像中的显示对象满足预设切换条件时,可生成拍摄装置切换控制指令,并基于该拍摄装置切换控制指令,调用用于控制拍摄装置的应用程序接口(Application Programming Interface,API),将当前拍摄第一视频图像的第一拍摄装置的状态,从启动状态切换为关闭或休眠状态,启动第二拍摄装置,并将第二拍摄装置的状态从关闭或休眠状态切换为启动状态,从而使能第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。
其中,第一视频图像中的显示对象可包括第一视频图像的中的目标对象,也可以包括第一视频图像上叠加的其他对象,本实施例对此不作限定。终端可在采集并显示第一视频图像的同时,对第一视频图像中的显示对象进行检测,以判断显示对象是否 满足预设切换条件,并在显示对象满足预设切换条件时,切换为通过第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。由此可在图像拍摄的过程中实现拍摄装置的自动切换,而无需用户手动操作,并且随着拍摄装置的切换还可在屏幕上切换显示不同拍摄装置采集的视频图像,并将不同的拍摄装置分别采集到的第一视频图像和第二视频图像录制合成为一段视频图像,从而使得用户在拍摄过程中可基于拍摄装置的自动切换,拍摄更多有创意的作品,丰富了拍摄玩法,提升用户的拍摄体验。
需要说明的是,在一些实施例中,终端可以在拍摄未开始仅预览时执行本公开实施例提供的图像处理方法,以使得用户可在拍摄前的预览阶段就能够预览到最终实现的效果,即整个效果可以实时预览,从而可在拍摄前通过让用户预览采用本公开实施例的图像处理方法拍摄可得到的效果,激发用户使用该图像处理方法进行拍摄的兴趣,促进用户制作出更多的拍摄作品。另外,在一些实施例中,终端也可以在开始拍摄时才执行本公开实施例提供的图像处理方法,本公开实施例对此不作限定。
本实施例提供的图像处理方法,通过第一拍摄装置采集第一视频图像,并在屏幕中显示第一视频图像,然后当检测到第一视频图像中的显示对象满足预设切换条件时,切换为通过第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。由此,本实施例可在拍摄时基于第一视频图像中的显示对象来实现拍摄装置的自动切换,将第一拍摄装置切换为终端的第二拍摄装置,并将屏幕的显示图像从第一拍摄装置采集的第一视频图像切换为第二拍摄装置采集的第二视频图像,使得用户在拍摄过程中可基于拍摄装置的自动切换,拍摄更多有趣、有创意的作品,丰富了拍摄玩法,提升用户的拍摄体验。
在一些实施例中,在第一视频图像上还可显示虚拟对象,此时终端可根据虚拟对象来触发拍摄装置的切换。具体地,请参阅图2,其示出了本公开另一个实施例提供的图像处理方法的流程示意图,于本实施例中,该方法可包括:
S210:通过第一拍摄装置采集第一视频图像,并在屏幕中显示第一视频图像。
S220:在第一视频图像上显示虚拟对象。
其中,虚拟对象可以包括虚拟人物、动物、植物、物体等任意一种,其中物体可以包括爱心、星星等任意物体,本实施例对此不作限定。可选地,虚拟对象可以包括基于动画技术创建的三维立体模型;虚拟对象还可以包括二维虚拟模型。每个虚拟对象可具有自身的形状和尺寸。
在一些实施例中,终端可在检测到虚拟对象展示请求时,通过解析该请求,确定待显示的虚拟对象,并在第一视频图像上显示该虚拟对象。
其中,虚拟对象展示请求可由作用于终端屏幕的触控操作手动触发,也可基于图像识别自动触发。
在一些实施方式中,虚拟对象展示请求可由触控操作手动触发,例如,终端的屏幕上可显示有用于请求展示虚拟对象的虚拟对象请求展示控件,当检测到作用于该控件的触控操作时,可确定检测到相应的虚拟对象展示请求。其中,不同控件可对应不同的虚拟对象,触控不同的控件可以触发不同的虚拟对象展示请求,不同的虚拟对象展示请求中携带有不同的虚拟对象标识。终端在检测到虚拟对象展示请求时,通过解析可得到对应的虚拟对象标识,并进一步确定虚拟对象标识对应的虚拟对象,从而在第一视频图像中展示对应的虚拟对象。
在一种实施方式中,屏幕中可显示有特效选择控件,其中,特效选择控件用于触发显示特效选择页面,而特效选择页面可显示一个或多个特效选择控件,不同特效选择控件可对应实现相同或不同的功能。当终端检测到作用于对应于本公开实施例的图像处理功能的特效选择控件的触发操作时,即同时触发虚拟对象展示请求,从而可直接执行依据本公开实施例的图像处理方法,并在第一视频中叠加显示虚拟对象。在一个实施例中,当终端检测到对应于本公开实施例的图像处理功能的特效选择控件的触发操作时,还可触发显示相应的虚拟对象选择页面,虚拟对象选择页面可显示有至少一个虚拟对象对应的虚拟对象请求展示控件。由此,终端根据用户触控的特效选择控件,可确定所需实现的功能,再根据被触控的虚拟对象请求展示控件,可确定所需展示的虚拟对象,从而能够实现多个虚拟对象的选择和展示。通过检测到特定特效选择控件被触发时可执行本实施例提供的图像处理方法,实现基于自动切换拍摄装置的拍摄功能。
在一个示例中,当终端检测到对一特效选择控件的触发操作,可在屏幕上显示相应的虚拟对象选择页面,此时屏幕可如图3所示,图3示出了本公开一个示例性实施例提供的一种界面示意图。屏幕包括两个显示区域:用于显示拍摄画面310的显示区域与用于显示虚拟对象选择页面321的另一显示区域,虚拟对象选择页面321上显示有多个虚拟对象(如虚拟对象A至O)对应的虚拟对象请求展示控件3211。其中,拍摄画面310可以显示第一拍摄装置采集的第一视频图像。
在一个实施方式中,虚拟对象展示请求也可基于图像识别自动触发。终端可检测第一视频图像中的第一目标对象,并根据检测到的第一目标对象的触发动作,在第一视频图像上显示虚拟对象。则步骤S220的具体实施方式可包括:当检测到第一视频图像中的第一目标对象执行预设的触发动作时,在第一视频图像上显示虚拟对象。
其中,第一目标对象可以为第一视频图像中的目标人物,对应的预设的触发动作可包括预设身体姿态、手势、表情、肢体动作等至少一个,即第一目标对象执行预设的触发动作可包括以下至少一项:第一目标对象处于预设身体姿态,如双手叉腰等;第一目标对象执行预设触发手势,如比“ok”手势、双手合十、比心等;第一目标对象执行预设表情,如微笑、大笑等;第一目标对象执行预设动作,如眨眼、挥手、嘟嘴等。预设的触发动作可根据实际需要确定,可以是程序预设的,也可以是用户自定义的,在此不作限定。由此,通过检测第一视频图像中的目标对象,并在检测到第一目标对象执行预设的触发动作时,可在第一视频图像上显示虚拟对象,由此可自动触发生成显示虚拟对象,无需用户手动操作,还可丰富拍摄的趣味性,提升用户的拍摄体验。
此外,第一目标对象还可以包括动物等可执行预设的触发动作的对象,在此不作限定。
作为一种实施方式,第一目标对象可以是预设对象,终端可预先存储该预设对象的预设图像,在检测到第一视频图像中的目标物体时,可将检测到的目标物体与预设图像进行匹配,若匹配成功,可判定检测到第一目标对象,再进一步检测第一目标对象是否执行预设的触发动作并进行后续操作。由此,通过仅检测预设对象的触发动作来显示虚拟对象,一方面可降低计算资源的耗费,另一方面也可避免因同时检测到多个对象均执行了预设的触发动作可能导致的虚拟对象显示混乱,提升系统稳定性和用户的拍摄体验。
此外,作为另一种实施方式,第一目标对象也可以是非预设的对象,即任意出现 于第一拍摄装置的拍摄范围内的对象,均可作为第一目标对象,通过对其检测是否执行预设的触发动作来触发显示虚拟对象。
在一些实施方式中,终端的屏幕上还可显示对应的提示信息,用于提示用户所触发控件的功能和/或如何触发该功能。其中,提示信息可以包括图像、文字中任意一种或多种形式的信息,还可包括语音形式的信息,本实施例对此不作限定。
在一个示例中,以虚拟对象是爱心为例,终端可以在屏幕上显示文字类型的提示信息,例如,如图4所示,终端可在屏幕上居中显示提示信息330“眨眼发射爱心~送给接收人”,提示用户可以通过眨眼来触发爱心显示并将爱心传送给接收人(例如,用户对面的其他人)。
在一些实施方式中,虚拟对象可以在第一视频图像上动态地显示。例如,虚拟对象可以在第一视频图像上沿着第一运动轨迹动态显示。通过播放第一视频序列帧,虚拟对象可以在第一视频图像上沿着第一运动轨迹动态地显示。
作为一种实施方式,叠加至第一视频图像上的每个视频帧图像中的虚拟对象是可以是相同的,则在第一视频图像上显示的虚拟对象本身不变,只是虚拟对象的显示位置随第一运动轨迹的变化而变化。
作为另一种实施方式,叠加至第一视频图像上的每个视频帧图像中的虚拟对象也可以是不同的,即可按虚拟对象变化的指定顺序将包括虚拟对象的每个视频帧图像叠加到相应的第一视频图像上。则在第一视频图像上显示虚拟对象时,不仅虚拟对象的显示位置随第一运动轨迹的变化而变化,而且包括虚拟对象的本身也在变化,图像本身的变化可包括尺寸变化(例如,由大变小;由小变大)、显示角度变化、颜色变化(例如,颜色渐变)、风格变化(例如,画风从卡通风格变为写实风格)等,在此不作限定。由此,可使得虚拟对象的显示效果更丰富生动,从而提高视频拍摄质量和视频的趣味性。例如,若需要实现虚拟对象在移动过程中由远及近的移动效果,可将虚拟对象对应的多张视频帧图像按虚拟对象的尺寸由小到大的指定顺序叠加到第一视频图像上。
在一个实施方式中,在第一视频图像上动态显示虚拟对象,可以基于预先配置好的包括虚拟对象的第一视频序列帧实现。具体地,请参阅图5,其示出了本公开一个示例性实施例提供的图2中步骤S220的详细流程示意图,步骤S220可包括:
S221:获取包括虚拟对象的第一视频序列帧。
S222:将第一视频序列帧叠加至第一视频图像上。
S223:播放第一视频序列帧,以在第一视频图像上动态显示虚拟对象。
其中,第一视频序列帧可存储于终端本地,终端可在本地获取第一视频序列帧;另外,第一视频序列帧还可存储于服务器,终端可从服务器获取第一视频序列帧,本实施例对此不作限定。
在一种实施方式中,终端在检测到虚拟对象展示请求时,可获取包括虚拟对象的第一视频序列帧,将第一视频序列帧叠加显示至第一视频图像上,并播放第一视频序列帧,从而可在第一视频图像上动态显示虚拟对象。
在一些实施方式中,通过播放第一视频序列帧可以使得虚拟对象从屏幕中的起始位置处移动至屏幕边缘,也可以从屏幕边缘移动至屏幕中心;还可以使得虚拟对象保 持不动。此外,虚拟对象的尺寸还可以在第一视频图像中动态变化,例如,虚拟对象的尺寸可由小变大、也可由大变小,还可由大变小再变大等等,在此不作限定。
在一些实施方式中,终端可获取虚拟对象在第一视频序列帧中的每一视频帧中的位置信息,并根据位置信息确定虚拟对象在第一视频图像中的第一运动轨迹,由此,在第一视频图像上动态显示虚拟对象的具体实施方式可包括:沿着该第一运动轨迹在第一视频图像上动态显示虚拟对象。
其中,位置信息可以是虚拟对象在视频序列帧中的每个视频帧中的坐标,例如,可以将视频帧中的一点作为坐标原点,以像素为单位,确定虚拟对象在视频帧中的坐标,即位置信息。通过将第一视频序列帧的每一帧依次叠加至第一视频图像的相应帧上,可使得虚拟对象在第一拍摄装置采集的第一视频图像上沿着第一运动轨迹运动。
其中,第一视频序列帧中虚拟对象的第一运动轨迹可以根据实际需要进行预先设定,从而使得虚拟对象在第一视频序列帧的不同视频帧中的位置信息不完全相同,通过播放第一视频序列帧,可呈现虚拟对象在第一视频图像上移动的动态显示效果。作为一种实施方式,位置信息可按所需呈现的运动轨迹进行预先设定,例如,若所需的运动轨迹是从图像中间位置往图像边缘运动,则可从内往外依次设置虚拟对象在每一视频帧中的位置信息。需要说明的是,图像边缘可以是图像中特定对象的轮廓,也可以是图像画布的边界,并特定对象可以是人、动物等任意生物,也可以是非生物如雕塑、衣服、景物、建筑等,本公开在此不作限定。
在一个实施例中,还可以根据用户的输入实时确定虚拟对象的第一运动轨迹。例如,用户在触发虚拟对象展示请求时,可输入所需的运动轨迹,使得虚拟对象按用户所输入的运动轨迹在第一视频图像上动态显示。其中,作为一种实施方式,运动轨迹可以通过检测用户作用于屏幕上显示的至少一个可选的运动轨迹的触控操作确定,例如,终端检测到作用于虚拟对象展示控件的触控操作时,终端可显示请求页面,请求页面上可显示至少一个可选的运动轨迹标识,根据用户选中的运动轨迹标识,可生成对应的虚拟对象展示请求,使得虚拟对象展示请求可携带用户选中的运动轨迹标识,则终端根据虚拟对象展示请求,根据用户选中的运动轨迹标识确定对应的运动轨迹。作为另一种实施方式,运动轨迹也可以基于用户隔空手势划出的轨迹确定。作为又一种实施方式,运动轨迹还可以基于用户在屏幕上滑动的滑动轨迹确定,本实施例对运动轨迹的确定方式不作限定。
在一些实施方式中,虚拟对象可以从第一视频图像上指定的初始位置开始显示,通过播放第一视频序列帧,可使得虚拟对象从第一视频图像上指定的初始位置开始沿着第一运动轨迹进行运动。具体地,作为一种实施方式,终端在检测到第一目标对象执行预设的触发动作时,还可以确定触发动作的发生位置,并将该触发动作的发生位置确定为指定的初始位置,然后终端将虚拟对象在第一视频序列帧的第一个视频帧中的图像位置与该初始位置对应显示(例如,叠加显示),并基于二者的对应关系,依次确定虚拟对象在第一视频序列帧中的每一视频帧中的位置信息,以确定对应的第一运动轨迹,从而使得在第一视频图像上显示虚拟对象时,虚拟对象可叠加显示于第一目标对象执行预设的触发动作的位置上,并从该位置开始沿着第一运动轨迹在第一视频图像上动态显示。
例如,以预设的触发动作为眨眼动作为例,则终端检测到用户眨眼时,可从眨眼处开始显示虚拟对象,并沿着第一运动轨迹在第一视频图像上动态显示虚拟对象。
在一个示例中,请参阅图6,其示出了本公开一个示例性实施例提供的又一种界面示意图,如图6所示,屏幕上的拍摄画面310对应第一拍摄装置所采集的第一视频图像,终端检测到第一视频图像中的第一目标对象执行眨眼动作时,将虚拟对象在第一视频序列帧中的第一个视频帧中的图像位置与眨眼动作的发生位置311对应显示,从而,可以在眨眼动作的发生位置311对应显示虚拟对象,相应地,虚拟对象叠加显示于第一目标对象执行眨眼动作的发生位置311上,以使得虚拟对象从发生位置311处开始,沿着第一运动轨迹在第一视频图像上动态显示。
返回至图2,在步骤S230中,当虚拟对象满足预设状态时,切换为通过第二拍摄装置采集第二视频图像。
在一些实施方式中,虚拟对象满足预设状态,包括以下至少一项:第一视频序列帧播放完毕;虚拟对象显示于第一视频图像的指定位置处;虚拟对象的参数符合预设参数。
作为一种实施方式,当第一视频序列帧播放完毕时,可判定虚拟对象满足预设状态,由此可在动态显示完虚拟对象后,自动切换为通过第二拍摄装置采集第二视频图像,并呈现出一种由虚拟对象的运动触发拍摄画面切换的效果。
作为另一种实施方式,终端也可检测虚拟对象在第一视频图像的显示位置,当检测到虚拟对象显示于第一视频图像的指定位置处时,可判定虚拟对象满足预设状态。其中,指定位置可以根据实际需要进行设定,例如,若需要呈现虚拟对象移动到图像边缘位置L的效果,可将该位置L设置为指定位置,当虚拟对象沿第一运动轨迹在第一视频图像上动态显示,且移动到位置L时,切换为通过第二拍摄装置采集第二视频图像,从而实现虚拟对象移动到图像边缘即触发拍摄装置切换的效果。
当然,指定位置可预先预设,也可由用户自定义,例如,在用户在触发虚拟对象展示请求时,可设置指定位置,进一步地,该指定位置可在用户输入所需的运动轨迹时,将该运动轨迹的终止位置确定为指定位置。另外,指定位置还可基于对第一视频图像进行图像识别确定,例如,可预先设置终止对象,则可在检测虚拟对象移动到第一视频图像上的终止对象所在的图像区域时,可判定虚拟对象满足预设状态。其中,终止对象用于确定虚拟对象在第一视频图像上移动的终止位置,终止对象可根据实际需要进行设置,可以包括第一目标对象的指定身体部位、第一视频图像中的指定物体等,在此不作限定。以终止对象为第一目标对象的手指时,则播放第一视频序列帧使得虚拟对象从屏幕中的起始位置处移动至第一目标对象的手指处,并切换为通过第二拍摄装置采集第二视频图像,从而实现虚拟对象移动到指定位置即触发拍摄装置切换的效果。
作为又一种实施方式,终端也可检测虚拟对象的参数,当检测到虚拟对象的参数符合预定参数时,可判定虚拟对象满足预设状态。虚拟对象的参数可以包括形状、尺寸、显示角度、样式等等。例如,虚拟对象可以在第一视频图像中呈现形状的动态变化(例如,由小变大),当虚拟对象达到预定形状时,确定虚拟对象满足预设状态。又例如,虚拟对象可以在第一视频图像中呈现形状的动态变化,当虚拟对象达到预定形状时,确定虚拟对象满足预设状态。本公开不对此进行限制。
在另一些实施方式中,终端可以通过检测虚拟对象在第一视频图像上的移动距离、移动时间等来确定虚拟对象是否满足预设状态。
作为一种实施方式,终端可以计算虚拟对象在第一视频图像上从起始位置开始移 动的移动距离,移动距离可以像素为单位计算,例如,若虚拟对象在第一视频图像上移动的移动距离达到预定距离,可判定虚拟对象满足预设状态,预定距离可以包括预定数量个像素,例如预定数量可以为30个、60个、100个像素,在此不作限定。
作为另一种实施方式,终端还可以根据虚拟对象的移动时间来判断虚拟对象是否满足预设状态,移动时间可根据第一视频序列帧的帧数确定,例如,终端可将虚拟对象第一次被叠加显示在第一视频图像上的帧记为第一帧,当第n次将虚拟对象叠加显示在第一视频图像上时,可判定第一视频图像的显示对象满足预设状态,其中,n可以为大于1的任意正整数,由此,可在第一视频图像上叠加显示第n帧虚拟对象后,控制拍摄装置切换。
可以理解的是,可触发拍摄装置切换的预设状态,并不限于上述几种实施方式,本实施例对此并不作限定,但考虑篇幅原因不再穷举。
返回附图2,步骤S240:在屏幕中显示第二视频图像。
当虚拟对象满足预设状态时,终端自动切换第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。
在一些实施例中,终端还可以在第二视频图像上显示虚拟对象。其中在第二视频图像上显示的虚拟对象与在第一视频图像上显示的虚拟对象对应,两个虚拟对象可以相同,也可以不同,本实施例对此不作限定。
在一个实施例中,可以根据用户触发的特效选择控件确定是否在第二视频图像中显示虚拟对象,即由相应的特效选择控件所对应的功能决定。例如,若特效选择控件对应的是虚拟对象的传递功能(其中,传递功能可实现的效果为先在第一视频图像上叠加显示虚拟对象,然后在切换为通过第二拍摄装置采集第二视频图像时,在第二视频图像上继续叠加显示虚拟对象),则相应地,在第二视频图像上继续叠加显示虚拟对象,以呈现虚拟对象在第一视频图像和第二视频图像中的连续传递效果,增加了视频的趣味性和丰富性。
在一个示例性的实施方式中,当终端检测到作用于对应传递功能的特效选择控件时,在第一拍摄装置采集到的第一视频图像上叠加显示虚拟对象,在切换为通过第二拍摄装置采集第二视频图像时,继续在第二视频图像上叠加显示虚拟对象,从而可在视觉上实现虚拟对象的连续展示,即虚拟对象在第一拍摄装置拍摄的第一视频图像中移动到第二拍摄装置拍摄的第二视频图像中,从而呈现出虚拟对象由第一视频图像中的目标对象传递至第二视频图像中的另一目标对象的效果。由此本实施例提供了一种基于拍摄装置的自动切换的创新的拍摄互动玩法,提高作品拍摄效率、质量和趣味性。
当然,前述仅为一种功能示例,本实施例可根据实际需要设计多种功能,并相应配置特效选择控件,使得用户可根据所需实现的功能或效果,触发相应的特效选择控件以实现相应的功能,即本实施例并不限于实现上述一种功能。
在一些实施例方式中,在第二视频图像上显示虚拟对象时,虚拟对象可以在第二视频图像中动态显示,例如,可以基于预先设计好的虚拟对象的第二视频序列帧实现虚拟对象在第二视频图像中的动态显示。具体地,请参阅图7,其示出了本公开一个示例性实施例提供的在第二视频图像上显示虚拟对象的方法的流程示意图,该方法可包括:
S310:获取包括虚拟对象的第二视频序列帧。
S320:将第二视频序列帧叠加至第二视频图像上。
S330:播放第二视频序列帧,以在第二视频图像上动态显示虚拟对象。
需要说明的是,步骤S310至S330的实施方式与步骤S221至S223的实施方式类似,此处未详细描述的部分可参考步骤S221至S223,在此不再赘述。
在一些实施方式中,通过播放第二视频序列帧可以使得虚拟对象从屏幕中的任意位置移动至终点位置,如虚拟对象从屏幕边缘移动至屏幕区域内部,也可以使得虚拟对象保持不动。此外,虚拟对象的尺寸还可以在第二视频图像中动态变化,例如,虚拟对象的尺寸可由大变小,以在通过播放第二视频序列帧时,可以呈现虚拟对象由近及远向屏幕中的第二目标对象靠近的效果。当然,虚拟对象的尺寸也可由大变小,还可由大变小再变大等等,在此不作限定。
其中,第二目标对象可以是人、动物等任意生物,也可以是非生物如雕塑、衣服、景物、建筑等,本实施例对此不作限定。
在一种实施方式中,终端在检测到虚拟对象展示请求时,可同时获取包括虚拟对象的第二视频序列帧,并将第二视频序列帧叠加至第二视频图像上,通过播放第二视频序列帧,可在第二视频图像上动态显示虚拟对象。
示例性的,终端可预先设置虚拟对象、第一视频序列帧、第二视频序列帧之间的映射关系,当检测到虚拟对象展示请求时,可确定待展示的虚拟对象以及相应的第一视频序列帧和第二视频序列帧,并通过播放第一视频序列帧和第二视频序列帧,分别在由第一拍摄装置拍摄的第一视频图像和由第二拍摄装置拍摄的第二视频图像中动态显示同一个虚拟对象,从而呈现出虚拟对象从第一视频图像中的目标对象传递至第二视频图像中的另一目标对象的效果。
在一些实施方式中,终端可获取虚拟对象在第二视频序列帧中的每一视频帧中的位置信息,并根据位置信息确定虚拟对象在第二视频图像中的第二运动轨迹,由此,在第二视频图像上动态显示虚拟对象的具体实施方式可包括:沿着该第二运动轨迹在第二视频图像上动态显示虚拟对象。
在一些实施方式中,虚拟对象可以沿着该第二运动轨迹在第二视频图像中从初始位置向指定的位置运动,即虚拟对象在第二视频序列帧的最后一个视频帧中的位置落在该指定位置上。为方便表述,记虚拟对象在第二视频图像中的最后显示的位置为终点位置,即该指定的位置。
其中,可根据实际需要将任意位置设置为终点位置;也可基于对第二视频图像进行图像识别,将识别到的预设终点对象所对应的位置确定为终点位置,预设终点对象用于确定虚拟对象在第二视频图像上移动的终点位置,预设终点对象可根据实际需要进行设置,可以包括指定物体、第二目标对象的指定身体部位如脸部、唇部、眼睛、额头、心脏等,在此不作限定;还可以在显示第二视频图像时,获取用户作用于第二视频图像的触发操作,并将该触发操作对应的触发位置确定为终点位置,即,由用户的触发操作确定终点位置。作为一种方式,可将指示终点位置的预设终点对象的标识或图像与虚拟对象的对象标识对应存储,则确定了虚拟对象后可确定指示终点位置的预设终点对象的标识或图像,再通过对第二视频图像进行图像识别处理,将与该对象标识或图像对应的位置确定为终点位置。
另外,在一些实施例中,若检测到第二视频图像中存在多个候选第二目标对象, 可进一步识别多个候选第二目标对象是否执行预设的接收动作,并将执行了该预设的接收动作的候选第二目标对象确定为最终实际接收虚拟对象的第二目标对象。其中,预设的接收动作可根据实际需要设置,例如可以包括但不限于嘟嘴、比心、眨眼等。
在另一些实施例中,检测第二视频图像中的第二目标对象的一些具体实施方式,可参考前述实施例提供的检测第一视频图像中的第一目标对象的实施方式,二者原理类似,在此不再赘述。
在又一些实施例中,若终端同时识别到第二视频图像中存在多个目标对象,还可根据各目标对象在第二视频图像中的占用面积来确定,例如,可将占用面积最大的目标对象确定为第二目标对象,从而将距离第二拍摄装置最近的用户确定为第二目标对象。
另外,在一些实施例中,当检测到第二视频图像中的显示对象满足预设切换条件时,终端可以输出接收提示信息,其中,接收提示信息用于提示第二目标对象准备对虚拟对象进行响应,接收提示信息可以是语音形式的信息,如“请准备接收”,则在满足预设切换条件时,终端可播放“请准备接收”的语音,从而可提示有意配合的用户即第二目标对象可以听声开始表演,保证拍摄效果,无需反复重拍即可拍摄到互动配合默契的视频,提升用户体验。
需要说明的是,本实施例中未详细描述的部分请参考前述实施例,在此不再赘述。
由此,通过本实施例提供的图像处理方法,可以在前述实施例的基础上,在第一视频图像上显示虚拟对象,并在虚拟对象满足预设状态时判定满足预设切换条件,从而切换为通过第二拍摄装置采集第二视频图像,并在屏幕上显示第二视频图像。在一些实施方式中,终端可检测第一视频图像中的第一目标对象,并在检测到第一目标对象执行预设的触发动作时,在第一视频图像上沿着第一运动轨迹动态显示虚拟对象,可实现诸如用户眨眼就从眼睛处发射虚拟对象、用户嘟嘴就从嘴部发射虚拟对象的显示效果,直到虚拟对象满足预设状态,切换为通过第二拍摄装置采集第二视频图像,并在屏幕上显示第二视频图像。另外,在一些实施方式中,还可在第二视频图像上动态显示虚拟对象,使得虚拟对象基于通过第二拍摄装置采集的第二视频图像上继续移动。由此,使得用户可通过执行预设的触发动作触发虚拟对象的显示,并最终呈现出虚拟对象在不同拍摄装置所采集的视频图像中运动的视觉效果,为用户提供了更多的拍摄可能性,提升了用户拍摄视频的兴趣和体验。
另外,在一些实施例中,当虚拟对象显示于第二视频图像的终点位置处时,终端可触发预设的特效,该特效可以与虚拟对象对应或者相关联。服务器或终端可预先设置好至少一个特效,并建立虚拟对象与各特效、特效触发条件之间的映射关系,则在检测到满足虚拟对象对应的特效触发条件时,可从服务器或终端本地获取虚拟对象对应的特效,并播放该特效。
当然,特效的播放也可由其他条件触发,终端可在不同的时机触发播放虚拟对象对应的特效,在视频图像的采集过程,终端可触发一次或多次特效的播放,且虚拟对象可与一个或多个特效对应或者关联,每次触发播放的特效可以是相同的,也可以是不同的,本实施例对此并不作限定。例如,当虚拟对象满足预设状态时,终端可播放虚拟对象对应的特效,则在一种实施方式中,可以在拍摄装置切换时即播放虚拟对象的特效。
在一些实施方式中,特效可包括视觉特效和音频特效中的至少一种,其中,视觉 特效为可以叠加显示在第二视频图像上,呈现动态的显示效果;而音频特效为一段音频,本公开不对特效的具体类型和内容进行限制。
在一些实施方式中,虚拟对象对应的特效可以包括由多个虚拟对象在屏幕上动态显示的多帧图像组成的序列帧。以虚拟对象为爱心为例,则虚拟对象对应的特效可以为显示效果为多个爱心在屏幕上移动的序列帧,例如多个爱心向上飘的序列帧。作为一种方式,多个爱心可基于全屏设置,使得播放特效时全屏画面出现梦幻爱心氛围,如图8所示;作为另一种方式,多个爱心也可仅基于屏幕的部分区域设置,在此不作限定。从而可提高视频制作的效果呈现度和丰富程度,有利于激发用户的拍摄积极性,提升视频拍摄的趣味性、增强拍摄的社交互动性。
在一个具体应用场景中,请参阅图9,其示出了本公开一个示例性实施例提供的四个不同时刻下的屏幕示意图,以虚拟对象是爱心为例,图9(a)至图9(d)分别为时刻T1、T2、T3、T4的四个屏幕的界面示意图,其中,在时刻T1、时刻T2,屏幕上显示有第一拍摄装置采集的第一视频图像910,在时刻T3、时刻T4,屏幕上显示有拍摄装置切换后,由第二拍摄装置采集的第二视频图像920,第一拍摄装置在时刻T1识别到用户A眨眼,则将眨眼的眼睛处确定为虚拟对象的起始位置920(与虚拟对象在第一视频序列帧的第一个视频帧中的图像位置对应显示),并播放第一视频序列帧,让爱心930从眼睛处开始移动,并在时刻T2,屏幕中的显示如图9(b)所示,终端判断第一视频序列帧是否播放完毕,若播放完毕,择切换为第二拍摄装置,屏幕显示第二拍摄装置采集的第二视频图像940,第二视频图像采集到用户A对面的其他人,例如用户B,并播放第二视频序列帧,使得从时刻T3至时刻T4中,第二视频序列帧中的爱心950从屏幕边缘开始向用户B的人脸移动,即移动至终点位置960,并同时呈现出尺寸由大到小的变化。由此实现爱心从一侧摄像头前的用户A眨眼的眼睛处开始发射,传递到位于另一侧摄像头前的用户B的人脸上的爱心传递效果。
在一些示例中,第二视频序列帧播放完毕后,还可播放第三视频序列帧,即可显示和虚拟对象爱心相关联的特效,包括视觉特效和音频特效,在时刻T4后屏幕上播放特效的示意图可如图8所示,由此实现爱心移动到用户B脸上后有多个爱心向上飘的效果。
另外,在一些实施例中,当终端切换为第二拍摄装置采集第二视频图像后,还可针对第二拍摄装置拍摄到的第二目标对象进行形变处理。在一些实施方式中,终端可以在切换为第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像时,针对第二视频图像中的第二目标对象在第二视频图像上的图像进行形变处理,即在拍摄装置切换后就触发形变处理,使得第二目标对象在第二视频图像上发生形变;在另一些实施方式中,终端也可以在虚拟对象显示于第二视频图像的终点位置处时,针对第二目标对象在第二视频图像上的图像进行形变处理,即可在虚拟对象移动到终点位置处时触发形变处理。
针对第二目标对象在第二视频图像上的图像进行形变处理的实施方式可以包括:获取第二目标对象的形变处理配置,其中,形变处理配置可包括形变类型;获取第二目标对象对应的待形变的关键点;根据该形变类型确定待形变的关键点对应的形变后的位置,将待形变的关键点移动到形变后的位置,从而得到第二目标对象发生形变后的第二视频图像,并显示该形变后的第二视频图像,可呈现第二目标对象在第二视频图像上发生形变的视觉效果。另外,形变处理配置还可包括形变类型对应的形变程度,则根据该形变类型确定待形变的关键点对应的形变后的位置时,可具体通过根据该形 变类型以及其对应的形变程度,计算待形变的关键点对应的形变后位置。其中,形变类型可以为放大、缩小、平移、旋转、拖拽中的一种或多种的组合。相应的,形变程度可包括例如放大/缩小的倍数、平移的距离、旋转的角度、拖拽的距离等。
在一些实施方式中,形变处理配置还可包括形变部位,则获取第二目标对象对应的待形变的关键点时,可通过获取与第二目标对象的该形变部位相关的关键点作为待形变的关键点。
其中,形变处理配置可根据实际需要设置,可以包括一个或多个形变部位,针对每个形变部位也可相应配置一个或多个形变类型。则若形变处理配置包括多个形变部位以及至少两个形变部位对应不同的形变类型,则可对第二目标对象的不同形变部位执行不同形变类型对应的形变处理,从而根据实际需要可通过对形变处理配置的设置,实现丰富的形变效果。需要说明的是,形变部位在未设置前可以是默认部位,默认部位可预先预设,也可由用户自定义,例如,默认部位可以为第二目标对象的脸部、眼睛、鼻子、唇部等,在此不作限定。
在一些实施方式中,形变处理配置可根据所需呈现的形变效果进行设置,并与可呈现的视觉效果对应存储于形变数据库,其中,形变数据库可存储一个或多个形变处理配置和对应的形变效果的映射关系,形变数据库可存储于终端本地或服务器。其中,形变效果可以为害羞表情、生气表情等各种表情,也可以为图像变形效果(例如,脸部拉伸效果等),还可以为其它任何涉及关键点位置变化的效果,在此不做限定。例如,针对形变效果如害羞表情,可通过学习大量害羞表情的图片,确定用户表现出害羞表情时,用户的脸部关键点的位置关系,从而确定对应各形变部位的形变类型和形变程度,从而得到害羞表情对应的形变处理配置,并构建害羞表情对应的标识与害羞表情对应的形变处理配置的映射关系,存储于形变数据库中。则作为一种实施方式,用户可在拍摄前或拍摄过程中选择所需实现的形变效果,则检测到相应的选择操作后,可获取用户选中的形变效果对应的标识,从形变数据库中查找到对应的形变处理配置,由此获取到第二目标对象的形变处理配置。
在一个示例中,以形变部位是脸部为例,则可以在终端切换为第二拍摄装置采集第二视频图像后,可针对第二拍摄装置拍摄到的第二目标对象的脸部进行形变处理。如图10所示,基于图9示出的示例,图10示出了在图9中时刻T4后的时刻T5的屏幕的界面示意图。则在图10示出的示例中,终端在检测到虚拟对象显示于第二视频图像的终点位置处时,可以对第二目标对象的人脸进行形变处理,呈现相应的形变效果。
另外,在一些实施例中,终端也可检测第一视频图像中的第一目标对象,根据第一目标对象来触发拍摄装置的切换。具体地,请参阅图11,其示出了本公开又一个实施例提供的图像处理方法的流程示意图,于本实施例中,该方法可包括:
S410:通过第一拍摄装置采集第一视频图像,并在屏幕中显示第一视频图像。
S420:检测第一视频图像中的第一目标对象。
S430:当检测到第一目标对象执行预设的触发动作时,切换为通过第二拍摄装置采集第二视频图像。
S440:在屏幕中显示第二视频图像。
其中,步骤S420至S430的实施方式,可参考前述实施例对步骤S220中相应部 分的描述,二者实施方式大致相同,不同之处在于,在步骤S220中的一种实施方式中,第一目标对象执行预设的触发动作是用于在第一视频图像上触发显示虚拟对象,而本实施例中,第一目标对象执行预设的触发动作则是作为一种预设切换条件,用于控制拍摄装置的切换。即,通过第一视频图像中的第一目标对象的触发操作,将第一拍摄装置切换至第二拍摄装置,从而能为用户提供更灵活的拍摄方式,并提升操作体验。
在本实施例中,根据实际需要,可在第一视频图像上显示虚拟对象,也可在第二视频图像上显示虚拟对象,在此不作限定,相关实施方式可参考前述实施例中相应部分的描述,在此不再赘述。
需要说明的是,本实施例中未详细描述的部分请参考前述实施例,在此不再赘述。
由此,通过本实施例提供的图像处理方法,终端可通过检测第一视频图像中的第一目标对象,当检测到第一目标对象执行预设的触发动作时,切换为通过第二拍摄装置采集第二视频图像,并在屏幕上显示第二视频图像。则以预设的触发动作是眨眼动作为例,可实现当检测到用户眨眼时,可切换拍摄装置进行视频图像的采集和显示。
请参照图12,本公开一实施例提供的一种图像处理装置的模块框图,该图像处理装置1200可应用于终端,终端包括第一拍摄装置与第二拍摄装置,具体可以包括:视频显示模块1210以及切换显示模块1220,其中:
视频显示模块1210,用于通过第一拍摄装置采集第一视频图像,在屏幕中显示第一视频图像;
切换显示模块1220,用于当检测到第一视频图像中的显示对象满足预设切换条件,切换为通过第二拍摄装置采集第二视频图像,并在屏幕中显示第二视频图像。
在一实施例中,图像处理装置1200还包括:第一虚拟对象显示模块,用于在第一视频图像上显示虚拟对象。此时,切换显示模块1220可包括:第一触发切换子模块,用于当虚拟对象满足预设状态时,切换为通过第二拍摄装置采集第二视频图像。
在一实施例中,图像处理装置1200还包括:第一目标对象检测模块,用于检测第一视频图像中的第一目标对象。此时,第一虚拟对象显示模块可包括:第一虚拟对象显示子模块,用于当检测到第一视频图像中的第一目标对象执行预设的触发动作时,在第一视频图像上显示虚拟对象。
在一实施例中,第一虚拟对象显示模块可包括:第一序列帧获取子模块、第一序列帧叠加子模块以及第一序列帧播放子模块,其中:
第一序列帧获取子模块,用于获取包括虚拟对象的第一视频序列帧;
第一序列帧叠加子模块,用于将第一视频序列帧叠加至第一视频图像上;
第一序列帧播放子模块,用于播放第一视频序列帧,以在第一视频图像上动态显示虚拟对象。
在一实施例中,图像处理装置1200还包括:第一位置信息获取模块以及第一运动轨迹确定模块,其中:
第一位置信息获取模块,用于获取虚拟对象在第一视频序列帧中的每一视频帧中的位置信息;
第一运动轨迹确定模块,用于根据位置信息确定虚拟对象在第一视频图像中的第 一运动轨迹;
此时,第一序列帧播放子模块可包括:第一序列帧播放单元,用于播放第一视频序列帧,以沿着第一运动轨迹在第一视频图像上动态显示虚拟对象。
在一实施例中,虚拟对象满足预设状态,包括:虚拟对象显示于第一视频图像的指定位置处。
在一实施例中,图像处理装置1200还包括:目标对象检测模块,用于检测第一视频图像中的第一目标对象;此时,切换显示模块1220可包括:第二触发切换模块,用于当检测到第一目标对象执行预设的触发动作时,切换为通过第二拍摄装置采集第二视频图像。
在一实施例中,图像处理装置1200还包括:第二虚拟对象显示模块,用于在第二视频图像上显示虚拟对象。
在一实施例中,第二虚拟对象显示模块包括:第二序列帧获取子模块、第二序列帧叠加子模块以及第二序列帧播放子模块,其中:
第二序列帧获取子模块,用于获取包括虚拟对象的第二视频序列帧;
第二序列帧叠加子模块,用于将第二视频序列帧叠加至第二视频图像上;
第二序列帧播放子模块,用于播放第二视频序列帧,以在第二视频图像上动态显示虚拟对象。
在一实施例中,图像处理装置120还包括:第二位置信息获取模块以及第二运动轨迹确定模块,其中:
第二位置信息获取模块,用于获取虚拟对象在第二视频序列帧中的每一视频帧中的位置信息;
第二运动轨迹确定模块,用于根据位置信息确定虚拟对象在第二视频图像中的第二运动轨迹;
此时,第二序列帧播放子模块可包括:第二序列帧播放单元,用于播放第二视频序列帧,以沿着第二运动轨迹在第二视频图像上动态显示虚拟对象。
本公开实施例的图像处理装置可执行本公开的实施例所提供的图像处理方法,其实现原理相类似,本公开各实施例中的图像处理装置中的各模块所执行的动作是与本公开各实施例中的图像处理方法中的步骤相对应的,对于图像处理装置的各模块的详细功能描述具体可以参见前文中所示的对应的图像处理方法中的描述,此处不再赘述。
下面参考图13,其示出了适于用来实现本公开实施例的电子设备1300的结构框图。本公开实施例中的电子设备可以包括但不限于诸如计算机、手机等终端。图13示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
电子设备1300包括:存储器以及处理器,存储器用于存储执行上述各个方法实施例所述方法的程序;处理器被配置为执行存储器中存储的程序。其中,这里的处理器可以称为下文的处理装置1301,存储器可以包括下文中的只读存储器(ROM)1302、随机访问存储器(RAM)1303以及存储装置1308中的至少一项,具体如下所示:
如图13所示,电子设备1300可以包括处理装置(例如中央处理器、图形处理器等)1301,其可以根据存储在只读存储器(ROM)1302中的程序或者从存储装置1308加载到随机访问存储器(RAM)1303中的程序而执行各种适当的动作和处理。在RAM 1303中,还存储有电子设备1300操作所需的各种程序和数据。处理装置1301、ROM 1302以及RAM 1303通过总线1304彼此相连。输入/输出(I/O)接口1305也连接至总线1304。
通常,以下装置可以连接至I/O接口1305:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置1306;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置1307;包括例如磁带、硬盘等的存储装置1308;以及通信装置1309。通信装置1309可以允许电子设备1300与其他设备进行无线或有线通信以交换数据。虽然图13示出了具有各种装置的电子设备1300,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读存储介质上的计算机程序,该计算机程序包含用于执行上述各个实施例所述方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置1309从网络上被下载和安装,或者从存储装置1308被安装,或者从ROM 1302被安装。在该计算机程序被处理装置1301执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读存储介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读存储介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
In some implementations, clients and servers may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The above computer-readable storage medium may be included in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the following steps: capturing a first video image with the first capture device and displaying the first video image on the screen; and, when a display object in the first video image is detected to satisfy a preset switching condition, switching to capturing a second video image with the second capture device and displaying the second video image on the screen.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a module or unit does not, in some cases, limit the unit itself; for example, the video display module may also be described as "a module for displaying video images".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a computer-readable storage medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image processing method, applied to a terminal that includes a first capture device and a second capture device, the method comprising: capturing a first video image with the first capture device and displaying the first video image on a screen; and, when a display object in the first video image is detected to satisfy a preset switching condition, switching to capturing a second video image with the second capture device and displaying the second video image on the screen.
In an embodiment, the method further comprises: displaying a virtual object on the first video image; and the switching to capturing a second video image with the second capture device when a display object in the first video image is detected to satisfy a preset switching condition comprises: switching to capturing the second video image with the second capture device when the virtual object satisfies a preset state.
In an embodiment, the method further comprises: detecting a first target object in the first video image; and the displaying a virtual object on the first video image comprises: displaying the virtual object on the first video image upon detecting that the first target object in the first video image performs a preset trigger action.
In an embodiment, the displaying a virtual object on the first video image comprises: acquiring first video sequence frames that include the virtual object; overlaying the first video sequence frames onto the first video image; and playing the first video sequence frames to dynamically display the virtual object on the first video image.
In an embodiment, the method further comprises: acquiring position information of the virtual object in each video frame of the first video sequence frames; and determining a first motion trajectory of the virtual object in the first video image according to the position information; wherein the dynamically displaying the virtual object on the first video image comprises: dynamically displaying the virtual object on the first video image along the first motion trajectory.
In an embodiment, the virtual object satisfying the preset state comprises: the virtual object being displayed at a designated position in the first video image.
In an embodiment, the method further comprises: detecting a first target object in the first video image; and the switching to capturing a second video image with the second capture device when a display object in the first video image is detected to satisfy a preset switching condition comprises: switching to capturing the second video image with the second capture device upon detecting that the first target object performs a preset trigger action.
In an embodiment, the method further comprises: displaying the virtual object on the second video image.
In an embodiment, the displaying the virtual object on the second video image comprises: acquiring second video sequence frames that include the virtual object; overlaying the second video sequence frames onto the second video image; and playing the second video sequence frames to dynamically display the virtual object on the second video image.
In an embodiment, the method further comprises: acquiring position information of the virtual object in each video frame of the second video sequence frames; and determining a second motion trajectory of the virtual object in the second video image according to the position information; wherein the dynamically displaying the virtual object on the second video image comprises: dynamically displaying the virtual object on the second video image along the second motion trajectory.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus applicable to a terminal, the terminal including a first capture device and a second capture device disposed on different sides, the apparatus comprising a video display module and a switch display module, wherein: the video display module is configured to capture a first video image with the first capture device and display the first video image on a screen; and the switch display module is configured to, when a display object in the first video image is detected to satisfy a preset switching condition, switch to capturing a second video image with the second capture device and display the second video image on the screen.
In an embodiment, the image processing apparatus further includes a first virtual object display module configured to display a virtual object on the first video image. In this case, the switch display module may include a first trigger switching submodule configured to switch to capturing the second video image with the second capture device when the virtual object satisfies a preset state.
In an embodiment, the image processing apparatus further includes a first target object detection module configured to detect a first target object in the first video image. In this case, the first virtual object display module may include a first virtual object display submodule configured to display the virtual object on the first video image upon detecting that the first target object in the first video image performs a preset trigger action.
In an embodiment, the first virtual object display module may include a first sequence frame acquisition submodule, a first sequence frame overlay submodule, and a first sequence frame playback submodule, wherein: the first sequence frame acquisition submodule is configured to acquire first video sequence frames that include the virtual object; the first sequence frame overlay submodule is configured to overlay the first video sequence frames onto the first video image; and the first sequence frame playback submodule is configured to play the first video sequence frames so as to dynamically display the virtual object on the first video image.
In an embodiment, the image processing apparatus further includes a first position information acquisition module and a first motion trajectory determination module, wherein: the first position information acquisition module is configured to acquire position information of the virtual object in each video frame of the first video sequence frames; and the first motion trajectory determination module is configured to determine a first motion trajectory of the virtual object in the first video image according to the position information. In this case, the first sequence frame playback submodule may include a first sequence frame playback unit configured to play the first video sequence frames so as to dynamically display the virtual object on the first video image along the first motion trajectory.
In an embodiment, the virtual object satisfying the preset state includes: the virtual object being displayed at a designated position in the first video image.
In an embodiment, the image processing apparatus further includes a target object detection module configured to detect the first target object in the first video image. In this case, the switch display module may include a second trigger switching module configured to switch to capturing the second video image with the second capture device upon detecting that the first target object performs a preset trigger action.
In an embodiment, the image processing apparatus further includes a second virtual object display module configured to display the virtual object on the second video image.
In an embodiment, the second virtual object display module includes a second sequence frame acquisition submodule, a second sequence frame overlay submodule, and a second sequence frame playback submodule, wherein: the second sequence frame acquisition submodule is configured to acquire second video sequence frames that include the virtual object; the second sequence frame overlay submodule is configured to overlay the second video sequence frames onto the second video image; and the second sequence frame playback submodule is configured to play the second video sequence frames so as to dynamically display the virtual object on the second video image.
In an embodiment, the image processing apparatus further includes a second position information acquisition module and a second motion trajectory determination module, wherein: the second position information acquisition module is configured to acquire position information of the virtual object in each video frame of the second video sequence frames; and the second motion trajectory determination module is configured to determine a second motion trajectory of the virtual object in the second video image according to the position information. In this case, the second sequence frame playback submodule may include a second sequence frame playback unit configured to play the second video sequence frames so as to dynamically display the virtual object on the second video image along the second motion trajectory.
The above description is merely a preferred embodiment of the present disclosure and an illustration of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, but should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. An image processing method, applied to a terminal, wherein the terminal comprises a first capture device and a second capture device, the method comprising:
    capturing a first video image with the first capture device, and displaying the first video image on a screen;
    when a display object in the first video image is detected to satisfy a preset switching condition, switching to capturing a second video image with the second capture device, and displaying the second video image on the screen.
  2. The method according to claim 1, wherein the method further comprises:
    displaying a virtual object on the first video image;
    wherein the switching to capturing a second video image with the second capture device when a display object in the first video image is detected to satisfy a preset switching condition comprises:
    switching to capturing the second video image with the second capture device when the virtual object satisfies a preset state.
  3. The method according to claim 2, wherein the method further comprises:
    detecting a first target object in the first video image;
    wherein the displaying a virtual object on the first video image comprises:
    displaying the virtual object on the first video image upon detecting that the first target object in the first video image performs a preset trigger action.
  4. The method according to claim 2, wherein the displaying a virtual object on the first video image comprises:
    acquiring first video sequence frames comprising the virtual object;
    overlaying the first video sequence frames onto the first video image;
    playing the first video sequence frames to dynamically display the virtual object on the first video image.
  5. The method according to claim 4, wherein the method further comprises:
    acquiring position information of the virtual object in each video frame of the first video sequence frames;
    determining a first motion trajectory of the virtual object in the first video image according to the position information;
    wherein the dynamically displaying the virtual object on the first video image comprises:
    dynamically displaying the virtual object on the first video image along the first motion trajectory.
  6. The method according to claim 2, wherein the virtual object satisfying the preset state comprises:
    the virtual object being displayed at a designated position in the first video image.
  7. The method according to claim 1, wherein the method further comprises:
    detecting a first target object in the first video image;
    wherein the switching to capturing a second video image with the second capture device when a display object in the first video image is detected to satisfy a preset switching condition comprises:
    switching to capturing the second video image with the second capture device upon detecting that the first target object performs a preset trigger action.
  8. The method according to claim 2, wherein the method further comprises:
    displaying the virtual object on the second video image.
  9. The method according to claim 8, wherein the displaying the virtual object on the second video image comprises:
    acquiring second video sequence frames comprising the virtual object;
    overlaying the second video sequence frames onto the second video image;
    playing the second video sequence frames to dynamically display the virtual object on the second video image.
  10. The method according to claim 9, wherein the method further comprises:
    acquiring position information of the virtual object in each video frame of the second video sequence frames;
    determining a second motion trajectory of the virtual object in the second video image according to the position information;
    wherein the dynamically displaying the virtual object on the second video image comprises:
    dynamically displaying the virtual object on the second video image along the second motion trajectory.
  11. The method according to claim 1, wherein the method further comprises:
    detecting a second target object in the second video image;
    performing deformation processing on the second target object to obtain a second video image in which the second target object has been deformed, and displaying it on the screen.
  12. An image processing apparatus, applied to a terminal, wherein the terminal comprises a first capture device and a second capture device, the apparatus comprising:
    a video display module configured to capture a first video image with the first capture device and display the first video image on a screen;
    a switch display module configured to, when a display object in the first video image is detected to satisfy a preset switching condition, switch to capturing a second video image with the second capture device and display the second video image on the screen.
  13. An electronic device, comprising:
    one or more processors; and
    a memory storing a computer program which, when executed by the one or more processors, causes the electronic device to perform the image processing method according to any one of claims 1 to 11.
  14. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the image processing method according to any one of claims 1 to 11.
PCT/CN2021/114717 2020-09-30 2021-08-26 Image processing method and apparatus, electronic device, and computer-readable storage medium WO2022068479A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/246,389 US20230360184A1 (en) 2020-09-30 2021-08-26 Image processing method and apparatus, and electronic device and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011065575.3A CN112199016B (zh) 2020-09-30 2020-09-30 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN202011065575.3 2020-09-30

Publications (1)

Publication Number Publication Date
WO2022068479A1 (zh)

Family

ID=74014414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114717 WO2022068479A1 (zh) 2020-09-30 2021-08-26 Image processing method and apparatus, electronic device, and computer-readable storage medium

Country Status (3)

Country Link
US (1) US20230360184A1 (zh)
CN (1) CN112199016B (zh)
WO (1) WO2022068479A1 (zh)

Also Published As

Publication number Publication date
US20230360184A1 (en) 2023-11-09
CN112199016A (zh) 2021-01-08
CN112199016B (zh) 2023-02-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21874140; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 21874140; Country of ref document: EP; Kind code of ref document: A1)