WO2023143240A1 - Image processing method, apparatus, device, storage medium and program product - Google Patents

Image processing method, apparatus, device, storage medium and program product

Info

Publication number
WO2023143240A1
Authority
WO
WIPO (PCT)
Prior art keywords: video frame, image, image editing, camera, original video
Application number
PCT/CN2023/072579
Other languages
English (en)
French (fr)
Inventor
钟善萍
杨诚诚
李林卉
马佳欣
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Priority to EP23746122.3A priority Critical patent/EP4459556A1/en
Priority to AU2023213666A priority patent/AU2023213666A1/en
Priority to KR1020247028347A priority patent/KR20240141285A/ko
Publication of WO2023143240A1 publication Critical patent/WO2023143240A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs

Definitions

  • the present disclosure relates to the technical field of computer processing, and in particular to an image processing method, device, equipment, storage medium and program product.
  • media software installed in a smart terminal can be used to operate virtual image material so as to simulate a real environment. Based on this kind of software, the need for real materials can be reduced, costs can be saved, and it is convenient to count the operation results.
  • an embodiment of the present disclosure provides an image processing method, the method comprising: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; according to the first image editing effect, performing image editing processing on the first original video frame to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and according to the second image editing effect, performing image editing processing on the second original video frame to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • an embodiment of the present disclosure provides an image processing device, the device including: a first original video frame acquisition module, configured to acquire, in response to a first camera opening operation, the first original video frame captured by the first camera; a first editing effect determination module, configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module, configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module, configured to switch to a second camera and acquire the second original video frame captured by the second camera in response to a camera switching instruction; a second editing effect determination module, configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • an embodiment of the present disclosure provides an electronic device, the electronic device including: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method according to any one of the above first aspect.
  • an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the image processing method described in any one of the above-mentioned first aspects is implemented.
  • an embodiment of the present disclosure provides a computer program product, the computer program product including a computer program or an instruction; when the computer program or instruction is executed by a processor, the image processing method according to any one of the above first aspect is implemented.
  • an embodiment of the present disclosure provides a computer program, including: an instruction, which when executed by a processor causes the processor to execute the image processing method according to any one of the above first aspect.
  • FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure
  • FIG. 2 is a flowchart of another image processing method in an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of an image processing device in an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • in the first interaction mode, when a specific image is detected in a video frame, the corresponding additional image material is added to the video frame and displayed.
  • the above-mentioned specific image may be a face image, a foot image, a hand image, and the like.
  • for example, when it is recognized that a hand image appears in a video frame, the candied haws image material is displayed at the hand position in the video frame.
  • in the second interaction mode, when it is detected that the user performs a specified action in the video frame, the corresponding additional image material is displayed in the video frame.
  • the above-mentioned specified actions may be pouting, blinking, making a heart gesture, and the like.
  • for example, when it is recognized that the user pouts in a video frame, the balloon image material is displayed at the mouth position in the video frame.
  • however, the above-mentioned interactive method of adding image materials can only recognize a single type of image, such as a head or a foot, and cannot recognize both a head and a foot in the same video while it is being shot.
  • in addition, in the related art, when the front-facing camera is used to capture video frames, multiple additional image materials are all used in the captured video frames; likewise, when the rear camera is used, the same multiple additional image materials are all used in its captured video frames. In this way, the additional image materials are not split between the front and rear cameras: when there are multiple additional image materials, both the front and rear cameras carry multiple 3D additional image materials, which consumes a lot of device performance. For example, when there are two image materials, the video frames of both the front and rear cameras carry the two image materials, which consumes a lot of performance of the terminal device.
  • in view of this, an embodiment of the present disclosure provides an image processing method that uses different image editing processing in the video frames captured by different cameras, so that multiple image editing effects can be used in the same video captured by the front camera and the rear camera; that is, one image editing effect is used for the video captured by the front camera and another image editing effect is used for the video captured by the rear camera, thereby optimizing device performance.
  • the image processing method proposed in the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
  • FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure. This embodiment is applicable to the situation of adding additional image material to the video.
  • the method can be performed by an image processing device.
  • the image processing device can be realized by software and/or hardware.
  • the image processing device can be configured in electronic equipment.
  • the electronic equipment may be a mobile terminal, a fixed terminal or a portable terminal, such as a mobile handset, a station, a unit, a device, a multimedia computer, a multimedia tablet, an Internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a personal communication system (PCS) device, a personal navigation device, a personal digital assistant (PDA), an audio/video player, a digital still/video camera, a pointing device, a television receiver, a radio broadcast receiver, an electronic book device, a gaming device, or any combination thereof, including accessories and peripherals of these devices, or any combination thereof.
  • the electronic device may be a server, wherein the server may be a physical server or a cloud server, and the server may be a server or a server cluster.
  • the image processing method provided by the embodiment of the present disclosure mainly includes the following steps S101 to S106.
  • step S101 a first original video frame captured by the first camera is acquired in response to the first camera opening operation.
  • the first camera mentioned above and the second camera described below are two cameras set in the same terminal device.
  • the first camera and the second camera described below may be external cameras connected to the terminal device, or may be built-in cameras of the terminal device.
  • the foregoing connection may be a wired connection or a wireless connection, which is not limited in this embodiment.
  • the built-in camera of the above terminal device may be a front camera or a rear camera.
  • the above-mentioned first camera is a front camera of the terminal device
  • the second camera is a rear camera of the terminal device.
  • the first camera may be one camera or a group of cameras, and the number of cameras included in the first camera is not limited in this embodiment.
  • responding to the opening operation of the first camera includes: after it is detected that the user starts the media application program and triggers the target image editing effect, an opening instruction of the first camera is received, and in response to the opening instruction, the first camera is turned on.
  • responding to the opening operation of the first camera may further include: after a trigger operation of the camera switching button by the user is detected, if the first camera is in the off state, an opening instruction of the first camera is received, and in response to the opening instruction, the first camera is turned on.
  • the switch button mentioned above may be a virtual button or a physical button, which is not limited in this embodiment.
  • the first original video frame may be understood as a video frame collected by the first camera without any processing.
  • the first original video frame may also be understood as a video frame collected by the first camera and subjected to set processing, but without adding additional image material.
  • the processing set above may be image beautification processing such as skin smoothing, makeup, and using a filter.
  • the additional image material can be understood as the content added to the video frame that does not belong to the image of the video frame.
  • the additional image material may also be referred to as prop material, special effect material, or the like. It is not limited in this embodiment.
  • Acquiring the first original video frame captured by the first camera may include: acquiring the first video frame captured by the front camera in real time.
  • in the case where the target image editing effect is applied to shoot a video, in response to the first camera opening operation, the first original video frame captured by the first camera is acquired.
  • an image editing effect can be understood as the effect of adding additional image material to a video frame.
  • the target image editing effect can be understood as an image editing effect selected by the user.
  • applying the target image editing effect to shoot a video can be understood as the user enabling the target image editing effect before shooting the video, and performing image editing processing on the captured video with the target image editing effect while the video is being shot.
  • enabling the target image editing effect may mean that the target image editing effect is enabled by default when the user opens the media application, or that the target image editing effect is enabled in response to the user's trigger operation on it after the user opens the media application.
  • step S102 a first image editing effect corresponding to the first camera is determined.
  • the first image editing effect may be understood as an image editing effect used when the first camera is used to capture video frames.
  • determining the first image editing effect corresponding to the first camera includes: after the first original video frame captured by the first camera is acquired, using one image editing effect in the target image editing effect as the first image editing effect by default.
  • alternatively, determining the first image editing effect corresponding to the first camera includes: displaying multiple image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
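  • as an illustrative, non-authoritative sketch of the two determination strategies above (default effect versus user selection), the following Python fragment shows one possible way the effect lookup could be organized; the names TargetImageEditingEffect, default_effect and select_effect are hypothetical and are not taken from the disclosure.

```python
class TargetImageEditingEffect:
    """Hypothetical container for the image editing effects bundled in a target effect."""

    def __init__(self, effects, default_index=0):
        self.effects = list(effects)        # e.g. ["tiger_hat", "tiger_shoes"]
        self.default_index = default_index  # effect used when the user selects nothing

    def default_effect(self):
        # Strategy 1: one effect in the target image editing effect is used by default.
        return self.effects[self.default_index]

    def select_effect(self, user_choice):
        # Strategy 2: display the bundled effects and honour the user's selection operation.
        return user_choice if user_choice in self.effects else self.default_effect()


target = TargetImageEditingEffect(["tiger_hat", "tiger_shoes"])
first_image_editing_effect = target.default_effect()              # default for the first camera
# first_image_editing_effect = target.select_effect("tiger_hat")  # or via user selection
```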
  • step S103 image editing processing is performed on the first original video frame according to the first image editing effect, to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
  • image editing processing is performed on the first original video frame to obtain the first target video frame, which can be understood as applying the first image editing effect to the first original video frame .
  • the first image editing effect is to add additional image material at a specified position in the first original video frame.
  • the above-mentioned specified position may be human body parts such as eyes, mouth, head, and hands in the first original video frame, and may also be static objects such as buildings, flowers, trees, etc. in the first original video frame. No specific limitation is made in this embodiment.
  • the first image editing effect is to add an additional image material at a specified position of the first original video frame after the user completes a specified action.
  • the above specified action may be actions such as blinking, pouting, waving, and kicking by the user in the first original video frame. No specific limitation is made in this embodiment.
  • performing image editing processing on the first original video frame to obtain the first target video frame includes: detecting whether a first head image exists in the first original video frame; and if the first head image exists in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
  • the first head image may be understood as a recognized and detected human face image in the first original video frame.
  • in the embodiments of the present disclosure, three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a whole-body recognition algorithm.
  • the face recognition algorithm is bound to the first camera; that is, after it is detected that the first camera is turned on, the face recognition algorithm is started, and whether there is a head image in the first original video frame is identified through the face recognition algorithm. How the face recognition algorithm performs head recognition is not specifically limited in this embodiment.
  • image editing is performed on the first original video frame to obtain the first target video frame, including: adding a first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
  • the first additional image material may be a virtual hat, a virtual hairpin, a virtual buyao (step-shake hair ornament), and the like.
  • the style of the hat can be an animal-style hat, such as a tiger head hat, a rabbit-style hat, etc., or a regular-style hat, such as a baseball cap.
  • a first additional image material is added at the head position; for example, a virtual tiger head hat is displayed at the head position. A minimal sketch of this step is given below.
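  • the following is a minimal, hedged sketch of the processing described above (detect a head image, then composite the first additional image material at the corresponding position); detect_head, overlay_material and the frame object are assumed interfaces for illustration only, not APIs named in the disclosure.

```python
def apply_first_effect(first_original_frame, hat_material, detect_head, overlay_material):
    """Return the first target video frame, or the unmodified frame if no head is found.

    detect_head(frame) -> (x, y) position of the head image, or None   (assumed interface)
    overlay_material(frame, material, position) -> frame with the material composited
    """
    head_position = detect_head(first_original_frame)
    if head_position is None:
        # No first head image: the original frame is displayed without the effect.
        return first_original_frame
    # First head image present: add the first additional image material (e.g. a tiger head hat).
    return overlay_material(first_original_frame, hat_material, head_position)
```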
  • step S104 in response to the camera switching instruction, switch to the second camera and acquire the second original video frame captured by the second camera.
  • the camera switching instruction can be understood as an instruction to switch the currently working camera to the off state, and switch the off state camera to the working state.
  • the first camera before responding to the camera switching instruction, the first camera is in the working state, and the second camera is in the off state. After responding to the camera switching instruction, the first camera is switched from the working state to the off state, and the second camera is switched from the off state to the working state (that is, the second camera is turned on).
  • as for the camera switching instruction, it may be that after the user's operation of the camera switching button is detected, the camera switching instruction is received, and in response to the switching instruction, the first camera is turned off and the second camera is turned on, where the above-mentioned switching button may be a virtual key or a physical key, which is not limited in this embodiment.
  • the second original video frame may be understood as a video frame captured by the second camera.
  • the second original video frame may also be understood as a video frame collected by the second camera and subjected to set processing, but without added additional image material.
  • the processing set above may be image beautification processing such as skin smoothing, makeup, and using a filter.
  • Acquiring the second video frame captured by the second camera may include: acquiring the second video frame captured by the rear camera in real time.
  • step S105 a second image editing effect corresponding to the second camera is determined.
  • the second image editing effect can be understood as an image editing effect used when the second camera is used to capture video frames.
  • determining the second image editing effect corresponding to the second camera includes: after the second original video frame captured by the second camera is acquired, using one image editing effect in the target image editing effect as the second image editing effect by default.
  • alternatively, determining the second image editing effect corresponding to the second camera includes: displaying multiple image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
  • step S106 image editing processing is performed on the second original video frame according to the second image editing effect, to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • image editing processing is performed on the second original video frame to obtain the second target video frame, which can be understood as applying the second image editing effect to the second original video frame .
  • the second image editing effect is the addition of additional image material at specified locations in the second original video frame.
  • the above specified position may be a human body part such as a foot or a leg in the second original video frame, which is not specifically limited in this embodiment.
  • performing image editing processing on the second original video frame to obtain a second target video frame includes: detecting whether a foot image exists in the second original video frame; and if the foot image exists in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
  • as mentioned above, three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a whole-body recognition algorithm.
  • the foot recognition algorithm is bound to the second camera; that is, after it is detected that the second camera is turned on, the foot recognition algorithm is started.
  • a foot recognition algorithm is used to identify and detect whether there is a foot image in the second original video frame. How the foot recognition algorithm performs foot recognition is not specifically limited in this implementation.
  • the foot image may be a barefoot image, or a foot image after wearing shoes, which is not limited in this embodiment.
  • the whole body recognition algorithm when the face recognition algorithm is turned on, the whole body recognition algorithm is turned on; and/or, when the foot recognition algorithm is turned on, the whole body recognition algorithm is turned on.
  • the above-mentioned whole-body recognition algorithm is used to assist the face recognition algorithm and/or the foot recognition algorithm to recognize images, so as to improve image recognition efficiency.
  • image editing is performed on the second original video frame to obtain a second target video frame, including: adding a second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
  • the second additional image material may be a virtual shoe or the like.
  • the style of the shoes may be an animal-style shoe, such as a tiger paw shoe or a rabbit foot shoe, or a regular-style shoe, such as a sports shoe.
  • a second additional image material is added at the foot position; for example, a virtual tiger paw shoe is displayed at the foot position.
  • in the embodiments of the present disclosure, a face recognition algorithm, a foot recognition algorithm and a whole-body recognition algorithm are provided, and the hat and the shoes are split between the front and rear lenses. Under the front lens, only the head is recognized and the virtual hat is displayed, giving priority to the display effect of the hat; when the user switches to the rear camera to obtain a larger shooting space, the virtual shoes appear by default. In this way, when there are multiple additional image materials, the lenses of the two cameras are split, that is, different additional image materials are displayed in the video frames collected by the two cameras (the first additional image material is different from the second additional image material), which optimizes terminal device performance. A sketch of this per-camera binding is given after this paragraph.
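  • the splitting described above can be pictured as a per-camera binding table in which each camera is bound to one recognition algorithm and one additional image material, with the whole-body algorithm enabled alongside either one. The sketch below only illustrates that idea; all names and the structure are assumptions, not the disclosure's implementation.

```python
# Hypothetical binding of cameras to recognition algorithms and additional image materials.
CAMERA_BINDINGS = {
    "front": {"algorithm": "face_recognition", "material": "virtual_tiger_hat"},
    "rear":  {"algorithm": "foot_recognition", "material": "virtual_tiger_shoes"},
}

def on_camera_opened(camera, start_algorithm):
    """Start only the recognition algorithm bound to the opened camera.

    start_algorithm(name) is an assumed hook that launches a recognition algorithm.
    Running a single bound algorithm (plus the assisting whole-body algorithm) rather
    than all of them is what keeps the performance cost down when materials are split.
    """
    binding = CAMERA_BINDINGS[camera]
    start_algorithm(binding["algorithm"])
    start_algorithm("whole_body_recognition")  # assists face/foot recognition
    return binding["material"]                 # material to composite for this camera
```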
  • An embodiment of the present disclosure provides an image processing method, including: in response to the first camera opening operation, acquiring the first original video frame captured by the first camera; determining the first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to the camera switching instruction, switching to the second camera and acquiring the second original video frame captured by the second camera; determining the second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • in this way, different image editing processing is used for the video frames captured by different cameras; for example, the first image editing effect is different from the second image editing effect.
  • multiple image editing effects can therefore be used in the same video captured by the front camera and the rear camera, that is, the video captured by the front camera uses one image editing effect and the video captured by the rear camera uses another image editing effect, which optimizes terminal device performance. The overall control flow is sketched below.
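  • putting steps S101 to S106 together, the overall control flow could look roughly like the loop below. This is a hedged sketch only: open_camera, next_frame, effect_for, apply_effect, display and switch_requested are placeholder names for operations the disclosure describes abstractly.

```python
def capture_loop(open_camera, next_frame, effect_for, apply_effect, display, switch_requested):
    """Sketch of the per-camera editing flow: one image editing effect per camera."""
    name = "first"
    camera = open_camera(name)                 # S101: respond to the first camera opening operation
    effect = effect_for(name)                  # S102: first image editing effect
    while True:
        frame = next_frame(camera)             # original video frame from the active camera
        display(apply_effect(frame, effect))   # S103 / S106: edit and display the target frame
        if switch_requested():                 # S104: camera switching instruction
            name = "second" if name == "first" else "first"
            camera = open_camera(name)
            effect = effect_for(name)          # S105: editing effect bound to the newly opened camera
```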
  • the embodiments of the present disclosure further optimize the above image processing method.
  • the optimized image processing method of the embodiments of the present disclosure mainly includes the following steps S201 to S208.
  • step S201 a first original video frame captured by the first camera is acquired in response to a first camera opening operation.
  • step S202 a first image editing effect corresponding to the first camera is determined.
  • step S203 image editing processing is performed on the first original video frame according to the first image editing effect, to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
  • step S204 in response to a camera switching instruction, switch to a second camera and acquire a second original video frame captured by the second camera.
  • step S205 a second image editing effect corresponding to the second camera is determined.
  • step S206 image editing processing is performed on the second original video frame according to the second image editing effect, to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • steps S201-S206 are performed in the same manner as steps S101-S106 in the above embodiment; for details, refer to the description of the above embodiment, which will not be repeated here.
  • step S207 in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera is determined.
  • the screen refers to a touch screen capable of receiving operation signals, and the size and type of the screen are not specifically limited in this embodiment.
  • the trigger operation on the screen may be a click operation or a double-click operation on the screen.
  • the third image editing effect may be the same as or different from the first image editing effect.
  • the third image editing effect is different from the second image editing effect.
  • the third image editing effect may be the same as the first image editing effect.
  • in some embodiments, one image editing effect in the target image editing effect is used as the third image editing effect by default.
  • step S208 image editing processing is performed on the second original video frame according to the third image editing effect, to obtain and display a third target video frame, where the third target video frame is an effect image of the third image editing effect applied to the second original video frame.
  • image editing processing is performed on the second original video frame to obtain a third target video frame, which can be understood as applying the third image editing effect to the second original video frame .
  • the third image editing effect is to add additional image material at a specified location in the second original video frame.
  • the above-mentioned designated position may be human body parts such as eyes, mouth, head, and hands in the second original video frame, and may also be static objects such as buildings, flowers, trees, etc. in the second original video frame. No specific limitation is made in this embodiment.
  • performing image editing processing on the second original video frame to obtain and display the third target video frame includes: detecting whether a second head image exists in the second original video frame; and if the second head image exists in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
  • the second head image may be understood as a recognized and detected head image in the second original video frame. In some embodiments, whether there is a head image in the second original video frame is identified and detected through a face recognition algorithm. How the face recognition algorithm performs head recognition is not specifically limited in this implementation.
  • image editing is performed on the second original video frame to obtain a third target video frame, including: adding a third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
  • the third additional image material may be a virtual hat, a virtual hairpin, a virtual buyao (step-shake hair ornament), and the like.
  • the style of the hat can be an animal-style hat, such as a tiger head hat, a rabbit-style hat, etc., or a regular-style hat, such as a baseball cap.
  • a third additional image material is added at the head position; for example, a virtual tiger head hat is displayed at the head position.
  • in one example, the front camera is turned on by default, and when a face image is detected, a tiger head hat is displayed on the screen.
  • after switching to the rear camera, when a foot image is detected, tiger shoes appear on the screen.
  • if the user then taps the screen, it is checked whether a face image exists, and after a face image is detected, the tiger head hat is displayed on the screen. A sketch of this tap-to-toggle behaviour follows.
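  • steps S207 and S208 amount to a tap handler that, while the rear camera is active, swaps the effect applied to its frames (from the foot-bound shoes back to the head-bound hat in the example above). The following is a possible sketch under that reading; all names are hypothetical.

```python
def on_screen_tap(active_camera, current_effect):
    """Return the effect to use after a tap on the screen (sketch of step S207).

    Under the rear camera, tapping toggles between the second image editing effect
    (tiger shoes at the foot position) and the third image editing effect
    (tiger head hat at the head position); under the front camera nothing changes.
    """
    if active_camera != "rear":
        return current_effect
    return "tiger_hat" if current_effect == "tiger_shoes" else "tiger_shoes"
```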
  • the image processing method provided by the embodiments of the present disclosure further includes: when it is detected that the user uses the additional image material package for the first time, displaying guide information, where the guide information is used to prompt the user how to use the additional image material package.
  • the guide information may be any one of video, audio or a combination of both.
  • for example, the guide information is used to inform the user that shooting the face produces a tiger hat and shooting the feet produces tiger shoes.
  • the guide information can also be used to inform the user in advance that, after switching to the rear camera where the tiger shoes appear by default, the user can switch between the tiger shoes and the tiger hat by tapping the screen under the rear camera.
  • playing the guide information lets the user clearly know how to use the image editing effect, thereby improving the user experience.
  • FIG. 3 is a schematic structural diagram of an image processing device in an embodiment of the present disclosure. This embodiment is applicable to the situation of adding virtual special effect props to a video, and the image processing device can be realized by software and/or hardware, and the image processing device can be configured in an electronic device.
  • the image processing device 30 mainly includes: a first original video frame acquisition module 31, a first editing effect determination module 32, a first target video frame acquisition module 33, a second original video frame An acquisition module 34 , a second editing effect determination module 35 and a second target video frame acquisition module 36 .
  • the first original video frame acquisition module 31 is configured to acquire the first original video frame captured by the first camera in response to the first camera opening operation.
  • the first editing effect determining module 32 is configured to determine a first image editing effect corresponding to the first camera.
  • the first target video frame obtaining module 33 is configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
  • the second original video frame acquiring module 34 is configured to switch to the second camera and acquire the second original video frame captured by the second camera in response to the camera switching instruction.
  • the second editing effect determination module 35 is configured to determine a second image editing effect corresponding to the second camera.
  • the second target video frame obtaining module 36 is configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • the first target video frame obtaining module 33 includes: a first head image detection unit, configured to detect whether a first head image exists in the first original video frame; and a first target video frame obtaining unit, configured to, if the first head image exists in the first original video frame, perform image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
  • the first target video frame obtaining unit is specifically configured to add a first additional image material to a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
  • the second target video frame obtaining module 36 includes: a foot image detection unit, configured to detect whether a foot image exists in the second original video frame; and a second target video frame obtaining unit, configured to, if the foot image exists in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
  • the second target video frame obtaining unit is configured to add a second additional image material to a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
  • the first editing effect determination module 32 is configured to: after the first original video frame captured by the first camera is acquired, use one image editing effect in the target image editing effect as the first image editing effect by default; or, display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
  • the second editing effect determination module 35 is configured to: after the second original video frame captured by the second camera is acquired, use one image editing effect in the target image editing effect as the second image editing effect by default; or, display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
  • the device further includes: a third image editing effect determination module, configured to determine a third image editing effect corresponding to the second camera in response to a trigger operation on the screen; and a third target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
  • the third target video frame obtaining module includes: a second head image detection unit, configured to detect whether a second head image exists in the second original video frame; and a third target video frame obtaining unit, configured to, if the second head image exists in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
  • the third target video frame obtaining unit is configured to add a third additional image material at a position corresponding to the second head image in the second original video frame to obtain a third target video frame.
  • the first image editing effect is different from the second image editing effect.
  • the third image editing effect is different from the second image editing effect.
  • the device further includes: a guide information display module, configured to display guide information when it is detected that the user uses the additional image material package for the first time, and the guide information is used to prompt the user for the additional image material How to use the package.
  • the image processing device provided by the embodiment of the present disclosure can execute the steps performed in the image processing method provided by the method embodiments of the present disclosure; the execution steps and beneficial effects are not repeated here. A minimal structural sketch of such a device follows.
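  • for orientation only, the module composition of the device 30 described with reference to FIG. 3 could be sketched as a plain container of callables, one per step of the method; this is not the disclosure's code, and every name below is an assumption.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ImageProcessingDevice:
    """Sketch of device 30: six modules, one per step S101-S106 of the method."""
    first_original_video_frame_acquisition: Callable   # module 31 (S101)
    first_editing_effect_determination: Callable        # module 32 (S102)
    first_target_video_frame_obtaining: Callable        # module 33 (S103)
    second_original_video_frame_acquisition: Callable   # module 34 (S104)
    second_editing_effect_determination: Callable       # module 35 (S105)
    second_target_video_frame_obtaining: Callable       # module 36 (S106)
```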
  • FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring to FIG. 4 in detail below, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • the electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as car navigation terminals) and wearable terminal devices, and fixed terminals such as digital TVs, desktop computers and smart home devices.
  • the electronic device shown in FIG. 4 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 400 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403, so as to realize the image processing method according to the embodiment of the present disclosure.
  • in the RAM 403, various programs and data necessary for the operation of the terminal device 400 are also stored.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • the following devices can be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
  • the communication means 409 may allow the terminal device 400 to perform wireless or wired communication with other devices to exchange data. While FIG. 4 shows a terminal device 400 having various means, it should be understood that implementing or possessing all of the illustrated means is not a requirement. More or fewer means may alternatively be implemented or provided.
  • the processes described above with reference to the flowcharts can be implemented as computer software programs.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program includes program code for executing the method shown in the flowchart, thereby realizing the image processing method described above.
  • the computer program may be downloaded and installed from a network via communication means 409 , or from storage means 408 , or from ROM 402 .
  • when the computer program is executed by the processing device 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include local area networks (“LANs”), wide area networks (“WANs”), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the terminal device, the terminal device is caused to: in response to an operation of turning on the first camera, acquire the first original video frame captured by the first camera; determine the first image editing effect corresponding to the first camera; perform image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switch to the second camera and acquire the second original video frame captured by the second camera; determine the second image editing effect corresponding to the second camera; and perform image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • the terminal device may also perform other steps described in the foregoing embodiments.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
  • exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the present disclosure provides an image processing method, including: in response to the first camera opening operation, acquiring the first original video frame captured by the first camera; determining the first image editing effect corresponding to the first camera; according to the first image editing effect, performing image editing processing on the first original video frame to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to the camera switching instruction, switching to the second camera and acquiring the second original video frame captured by the second camera; determining the second image editing effect corresponding to the second camera; and according to the second image editing effect, performing image editing processing on the second original video frame to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
  • the present disclosure provides an image processing method, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain a first target video frame includes: detecting whether there is a first head image in the first original video frame; and if there is a first head image in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
  • the present disclosure provides an image processing method, wherein, according to the first image editing effect, image editing processing is performed on the first original video frame to obtain a first target video frame, including: adding a first additional image material to a position corresponding to the first head image in the first original video frame to obtain a first target video frame.
  • the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain a second target video frame includes: detecting whether there is a foot image in the second original video frame; and if there is a foot image in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
  • the present disclosure provides an image processing method, wherein, according to the second image editing effect, image editing processing is performed on the second original video frame to obtain a second target video frame, including: adding a second additional image material at the position corresponding to the foot image in the second original video frame to obtain the second target video frame.
  • the present disclosure provides an image processing method, wherein determining the first image editing effect corresponding to the first camera includes: after the first original video frame captured by the first camera is acquired, using one image editing effect in the target image editing effect as the first image editing effect by default; or, displaying multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
  • the present disclosure provides an image processing method, wherein determining the second image editing effect corresponding to the second camera includes: after the second original video frame captured by the second camera is acquired, using one image editing effect in the target image editing effect as the second image editing effect by default; or, displaying multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
  • the present disclosure provides an image processing method, the method further including: after performing image editing processing on the second original video frame according to the second image editing effect and obtaining and displaying the second target video frame, in response to a trigger operation on the screen, determining a third image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
  • the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame includes: detecting whether there is a second head image in the second original video frame; and if there is a second head image in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
  • the present disclosure provides an image processing method, wherein, according to the third image editing effect, image editing processing is performed on the second original video frame to obtain a third target video frame, including: adding a third additional image material to a position corresponding to the second head image in the second original video frame to obtain a third target video frame.
  • the present disclosure provides an image processing method, wherein the first image editing effect is different from the second image editing effect.
  • the present disclosure provides an image processing method, wherein the third image editing effect is different from the second image editing effect.
  • the present disclosure provides an image processing method, wherein the image processing method further includes: when it is detected that the user uses the additional image material package for the first time, displaying guide information, the The guide information is used to prompt the user how to use the additional image material package.
  • An embodiment of the present disclosure provides an image processing device, the device comprising: a first original video frame acquisition module configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera; a first editing effect determining module configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module configured to switch, in response to a camera switching instruction, to a second camera and acquire a second original video frame captured by the second camera; a second editing effect determining module configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the first target video frame obtaining module includes: a first head image detection unit configured to detect whether a first head image exists in the first original video frame; and a first target video frame obtaining unit configured to, if a first head image exists in the first original video frame, perform image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the first target video frame obtaining unit is specifically configured to add first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
  • The second target video frame obtaining module includes: a foot image detection unit configured to detect whether a foot image exists in the second original video frame; and a second target video frame obtaining unit configured to, if a foot image exists in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the second target video frame obtaining unit is configured to add second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the first editing effect determining module is configured to: after the first original video frame captured by the first camera is acquired, take, by default, one image editing effect in the target image editing effect as the first image editing effect; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
  • An embodiment of the present disclosure provides an image processing device, wherein the second editing effect determining module is configured to: after the second original video frame captured by the second camera is acquired, take, by default, one image editing effect in the target image editing effect as the second image editing effect; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
  • An embodiment of the present disclosure provides an image processing device, wherein the device further includes: a third image editing effect determining module configured to determine, in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera; and a third target video frame obtaining module configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the third target video frame obtaining module includes: a second head image detection unit configured to detect whether a second head image exists in the second original video frame; and a third target video frame obtaining unit configured to, if a second head image exists in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the third target video frame obtaining unit is configured to add third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
  • An embodiment of the present disclosure provides an image processing device, wherein the first image editing effect is different from the second image editing effect.
  • An embodiment of the present disclosure provides an image processing device, wherein the third image editing effect is different from the second image editing effect.
  • An embodiment of the present disclosure provides an image processing device, wherein the device further includes: a guide information display module configured to display guide information when it is detected that the user uses the additional image material package for the first time, the guide information being used to prompt the user how to use the additional image material package.
  • The present disclosure provides an electronic device, including: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any one of the image processing methods provided in the present disclosure.
  • The present disclosure provides a computer-readable storage medium (for example, a non-transitory computer-readable storage medium), on which a computer program is stored, where the program, when executed by a processor, implements any one of the image processing methods provided by the present disclosure.
  • An embodiment of the present disclosure also provides a computer program product, where the computer program product includes a computer program or instructions, and when the computer program or instructions are executed by a processor, the image processing method described above is implemented.
  • An embodiment of the present disclosure also provides a computer program, including: instructions, which when executed by a processor cause the processor to execute the image processing method as described above.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to an image processing method, apparatus, device, storage medium and program product. The image processing method includes: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; performing image editing processing on the first original video frame according to a first image editing effect to obtain and display a first target video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; and performing image editing processing on the second original video frame according to a second image editing effect to obtain and display a second target video frame.

Description

Image processing method, apparatus, device, storage medium and program product
Cross-Reference to Related Application
This application is based on and claims priority to Chinese patent application No. 202210107352.1 filed on January 28, 2022, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the technical field of computer processing, and in particular to an image processing method, apparatus, device, storage medium and program product.
Background
With the rapid development of Internet technology and terminal devices, terminal devices such as mobile phones and tablet computers have become an indispensable part of people's work and daily life, and the functions of the various media applications installed on smart terminals are becoming increasingly powerful.
For example, the media applications installed on a smart terminal can be used to operate on virtual image materials so as to simulate a real environment. Software of this kind reduces the need for real materials, saves cost, and makes it easy to collect statistics on the results of the operations.
Summary
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including: a first original video frame acquisition module configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera; a first editing effect determining module configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module configured to switch, in response to a camera switching instruction, to a second camera and acquire a second original video frame captured by the second camera; a second editing effect determining module configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program or instructions, where the computer program or instructions, when executed by a processor, implement the image processing method according to any one of the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides a computer program including instructions which, when executed by a processor, cause the processor to perform the image processing method according to any one of the first aspect.
Brief Description of the Drawings
The above and other features, advantages and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 2 is a flowchart of another image processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e. "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not intended to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
In the related art, there are mainly two interaction modes for additional image materials.
The first interaction mode: when a specific image is recognized and detected in a video frame, corresponding additional image material is added to the video frame and displayed. The specific image may be a face image, a foot image, a hand image, or the like. For example, when a hand image is recognized and detected in a video frame, candied-haw image material is displayed at the hand position in the video frame.
The second interaction mode: when it is recognized and detected that a user in a video frame performs a designated action, corresponding additional image material is displayed in the video frame. The designated action may be pouting, blinking, making a finger heart, or the like. For example, when it is recognized and detected that the user in a video frame performs a pouting action, balloon image material is displayed at the mouth in the video frame.
With the above interaction modes for additional image materials, only a single type of image such as a head or a foot can be recognized; it is not possible, when shooting one video, to recognize both the head and the feet within the same video. Multiple additional image materials can only be used in the video frames captured by the front camera, or, alternatively, in the video frames captured by the rear camera. In this way, the front and rear cameras are not split: when multiple additional image materials exist, both the front lens and the rear lens carry multiple 3D additional image materials, which consumes a huge amount of device performance. For example, when there are two image materials, both the front and rear cameras carry the two image materials, which consumes a huge amount of performance of the terminal device.
In view of this, embodiments of the present disclosure provide an image processing method in which different image editing processing is used for the video frames captured by different cameras, so that multiple image editing effects are used in one and the same video captured by the front camera and the rear camera, i.e. the video captured by the front camera uses one image editing effect and the video captured by the rear camera uses another image editing effect, thereby optimizing device performance. The image processing method proposed by the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. This embodiment is applicable to the case of adding additional image material to a video. The method may be performed by an image processing apparatus. The image processing apparatus may be implemented in software and/or hardware, and may be configured in an electronic device.
For example, the electronic device may be a mobile terminal, a fixed terminal or a portable terminal, such as a mobile phone, a station, a unit, a device, a multimedia computer, a multimedia tablet, an Internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a personal communication system (PCS) device, a personal navigation device, a personal digital assistant (PDA), an audio/video player, a digital camera/camcorder, a positioning device, a television receiver, a radio broadcast receiver, an e-book device, a gaming device or any combination thereof, including accessories and peripherals of these devices or any combination thereof.
As another example, the electronic device may be a server, where the server may be a physical server or a cloud server, and may be a single server or a server cluster.
As shown in Fig. 1, the image processing method provided by the embodiment of the present disclosure mainly includes the following steps S101 to S106.
In step S101, in response to a first camera opening operation, a first original video frame captured by the first camera is acquired.
The first camera and the second camera described below are two cameras provided in the same terminal device. The first camera and the second camera may be external cameras connected to the terminal device, or built-in cameras of the terminal device. The connection may be wired or wireless, which is not limited in this embodiment. The built-in camera of the terminal device may be a front camera or a rear camera. For example, in this embodiment, the first camera is the front camera of the terminal device, and the second camera is the rear camera of the terminal device.
For example, the first camera may be a single camera or a group of cameras; the number of cameras included in the first camera is not limited in this embodiment.
In some embodiments, responding to the first camera opening operation includes: after it is detected that the user opens a media application and a trigger operation by the user on a target image editing effect is detected, an opening instruction for the first camera is received, and the first camera is opened in response to the opening instruction.
In some embodiments, responding to the first camera opening operation may further include: after a trigger operation by the user on a camera switching button is detected, if the first camera is in a closed state, an opening instruction for the first camera is received, and the first camera is opened in response to the opening instruction. The switching button may be a virtual button or a physical button, which is not limited in this embodiment.
For example, the first original video frame may be understood as a video frame captured by the first camera that has not undergone any processing. As another example, the first original video frame may also be understood as a video frame captured by the first camera that has undergone set processing but to which no additional image material has been added. The set processing may be image beautification processing such as skin smoothing, makeup or applying a filter. Additional image material may be understood as content added to a video frame that does not belong to the image of the video frame itself. The additional image material may also be referred to as prop material, special-effect material, etc., which is not limited in this embodiment.
Acquiring the first original video frame captured by the first camera may include: acquiring, in real time, the first video frame captured by the front camera.
In some embodiments, in the case of shooting a video with a target image editing effect applied, the first original video frame captured by the first camera is acquired in response to the first camera opening operation.
For example, an image editing effect may be understood as an effect of adding additional image material to a video frame. The target image editing effect may be understood as the image editing effect selected by the user.
In some embodiments, shooting a video with the target image editing effect applied may be understood as the user enabling the target image editing effect before shooting the video and, when shooting the video, performing image editing processing on the shot video using the target image editing effect.
In some embodiments, enabling the target image editing effect may mean that the target image editing effect is enabled by default when the user opens the media application, or that the image editing effect is enabled in response to a trigger operation by the user on the target image editing effect after the user opens the media application.
In step S102, the first image editing effect corresponding to the first camera is determined.
For example, the first image editing effect may be understood as the image editing effect used when video frames are captured with the first camera.
In some embodiments, determining the first image editing effect corresponding to the first camera includes: after the first original video frame captured by the first camera is acquired, taking, by default, one image editing effect in the target image editing effect as the first image editing effect.
In some embodiments, determining the first image editing effect corresponding to the first camera includes: displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
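By way of illustration only, the two ways of determining an effect described above (taking a default member of the target image editing effect, or taking the effect selected by the user) can be sketched in Kotlin as follows; the type and function names are hypothetical and are not part of the disclosed embodiments.

```kotlin
// Hypothetical sketch of effect determination: the target image editing effect
// is modelled as a non-empty list of candidate effects for the current camera.
data class ImageEditingEffect(val name: String)

fun resolveEffect(
    targetEffects: List<ImageEditingEffect>,
    userSelection: ImageEditingEffect? = null
): ImageEditingEffect {
    // Use the effect selected by the user if a selection operation occurred;
    // otherwise fall back to a default member of the target image editing effect.
    return userSelection ?: targetEffects.first()
}
```

In this sketch, calling resolveEffect(effects) right after the first original video frame is acquired corresponds to the default branch, while passing a non-null userSelection corresponds to the selection branch.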
In step S103, image editing processing is performed on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame.
In this embodiment, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame may be understood as applying the first image editing effect to the first original video frame.
In some embodiments, the first image editing effect is to add additional image material at a designated position of the first original video frame. The designated position may be a body part such as the eyes, mouth, head or hands in the first original video frame, or a static object such as a building, flowers, grass or trees in the first original video frame. No specific limitation is made in this embodiment.
In some embodiments, the first image editing effect is to add additional image material at a designated position of the first original video frame after the user completes a designated action. The designated action may be the user in the first original video frame blinking, pouting, waving, kicking, or the like. No specific limitation is made in this embodiment.
In some embodiments, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame includes: detecting whether a first head image exists in the first original video frame; and, if a first head image exists in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
The first head image may be understood as a face image recognized and detected in the first original video frame. In this embodiment, three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a full-body recognition algorithm.
In some embodiments, the face recognition algorithm is bound to the first camera. That is, after it is detected that the first camera is opened, the face recognition algorithm is started, and whether a head image exists in the first original video frame is recognized and detected by the face recognition algorithm. How the face recognition algorithm performs head recognition is not specifically limited in this embodiment.
In some embodiments, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame includes: adding first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
The first additional image material may be a virtual hat, a virtual hairpin, a virtual hair-swaying ornament, a virtual hair clasp, a virtual hair ornament, or the like. The style of the hat may be an animal-style hat, e.g. a tiger-head hat or a rabbit-style hat, or a conventional-style hat, e.g. a baseball cap.
For example, after a head image is recognized in the first original video frame, the first additional image material is added at the head position, e.g. a virtual tiger-head hat is displayed at the head position.
In step S104, in response to a camera switching instruction, switching to the second camera is performed and a second original video frame captured by the second camera is acquired.
For example, the camera switching instruction may be understood as an instruction to switch the currently working camera to a closed state and to switch the camera in the closed state to a working state.
In this embodiment, before the camera switching instruction is responded to, the first camera is in the working state and the second camera is in the closed state. After the camera switching instruction is responded to, the first camera is switched from the working state to the closed state, and the second camera is switched from the closed state to the working state (i.e. the second camera is opened).
Responding to the camera switching instruction may be that, after an operation by the user on a camera switching button is detected, a camera switching instruction is received, and in response to the switching instruction the first camera is closed and the second camera is opened. The switching button may be a virtual button or a physical button, which is not limited in this embodiment.
For example, the second original video frame may be understood as a video frame captured by the second camera. As another example, the second original video frame may also be understood as a video frame captured by the second camera that has undergone set processing but to which no additional image material has been added. The set processing may be image beautification processing such as skin smoothing, makeup or applying a filter.
Acquiring the second video frame captured by the second camera may include: acquiring, in real time, the second video frame captured by the rear camera.
In step S105, the second image editing effect corresponding to the second camera is determined.
The second image editing effect may be understood as the image editing effect used when video frames are captured with the second camera.
In some embodiments, determining the second image editing effect corresponding to the second camera includes: after the second original video frame captured by the second camera is acquired, taking, by default, one image editing effect in the target image editing effect as the second image editing effect.
In some embodiments, determining the second image editing effect corresponding to the second camera includes: displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
In step S106, image editing processing is performed on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
In this embodiment, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame may be understood as applying the second image editing effect to the second original video frame.
In some embodiments, the second image editing effect is to add additional image material at a designated position of the second original video frame. The designated position may be a body part such as the feet or legs in the second original video frame. No specific limitation is made in this embodiment.
In some embodiments, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame includes: detecting whether a foot image exists in the second original video frame; and, if a foot image exists in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
In this embodiment, three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a full-body recognition algorithm.
In some embodiments, the foot recognition algorithm is bound to the second camera. That is, after it is detected that the second camera is opened, the foot recognition algorithm is started, and whether a foot image exists in the second original video frame is recognized and detected by the foot recognition algorithm. How the foot recognition algorithm performs foot recognition is not specifically limited in this embodiment.
For example, the foot image may be a bare-foot image or an image of a foot wearing a shoe, which is not limited in this embodiment.
In some embodiments, the full-body recognition algorithm is started when the face recognition algorithm is started; and/or the full-body recognition algorithm is started when the foot recognition algorithm is started. The full-body recognition algorithm is used to assist the face recognition algorithm and/or the foot recognition algorithm in recognizing images, thereby improving image recognition efficiency.
In some embodiments, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame includes: adding second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
For example, the second additional image material may be virtual shoes or the like. The style of the shoes may be an animal style, e.g. a tiger-paw style or a rabbit-foot style, or a conventional style, e.g. sneakers.
In some embodiments, after a foot image is recognized in the second original video frame, the second additional image material is added at the foot position, e.g. virtual tiger-paw shoes are displayed at the foot position.
In this embodiment, a face recognition algorithm, a foot recognition algorithm and a full-body recognition algorithm are provided; meanwhile, the hat and the shoes are split between the front and rear lenses. Under the front lens, only the head is recognized and the virtual hat appears, giving priority to the display effect of the hat; when the user switches to the rear lens to obtain a larger shooting space, the virtual shoes appear by default. In this way, when there are multiple additional image materials, the two camera lenses are split, i.e. different additional image materials are displayed in the video frames captured by the two cameras (i.e. the first additional image material is different from the second additional image material), which optimizes terminal device performance.
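As a minimal, non-limiting Kotlin sketch of the front/rear split described above (assuming, as in this embodiment, that the first camera is the front camera and the second camera is the rear camera), the fragment below binds head detection and hat material to the front lens and foot detection and shoe material to the rear lens, so that only one recognition branch and one additional image material are active per lens. All identifiers are hypothetical, and the detectors are placeholders standing in for the face and foot recognition algorithms.

```kotlin
enum class CameraId { FRONT, REAR }

// Placeholder detectors standing in for the face / foot recognition algorithms;
// a real implementation would run the corresponding model on the frame.
fun detectHead(frame: ByteArray): Boolean = frame.isNotEmpty()
fun detectFoot(frame: ByteArray): Boolean = frame.isNotEmpty()

// Only the branch bound to the active camera runs, so a single additional
// image material is rendered per lens instead of every material on both lenses.
fun materialFor(camera: CameraId, frame: ByteArray): String? = when (camera) {
    CameraId.FRONT -> if (detectHead(frame)) "virtual_tiger_hat" else null
    CameraId.REAR -> if (detectFoot(frame)) "virtual_tiger_shoes" else null
}
```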
An embodiment of the present disclosure provides an image processing method including: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame. In the embodiments of the present disclosure, different image editing processing is used for the video frames captured by different cameras, e.g. the first image editing effect is different from the second image editing effect. In this way, multiple image editing effects are used in one and the same video captured by the front camera and the rear camera, i.e. the video captured by the front camera uses one image editing effect and the video captured by the rear camera uses another, which optimizes terminal device performance.
On the basis of the above embodiment, the embodiment of the present disclosure further optimizes the above image processing method. As shown in Fig. 2, the optimized image processing method mainly includes the following steps S201 to S208.
In step S201, in response to a first camera opening operation, a first original video frame captured by the first camera is acquired.
In step S202, the first image editing effect corresponding to the first camera is determined.
In step S203, image editing processing is performed on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame.
In step S204, in response to a camera switching instruction, switching to the second camera is performed and a second original video frame captured by the second camera is acquired.
In step S205, the second image editing effect corresponding to the second camera is determined.
In step S206, image editing processing is performed on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
In this embodiment, steps S201 to S206 are performed in the same way as steps S101 to S106 in the above embodiment; for details, reference may be made to the description in the above embodiment, which will not be repeated here.
In step S207, in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera is determined.
The screen refers to a touch screen capable of receiving operation signals; the size and type of the screen are not specifically limited in this embodiment. After a trigger operation on the screen is detected, the trigger operation on the screen is responded to. The trigger operation on the screen may be a tap operation or a double-tap operation on the screen.
The third image editing effect may be the same as or different from the first image editing effect. The third image editing effect is different from the second image editing effect. In some embodiments, the third image editing effect may be the same as the first image editing effect.
In some embodiments, with the second camera opened, in response to a trigger operation on the screen, one image editing effect in the target image editing effect is taken as the third image editing effect by default.
In step S208, image editing processing is performed on the second original video frame according to the third image editing effect to obtain and display a third target video frame, the third target video frame being an effect image of the third image editing effect applied to the second original video frame.
In this embodiment, performing image editing processing on the second original video frame according to the third image editing effect to obtain the third target video frame may be understood as applying the third image editing effect to the second original video frame.
In some embodiments, the third image editing effect is to add additional image material at a designated position of the second original video frame. The designated position may be a body part such as the eyes, mouth, head or hands in the second original video frame, or a static object such as a building, flowers, grass or trees in the second original video frame. No specific limitation is made in this embodiment.
In some embodiments, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame includes: detecting whether a second head image exists in the second original video frame; and, if a second head image exists in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
The second head image may be understood as a head image recognized and detected in the second original video frame. In some embodiments, whether a head image exists in the second original video frame is recognized and detected by the face recognition algorithm. How the face recognition algorithm performs head recognition is not specifically limited in this embodiment.
In some embodiments, performing image editing processing on the second original video frame according to the third image editing effect to obtain the third target video frame includes: adding third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
For example, the third additional image material may be a virtual hat, a virtual hairpin, a virtual hair-swaying ornament, a virtual hair clasp, a virtual hair ornament, or the like. The style of the hat may be an animal-style hat, e.g. a tiger-head hat or a rabbit-style hat, or a conventional-style hat, e.g. a baseball cap.
In some embodiments, after a head image is recognized in the second original video frame, the third additional image material is added at the head position, e.g. a virtual tiger-head hat is displayed at the head position.
After the user loads the sticker, the front camera is opened by default, and once a face image is detected a tiger-head hat is displayed on the screen. After the user switches to the rear lens, tiger shoes appear on the screen once a foot image is detected. After the user taps the screen, whether a face image exists is detected, and once a face image is detected the tiger-head hat is displayed on the screen.
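The tap-driven switch under the rear lens described in steps S207 and S208 could, under the same assumptions, look roughly like the following sketch; it reuses the placeholder detectHead and detectFoot functions from the previous sketch, and the state holder and names are hypothetical rather than a definitive implementation.

```kotlin
// Hypothetical rear-lens state: shoes by default (second effect), hat after a
// screen tap that finds a head image in the current frame (third effect).
class RearLensEffectState {
    private var useThirdEffect = false

    fun onScreenTap(frame: ByteArray) {
        // A tap switches between the shoe and hat materials, but the hat is
        // only shown when a second head image is detected in the frame.
        if (detectHead(frame)) useThirdEffect = !useThirdEffect
    }

    fun currentMaterial(frame: ByteArray): String? = when {
        useThirdEffect && detectHead(frame) -> "virtual_tiger_hat"
        detectFoot(frame) -> "virtual_tiger_shoes"
        else -> null
    }
}
```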
On the basis of the above embodiments, the image processing method provided by the embodiments of the present disclosure further includes: when it is detected that the user uses the additional image material package for the first time, displaying guide information, the guide information being used to prompt the user how to use the additional image material package.
The guide information may be any one of video and audio, or a combination of the two. The guide information is used to inform the user that shooting the face gives a tiger hat and shooting the feet gives tiger shoes. The guide information may also be used to inform the user in advance that, after the user switches to the rear lens and the tiger shoes appear by default, tapping the screen again under the rear lens switches between the tiger shoes and the tiger hat.
In this embodiment, playing the guide information enables the user to know clearly how to use the image editing effect, which improves the user experience.
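A first-use check of the kind described here might be no more than the following hypothetical sketch, in which the persistence mechanism is deliberately left abstract (a simple map stands in for whatever storage the application actually uses).

```kotlin
// Hypothetical first-use guard: show the guide information only the first time
// the additional image material package is used, then remember that it was shown.
class GuideInfoController(private val storage: MutableMap<String, Boolean> = mutableMapOf()) {
    fun maybeShowGuide(packageId: String, show: () -> Unit) {
        if (storage[packageId] != true) {
            show()                      // play the guide video and/or audio
            storage[packageId] = true   // do not show the guide again next time
        }
    }
}
```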
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. This embodiment is applicable to the case of adding virtual special-effect props to a video. The image processing apparatus may be implemented in software and/or hardware, and may be configured in an electronic device.
As shown in Fig. 3, the image processing apparatus 30 provided by the embodiment of the present disclosure mainly includes: a first original video frame acquisition module 31, a first editing effect determining module 32, a first target video frame obtaining module 33, a second original video frame acquisition module 34, a second editing effect determining module 35 and a second target video frame obtaining module 36.
The first original video frame acquisition module 31 is configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera.
The first editing effect determining module 32 is configured to determine a first image editing effect corresponding to the first camera.
The first target video frame obtaining module 33 is configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame.
The second original video frame acquisition module 34 is configured to switch, in response to a camera switching instruction, to the second camera and acquire a second original video frame captured by the second camera.
The second editing effect determining module 35 is configured to determine a second image editing effect corresponding to the second camera.
The second target video frame obtaining module 36 is configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
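For illustration only, the module split of the image processing apparatus 30 shown in Fig. 3 can be mirrored as a set of interfaces; the interface and method names below are hypothetical and simply restate the six modules enumerated above.

```kotlin
// Hypothetical interfaces mirroring modules 31-36 of the apparatus in Fig. 3.
typealias Frame = ByteArray
typealias Effect = String

interface FirstOriginalFrameAcquisition { fun onFirstCameraOpened(): Frame }
interface FirstEditingEffectDetermining { fun determineFirstEffect(): Effect }
interface FirstTargetFrameObtaining { fun applyFirstEffect(frame: Frame, effect: Effect): Frame }
interface SecondOriginalFrameAcquisition { fun onCameraSwitched(): Frame }
interface SecondEditingEffectDetermining { fun determineSecondEffect(): Effect }
interface SecondTargetFrameObtaining { fun applySecondEffect(frame: Frame, effect: Effect): Frame }
```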
In some embodiments, the first target video frame obtaining module 33 includes: a first head image detection unit configured to detect whether a first head image exists in the first original video frame; and a first target video frame obtaining unit configured to, if a first head image exists in the first original video frame, perform image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
In some embodiments, the first target video frame obtaining unit is specifically configured to add first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
In some embodiments, the second target video frame obtaining module 36 includes: a foot image detection unit configured to detect whether a foot image exists in the second original video frame; and a second target video frame obtaining unit configured to, if a foot image exists in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
In some embodiments, the second target video frame obtaining unit is specifically configured to add second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
In some embodiments, the first editing effect determining module 32 is configured to: after the first original video frame captured by the first camera is acquired, take, by default, one image editing effect in the target image editing effect as the first image editing effect; or display multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
In some embodiments, the second editing effect determining module 35 is configured to: after the second original video frame captured by the second camera is acquired, take, by default, one image editing effect in the target image editing effect as the second image editing effect; or display multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determine the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
In some embodiments, the apparatus further includes: a third image editing effect determining module configured to determine, in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera; and a third target video frame obtaining module configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
In some embodiments, the third target video frame obtaining module includes: a second head image detection unit configured to detect whether a second head image exists in the second original video frame; and a third target video frame obtaining unit configured to, if a second head image exists in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
In some embodiments, the third target video frame obtaining unit is configured to add third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
In some embodiments, the first image editing effect is different from the second image editing effect.
In some embodiments, the third image editing effect is different from the second image editing effect.
In some embodiments, the apparatus further includes: a guide information display module configured to display guide information when it is detected that the user uses the additional image material package for the first time, the guide information being used to prompt the user how to use the additional image material package.
The image processing apparatus provided by the embodiment of the present disclosure can perform the steps performed in the image processing method provided by the method embodiments of the present disclosure, with the corresponding execution steps and beneficial effects, which will not be repeated here.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now specifically to Fig. 4, it shows a schematic structural diagram of an electronic device 400 suitable for implementing the embodiments of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g. in-vehicle navigation terminals) and wearable terminal devices, and fixed terminals such as digital TVs, desktop computers and smart home devices. The electronic device shown in Fig. 4 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 4, the electronic device 400 may include a processing apparatus (e.g. a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (RAM) 403, so as to implement the image processing method of the embodiments described in the present disclosure. Various programs and data required for the operation of the terminal device 400 are also stored in the RAM 403. The processing apparatus 401, the ROM 402 and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: an input apparatus 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 409. The communication apparatus 409 may allow the terminal device 400 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 4 shows the terminal device 400 having various apparatuses, it should be understood that it is not required to implement or have all of the illustrated apparatuses; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart, thereby implementing the image processing method described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 409, or installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above-described functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g. a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g. the Internet) and a peer-to-peer network (e.g. an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: in response to a first camera opening operation, acquire a first original video frame captured by the first camera; determine a first image editing effect corresponding to the first camera; perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switch to a second camera and acquire a second original video frame captured by the second camera; determine a second image editing effect corresponding to the second camera; and perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
Optionally, when the one or more programs are executed by the terminal device, the terminal device may also perform other steps described in the above embodiments.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; the programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g. through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not in some cases constitute a limitation on the unit itself.
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, including: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame includes: detecting whether a first head image exists in the first original video frame; and, if a first head image exists in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame includes: adding first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame includes: detecting whether a foot image exists in the second original video frame; and, if a foot image exists in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame includes: adding second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein determining the first image editing effect corresponding to the first camera includes: after the first original video frame captured by the first camera is acquired, taking, by default, one image editing effect in the target image editing effect as the first image editing effect; or displaying multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein determining the second image editing effect corresponding to the second camera includes: after the second original video frame captured by the second camera is acquired, taking, by default, one image editing effect in the target image editing effect as the second image editing effect; or displaying multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, further including: after performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, determining, in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame includes: detecting whether a second head image exists in the second original video frame; and, if a second head image exists in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain the third target video frame includes: adding third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein the first image editing effect is different from the second image editing effect.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, wherein the third image editing effect is different from the second image editing effect.
According to one or more embodiments of the present disclosure, the present disclosure provides an image processing method, further including: when it is detected that the user uses the additional image material package for the first time, displaying guide information, the guide information being used to prompt the user how to use the additional image material package.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, including: a first original video frame acquisition module configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera; a first editing effect determining module configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module configured to switch, in response to a camera switching instruction, to a second camera and acquire a second original video frame captured by the second camera; a second editing effect determining module configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the first target video frame obtaining module includes: a first head image detection unit configured to detect whether a first head image exists in the first original video frame; and a first target video frame obtaining unit configured to, if a first head image exists in the first original video frame, perform image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the first target video frame obtaining unit is specifically configured to add first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
According to one or more embodiments of the present disclosure, the second target video frame obtaining module includes: a foot image detection unit configured to detect whether a foot image exists in the second original video frame; and a second target video frame obtaining unit configured to, if a foot image exists in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the second target video frame obtaining unit is configured to add second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the first editing effect determining module is configured to: after the first original video frame captured by the first camera is acquired, take, by default, one image editing effect in the target image editing effect as the first image editing effect; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the second editing effect determining module is configured to: after the second original video frame captured by the second camera is acquired, take, by default, one image editing effect in the target image editing effect as the second image editing effect; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the apparatus further includes: a third image editing effect determining module configured to determine, in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera; and a third target video frame obtaining module configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the third target video frame obtaining module includes: a second head image detection unit configured to detect whether a second head image exists in the second original video frame; and a third target video frame obtaining unit configured to, if a second head image exists in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the third target video frame obtaining unit is configured to add third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the first image editing effect is different from the second image editing effect.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the third image editing effect is different from the second image editing effect.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides an image processing apparatus, wherein the apparatus further includes: a guide information display module configured to display guide information when it is detected that the user uses the additional image material package for the first time, the guide information being used to prompt the user how to use the additional image material package.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any one of the image processing methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) on which a computer program is stored, where the program, when executed by a processor, implements any one of the image processing methods provided by the present disclosure.
An embodiment of the present disclosure further provides a computer program product, the computer program product including a computer program or instructions, where the computer program or instructions, when executed by a processor, implement the image processing method described above.
An embodiment of the present disclosure further provides a computer program, including instructions which, when executed by a processor, cause the processor to perform the image processing method described above.
The above description is merely a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in a sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or logical actions of the method, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (18)

  1. An image processing method, comprising:
    in response to a first camera opening operation, acquiring a first original video frame captured by the first camera;
    determining a first image editing effect corresponding to the first camera;
    performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame;
    in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera;
    determining a second image editing effect corresponding to the second camera; and
    performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
  2. The image processing method according to claim 1, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame comprises:
    detecting whether a first head image exists in the first original video frame; and
    if a first head image exists in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
  3. The image processing method according to claim 2, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame comprises:
    adding first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
  4. The image processing method according to any one of claims 1 to 3, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame comprises:
    detecting whether a foot image exists in the second original video frame; and
    if a foot image exists in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
  5. The image processing method according to claim 4, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame comprises:
    adding second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
  6. The image processing method according to any one of claims 1 to 5, wherein determining the first image editing effect corresponding to the first camera comprises:
    after the first original video frame captured by the first camera is acquired, taking, by default, one image editing effect in the target image editing effect as the first image editing effect; or
    displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
  7. The image processing method according to any one of claims 1 to 6, wherein determining the second image editing effect corresponding to the second camera comprises:
    after the second original video frame captured by the second camera is acquired, taking, by default, one image editing effect in the target image editing effect as the second image editing effect; or
    displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
  8. The image processing method according to any one of claims 1 to 7, further comprising:
    after performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, determining, in response to a trigger operation on a screen, a third image editing effect corresponding to the second camera; and
    performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame, the third target video frame being an effect image of the third image editing effect applied to the second original video frame.
  9. The image processing method according to claim 8, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame comprises:
    detecting whether a second head image exists in the second original video frame; and
    if a second head image exists in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
  10. The image processing method according to claim 9, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain the third target video frame comprises:
    adding third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
  11. The image processing method according to any one of claims 1 to 10, wherein the first image editing effect is different from the second image editing effect.
  12. The image processing method according to any one of claims 8 to 10, wherein the third image editing effect is different from the second image editing effect.
  13. The image processing method according to any one of claims 1 to 12, further comprising:
    when it is detected that the user uses the additional image material package for the first time, displaying guide information, the guide information being used to prompt the user how to use the additional image material package.
  14. An image processing apparatus, comprising:
    a first original video frame acquisition module configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera;
    a first editing effect determining module configured to determine a first image editing effect corresponding to the first camera;
    a first target video frame obtaining module configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module configured to switch, in response to a camera switching instruction, to a second camera and acquire a second original video frame captured by the second camera; a second editing effect determining module configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
  15. An electronic device, comprising:
    one or more processors; and
    a storage apparatus for storing one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method according to any one of claims 1 to 13.
  16. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 13.
  17. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the image processing method according to any one of claims 1 to 13.
  18. A computer program, comprising:
    instructions which, when executed by a processor, cause the processor to perform the image processing method according to any one of claims 1 to 13.
PCT/CN2023/072579 2022-01-28 2023-01-17 图像处理方法、装置、设备、存储介质和程序产品 WO2023143240A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP23746122.3A EP4459556A1 (en) 2022-01-28 2023-01-17 Image processing method and apparatus, device, storage medium and program product
AU2023213666A AU2023213666A1 (en) 2022-01-28 2023-01-17 Image processing method and apparatus, device, storage medium and program product
KR1020247028347A KR20240141285A (ko) 2022-01-28 2023-01-17 이미지 처리 방법 및 장치, 디바이스, 저장 매체 및 프로그램 제품

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210107352.1A CN114429506B (zh) 2022-01-28 2022-01-28 图像处理方法、装置、设备、存储介质和程序产品
CN202210107352.1 2022-01-28

Publications (1)

Publication Number Publication Date
WO2023143240A1 true WO2023143240A1 (zh) 2023-08-03

Family

ID=81313250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072579 WO2023143240A1 (zh) 2022-01-28 2023-01-17 图像处理方法、装置、设备、存储介质和程序产品

Country Status (5)

Country Link
EP (1) EP4459556A1 (zh)
KR (1) KR20240141285A (zh)
CN (1) CN114429506B (zh)
AU (1) AU2023213666A1 (zh)
WO (1) WO2023143240A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429506B (zh) * 2022-01-28 2024-02-06 北京字跳网络技术有限公司 图像处理方法、装置、设备、存储介质和程序产品

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040210823A1 (en) * 2003-04-21 2004-10-21 Communications Research Laboratory, Independent Administrative Institution Real-time contents editing method, system, and program
JP2011211561A (ja) * 2010-03-30 2011-10-20 Nec Corp カメラ付き携帯端末、カメラ付携帯端末の制御方法及びその制御プログラム
CN105306802A (zh) * 2014-07-08 2016-02-03 腾讯科技(深圳)有限公司 拍照模式的切换方法及装置
CN112017137A (zh) * 2020-08-19 2020-12-01 深圳市锐尔觅移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN112199016A (zh) * 2020-09-30 2021-01-08 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN114429506A (zh) * 2022-01-28 2022-05-03 北京字跳网络技术有限公司 图像处理方法、装置、设备、存储介质和程序产品

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104052935B (zh) * 2014-06-18 2017-10-20 广东欧珀移动通信有限公司 一种视频编辑方法及装置
CN105391965B (zh) * 2015-11-05 2018-09-07 广东欧珀移动通信有限公司 基于多摄像头的视频录制方法及装置
CN108124094A (zh) * 2016-11-30 2018-06-05 北京小米移动软件有限公司 一种拍照模式的切换方法及装置
CN107820006A (zh) * 2017-11-07 2018-03-20 北京小米移动软件有限公司 控制摄像头摄像的方法及装置
KR101942063B1 (ko) * 2018-07-27 2019-01-24 아이씨티웨이주식회사 영상이미지의 오류 지점을 자동으로 확인해 갱신 및 처리하는 영상이미지 검수시스템
CN109618183B (zh) * 2018-11-29 2019-10-25 北京字节跳动网络技术有限公司 一种视频特效添加方法、装置、终端设备及存储介质
CN111327814A (zh) * 2018-12-17 2020-06-23 华为技术有限公司 一种图像处理的方法及电子设备
CN110113526A (zh) * 2019-04-22 2019-08-09 联想(北京)有限公司 处理方法、处理装置和电子设备
CN112153272B (zh) * 2019-06-28 2022-02-25 华为技术有限公司 一种图像拍摄方法与电子设备
CN111314617B (zh) * 2020-03-17 2023-04-07 北京达佳互联信息技术有限公司 视频数据处理方法、装置、电子设备及存储介质
CN112672061B (zh) * 2020-12-30 2023-01-24 维沃移动通信(杭州)有限公司 视频拍摄方法、装置、电子设备及介质
CN112862927B (zh) * 2021-01-07 2023-07-25 北京字跳网络技术有限公司 用于发布视频的方法、装置、设备和介质
CN113938587B (zh) * 2021-09-14 2024-03-15 青岛海信移动通信技术有限公司 基于双摄像头的摄录方法及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040210823A1 (en) * 2003-04-21 2004-10-21 Communications Research Laboratory, Independent Administrative Institution Real-time contents editing method, system, and program
JP2011211561A (ja) * 2010-03-30 2011-10-20 Nec Corp カメラ付き携帯端末、カメラ付携帯端末の制御方法及びその制御プログラム
CN105306802A (zh) * 2014-07-08 2016-02-03 腾讯科技(深圳)有限公司 拍照模式的切换方法及装置
CN112017137A (zh) * 2020-08-19 2020-12-01 深圳市锐尔觅移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN112199016A (zh) * 2020-09-30 2021-01-08 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN114429506A (zh) * 2022-01-28 2022-05-03 北京字跳网络技术有限公司 图像处理方法、装置、设备、存储介质和程序产品

Also Published As

Publication number Publication date
CN114429506B (zh) 2024-02-06
KR20240141285A (ko) 2024-09-26
EP4459556A1 (en) 2024-11-06
CN114429506A (zh) 2022-05-03
AU2023213666A1 (en) 2024-08-15

Similar Documents

Publication Publication Date Title
WO2021082760A1 (zh) 虚拟形象的生成方法、装置、终端及存储介质
WO2023051185A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2021196903A1 (zh) 视频处理方法、装置、可读介质及电子设备
WO2021218325A1 (zh) 视频处理方法、装置、计算机可读介质和电子设备
JP7199527B2 (ja) 画像処理方法、装置、ハードウェア装置
JP7553582B2 (ja) 画像特殊効果の処理方法及び装置
WO2023185671A1 (zh) 风格图像生成方法、装置、设备及介质
WO2022042389A1 (zh) 搜索结果的展示方法、装置、可读介质和电子设备
JP7236551B2 (ja) キャラクタ推薦方法、キャラクタ推薦装置、コンピュータ装置およびプログラム
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
WO2021027631A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
WO2023165515A1 (zh) 拍摄方法、装置、电子设备和存储介质
WO2021135864A1 (zh) 图像处理方法及装置
CN110070496A (zh) 图像特效的生成方法、装置和硬件装置
WO2023138425A1 (zh) 虚拟资源的获取方法、装置、设备及存储介质
US12041379B2 (en) Image special effect processing method, apparatus, and electronic device, and computer-readable storage medium
WO2023143240A1 (zh) 图像处理方法、装置、设备、存储介质和程序产品
US11818491B2 (en) Image special effect configuration method, image recognition method, apparatus and electronic device
WO2023138441A1 (zh) 视频生成方法、装置、设备及存储介质
WO2023221941A1 (zh) 图像处理方法、装置、设备及存储介质
WO2023035936A1 (zh) 数据交互方法、装置、电子设备和存储介质
US11805219B2 (en) Image special effect processing method and apparatus, electronic device and computer-readable storage medium
CN113515329B (zh) 特效属性设置方法及装置
US20240348917A1 (en) Photographing method and apparatus, and device, storage medium and program product
US20240177272A1 (en) Image processing method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23746122

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023746122

Country of ref document: EP

Effective date: 20240730

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112024015436

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2023213666

Country of ref document: AU

Date of ref document: 20230117

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 202427063091

Country of ref document: IN

ENP Entry into the national phase

Ref document number: 20247028347

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE