WO2023143240A1 - Image processing method, apparatus, device, storage medium and program product
- Publication number
- WO2023143240A1 (PCT application PCT/CN2023/072579; CN2023072579W)
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
- G06T11/60—Editing figures and text; Combining figures or text
- G06T11/20—Drawing from basic elements, e.g. lines or circles; G06T11/206—Drawing of charts or graphs
Definitions
- the present disclosure relates to the technical field of computer processing, and in particular to an image processing method, device, equipment, storage medium and program product.
- the media software installed on a smart terminal can be used to manipulate virtual image material so as to simulate a real environment. With such software, the need for real materials can be reduced, costs can be saved, and the results of operations can be conveniently tallied.
- an embodiment of the present disclosure provides an image processing method, the method comprising: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- an embodiment of the present disclosure provides an image processing device, the device including: a first original video frame acquisition module, configured to acquire a first original video frame captured by a first camera in response to a first camera opening operation; a first editing effect determining module, configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module, configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module, configured to switch to a second camera and acquire a second original video frame captured by the second camera in response to a camera switching instruction; a second editing effect determining module, configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- an embodiment of the present disclosure provides an electronic device, and the electronic device includes: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method according to any one of the above first aspect.
- an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the image processing method described in any one of the above-mentioned first aspects is implemented.
- an embodiment of the present disclosure provides a computer program product, the computer program product includes a computer program or an instruction, and when the computer program or instruction is executed by a processor, the image processing method described in any one of the above first aspect is implemented.
- an embodiment of the present disclosure provides a computer program, including: an instruction, which when executed by a processor causes the processor to execute the image processing method according to any one of the above first aspect.
- FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure
- FIG. 2 is a flowchart of another image processing method in an embodiment of the present disclosure
- FIG. 3 is a schematic structural diagram of an image processing device in an embodiment of the present disclosure.
- FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
- the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
- the term “based on” is “based at least in part on”.
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
- the first interaction mode: when a specific image is detected in a video frame, the corresponding additional image material is added to the video frame and displayed.
- the above-mentioned specific image may be a face image, a foot image, a hand image, and the like.
- for example, when recognition detects that a hand image appears in a video frame, the candied-haws image material is displayed at the hand position in the video frame.
- the second interaction mode: when a specified action by the user is detected in the video frame, the corresponding additional image material is displayed in the video frame.
- the above-mentioned specified actions may be pouting, blinking, making a heart gesture, and the like.
- for example, when recognition detects that the user pouts in a video frame, the balloon image material is displayed at the mouth position in the video frame.
- the above-mentioned interactive methods of adding image material can only recognize a single type of image, such as a head or a foot, and cannot recognize both a head and a foot when shooting a video.
- In addition, multiple additional image materials can be used in the video frames captured by the front-facing camera, or in the video frames captured by the rear camera. In this way, there is no split of materials between the front and rear cameras: when there are multiple additional image materials, both the front and rear cameras carry multiple 3D additional image materials, which consumes a lot of device performance. For example, when there are two image materials, the frames from both the front and rear cameras carry both materials, which consumes a lot of the terminal device's performance.
- to address this, an embodiment of the present disclosure provides an image processing method that applies different image editing processing to the video frames captured by different cameras. In the same video captured with the front camera and the rear camera, multiple image editing effects are used: one image editing effect is applied to the video captured by the front camera and another image editing effect is applied to the video captured by the rear camera, thereby optimizing device performance.
- the image processing method proposed in the embodiment of the present application will be described in detail below with reference to the accompanying drawings.
- FIG. 1 is a flowchart of an image processing method in an embodiment of the present disclosure. This embodiment is applicable to the situation of adding additional image material to the video.
- the method can be performed by an image processing device.
- the image processing device can be realized by software and/or hardware.
- the image processing device can be configured in electronic equipment.
- the electronic equipment may be a mobile terminal, a fixed terminal or a portable terminal, such as a mobile handset, a station, a unit, a device, a multimedia computer, a multimedia tablet, an Internet node, a communicator, a desktop computer, a laptop computer, a notebook computer, a netbook computer, a tablet computer, a personal communication system (PCS) device, a personal navigation device, a personal digital assistant (PDA), an audio/video player, a digital still/video camera, a pointing device, a television receiver, a radio broadcast receiver, an electronic book device, a gaming device, or any combination thereof, including accessories and peripherals for such devices, or any combination thereof.
- the electronic device may be a server, wherein the server may be a physical server or a cloud server, and may be a single server or a server cluster.
- the image processing method provided by the embodiment of the present disclosure mainly includes the following steps S101 to S106.
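- For illustration only, the flow of steps S101 to S106 can be pictured as a single capture loop that keys the active image editing effect off the active camera. The sketch below is a simplification of that control flow; every helper it takes (open_camera, capture_frame, apply_effect, display, switch_requested, is_running) is a hypothetical placeholder, not an API from this disclosure.

```python
# Minimal sketch of the S101-S106 control flow. All helpers are supplied by
# the caller; they stand in for whatever camera and rendering stack is used.

CAMERA_EFFECTS = {
    "first": "first_image_editing_effect",    # e.g. hat material bound to the front camera
    "second": "second_image_editing_effect",  # e.g. shoe material bound to the rear camera
}

def run_capture_loop(open_camera, capture_frame, apply_effect, display,
                     switch_requested, is_running):
    camera_id = "first"
    camera = open_camera(camera_id)                 # S101: first camera opening operation
    effect = CAMERA_EFFECTS[camera_id]              # S102: effect corresponding to this camera
    while is_running():
        if switch_requested():                      # S104: camera switching instruction
            camera_id = "second" if camera_id == "first" else "first"
            camera = open_camera(camera_id)
            effect = CAMERA_EFFECTS[camera_id]      # S105: effect corresponding to the new camera
        original_frame = capture_frame(camera)      # original video frame from the active camera
        target_frame = apply_effect(original_frame, effect)  # S103/S106: image editing processing
        display(target_frame)                       # show the target video frame
```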
- step S101 a first original video frame captured by the first camera is acquired in response to the first camera opening operation.
- the first camera mentioned above and the second camera described below are two cameras set in the same terminal device.
- the first camera and the second camera described below may be external cameras connected to the terminal device, or may be built-in cameras of the terminal device.
- the foregoing connection may be a wired connection or a wireless connection, which is not limited in this embodiment.
- the built-in camera of the above terminal device may be a front camera or a rear camera.
- the above-mentioned first camera is a front camera of the terminal device, and the second camera is a rear camera of the terminal device.
- the first camera may be one camera or a group of cameras, and the number of cameras included in the first camera is not limited in this embodiment.
- responding to the opening operation of the first camera includes: after detecting that the user starts the media application program and detecting the user's trigger operation on the target image editing effect, receiving an opening instruction of the first camera, and in response to the opening instruction, turning on the first camera.
- responding to the opening operation of the first camera may further include: after detecting the user's trigger operation on the camera switching button, if the first camera is in the off state, receiving an opening instruction of the first camera, and in response to the opening instruction, turning on the first camera.
- the switch button mentioned above may be a virtual button or a physical button, which is not limited in this embodiment.
- the first original video frame may be understood as a video frame collected by the first camera without any processing.
- the first original video frame may also be understood as a video frame collected by the first camera and subjected to set processing, but without adding additional image material.
- the above set processing may be image beautification processing such as skin smoothing, makeup, or applying a filter.
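- As one illustration of such set processing, a very simple skin-smoothing pass can be approximated by blending the frame with a blurred copy of itself. The sketch below is only a toy stand-in for beautification processing, not the method used by the disclosure; the sigma and strength parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_skin(frame, sigma=3.0, strength=0.5):
    """Blend an H x W x 3 uint8 frame with a Gaussian-blurred copy of itself.

    strength in [0, 1] controls how much of the blurred image is mixed in.
    """
    blurred = gaussian_filter(frame.astype(np.float32), sigma=(sigma, sigma, 0))
    out = (1.0 - strength) * frame.astype(np.float32) + strength * blurred
    return np.clip(out, 0, 255).astype(np.uint8)
```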
- the additional image material can be understood as the content added to the video frame that does not belong to the image of the video frame.
- the additional image material may also be referred to as prop material, special effect material, or the like. It is not limited in this embodiment.
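- For illustration only, an additional image material (prop or special-effect material) can be modelled as a small record naming the asset and the body part it anchors to. The field names and asset paths below are assumptions, not the data model of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AdditionalImageMaterial:
    """Content overlaid on a video frame that is not part of the frame itself."""
    name: str           # e.g. "tiger_head_hat" or "tiger_paw_shoes"
    anchor: str         # body part the material attaches to: "head", "foot", ...
    asset_path: str     # path to the asset to render (hypothetical)
    is_3d: bool = False

# Example materials mirroring the hat and shoes used later in the description.
tiger_hat = AdditionalImageMaterial("tiger_head_hat", "head", "assets/tiger_hat.png")
tiger_shoes = AdditionalImageMaterial("tiger_paw_shoes", "foot", "assets/tiger_shoes.png")
```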
- Acquiring the first original video frame captured by the first camera may include: acquiring the first video frame captured by the front camera in real time.
- when the target image editing effect is applied to shoot the video, in response to the first camera opening operation, the first original video frame captured by the first camera is acquired.
- an image editing effect can be understood as an effect of adding additional image material to a video frame.
- the target image editing effect can be understood as an image editing effect selected by the user.
- applying the target image editing effect to shoot a video can be understood as follows: the user enables the target image editing effect before shooting the video, and the target image editing effect is used to perform image editing processing on the captured video while shooting.
- enabling the target image editing effect may mean that the target image editing effect is enabled by default when the user opens the media application, or that after the user opens the media application, the target image editing effect is enabled in response to the user's trigger operation on it.
- step S102 a first image editing effect corresponding to the first camera is determined.
- the first image editing effect may be understood as an image editing effect used when the first camera is used to capture video frames.
- determining the first image editing effect corresponding to the first camera includes: after acquiring the first original video frame captured by the first camera, using a default image editing effect in the target image editing effect as the first image editing effect.
- determining the first image editing effect corresponding to the first camera may also include: displaying multiple image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
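- Both ways of determining the effect (a default entry versus a user selection) can be captured in one small helper. The sketch below is an illustration only; the parameter names are assumptions.

```python
def determine_image_editing_effect(target_effect_options, user_selection=None):
    """Return the image editing effect to use for the active camera.

    target_effect_options: effects contained in the target image editing
    effect, with the default effect listed first. An explicit user selection
    wins; otherwise the default is used.
    """
    if user_selection is not None and user_selection in target_effect_options:
        return user_selection
    return target_effect_options[0]

# e.g. determine_image_editing_effect(["hat_on_head", "shoes_on_feet"]) -> "hat_on_head"
```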
- step S103, according to the first image editing effect, image editing processing is performed on the first original video frame to obtain and display a first target video frame, and the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
- image editing processing is performed on the first original video frame to obtain the first target video frame, which can be understood as applying the first image editing effect to the first original video frame .
- the first image editing effect is to add additional image material at a specified position in the first original video frame.
- the above-mentioned specified position may be human body parts such as eyes, mouth, head, and hands in the first original video frame, and may also be static objects such as buildings, flowers, trees, etc. in the first original video frame. No specific limitation is made in this embodiment.
- the first image editing effect is to add an additional image material at a specified position of the first original video frame after the user completes a specified action.
- the above specified action may be actions such as blinking, pouting, waving, and kicking by the user in the first original video frame. No specific limitation is made in this embodiment.
- performing image editing processing on the first original video frame to obtain the first target video frame includes: detecting whether a first head image exists in the first original video frame; and if the first head image exists in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
- the first head image may be understood as a recognized and detected human face image in the first original video frame.
- three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a whole-body recognition algorithm.
- the face recognition algorithm is bound to the first camera. That is, after it is detected that the first camera is turned on, the face recognition algorithm is started, and the face recognition algorithm is used to identify and detect whether there is a head image in the first original video frame. How the face recognition algorithm performs head recognition is not specifically limited in this embodiment.
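- The camera-to-algorithm binding described above can be sketched as a small lookup: when a camera is opened, the recognition algorithm bound to it is started. The detector factories in the sketch below are hypothetical callables supplied by the caller, not APIs from the disclosure.

```python
# Face recognition is bound to the first (front) camera, foot recognition to
# the second (rear) camera, as described in the surrounding text.
DETECTOR_BINDINGS = {
    "first": "face_recognition",
    "second": "foot_recognition",
}

def start_detector_for_camera(camera_id, detector_factories):
    """detector_factories: dict mapping algorithm name -> zero-arg factory."""
    algorithm = DETECTOR_BINDINGS[camera_id]
    return detector_factories[algorithm]()   # start the bound recognition algorithm
```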
- image editing is performed on the first original video frame to obtain the first target video frame, including: adding a first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
- the first additional image material may be a virtual hat, a virtual hairpin, a virtual dangling hair ornament, and the like.
- the style of the hat can be an animal-style hat, such as a tiger head hat, a rabbit-style hat, etc., or a regular-style hat, such as a baseball cap.
- a first additional image material is added at the head position. For example: display a virtual tiger head hat at said head position.
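- One way to picture "adding a first additional image material at the position corresponding to the first head image" is a simple alpha blend of a material sprite anchored above the detected head centre. The NumPy sketch below is only that: a 2D compositing illustration that assumes the material fits inside the frame; the actual effect may use a full 3D renderer and head-pose tracking.

```python
import numpy as np

def add_material_at_position(frame, material_rgba, center_xy):
    """Overlay an RGBA material (e.g. a virtual tiger-head hat) onto an
    H x W x 3 frame at the detected head position, using its alpha channel."""
    h, w = material_rgba.shape[:2]
    x0 = int(center_xy[0] - w / 2)
    y0 = int(center_xy[1] - h)               # the hat sits above the head centre
    x0 = max(0, min(x0, frame.shape[1] - w))  # clamp inside the frame
    y0 = max(0, min(y0, frame.shape[0] - h))
    region = frame[y0:y0 + h, x0:x0 + w].astype(np.float32)
    alpha = material_rgba[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * material_rgba[:, :, :3] + (1.0 - alpha) * region
    frame[y0:y0 + h, x0:x0 + w] = blended.astype(frame.dtype)
    return frame
```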
- step S104, in response to the camera switching instruction, switch to the second camera and acquire the second original video frame captured by the second camera.
- the camera switching instruction can be understood as an instruction to switch the currently working camera to the off state, and switch the off state camera to the working state.
- before responding to the camera switching instruction, the first camera is in the working state and the second camera is in the off state; after responding to the camera switching instruction, the first camera is switched from the working state to the off state, and the second camera is switched from the off state to the working state (that is, the second camera is turned on).
- as for the camera switching instruction, it may be that after the user's operation of the camera switching button is detected, the camera switching instruction is received, and in response to the switching instruction, the first camera is turned off and the second camera is turned on, wherein the above-mentioned switching button may be a virtual key or a physical key, which is not limited in this embodiment.
- the second original video frame may be understood as a video frame captured by the second camera.
- the second original video frame may also be understood as a video frame collected by the second camera and subjected to set processing, but without adding additional image material.
- the above set processing may be image beautification processing such as skin smoothing, makeup, or applying a filter.
- Acquiring the second video frame captured by the second camera may include: acquiring the second video frame captured by the rear camera in real time.
- step S105 a second image editing effect corresponding to the second camera is determined.
- the second image editing effect can be understood as an image editing effect used when the second camera is used to capture video frames.
- determining the second image editing effect corresponding to the second camera includes: after acquiring the second original video frame captured by the second camera, using a default image editing effect in the target image editing effect as the second image editing effect.
- determining the second image editing effect corresponding to the second camera may also include: displaying a plurality of image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
- step S106, according to the second image editing effect, image editing processing is performed on the second original video frame to obtain and display a second target video frame, and the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- image editing processing is performed on the second original video frame to obtain the second target video frame, which can be understood as applying the second image editing effect to the second original video frame .
- the second image editing effect is the addition of additional image material at specified locations in the second original video frame.
- the above specified position may be human body parts such as feet and legs in the second original video frame. No specific limitation is made in this embodiment.
- performing image editing processing on the second original video frame to obtain a second target video frame includes: detecting whether there is a foot image in the second original video frame; and if there is a foot image in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
- three algorithms are provided: a face recognition algorithm, a foot recognition algorithm and a whole-body recognition algorithm.
- the foot recognition algorithm is bound to the second camera. That is, after it is detected that the second camera is turned on, the foot recognition algorithm is started.
- the foot recognition algorithm is used to identify and detect whether there is a foot image in the second original video frame. How the foot recognition algorithm performs foot recognition is not specifically limited in this embodiment.
- the foot image may be a barefoot image or an image of a foot wearing shoes, which is not limited in this embodiment.
- as for the whole-body recognition algorithm, when the face recognition algorithm is turned on, the whole-body recognition algorithm is turned on; and/or, when the foot recognition algorithm is turned on, the whole-body recognition algorithm is turned on.
- the above-mentioned whole-body recognition algorithm is used to assist the face recognition algorithm and/or the foot recognition algorithm to recognize images, so as to improve image recognition efficiency.
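- One plausible reading of how the whole-body recognition algorithm assists the face or foot recognition algorithm is coarse-to-fine detection: first locate the person, then run the part detector only inside that region. The sketch below encodes that idea with hypothetical detector callables; it is an assumption about the assistance, not a disclosed algorithm.

```python
def detect_part_with_body_assist(frame, detect_body, detect_part):
    """Run a whole-body detector first and restrict the part detector (face or
    foot) to the detected body region, reducing the search area and thereby
    improving recognition efficiency.

    detect_body(frame) -> (x, y, w, h) bounding box or None
    detect_part(crop)  -> (px, py) inside the crop, or None
    """
    body = detect_body(frame)
    if body is None:
        return detect_part(frame)          # fall back to a full-frame search
    x, y, w, h = body
    hit = detect_part(frame[y:y + h, x:x + w])
    if hit is None:
        return None
    px, py = hit
    return (x + px, y + py)                # map back to full-frame coordinates
```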
- image editing is performed on the second original video frame to obtain the second target video frame, including: adding a second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
- the second additional image material may be a virtual shoe or the like.
- the style of the shoes may be animal-style shoes, such as tiger-paw shoes, rabbit-foot shoes, etc., or regular shoes, such as sports shoes.
- a second additional image material is added at the foot location. For example: display a virtual tiger paw shoe at said foot position.
- a face recognition algorithm, a foot recognition algorithm, and a whole-body recognition algorithm are provided; at the same time, the hat and the shoes are split between the front and rear cameras. Under the front camera, only the virtual hat on the head is recognized and the display effect of the hat is prioritized; when the user switches to the rear camera to obtain a larger shooting space, the virtual shoes appear by default. In this way, when there are multiple additional image materials, the two cameras are split, that is, different additional image materials are displayed in the video frames collected by the two cameras (the first additional image material is different from the second additional image material), which optimizes terminal device performance.
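- The performance point above can be sketched as a registry that keeps loaded at most the one material bound to the camera that is currently working, instead of keeping every material resident for both cameras. The loader and unloader callables below are hypothetical placeholders, not APIs from the disclosure.

```python
class PerCameraMaterialRegistry:
    """Keep only the additional image material bound to the active camera loaded.

    Splitting materials between the front and rear cameras means that, even
    with several additional image materials, each captured frame only needs
    one of them, which is the optimisation described above.
    """

    def __init__(self, bindings, load_material, unload_material):
        self.bindings = bindings                # e.g. {"first": tiger_hat, "second": tiger_shoes}
        self.load_material = load_material      # hypothetical asset loader
        self.unload_material = unload_material  # hypothetical asset unloader
        self.active = None

    def on_camera_switched(self, camera_id):
        if self.active is not None:
            self.unload_material(self.active)   # free the material of the previous camera
        self.active = self.bindings[camera_id]
        self.load_material(self.active)         # load only the material of the new camera
        return self.active
```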
- An embodiment of the present disclosure provides an image processing method, including: in response to the first camera opening operation, acquiring the first original video frame captured by the first camera; determining the first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to the camera switching instruction, switching to the second camera and acquiring the second original video frame captured by the second camera; determining the second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- different image editing processing methods are used in video frames captured by different cameras, for example, the first image editing effect is different from the second image editing effect.
- multiple image editing effects can be used in the same video captured by the front camera and the rear camera, that is, the video captured by the front camera uses one image editing effect and the video captured by the rear camera uses another image editing effect, which optimizes terminal device performance.
- the embodiments of the present disclosure further optimize the above image processing method.
- the optimized image processing method of the embodiments of the present disclosure mainly includes the following steps S201 to S208.
- step S201 a first original video frame captured by the first camera is acquired in response to a first camera opening operation.
- step S202 a first image editing effect corresponding to the first camera is determined.
- step S203, image editing processing is performed on the first original video frame according to the first image editing effect to obtain and display a first target video frame, and the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
- step S204 in response to a camera switching instruction, switch to a second camera and acquire a second original video frame captured by the second camera.
- step S205 a second image editing effect corresponding to the second camera is determined.
- step S206, according to the second image editing effect, image editing processing is performed on the second original video frame to obtain and display a second target video frame, and the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- steps S201-S206 are performed in the same manner as steps S101-S106 in the above-mentioned embodiment; for details, refer to the description in the above-mentioned embodiment, which will not be repeated here.
- step S207 in response to a trigger operation on the screen, a third image editing effect corresponding to the second camera is determined.
- the screen refers to a touch screen capable of receiving operation signals, and the size and type of the screen are not specifically limited in this embodiment.
- the trigger operation on the screen may be a click operation or a double-click operation on the screen.
- the third image editing effect may be the same as or different from the first image editing effect.
- the third image editing effect is different from the second image editing effect.
- the third image editing effect may be the same as the first image editing effect.
- a default image editing effect in the target image editing effect may be used as the third image editing effect.
- step S208, according to the third image editing effect, image editing processing is performed on the second original video frame to obtain and display a third target video frame, and the third target video frame is an effect image of the third image editing effect applied to the second original video frame.
- image editing processing is performed on the second original video frame to obtain a third target video frame, which can be understood as applying the third image editing effect to the second original video frame .
- the third image editing effect is to add additional image material at a specified location in the second original video frame.
- the above-mentioned designated position may be human body parts such as eyes, mouth, head, and hands in the second original video frame, and may also be static objects such as buildings, flowers, trees, etc. in the second original video frame. No specific limitation is made in this embodiment.
- performing image editing processing on the second original video frame to obtain and display the third target video frame includes: detecting whether there is a second head image in the second original video frame; and if there is a second head image in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
- the second head image may be understood as a recognized and detected head image in the second original video frame. In some embodiments, whether there is a head image in the second original video frame is identified and detected through a face recognition algorithm. How the face recognition algorithm performs head recognition is not specifically limited in this embodiment.
- image editing is performed on the second original video frame to obtain the third target video frame, including: adding a third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
- the third additional image material may be a virtual hat, a virtual hairpin, a virtual dangling hair ornament, and the like.
- the style of the hat can be an animal-style hat, such as a tiger head hat, a rabbit-style hat, etc., or a regular-style hat, such as a baseball cap.
- a third additional image material is added at the head position. For example: display a virtual tiger head hat at said head position.
- the front camera is turned on by default, and when the face image is detected, the screen displays a tiger head hat.
- when the user switches to the rear camera and the foot image is detected, tiger shoes appear on the screen.
- when the user taps the screen, whether there is a face image is checked; after the face image is detected, the tiger-head hat is displayed on the screen.
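- Under the rear camera, the tap behaviour described above amounts to a small state toggle: the shoes are the default, and a screen tap switches to the hat effect once a face image is found, with another tap switching back. The sketch below is only an illustration of that logic, with a hypothetical face detector callable.

```python
def on_screen_tap_rear_camera(frame, current_effect, detect_face):
    """Toggle between the default shoe effect and the hat effect under the
    rear camera. The hat is only shown once a face image is actually detected
    in the frame; otherwise the current effect is kept."""
    if current_effect == "shoes_on_feet":
        if detect_face(frame) is not None:   # check whether there is a face image
            return "hat_on_head"             # third image editing effect
        return current_effect
    return "shoes_on_feet"                   # tapping again returns to the shoes
```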
- the image processing method provided by the embodiments of the present disclosure further includes: when it is detected that the user uses the additional image material package for the first time, displaying guide information, where the guide information is used to prompt the user how to use the additional image material package.
- the guide information may be any one of video, audio or a combination of both.
- the guide information is used to inform the user that there is a tiger hat when shooting the face and tiger shoes when shooting the feet.
- the guide information can also be used to inform the user in advance that, after switching to the rear camera where the tiger shoes appear by default, the user can switch between the tiger shoes and the tiger hat by tapping the screen under the rear camera.
- playing the guide information lets the user clearly know how to use the image editing effect, thereby improving the user experience.
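- Detecting first use of an additional image material package can be as simple as a persisted flag. The sketch below stores that flag in a JSON file, which is purely an illustrative choice; the show_guide callable and the file name are assumptions, not part of the disclosure.

```python
import json
from pathlib import Path

GUIDE_FLAG_FILE = Path("guide_shown.json")   # hypothetical persistence location

def maybe_show_guide(material_package_id, show_guide):
    """Show the guide information only the first time a material package is used.

    show_guide(package_id) is a hypothetical callable that plays the video/audio
    guide (e.g. hat for the face, shoes for the feet, tap to switch).
    """
    seen = set()
    if GUIDE_FLAG_FILE.exists():
        seen = set(json.loads(GUIDE_FLAG_FILE.read_text()))
    if material_package_id not in seen:
        show_guide(material_package_id)
        seen.add(material_package_id)
        GUIDE_FLAG_FILE.write_text(json.dumps(sorted(seen)))
```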
- FIG. 3 is a schematic structural diagram of an image processing device in an embodiment of the present disclosure. This embodiment is applicable to the situation of adding virtual special effect props to a video, and the image processing device can be realized by software and/or hardware, and the image processing device can be configured in an electronic device.
- the image processing device 30 mainly includes: a first original video frame acquisition module 31, a first editing effect determination module 32, a first target video frame acquisition module 33, a second original video frame acquisition module 34, a second editing effect determination module 35 and a second target video frame acquisition module 36.
- the first original video frame acquisition module 31 is configured to acquire the first original video frame captured by the first camera in response to the first camera opening operation.
- the first editing effect determining module 32 is configured to determine a first image editing effect corresponding to the first camera.
- the first target video frame obtaining module 33 is used to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, and the first target video frame is an effect image of the first image editing effect applied to the first original video frame.
- the second original video frame acquiring module 34 is configured to switch to the second camera and acquire the second original video frame captured by the second camera in response to the camera switching instruction.
- the second editing effect determination module 35 is configured to determine a second image editing effect corresponding to the second camera.
- the second target video frame obtaining module 36 is used to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, and the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- the first target video frame obtaining module 33 includes: a first head image detection unit for detecting whether there is a first head image in the first original video frame; and a first target video frame obtaining unit, configured to, if the first head image exists in the first original video frame, perform image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
- the first target video frame obtaining unit is specifically configured to add a first additional image material to a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
- the second target video frame obtaining module 36 includes: a foot image detection unit for detecting whether there is a foot image in the second original video frame; and a second target video frame obtaining unit for If there is a foot image in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain a second target video frame.
- the second target video frame obtaining unit is configured to add a second additional image material to a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
- the first editing effect determination module 32 is configured to: after acquiring the first original video frame captured by the first camera, use a default image editing effect in the target image editing effect as the first image editing effect; or, display multiple image editing effects included in the target image editing effect, and, based on the user's selection operation, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
- the second editing effect determining module 35 is configured to: after acquiring the second original video frame captured by the second camera, one of the default target image editing effects is used as the second image editing effect; Alternatively, multiple image editing effects included in the target image editing effect are displayed, and based on the user's selection operation, the image editing effect selected by the user is determined as the second image editing effect corresponding to the second camera.
- the device further includes: a third image editing effect determining module, configured to determine a third image editing effect corresponding to the second camera in response to a trigger operation on the screen; and a third target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
- the third target video frame obtaining module includes: a second head image detection unit, configured to detect whether there is a second head image in the second original video frame; and a third target video frame obtaining unit, configured to, if there is a second head image in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
- the third target video frame obtaining unit is configured to add a third additional image material at a position corresponding to the second head image in the second original video frame to obtain a third target video frame.
- the first image editing effect is different from the second image editing effect.
- the third image editing effect is different than the second image editing effect.
- the device further includes: a guide information display module, configured to display guide information when it is detected that the user uses the additional image material package for the first time, and the guide information is used to prompt the user for the additional image material How to use the package.
- the image processing device provided by the embodiment of the present disclosure can execute the steps performed in the image processing method provided by the method embodiment of the present disclosure, and the execution steps and beneficial effects will not be repeated here.
- FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure. Referring to FIG. 4 in detail below, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
- the electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Multimedia Players), vehicle-mounted terminals (e.g., car navigation terminals) and wearable terminal devices, and fixed terminals such as digital TVs, desktop computers and smart home devices.
- the electronic device shown in FIG. 4 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
- an electronic device 400 may include a processing device (such as a central processing unit or a graphics processing unit) 401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403, so as to realize the image processing method according to the embodiment of the present disclosure.
- in the RAM 403, various programs and data necessary for the operation of the terminal device 400 are also stored.
- the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
- An input/output (I/O) interface 405 is also connected to bus 404 .
- the following devices can be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409.
- the communication means 409 may allow the terminal device 400 to perform wireless or wired communication with other devices to exchange data. While FIG. 4 shows a terminal device 400 having various means, it should be understood that implementing or possessing all of the illustrated means is not a requirement. More or fewer means may alternatively be implemented or provided.
- the processes described above with reference to the flowcharts can be implemented as computer software programs.
- the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program includes program code for executing the method shown in the flowchart, thereby realizing the image processing method described above.
- the computer program may be downloaded and installed from a network via communication means 409 , or from storage means 408 , or from ROM 402 .
- when the computer program is executed by the processing device 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
- a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can transmit, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
- Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
- the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
- Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the terminal device, the terminal device is caused to: in response to an operation of turning on the first camera, acquire the first original video frame captured by the first camera; determine the first image editing effect corresponding to the first camera; perform image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switch to the second camera and acquire the second original video frame captured by the second camera; determine the second image editing effect corresponding to the second camera; and perform image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- the terminal device may also perform other steps described in the foregoing embodiments.
- Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified functions or operations , or may be implemented by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
- For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- the present disclosure provides an image processing method, including: in response to the first camera opening operation, acquiring the first original video frame captured by the first camera; determining the first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display the first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; in response to the camera switching instruction, switching to the second camera and acquiring the second original video frame captured by the second camera; determining the second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- the present disclosure provides an image processing method, wherein, according to the first image editing effect, image editing processing is performed on the first original video frame to obtain a first target video frame, including: detecting whether there is a first head image in the first original video frame; and if there is a first head image in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
- the present disclosure provides an image processing method, wherein, according to the first image editing effect, image editing processing is performed on the first original video frame to obtain a first target video frame, including: adding a first additional image material to a position corresponding to the first head image in the first original video frame to obtain a first target video frame.
- the present disclosure provides an image processing method, wherein, according to the second image editing effect, image editing processing is performed on the second original video frame to obtain a second target video frame, including: detecting whether there is a foot image in the second original video frame; and if there is a foot image in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
- the present disclosure provides an image processing method, wherein, according to the second image editing effect, image editing processing is performed on the second original video frame to obtain a second target video frame, including: adding a second additional image material at the position corresponding to the foot image in the second original video frame to obtain the second target video frame.
- the present disclosure provides an image processing method, wherein determining the first image editing effect corresponding to the first camera includes: after acquiring the first original video frame captured by the first camera, using a default image editing effect in the target image editing effect as the first image editing effect; or, displaying multiple image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
- the present disclosure provides an image processing method, wherein determining the second image editing effect corresponding to the second camera includes: after acquiring the second original video frame captured by the second camera, using a default image editing effect in the target image editing effect as the second image editing effect; or, displaying a plurality of image editing effects included in the target image editing effect, and, based on the user's selection operation, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
- the present disclosure provides an image processing method, the method further comprising: after performing image editing processing on the second original video frame according to the second image editing effect and obtaining and displaying the second target video frame, in response to a trigger operation on the screen, determining a third image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
- the present disclosure provides an image processing method, wherein, according to the third image editing effect, image editing processing is performed on the second original video frame to obtain and display a third target video frame, including: detecting whether there is a second head image in the second original video frame; and if there is a second head image in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
- the present disclosure provides an image processing method, wherein, according to the third image editing effect, image editing processing is performed on the second original video frame to obtain a third target video frame, including: adding a third additional image material to a position corresponding to the second head image in the second original video frame to obtain a third target video frame.
- the present disclosure provides an image processing method, wherein the first image editing effect is different from the second image editing effect.
- the present disclosure provides an image processing method, wherein the third image editing effect is different from the second image editing effect.
- the present disclosure provides an image processing method, wherein the image processing method further includes: displaying guide information when it is detected that the user is using the additional image material package for the first time, where the guide information is used to prompt the user on how to use the additional image material package (a minimal illustrative sketch of the overall per-camera effect flow is given after these disclosure statements).
- an embodiment of the present disclosure provides an image processing device, the device comprising: a first original video frame acquisition module, configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera; a first editing effect determination module, configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module, configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, where the first target video frame is an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module, configured to switch to a second camera in response to a camera switching instruction and acquire a second original video frame captured by the second camera; a second editing effect determination module, configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, where the second target video frame is an effect image of the second image editing effect applied to the second original video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the first target video frame obtaining module includes: a first head image detection unit, configured to detect the first original video Whether there is a first head image in the frame; the first target video frame obtaining unit is configured to: if there is a first head image in the first original video frame, according to the editing effect of the first image, the An original video frame is subjected to image editing processing to obtain a first target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the first target video frame obtaining unit is specifically configured to add first additional image material at the position corresponding to the first head image in the first original video frame to obtain the first target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the second target video frame obtaining module includes: a foot image detection unit, configured to detect whether a foot image is present in the second original video frame; and a second target video frame obtaining unit, configured to, if a foot image is present in the second original video frame, perform image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the second target video frame obtaining unit is configured to add second additional image material at the position corresponding to the foot image in the second original video frame to obtain the second target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the first editing effect determination module is configured to: after acquiring the first original video frame captured by the first camera, take one image editing effect in the target image editing effect as the first image editing effect by default; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
- an embodiment of the present disclosure provides an image processing device, wherein the second editing effect determination module is configured to: after acquiring the second original video frame captured by the second camera, take one image editing effect in the target image editing effect as the second image editing effect by default; or display multiple image editing effects included in the target image editing effect and, based on the user's selection operation, determine the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
- an embodiment of the present disclosure provides an image processing device, wherein the device further includes: a third image editing effect determination module, configured to determine, in response to a trigger operation on the screen, the third image editing effect corresponding to the second camera; and a third target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the third target video frame obtaining module includes: a second head image detection unit, configured to detect whether a second head image is present in the second original video frame; and a third target video frame obtaining unit, configured to, if a second head image is present in the second original video frame, perform image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the third target video frame obtaining unit is configured to add third additional image material at the position corresponding to the second head image in the second original video frame to obtain the third target video frame.
- an embodiment of the present disclosure provides an image processing device, wherein the first image editing effect is different from the second image editing effect.
- an embodiment of the present disclosure provides an image processing device, wherein the third image editing effect is different from the second image editing effect.
- an embodiment of the present disclosure provides an image processing device, wherein the device further includes: a guide information display module, configured to display guide information when it is detected that the user is using the additional image material package for the first time, where the guide information is used to prompt the user on how to use the additional image material package.
- the present disclosure provides an electronic device, including: one or more processors; and a memory configured to store one or more programs; where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the image processing methods provided in the present disclosure.
- the present disclosure provides a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) on which a computer program is stored, where the program, when executed by a processor, implements any one of the image processing methods provided in the present disclosure.
- An embodiment of the present disclosure also provides a computer program product, where the computer program product includes a computer program or instructions, and the computer program or instructions, when executed by a processor, implement the image processing method described above.
- An embodiment of the present disclosure also provides a computer program, including instructions which, when executed by a processor, cause the processor to execute the image processing method described above.
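The disclosure statements above describe one flow: each camera has its own image editing effect, a head image drives the effect applied to frames from the first camera, a foot image drives the effect applied to frames from the second camera, and a trigger operation on the screen switches the second camera to a third effect. The following is a minimal, non-authoritative Python sketch of that flow; the frame representation and the names detect_head, detect_foot, overlay_material and apply_effect are illustrative assumptions, not part of the disclosed implementation.

```python
# Minimal sketch of the per-camera image editing flow described above.
# Detectors, effects and material names are hypothetical placeholders.

from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Frame:
    """Stand-in for an original video frame plus any overlaid material."""
    camera: str
    overlays: Tuple[str, ...] = ()


def detect_head(frame: Frame) -> Optional[Tuple[int, int]]:
    # Placeholder head detector; a real detector may also return None.
    return (100, 40)


def detect_foot(frame: Frame) -> Optional[Tuple[int, int]]:
    # Placeholder foot detector; a real detector may also return None.
    return (80, 200)


def overlay_material(frame: Frame, material: str, position: Tuple[int, int]) -> Frame:
    # Placeholder for compositing additional image material at a position.
    return Frame(frame.camera, frame.overlays + (f"{material}@{position}",))


def apply_effect(frame: Frame, effect: str) -> Frame:
    """Apply one image editing effect: detect the relevant body part, then overlay."""
    if effect == "head_sticker":
        pos = detect_head(frame)
        return overlay_material(frame, "first_material", pos) if pos else frame
    if effect == "foot_sticker":
        pos = detect_foot(frame)
        return overlay_material(frame, "second_material", pos) if pos else frame
    if effect == "head_sticker_v2":
        pos = detect_head(frame)
        return overlay_material(frame, "third_material", pos) if pos else frame
    return frame


def run_session() -> None:
    # 1) First camera opening operation: apply its (default or user-selected) effect.
    first_target = apply_effect(Frame("front"), "head_sticker")
    print("first target frame:", first_target)

    # 2) Camera switching instruction: switch cameras and apply the second effect.
    second_frame = Frame("rear")
    second_target = apply_effect(second_frame, "foot_sticker")
    print("second target frame:", second_target)

    # 3) Trigger operation on the screen: same camera, a different third effect.
    third_target = apply_effect(second_frame, "head_sticker_v2")
    print("third target frame:", third_target)


if __name__ == "__main__":
    run_session()
```

A separate sketch of the overlay step itself, compositing additional image material at a detected head or foot position, is given after the claims below.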
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Television Signal Processing For Recording (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (18)
- An image processing method, comprising: in response to a first camera opening operation, acquiring a first original video frame captured by the first camera; determining a first image editing effect corresponding to the first camera; performing image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; in response to a camera switching instruction, switching to a second camera and acquiring a second original video frame captured by the second camera; determining a second image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
- The image processing method according to claim 1, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame comprises: detecting whether a first head image is present in the first original video frame; and if a first head image is present in the first original video frame, performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame.
- The image processing method according to claim 2, wherein performing image editing processing on the first original video frame according to the first image editing effect to obtain the first target video frame comprises: adding first additional image material at a position corresponding to the first head image in the first original video frame to obtain the first target video frame.
- The image processing method according to any one of claims 1 to 3, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame comprises: detecting whether a foot image is present in the second original video frame; and if a foot image is present in the second original video frame, performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame.
- The image processing method according to claim 4, wherein performing image editing processing on the second original video frame according to the second image editing effect to obtain the second target video frame comprises: adding second additional image material at a position corresponding to the foot image in the second original video frame to obtain the second target video frame.
- The image processing method according to any one of claims 1 to 5, wherein determining the first image editing effect corresponding to the first camera comprises: after the first original video frame captured by the first camera is acquired, taking one image editing effect in a target image editing effect as the first image editing effect by default; or displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by a user, determining the image editing effect selected by the user as the first image editing effect corresponding to the first camera.
- The image processing method according to any one of claims 1 to 6, wherein determining the second image editing effect corresponding to the second camera comprises: after the second original video frame captured by the second camera is acquired, taking one image editing effect in the target image editing effect as the second image editing effect by default; or displaying multiple image editing effects included in the target image editing effect and, based on a selection operation by the user, determining the image editing effect selected by the user as the second image editing effect corresponding to the second camera.
- The image processing method according to any one of claims 1 to 7, further comprising: after performing image editing processing on the second original video frame according to the second image editing effect to obtain and display the second target video frame, determining, in response to a trigger operation on a screen, a third image editing effect corresponding to the second camera; and performing image editing processing on the second original video frame according to the third image editing effect to obtain and display a third target video frame, the third target video frame being an effect image of the third image editing effect applied to the second original video frame.
- The image processing method according to claim 8, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame comprises: detecting whether a second head image is present in the second original video frame; and if a second head image is present in the second original video frame, performing image editing processing on the second original video frame according to the third image editing effect to obtain and display the third target video frame.
- The image processing method according to claim 9, wherein performing image editing processing on the second original video frame according to the third image editing effect to obtain the third target video frame comprises: adding third additional image material at a position corresponding to the second head image in the second original video frame to obtain the third target video frame.
- The image processing method according to any one of claims 1 to 10, wherein the first image editing effect is different from the second image editing effect.
- The image processing method according to any one of claims 8 to 10, wherein the third image editing effect is different from the second image editing effect.
- The image processing method according to any one of claims 1 to 12, further comprising: displaying guide information when it is detected that a user is using an additional image material package for the first time, the guide information being used to prompt the user on how to use the additional image material package.
- An image processing device, comprising: a first original video frame acquisition module, configured to acquire, in response to a first camera opening operation, a first original video frame captured by the first camera; a first editing effect determination module, configured to determine a first image editing effect corresponding to the first camera; a first target video frame obtaining module, configured to perform image editing processing on the first original video frame according to the first image editing effect to obtain and display a first target video frame, the first target video frame being an effect image of the first image editing effect applied to the first original video frame; a second original video frame acquisition module, configured to switch to a second camera in response to a camera switching instruction and acquire a second original video frame captured by the second camera; a second editing effect determination module, configured to determine a second image editing effect corresponding to the second camera; and a second target video frame obtaining module, configured to perform image editing processing on the second original video frame according to the second image editing effect to obtain and display a second target video frame, the second target video frame being an effect image of the second image editing effect applied to the second original video frame.
- An electronic device, comprising: one or more processors; and a storage device configured to store one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1 to 13.
- A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 13.
- A computer program product, comprising a computer program or instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 13.
- A computer program, comprising instructions which, when executed by a processor, cause the processor to execute the image processing method according to any one of claims 1 to 13.
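Claims 3, 5 and 10 above reduce to the same operation: compositing additional image material onto the original video frame at the position of a detected head or foot image. The snippet below is a minimal NumPy-based sketch of that compositing step under stated assumptions: the detector has already produced a top-left position, the frame is an RGB array, the material is an RGBA sticker, and the function name paste_material is hypothetical rather than the disclosed implementation.

```python
import numpy as np


def paste_material(frame: np.ndarray, material: np.ndarray, top_left: tuple) -> np.ndarray:
    """Alpha-blend an RGBA material onto an RGB frame at a detected position.

    frame:    H x W x 3 uint8 original video frame
    material: h x w x 4 uint8 additional image material (RGBA sticker)
    top_left: (row, col) where the detector located the head or foot image
    """
    out = frame.astype(np.float32).copy()
    h, w = material.shape[:2]
    r, c = top_left

    # Clip the material so it stays inside the frame.
    h = min(h, frame.shape[0] - r)
    w = min(w, frame.shape[1] - c)
    if h <= 0 or w <= 0:
        return frame.copy()

    rgb = material[:h, :w, :3].astype(np.float32)
    alpha = material[:h, :w, 3:4].astype(np.float32) / 255.0

    out[r:r + h, c:c + w] = alpha * rgb + (1.0 - alpha) * out[r:r + h, c:c + w]
    return out.astype(np.uint8)


if __name__ == "__main__":
    # Toy data: a grey 240x320 frame and a 32x32 semi-transparent red sticker.
    frame = np.full((240, 320, 3), 128, dtype=np.uint8)
    sticker = np.zeros((32, 32, 4), dtype=np.uint8)
    sticker[..., 0] = 255   # red channel
    sticker[..., 3] = 180   # partial opacity
    target = paste_material(frame, sticker, (40, 100))
    print(target.shape, target[40, 100])
```

Alpha blending keeps the original frame visible where the material is transparent, which is one common way to add material on top of an original video frame without replacing it.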
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP23746122.3A EP4459556A1 (en) | 2022-01-28 | 2023-01-17 | Image processing method and apparatus, device, storage medium and program product |
AU2023213666A AU2023213666A1 (en) | 2022-01-28 | 2023-01-17 | Image processing method and apparatus, device, storage medium and program product |
KR1020247028347A KR20240141285A (ko) | 2022-01-28 | 2023-01-17 | 이미지 처리 방법 및 장치, 디바이스, 저장 매체 및 프로그램 제품 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210107352.1A CN114429506B (zh) | 2022-01-28 | 2022-01-28 | 图像处理方法、装置、设备、存储介质和程序产品 |
CN202210107352.1 | 2022-01-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023143240A1 true WO2023143240A1 (zh) | 2023-08-03 |
Family
ID=81313250
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/072579 WO2023143240A1 (zh) | 2022-01-28 | 2023-01-17 | 图像处理方法、装置、设备、存储介质和程序产品 |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4459556A1 (zh) |
KR (1) | KR20240141285A (zh) |
CN (1) | CN114429506B (zh) |
AU (1) | AU2023213666A1 (zh) |
WO (1) | WO2023143240A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114429506B (zh) * | 2022-01-28 | 2024-02-06 | 北京字跳网络技术有限公司 | 图像处理方法、装置、设备、存储介质和程序产品 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104052935B (zh) * | 2014-06-18 | 2017-10-20 | 广东欧珀移动通信有限公司 | 一种视频编辑方法及装置 |
CN105391965B (zh) * | 2015-11-05 | 2018-09-07 | 广东欧珀移动通信有限公司 | 基于多摄像头的视频录制方法及装置 |
CN108124094A (zh) * | 2016-11-30 | 2018-06-05 | 北京小米移动软件有限公司 | 一种拍照模式的切换方法及装置 |
CN107820006A (zh) * | 2017-11-07 | 2018-03-20 | 北京小米移动软件有限公司 | 控制摄像头摄像的方法及装置 |
KR101942063B1 (ko) * | 2018-07-27 | 2019-01-24 | 아이씨티웨이주식회사 | 영상이미지의 오류 지점을 자동으로 확인해 갱신 및 처리하는 영상이미지 검수시스템 |
CN109618183B (zh) * | 2018-11-29 | 2019-10-25 | 北京字节跳动网络技术有限公司 | 一种视频特效添加方法、装置、终端设备及存储介质 |
CN111327814A (zh) * | 2018-12-17 | 2020-06-23 | 华为技术有限公司 | 一种图像处理的方法及电子设备 |
CN110113526A (zh) * | 2019-04-22 | 2019-08-09 | 联想(北京)有限公司 | 处理方法、处理装置和电子设备 |
CN112153272B (zh) * | 2019-06-28 | 2022-02-25 | 华为技术有限公司 | 一种图像拍摄方法与电子设备 |
CN111314617B (zh) * | 2020-03-17 | 2023-04-07 | 北京达佳互联信息技术有限公司 | 视频数据处理方法、装置、电子设备及存储介质 |
CN112672061B (zh) * | 2020-12-30 | 2023-01-24 | 维沃移动通信(杭州)有限公司 | 视频拍摄方法、装置、电子设备及介质 |
CN112862927B (zh) * | 2021-01-07 | 2023-07-25 | 北京字跳网络技术有限公司 | 用于发布视频的方法、装置、设备和介质 |
CN113938587B (zh) * | 2021-09-14 | 2024-03-15 | 青岛海信移动通信技术有限公司 | 基于双摄像头的摄录方法及电子设备 |
- 2022-01-28 CN CN202210107352.1A patent/CN114429506B/zh active Active
- 2023-01-17 WO PCT/CN2023/072579 patent/WO2023143240A1/zh active Application Filing
- 2023-01-17 AU AU2023213666A patent/AU2023213666A1/en active Pending
- 2023-01-17 KR KR1020247028347A patent/KR20240141285A/ko active Search and Examination
- 2023-01-17 EP EP23746122.3A patent/EP4459556A1/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040210823A1 (en) * | 2003-04-21 | 2004-10-21 | Communications Research Laboratory, Independent Administrative Institution | Real-time contents editing method, system, and program |
JP2011211561A (ja) * | 2010-03-30 | 2011-10-20 | Nec Corp | カメラ付き携帯端末、カメラ付携帯端末の制御方法及びその制御プログラム |
CN105306802A (zh) * | 2014-07-08 | 2016-02-03 | 腾讯科技(深圳)有限公司 | 拍照模式的切换方法及装置 |
CN112017137A (zh) * | 2020-08-19 | 2020-12-01 | 深圳市锐尔觅移动通信有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
CN112199016A (zh) * | 2020-09-30 | 2021-01-08 | 北京字节跳动网络技术有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
CN114429506A (zh) * | 2022-01-28 | 2022-05-03 | 北京字跳网络技术有限公司 | 图像处理方法、装置、设备、存储介质和程序产品 |
Also Published As
Publication number | Publication date |
---|---|
CN114429506B (zh) | 2024-02-06 |
KR20240141285A (ko) | 2024-09-26 |
EP4459556A1 (en) | 2024-11-06 |
CN114429506A (zh) | 2022-05-03 |
AU2023213666A1 (en) | 2024-08-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23746122 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023746122 Country of ref document: EP Effective date: 20240730 |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112024015436 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2023213666 Country of ref document: AU Date of ref document: 20230117 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202427063091 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 20247028347 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |