CN112261320A - Image processing method and related product - Google Patents

Image processing method and related product

Info

Publication number
CN112261320A
Authority
CN
China
Prior art keywords
image
background
target
background image
images
Prior art date
Legal status
Pending
Application number
CN202011066149.1A
Other languages
Chinese (zh)
Inventor
段佳琦
曹恩丹
吴磊
王元吉
刘超
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202011066149.1A priority Critical patent/CN112261320A/en
Publication of CN112261320A publication Critical patent/CN112261320A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this application disclose an image processing method and related products. The method includes: receiving a background image selection instruction from a user; selecting, based on the background image selection instruction, a first background image matching a first image from a plurality of background images; and performing image fusion processing on the first background image and a first foreground image in the first image to obtain a first target image. In the embodiments of this application, the image processing apparatus selects the first background image matching the first image based on the user's background image selection instruction, so the user does not need to pick a matching background image manually; this improves the efficiency of replacing a video or image background and makes the operation convenient.

Description

Image processing method and related product
Technical Field
The present application relates to the field of image processing, and more particularly, to image processing methods and related products.
Background
With the development of image processing technology, video background replacement and image background replacement are used increasingly widely. However, currently adopted video and image background replacement schemes require complex user operations and struggle to meet the needs of different users.
Disclosure of Invention
The embodiments of this application disclose an image processing method and related products, which can improve the efficiency of replacing a video background or an image background and are convenient to operate.
In a first aspect, an embodiment of the present application provides an image processing method, including: receiving a background image selection instruction from a user; selecting a first background image matched with the first image from a plurality of background images based on the background image selection instruction; and carrying out image fusion processing on the first background image and the first foreground image in the first image to obtain a first target image.
The method of this embodiment is executed by an image processing apparatus, which may be a terminal device such as a mobile phone, tablet computer, laptop, or desktop computer, or a server (for example, a cloud server). After the user inputs the background image selection instruction, the image processing apparatus selects, based on that instruction, a first background image matching the first image from the plurality of background images. In other words, the background image selection instruction is not an instruction to pick a specific background image; it instructs the apparatus to select a background image that matches the first image from a plurality of background images (for example, the background image with the highest matching degree among the plurality of background images). In the solution provided in this application, the user therefore does not need to choose a matching background image personally: the user only inputs a background image selection instruction designating a set of selectable background images, and the image processing apparatus automatically selects the first background image matching the first image and then performs image fusion processing on the first background image and the first foreground image in the first image to obtain the first target image.
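The flow just described can be sketched in a few lines. This is an illustrative sketch only: `match_score` and `fuse` are placeholder callables standing in for the patent's matching and image-fusion steps, not names from the application.

```python
# Hypothetical sketch of the first-aspect flow. `match_score` and `fuse`
# stand in for the patent's matching and image-fusion steps.

def process_image(first_image, background_images, match_score, fuse):
    # Select the background with the highest matching degree to the image.
    first_background = max(
        background_images, key=lambda bg: match_score(first_image, bg))
    # Fuse the selected background with the foreground of the first image.
    return fuse(first_background, first_image)
```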
In this embodiment, the image processing apparatus selects the first background image matching the first image based on the user's background image selection instruction, without requiring the user to choose a matching background image; this improves the efficiency of replacing a video or image background and makes the operation convenient.
In one possible implementation, the background image selection instruction is configured to indicate a target background library of at least one selectable background library, where the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
The image processing apparatus may maintain multiple selectable background libraries, each containing a plurality of background images; the target background library is any one of them. For example, if the apparatus stores a first through a tenth background library, the background image selection instruction indicates the target library among those ten (corresponding to the at least one selectable background library). The background images within a selectable library may be matched with the same scene information or with different scene information. The background image selection instruction may indicate target scene information such as office, school, scenic spot, medical setting, or landscape. For example, if each scene is matched with a plurality of background images, the apparatus selects, based on the instruction, the first background image matching the first image from among the background images matched with the target scene information.
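Either form of the instruction narrows the candidate set before matching begins. A toy sketch of that narrowing, where the instruction field names (`library`, `scene`) are assumptions for illustration:

```python
# Illustrative candidate narrowing: the instruction names either a target
# background library or a target scene; both restrict the pool of
# background images considered during matching.

def candidate_backgrounds(instruction, libraries):
    """libraries maps library name -> list of (image, scene) pairs."""
    if "library" in instruction:
        pool = libraries[instruction["library"]]
    else:
        # No library named: pool every library's images.
        pool = [item for lib in libraries.values() for item in lib]
    if "scene" in instruction:
        # Keep only images whose scene tag matches the target scene.
        pool = [(img, s) for img, s in pool if s == instruction["scene"]]
    return [img for img, _ in pool]
```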
In this implementation, the background image selection instruction specifies a condition that the matching background image must satisfy, which narrows the set of candidate background images and reduces the amount of matching computation.
In one possible implementation, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one other background image among the plurality of background images.
In one possible implementation, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image other than the first background image among the plurality of background images.
In this implementation, the first target image obtained by fusing the first background image with the first image can appear more natural and realistic.
In one possible implementation, selecting, based on the background image selection instruction, a first background image matching the first image from a plurality of background images includes: performing image recognition processing on the first image to obtain image recognition information, where the image recognition information includes information describing at least one feature of the first image; determining the matching degree between each of at least one background image among the plurality of background images and the first image based on that background image's annotation information and the image recognition information; and selecting the first background image from the at least one background image based on the respective matching degrees.
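One simple way to realize such a matching degree is the fraction of recognized attributes of the first image that agree with a background image's annotation information. The attribute names below mirror the recognition information this application lists (scene, camera position, shooting angle, shooting height), but the scoring rule itself is an assumption for illustration:

```python
# Toy matching degree: fraction of recognized attributes of the first
# image that a background image's annotation agrees with.

FIELDS = ("scene", "camera_position", "shooting_angle", "shooting_height")

def matching_degree(recognition_info, annotation_info):
    hits = sum(1 for f in FIELDS
               if f in recognition_info
               and annotation_info.get(f) == recognition_info[f])
    return hits / len(FIELDS)

def select_first_background(recognition_info, annotated_backgrounds):
    """annotated_backgrounds: list of (image, annotation_info) pairs.
    Returns the background with the highest matching degree."""
    return max(annotated_backgrounds,
               key=lambda pair: matching_degree(recognition_info, pair[1]))[0]
```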
In this implementation, a background image with a high matching degree to the first image can be selected from the plurality of background images quickly and accurately.
In one possible implementation, the image recognition information includes at least one of: the scene of the first image, the camera position of the first image, the shooting angle of the first image and the shooting height of the first image.
In a possible implementation manner, before performing image fusion processing on the first background image and the first foreground image in the first image to obtain the first target image, the method further includes: determining a target position of the first foreground image relative to the first background image; the image fusion processing of the first background image and the first foreground image in the first image to obtain a first target image includes: and performing image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image to obtain the first target image.
In the implementation manner, the target position of the first foreground image relative to the first background image is determined, and then the image fusion processing is performed on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image, so that the fused image is more natural and vivid.
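A minimal sketch of the fusion step at a target position, using nested lists as stand-ins for grayscale image arrays and a binary mask to mark foreground pixels. Real fusion would also blend the edges; this is illustrative only.

```python
# Minimal fusion sketch: paste masked foreground pixels onto the
# background at a given target position (top, left).

def fuse_at(background, foreground, mask, top, left):
    out = [row[:] for row in background]          # copy the background
    for i, frow in enumerate(foreground):
        for j, px in enumerate(frow):
            if mask[i][j]:                        # foreground pixel
                out[top + i][left + j] = px
    return out
```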
In one possible implementation, determining the target position of the first foreground image relative to the first background image includes: performing ground recognition processing on the first background image to obtain position information of a ground area in the first background image; and determining the target position of the first foreground image relative to the first background image based on the position information of the ground area. The target position is determined such that a target area of the first foreground image lies within the ground area of the first background image, where the target area includes the area that is in contact with the ground in the first image (for example, a person's feet).
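A placement rule of this kind could be sketched as follows. The ground region is taken as a bounding box in the background image; the specific conventions (feet at mid-ground, horizontally centred) are assumptions, not from the patent:

```python
# Sketch of placement on the recognized ground area.

def place_on_ground(ground_region, fg_size):
    """ground_region: (top, left, bottom, right) box of the ground area
    in the background; fg_size: (height, width) of the foreground.
    Returns (top, left) so the foreground's bottom edge (the contact
    area, e.g. the feet) sits inside the ground area."""
    g_top, g_left, g_bottom, g_right = ground_region
    fg_h, fg_w = fg_size
    contact_row = (g_top + g_bottom) // 2       # row where the feet land
    top = contact_row - fg_h                    # bottom edge on contact_row
    left = g_left + ((g_right - g_left) - fg_w) // 2
    return top, left
```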
In one possible implementation, determining the target position of the first foreground image relative to the first background image includes: processing the first foreground image to obtain position information of a first subject region in the first foreground image; and obtaining the target position of the first foreground image relative to the first background image based on the position information of the first subject region. The target position is determined such that a specified position of the first subject region lies within a head-up (eye-level) region of the first background image; the head-up region includes the vanishing line formed by a plurality of vanishing points, where any vanishing point is the point at which two lines that are parallel in the real world intersect in the image.
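Computing a vanishing point as described above is a standard construction: it is the image-plane intersection of two lines that are parallel in the real world. With each line given as coefficients `(a, b, c)` of `ax + by + c = 0`:

```python
# A vanishing point is the image-plane intersection of two lines that
# are parallel in the real world.

def vanishing_point(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None   # lines also parallel in the image: no finite point
    x = (b1 * c2 - b2 * c1) / det
    y = (c1 * a2 - c2 * a1) / det
    return (x, y)
```

Several such points, one per pair of world-parallel lines on the ground plane, then determine the vanishing line that bounds the head-up region.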
In one possible implementation, before receiving a background image selection instruction from a user, the method further includes: displaying a mode selection interface; the mode selection interface comprises a first option and a second option; the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; receiving a selection instruction for the second option.
Optionally, in response to a selection instruction for the second option, displaying a plurality of background image libraries; the receiving a background image selection instruction from a user comprises: receiving the background image selection instruction from the user for a target background library of the plurality of background image libraries; the target background library includes the plurality of background images.
In this implementation, the image processing apparatus can support two background image selection modes, and the user can select a desired mode according to actual needs, thereby meeting the needs of different users.
In one possible implementation, the method further includes: acquiring a first video clip, wherein the first video clip comprises a plurality of images belonging to the same shot, and the plurality of images comprise the first image; the selecting, based on the background image selection instruction, a first background image matching the first image from a plurality of background images includes: and selecting a first background image matched with the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
The multiple images belonging to the same shot may be a continuous sequence of frames captured by a video capture device (such as a camera) from the moment recording starts to the moment it stops, or a segment between two cut points in an edited video.
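Splitting a video at cut points can be illustrated with a toy segmenter: cut wherever consecutive frame "signatures" differ by more than a threshold. Real systems compare histograms or learned features; here each frame is reduced to a single number purely for illustration.

```python
# Toy shot splitting: cut wherever consecutive frame signatures differ
# by more than a threshold.

def split_shots(signatures, threshold):
    shots, current = [], [signatures[0]]
    for prev, cur in zip(signatures, signatures[1:]):
        if abs(cur - prev) > threshold:   # detected cut point
            shots.append(current)
            current = []
        current.append(cur)
    shots.append(current)
    return shots
```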
In this implementation, the first background image that matches the plurality of images of the first video clip may be selected from the plurality of background images, user operations may be reduced, and matching operations may be reduced.
In an optional implementation, selecting, based on the background image selection instruction, the first background image matching the plurality of images of the first video segment from the plurality of background images includes: selecting a target image from the plurality of images of the first video clip; and selecting the first background image matching the target image from the plurality of background images. Optionally, the target image is any one of the plurality of images. Optionally, the target image is a key frame image of the plurality of images, for example, an image determined by some quality measure to have the best quality.
In one possible implementation, the plurality of images includes the first image and at least one second image; the method further comprises the following steps: and performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image to obtain at least one second target image, wherein the background replacement result of the first video clip comprises the first target image and the at least one second target image.
In this implementation, at least two images in the first video segment may be background-changed to implement the video segment background change.
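Clip-level replacement as described above amounts to fusing the same matched background with every frame's foreground. In this hedged sketch, `segment_foreground` and `fuse` are placeholders for the patent's segmentation and fusion steps:

```python
# Hedged sketch of clip-level replacement: the one background matched
# for the clip is fused with each frame's foreground in turn.

def replace_clip_background(frames, background, segment_foreground, fuse):
    return [fuse(background, segment_foreground(f)) for f in frames]
```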
In one possible implementation, the method further includes: shot division is carried out on a target video to obtain a plurality of video clips, and the plurality of video clips comprise the first video clip and at least one second video clip; selecting a second background image matched with each second video clip in the at least one second video clip from the plurality of background images based on the background image selection instruction; performing image fusion processing on a third foreground image in at least one third image included in each second video clip and a second background image corresponding to each second video clip to obtain a background replacement result of each second video clip; and obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips.
In this implementation, images in different video segments can be fused with different background images, so that more appropriate backgrounds can be replaced for different video segments.
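The whole-video flow of the implementation above can be sketched end to end: split the video into shots, match one background per shot, fuse each frame with its shot's background, and concatenate the results. All helper names are illustrative, not from the patent.

```python
# End-to-end sketch: one background per shot, fused frame by frame.

def replace_video_background(shots, pick_background, fuse_frame):
    result = []
    for shot in shots:
        bg = pick_background(shot)               # one background per shot
        result.extend(fuse_frame(bg, f) for f in shot)
    return result
```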
In one possible implementation, the method further includes: and receiving the first image uploaded by the user, or acquiring the first image.
In a second aspect, an embodiment of the present application provides another image processing method, including: the server receives a background image selection instruction from the image processing device; selecting a first background image matched with the first image from a plurality of background images based on the background image selection instruction; performing image fusion processing on the first background image and a first foreground image in the first image to obtain a first target image; and sending the first target image to the image processing device.
The server may receive a background image selection instruction from the image processing apparatus through a wireless network or a wired network, and transmit the first target image to the image processing apparatus through the wireless network or the wired network.
In the embodiment of the application, the server can select the first background image matched with the first image based on the background image selection instruction from the image processing device, a user does not need to select the background image, the efficiency of changing the background of the video or the background of the image can be improved, and the operation is convenient and fast.
In one possible implementation, the background image selection instruction is configured to indicate a target background library of at least one selectable background library, where the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
In one possible implementation, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one other background image among the plurality of background images.
In one possible implementation, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image other than the first background image among the plurality of background images.
In one possible implementation, selecting, based on the background image selection instruction, a first background image matching the first image from a plurality of background images includes: performing image recognition processing on the first image to obtain image recognition information, where the image recognition information includes information describing at least one feature of the first image; determining the matching degree between each of at least one background image among the plurality of background images and the first image based on that background image's annotation information and the image recognition information; and selecting the first background image from the at least one background image based on the respective matching degrees.
In one possible implementation, the image recognition information includes at least one of: the scene of the first image, the camera position of the first image, the shooting angle of the first image and the shooting height of the first image.
In a possible implementation manner, before performing image fusion processing on the first background image and the first foreground image in the first image to obtain the first target image, the method further includes: determining a target position of the first foreground image relative to the first background image; the image fusion processing of the first background image and the first foreground image in the first image to obtain a first target image includes: and performing image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image to obtain the first target image.
In the implementation manner, the target position of the first foreground image relative to the first background image is determined, and then the image fusion processing is performed on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image, so that the fused image is more natural and vivid.
In one possible implementation, determining the target position of the first foreground image relative to the first background image includes: performing ground recognition processing on the first background image to obtain position information of a ground area in the first background image; and determining the target position of the first foreground image relative to the first background image based on the position information of the ground area. The target position is determined such that a target area of the first foreground image lies within the ground area of the first background image, where the target area includes the area that is in contact with the ground in the first image (for example, a person's feet).
In one possible implementation, determining the target position of the first foreground image relative to the first background image includes: processing the first foreground image to obtain position information of a first subject region in the first foreground image; and obtaining the target position of the first foreground image relative to the first background image based on the position information of the first subject region. The target position is determined such that a specified position of the first subject region lies within a head-up (eye-level) region of the first background image; the head-up region includes the vanishing line formed by a plurality of vanishing points, where any vanishing point is the point at which two lines that are parallel in the real world intersect in the image.
In one possible implementation, before the server receives a background image selection instruction from the image processing apparatus, the method further includes: sending a mode selection interface to the image processing apparatus; the mode selection interface comprises a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; and receiving a selection instruction for the second option sent by the image processing device. Optionally, the server sends a background selection interface including options of a plurality of background libraries to the image processing apparatus in response to a selection instruction for the second option; the server receiving a background image selection instruction from an image processing apparatus includes: the server receiving the background image selection instruction for a target background library of the plurality of background image libraries from the image processing apparatus; the target background library includes the plurality of background images.
In one possible implementation, the method further includes: acquiring a first video clip, wherein the first video clip comprises a plurality of images belonging to the same shot, and the plurality of images comprise the first image; the selecting, based on the background image selection instruction, a first background image matching the first image from a plurality of background images includes: selecting the first background image matched with the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
In an optional implementation, selecting, based on the background image selection instruction, the first background image matching the plurality of images of the first video segment from the plurality of background images includes: selecting a target image from the plurality of images of the first video clip; and selecting the first background image matching the target image from the plurality of background images. Optionally, the target image is any one of the plurality of images. Optionally, the target image is a key frame image of the plurality of images, for example, an image determined by some quality measure to have the best quality.
In one possible implementation, the plurality of images includes the first image and at least one second image; the method further comprises the following steps: performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image to obtain at least one second target image, wherein a background replacement result of the first video clip comprises the first target image and the at least one second target image; and sending the background replacement result of the first video segment to the image processing device.
In one possible implementation, the method further includes: shot division is carried out on a target video to obtain a plurality of video clips, and the plurality of video clips comprise the first video clip and at least one second video clip; selecting a second background image matched with each second video clip in the at least one second video clip from the plurality of background images based on the background image selection instruction; performing image fusion processing on a third foreground image in at least one third image included in each second video clip and a second background image corresponding to each second video clip to obtain a background replacement result of each second video clip; obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips; and sending the background replacement result of the target video to the image processing device.
In one possible implementation, the method further includes: receiving the first image from the image processing apparatus, or receiving a selection instruction for the first image from the image processing apparatus.
In one possible implementation, the method further includes: receiving the first video segment from the image processing device, or receiving a selection instruction for the first video segment from the image processing device.
In one possible implementation, the method further includes: receiving the target video from the image processing apparatus, or receiving a selection instruction for the target video from the image processing apparatus.
In a third aspect, an embodiment of the present application provides another image processing method, including: the image processing device sends a background image selection instruction to a server, wherein the background image selection instruction instructs the server to select a background image matched with a first image from a plurality of background images; receiving a first target image from the server; displaying the first target image; the first target image is obtained by the server performing image fusion processing on a first background image and a first foreground image in the first image, and the first background image is a background image selected by the server from the plurality of background images and matched with the first image.
The image processing apparatus may be a terminal device such as a mobile phone, tablet computer, laptop, or desktop computer.
In the embodiment of the application, the image processing device can receive and display the first target image by sending the background image selection instruction to the server, and does not need to specify a certain background image, so that the operation is convenient and fast.
In one possible implementation, the method further includes: the image processing apparatus transmits the first image to the server, or the image processing apparatus transmits a selection instruction for the first image to the server.
In one possible implementation, the method further includes: the image processing device sends a first video clip to the server, or sends a selection instruction for the first video clip to the server; the first video clip comprises a plurality of images belonging to the same shot, wherein the plurality of images comprise the first image.
In one possible implementation, the plurality of images includes the first image and at least one second image; the method further comprises the following steps: receiving a background replacement result of the first video segment from the server; the background replacement result of the first video segment comprises the first target image and at least one second target image; the at least one second target image is obtained by performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image.
In one possible implementation, the method further includes: the image processing device sends the target video to the server, or the image processing device sends a selection instruction for the target video to the server; the target video comprises the first video segment and at least one second video segment; receiving a background replacement result of the target video from the server; the background replacement result of the target video comprises the background replacement result of the first video segment and the background replacement result of each second video segment.
In one possible implementation, the method further includes: the image processing device receives a mode selection interface from the server, wherein the mode selection interface comprises a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; the image processing device sends a selection instruction for the second option to the server; the image processing device receives a background selection interface including options of a plurality of background libraries from the server; the image processing apparatus transmitting a background image selection instruction to a server includes: the image processing apparatus transmits the background image selection instruction for a target background library of the plurality of background image libraries to the server; the target background library includes the plurality of background images.
In this implementation, the image processing apparatus can support two background image selection modes, and the user can select a desired mode according to actual needs, thereby meeting the needs of different users.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including: the instruction receiving unit is used for receiving a background image selection instruction from a user; an image matching unit configured to select a first background image matching the first image from a plurality of background images based on the background image selection instruction; and the image fusion unit is used for carrying out image fusion processing on the first background image and the first foreground image in the first image to obtain a first target image.
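The application leaves the "image fusion processing" performed by the image fusion unit abstract. One common way to realize it is alpha compositing of the foreground onto the background using a foreground mask. The sketch below is a minimal, single-channel illustration under assumed representations (nested lists of pixel values, a mask in [0, 1]); the function name and signature are hypothetical, not from the application:

```python
def fuse(foreground, background, mask, top, left):
    """Composite a foreground patch onto a background image by alpha blending.

    foreground and mask are H x W grids (mask values in [0, 1]);
    background is a larger grid; (top, left) is the paste position.
    """
    result = [row[:] for row in background]  # copy so the background is untouched
    for i, (f_row, m_row) in enumerate(zip(foreground, mask)):
        for j, (f, m) in enumerate(zip(f_row, m_row)):
            y, x = top + i, left + j
            # mask = 1 keeps the foreground pixel, mask = 0 keeps the background
            result[y][x] = m * f + (1 - m) * result[y][x]
    return result
```

A production system would more likely use multi-channel blending or Poisson (gradient-domain) cloning to hide seams, but the compositing step has this shape.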
In one possible implementation, the background image selection instruction is configured to indicate a target background library of at least one selectable background library, where the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one background image, other than the first background image, among the plurality of background images.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image other than the first background image among the plurality of background images.
In a possible implementation manner, the image matching unit is specifically configured to: perform image recognition processing on the first image to obtain image recognition information, where the image recognition information includes information describing at least one feature of the first image; determine a matching degree between each of at least one background image in the plurality of background images and the first image based on annotation information of the at least one background image and the image recognition information; and select the first background image from the at least one background image based on the respective matching degrees between the at least one background image and the first image.
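The application does not specify how a matching degree is computed from annotation information and recognition information. A hedged, minimal interpretation is to count the attributes (scene, shooting angle, shooting height, per the next paragraph) on which the recognized features of the first image agree with a background's annotations, and pick the highest-scoring background. All names and the attribute set below are assumptions for illustration:

```python
def matching_degree(recognition_info, annotation):
    """Count attributes on which the first image's recognized features
    agree with a background image's annotation information."""
    keys = ("scene", "shooting_angle", "shooting_height")
    return sum(recognition_info.get(k) == annotation.get(k) for k in keys)

def select_background(recognition_info, library):
    """Return the annotated background with the highest matching degree."""
    return max(library, key=lambda bg: matching_degree(recognition_info, bg["annotation"]))
```

A real system would likely use learned embeddings rather than exact attribute equality, but the selection step ("not lower than any other background's matching degree") reduces to this arg-max.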
In one possible implementation, the image recognition information includes at least one of: the scene of the first image, the shooting angle of the first image and the shooting height of the first image.
In a possible implementation manner, the image processing apparatus further includes a determining unit, configured to determine a target position of the first foreground image relative to the first background image before the image fusion processing is performed; and the image fusion unit is specifically configured to perform image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image, so as to obtain the first target image.
In a possible implementation manner, the determining unit is specifically configured to: perform ground identification processing on the first background image to obtain location information of a ground area in the first background image; and determine the target position of the first foreground image relative to the first background image based on the location information of the ground area. The target position is determined such that a target area in the first target image is located within the ground area, where the target area includes the area that is in contact with the ground in the first image.
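The ground-placement rule above can be sketched concretely: given a binary ground mask for the background, choose a paste row so the foreground's bottom (its ground-contact area) lands inside the ground band. This is a minimal sketch under an assumed row-mask representation; the names are hypothetical:

```python
def ground_rows(ground_mask):
    """Row indices of the background mask that contain ground pixels (truthy)."""
    return [i for i, row in enumerate(ground_mask) if any(row)]

def place_on_ground(ground_mask, fg_height):
    """Choose a top coordinate so the foreground's bottom row (the area
    in contact with the ground) lands inside the ground region."""
    rows = ground_rows(ground_mask)
    bottom = rows[len(rows) // 2]   # aim at the middle of the ground band
    return bottom - fg_height + 1   # top row at which to paste the foreground
```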
In a possible implementation manner, the determining unit is specifically configured to: process the first foreground image to obtain position information of a first subject region in the first foreground image; and obtain the target position of the first foreground image relative to the first background image based on the position information of the first subject region. The target position is determined such that a particular position of the first subject region in the first target image is within a head-up region of the first background image. The head-up region includes a vanishing line obtained from a plurality of vanishing points, where any vanishing point is a point at which two lines that are parallel in the real world intersect in the first background image.
In one possible implementation, the image processing apparatus further includes: the display unit is used for displaying a mode selection interface; the mode selection interface comprises a first option and a second option; the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; the instruction receiving unit is further configured to receive a selection instruction for the second option.
Optionally, the display unit is further configured to display a plurality of background image libraries in response to a selection instruction for the second option; the instruction receiving unit is specifically configured to receive the background image selection instruction from the user for a target background library of the plurality of background image libraries; the target background library includes the plurality of background images.
In one possible implementation, the image processing apparatus further includes an acquisition unit, configured to acquire a first video clip, where the first video clip includes a plurality of images belonging to the same shot, and the plurality of images include the first image; the image matching unit is specifically configured to select, based on the background image selection instruction, the first background image that matches the plurality of images of the first video clip from the plurality of background images.
In one possible implementation, the plurality of images includes the first image and at least one second image; the image fusion unit is further configured to perform image fusion processing on the first background image and a second foreground image of each of the at least one second image to obtain at least one second target image, where a background replacement result of the first video segment includes the first target image and the at least one second target image.
In one possible implementation, the image processing apparatus further includes: the video dividing unit is used for carrying out shot division on a target video to obtain a plurality of video clips, and the plurality of video clips comprise the first video clip and at least one second video clip; the image matching unit is further used for selecting a second background image matched with each second video clip in the at least one second video clip from a plurality of background images based on the background image selection instruction; the image fusion unit is further configured to perform image fusion processing on a third foreground image in at least one third image included in each second video segment and a second background image corresponding to each second video segment, so as to obtain a background replacement result of each second video segment; and the image processing unit is used for obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips.
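The shot division performed by the video dividing unit is not specified in the application. A common, simple approach is to detect cuts where the color-histogram difference between consecutive frames exceeds a threshold, then split the frame list at those cuts so each clip can receive its own matched background. The sketch below assumes precomputed per-frame histograms; all names are hypothetical:

```python
def shot_boundaries(frame_hists, threshold):
    """Indices where the histogram difference between consecutive
    frames exceeds `threshold`, i.e. likely shot cuts."""
    cuts = []
    for i in range(1, len(frame_hists)):
        diff = sum(abs(a - b) for a, b in zip(frame_hists[i - 1], frame_hists[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts

def split_into_clips(frames, cuts):
    """Split the frame list at each cut index, yielding one clip per shot."""
    bounds = [0] + cuts + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:])]
```

Each resulting clip would then go through background matching and per-frame fusion, and the per-clip results are concatenated into the target video's background replacement result.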
With regard to technical effects brought about by the fourth aspect or various alternative embodiments, reference may be made to the introduction of the technical effects of the first aspect or the corresponding implementation.
In a fifth aspect, an embodiment of the present application provides a server, including: a receiving unit configured to receive a background image selection instruction from an image processing apparatus; the processing unit is used for selecting a first background image matched with the first image from a plurality of background images based on the background image selection instruction; performing image fusion processing on the first background image and a first foreground image in the first image to obtain a first target image; a sending unit configured to send the first target image to the image processing apparatus.
In one possible implementation, the background image selection instruction is configured to indicate a target background library of at least one selectable background library, where the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one background image, other than the first background image, among the plurality of background images.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image other than the first background image among the plurality of background images.
In a possible implementation manner, the processing unit is specifically configured to: perform image recognition processing on the first image to obtain image recognition information, where the image recognition information includes information describing at least one feature of the first image; determine a matching degree between each of at least one background image in the plurality of background images and the first image based on annotation information of the at least one background image and the image recognition information; and select the first background image from the at least one background image based on the respective matching degrees between the at least one background image and the first image.
In one possible implementation, the image recognition information includes at least one of: the scene of the first image, the shooting angle of the first image and the shooting height of the first image.
In a possible implementation manner, the server further includes a determining unit, configured to determine a target position of the first foreground image relative to the first background image before the image fusion processing is performed; and an image fusion unit, specifically configured to perform image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image, so as to obtain the first target image.
In a possible implementation manner, the determining unit is specifically configured to: perform ground identification processing on the first background image to obtain location information of a ground area in the first background image; and determine the target position of the first foreground image relative to the first background image based on the location information of the ground area. The target position is determined such that a target area in the first target image is located within the ground area, where the target area includes the area that is in contact with the ground in the first image.
In a possible implementation manner, the determining unit is specifically configured to: process the first foreground image to obtain position information of a first subject region in the first foreground image; and obtain the target position of the first foreground image relative to the first background image based on the position information of the first subject region. The target position is determined such that a particular position of the first subject region in the first target image is within a head-up region of the first background image. The head-up region includes a vanishing line obtained from a plurality of vanishing points, where any vanishing point is a point at which two lines that are parallel in the real world intersect in the first background image.
In a possible implementation manner, the sending unit is further configured to send a mode selection interface to the image processing apparatus; the mode selection interface comprises a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; the receiving unit is further configured to receive a selection instruction for the second option sent by the image processing apparatus. Optionally, the sending unit is further configured to send, to the image processing apparatus, a background selection interface including options of a plurality of background libraries in response to a selection instruction for the second option; the receiving unit is specifically configured to receive the background image selection instruction for a target background library of the plurality of background image libraries from the image processing apparatus; the target background library includes the plurality of background images.
In one possible implementation, the server further includes an acquisition unit, configured to acquire a first video clip, where the first video clip includes a plurality of images belonging to the same shot, and the plurality of images include the first image; the processing unit is specifically configured to select, based on the background image selection instruction, the first background image that matches the plurality of images of the first video clip from the plurality of background images.
In a possible implementation manner, the processing unit is specifically configured to: select a target image from the plurality of images of the first video segment; and select the first background image matching the target image from the plurality of background images. Optionally, the target image is any one of the plurality of images. Optionally, the target image is a key frame image among the plurality of images, for example, an image determined, according to some criterion, to have better quality.
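The application leaves the key-frame quality criterion open ("determined in a certain manner"). One widely used proxy for frame quality is sharpness, measured as the variance of a Laplacian response: blurry frames score near zero, sharp frames score high. A minimal sketch over grayscale frames represented as nested lists (names and the criterion itself are assumptions):

```python
def sharpness(img):
    """Variance of a 4-neighbour Laplacian response — a common focus/quality proxy."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def key_frame_index(frames):
    """Pick the sharpest frame of the clip as the target image."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```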
In one possible implementation, the plurality of images includes the first image and at least one second image; the processing unit is further configured to perform image fusion processing on the first background image and a second foreground image of each second image in the at least one second image to obtain at least one second target image, where a background replacement result of the first video segment includes the first target image and the at least one second target image; the sending unit is further configured to send a background replacement result of the first video segment to the image processing apparatus.
In a possible implementation manner, the processing unit is further configured to perform shot segmentation on a target video to obtain a plurality of video segments, where the plurality of video segments include the first video segment and at least one second video segment; selecting a second background image matched with each second video clip in the at least one second video clip from the plurality of background images based on the background image selection instruction; performing image fusion processing on a third foreground image in at least one third image included in each second video clip and a second background image corresponding to each second video clip to obtain a background replacement result of each second video clip; obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips; the sending unit is further configured to send a background replacement result of the target video to the image processing apparatus.
In a possible implementation manner, the receiving unit is further configured to receive the first image from the image processing apparatus, or receive a selection instruction for the first image from the image processing apparatus.
In a possible implementation manner, the receiving unit is further configured to receive the first video segment from the image processing apparatus, or receive a selection instruction for the first video segment from the image processing apparatus.
In a possible implementation manner, the receiving unit is further configured to receive the target video from the image processing apparatus, or receive a selection instruction for the target video from the image processing apparatus.
With regard to the technical effects brought about by the fifth aspect or the various alternative embodiments, reference may be made to the introduction of the technical effects of the second aspect or the corresponding implementation.
In a sixth aspect, an embodiment of the present application provides an image processing apparatus, including: a sending unit, configured to send a background image selection instruction to a server, where the background image selection instruction instructs the server to select a background image matching a first image from a plurality of background images; a receiving unit configured to receive a first target image from the server; a display unit for displaying the first target image; the first target image is obtained by the server performing image fusion processing on a first background image and a first foreground image in the first image, and the first background image is a background image selected by the server from the plurality of background images and matched with the first image.
In a possible implementation manner, the sending unit is further configured to send the first image to the server, or send a selection instruction for the first image to the server.
In a possible implementation manner, the sending unit is further configured to send the first video segment to the server, or send a selection instruction for the first video segment to the server; the first video clip comprises a plurality of images belonging to the same shot, wherein the plurality of images comprise the first image.
In one possible implementation, the plurality of images includes the first image and at least one second image; the receiving unit is further configured to receive a background replacement result of the first video segment from the server; the background replacement result of the first video segment comprises the first target image and at least one second target image; the at least one second target image is obtained by performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image.
In a possible implementation manner, the sending unit is further configured to send the target video to the server, or send a selection instruction for the target video to the server; the target video comprises the first video segment and at least one second video segment; the receiving unit is further configured to receive a background replacement result of the target video from the server; the background replacement result of the target video comprises the background replacement result of the first video segment and the background replacement result of each second video segment.
In a possible implementation manner, the receiving unit is further configured to receive a mode selection interface from the server, where the mode selection interface includes a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting a background image; the sending unit is further configured to send a selection instruction for the second option to the server; the receiving unit is further used for receiving a background selection interface comprising options of a plurality of background libraries from the server; the sending unit is specifically configured to send the background image selection instruction for a target background library of the plurality of background image libraries to the server; the target background library includes the plurality of background images.
With regard to the technical effects brought about by the sixth aspect or the various alternative embodiments, reference may be made to the introduction of the technical effects of the third aspect or the corresponding implementation.
In a seventh aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, wherein the memory is configured to store instructions and the processor is configured to execute the instructions stored by the memory, so that the processor performs the method according to the first aspect and any possible implementation manner.
In an eighth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, wherein the memory is configured to store instructions and the processor is configured to execute the instructions stored by the memory, so that the processor performs the method according to the second aspect and any possible implementation manner.
In a ninth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, wherein the memory is configured to store instructions, and the processor is configured to execute the instructions stored by the memory, so that the processor performs the method according to the third aspect and any possible implementation manner.
In a tenth aspect, an embodiment of the present application provides a chip, where the chip includes a communication interface and a processor, where the processor is configured to execute the method in the first aspect or any possible implementation manner of the first aspect.
In an eleventh aspect, embodiments of the present application provide a chip, where the chip includes a communication interface and a processor, where the processor is configured to execute the method in the second aspect or any possible implementation manner of the second aspect.
In a twelfth aspect, an embodiment of the present application provides a chip, where the chip includes a communication interface and a processor, where the processor is configured to execute the method in the third aspect or any possible implementation manner of the third aspect.
In a thirteenth aspect, the present application provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the first aspect and any optional implementation manner.
In a fourteenth aspect, the present application provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the second aspect and any optional implementation manner.
In a fifteenth aspect, the present application provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute the method of the third aspect and any optional implementation manner.
In a sixteenth aspect, the present application provides a computer program product, which includes program instructions, and when executed by a processor, causes the processor to execute the method of the first aspect and any optional implementation manner.
In a seventeenth aspect, the present application provides a computer program product, which includes program instructions, and when executed by a processor, causes the processor to execute the method of the second aspect and any optional implementation manner.
In an eighteenth aspect, the present application provides a computer program product, which includes program instructions, and when executed by a processor, causes the processor to execute the method of the third aspect and any optional implementation manner.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the background art, the following briefly describes the drawings used in the embodiments or in the background art.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 2A and fig. 2B are schematic diagrams of an example of a background selection interface provided in an embodiment of the present application;
FIG. 3 is a flowchart of another image processing method provided in the embodiments of the present application;
FIG. 4 is a flowchart of another image processing method provided in the embodiments of the present application;
FIG. 5 is a flowchart of another image processing method provided in the embodiments of the present application;
FIG. 6 is a flowchart of another image processing method provided in the embodiments of the present application;
FIG. 7 is a flowchart of another image processing method provided in the embodiments of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," and the like in the description and claims of the present application and in the above drawings are used to distinguish between similar elements and are not necessarily used to describe a particular sequence or chronological order. Furthermore, the terms "comprises" and "comprising," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
As described in the background art, there is a need for video background replacement and image background replacement schemes that are convenient for users to operate and can meet the requirements of different users. The present application provides an image processing method that enables a user to quickly select a background image matching a first image (for example, a certain foreground image selected by the user) from a plurality of background images, and then fuse the foreground image with the background image. The following briefly introduces scenes to which the image processing method provided in the embodiments of the present application is applicable.
Scene 1: image processing software running on a terminal device (for example, a personal computer), or a web page the user has logged in to, receives a background image selection instruction from the user, selects a background image matching a certain foreground image from a plurality of background images based on the instruction, and then performs image fusion processing on the foreground image and the background image. By inputting the background image selection instruction, the user designates the plurality of background images selectable by the terminal device; the terminal device can then automatically select the background image matching a certain image from the plurality of background images, so the user does not need to pick the matching background image manually, which is convenient and saves time.
Scene 2: image processing software running on a terminal device (for example, a personal computer), or a web page the user has logged in to, receives a background image selection instruction from the user, selects a background image matching a plurality of images in a certain video clip (or video) from the plurality of background images based on the instruction, and then performs image fusion processing on the foreground image of at least one frame in the video clip and the background image. By inputting the background image selection instruction, the user designates the plurality of background images selectable by the terminal device; the terminal device can then automatically select the background image matching the video clip from the plurality of background images, so the user does not need to pick it manually, which is convenient and saves time.
Scene 3: a user sends a background image selection instruction to a server through terminal equipment (such as a personal computer); the server selects a background image matched with a certain image from a plurality of background images based on the background image selection instruction, further performs image fusion processing on a foreground image and the background image in the image, and sends a target image obtained through the fusion processing to the terminal equipment; the terminal device displays the target image from the server.
Scene 4: a user sends a background image selection instruction to a server through terminal equipment (such as a personal computer); the server selects a background image matched with a certain video clip from a plurality of background images based on the background image selection instruction, further performs image fusion processing on a foreground image and a background image of at least one frame of image in the video clip, and sends a background replacement result of the video clip obtained by the fusion processing to the terminal equipment; the terminal device displays the background replacement result of the video clip from the server.
In the above scenario, by implementing the image processing method provided by the embodiment of the present application, a user can quickly implement image background changing or video background changing.
The following describes an image processing method provided by an embodiment of the present application with reference to the drawings.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 1, the method may include:
101. the image processing apparatus receives a background image selection instruction from a user.
The execution subject of the embodiment of the present application is an image processing apparatus, such as a terminal device (a mobile phone, a tablet computer, a notebook computer, a desktop computer, or the like) or a server (e.g., a cloud server). In some embodiments, the image processing apparatus is a terminal device, and the background image selection instruction is used to indicate a target background library in at least one selectable background library, where the plurality of background images are contained in the target background library. For example, the user inputs the background image selection instruction through an input device (e.g., a mouse, a keyboard, or a touch screen) of the image processing apparatus. Fig. 2A is a schematic diagram of an example of a background selection interface provided in an embodiment of the present application. As shown in fig. 2A, the background selection interface includes a first background library, a second background library, ..., and an Nth background library; receiving the background image selection instruction from the user may be receiving an operation by which the user selects a target background library (e.g., the Mth background library), such as an operation of clicking the target background library or an operation of clicking an option for selecting it; N is an integer greater than 1, and M is an integer greater than 0. In fig. 2A, the first image is an image uploaded or selected by the user; the first background library through the Nth background library are all selectable background libraries; the target background library is any one of them; and dragging the scroll bar reveals further background libraries.
In some embodiments, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches the target scene information. Fig. 2B is a schematic diagram of another example of a background selection interface provided in an embodiment of the present application. As shown in fig. 2B, 201 denotes a scene selection interface, and the black rectangular area denotes a scroll bar. After the user selects the scene selection interface 201, the background selection interface may display scene 1 through scene F; after a scene is selected (e.g., clicked), the background selection interface displays one or more sub-scenes of that scene, each sub-scene having a plurality of background images. Receiving the background image selection instruction from the user may be receiving an operation by which the user selects a target scene (e.g., any sub-scene). For example, after the user selects the scene selection interface 201, the background selection interface displays scenes such as daily life, school, leisure, office, scenic spots, city landmarks, medical care, and scenery, each scene corresponding to one kind of scene information. After the user selects the daily-life scene, the background selection interface displays the sub-scenes under it: living room, study, bedroom, kitchen, hallway, bathroom, tea room, and so on; the user then selects the study sub-scene. In this example, the target scene information is the information of the study sub-scene, and the background image selection instruction from the user corresponds to the following operations: selecting the scene selection interface 201, selecting the daily-life scene, and selecting the study sub-scene.
It should be understood that, in this example, the background image selection instruction designates the background images under the study sub-scene as the selectable background images. In the embodiments of the present application, the background image selection instruction is used to delimit a plurality of selectable background images, from which a background image matching the first image is then selected.
In some embodiments, before performing step 101, the image processing apparatus may perform the following operations: displaying a mode selection interface, where the mode selection interface includes a first option and a second option, the first option being an option for manually selecting a background image and the second option being an option for automatically selecting a background image; and receiving a selection instruction for the second option. Optionally, the image processing apparatus displays a plurality of background image libraries in response to the selection instruction for the second option. One possible implementation of step 101 is then as follows: receiving, from the user, the background image selection instruction for a target background library among the plurality of background image libraries (i.e., an instruction to select the target background library), where the target background library includes the plurality of background images. For example, after the user starts certain image processing software on the image processing apparatus or logs in to a certain web page, a display device (e.g., a display) of the image processing apparatus displays the mode selection interface; after the user selects the second option, the display device displays the plurality of background image libraries, from which the user selects a desired background library (i.e., the target background library).
102. The image processing apparatus selects a first background image matching the first image from the plurality of background images based on the background image selection instruction.
The matching degree of the first background image and the first image is not lower than the matching degree of at least one background image except the first background image in the plurality of background images. Further, the degree of matching between the first background image and the first image is not lower than the degree of matching between the first background image and any background image of the plurality of background images except the first background image. That is, the first background image is the background image having the highest degree of matching with the first image among the plurality of background images. In some embodiments, after receiving the background image selection instruction, the image processing apparatus selects the first background image matching the first image from the plurality of background images without inputting an instruction by the user. In some embodiments, after the image processing apparatus receives the image fusion instruction from the user, step 102 and step 103 are performed.
In practical applications, the image processing apparatus may determine, in a similar manner, the matching degree between each background image and the first image based on the annotation information of that background image and the image identification information of the first image. How the matching degree between a background image and the first image is determined is described below, taking the first background image as an example. In some embodiments, determining the matching degree between the first background image and the first image based on the annotation information of the first background image and the image identification information may be: calculating a weighted sum of a first similarity between the scene of the first background image and the scene of the first image, a second similarity between the camera position of the first background image and the camera position of the first image, and a third similarity between the shooting angle of the first background image and the shooting angle of the first image. It should be understood that the higher the similarity between each item of the annotation information of the first background image and the corresponding item of the image identification information of the first image, the higher the matching degree between the first background image and the first image. In some embodiments, the possible scenes of an image are: close-up, close shot, medium shot, panorama, and long shot. The image processing apparatus may store the similarities between different scenes; for example, the similarity between identical scenes is 1, and the similarity between different scenes is 0. The third similarity between the shooting angle of the first background image and the shooting angle of the first image is inversely related to the difference between the two shooting angles.
That is, the smaller the difference between the shooting angle of the first background image and that of the first image, the higher the third similarity. Optionally, the similarity between identical shooting orientations is 1, and the similarity between different shooting orientations is 0.
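The weighted-sum matching described above can be sketched as follows. The weights (0.4/0.3/0.3), the similarity rules, and all function names are illustrative assumptions, not values fixed by this disclosure.

```python
# Illustrative sketch of the weighted matching score; weights and
# similarity rules are assumptions, not values from this disclosure.

def scene_similarity(a, b):
    # Same scene -> 1, different scene -> 0, per the stored similarity table.
    return 1.0 if a == b else 0.0

def angle_similarity(a, b, max_diff=90.0):
    # Inversely related to the shooting-angle difference, clamped to [0, 1].
    return max(0.0, 1.0 - abs(a - b) / max_diff)

def match_score(bg, img, w_scene=0.4, w_pos=0.3, w_angle=0.3):
    # Weighted sum of the first, second, and third similarities.
    s1 = scene_similarity(bg["scene"], img["scene"])
    s2 = 1.0 if bg["position"] == img["position"] else 0.0  # second similarity
    s3 = angle_similarity(bg["angle"], img["angle"])
    return w_scene * s1 + w_pos * s2 + w_angle * s3

def best_background(backgrounds, first_image):
    # The first background image is the one with the highest matching degree.
    return max(backgrounds, key=lambda bg: match_score(bg, first_image))
```

With this sketch, `best_background` returns the background whose annotation information is closest to the identification information of the first image.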
In some embodiments, the image processing apparatus may display the first background image after performing step 102. That is, the user can preview the first background image.
103. The image processing device carries out image fusion processing on a first background image and a first foreground image in the first image to obtain a first target image.
In some embodiments, the image processing apparatus may perform operations such as edge feathering, adjusting foreground and background brightness, adjusting foreground and background color tones, and the like in the process of performing image fusion processing on the first background image and the first foreground image in the first image to obtain the first target image. In some embodiments, one possible implementation of step 103 is as follows: and receiving an image fusion instruction from a user, responding to the image fusion instruction, and performing image fusion processing on a first background image and a first foreground image in the first image to obtain a first target image.
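A minimal sketch of the fusion step with edge feathering is given below; it uses a simple box blur of the foreground mask as a stand-in for feathering, and all names and the blur radius are assumptions for illustration.

```python
import numpy as np

def feather_mask(mask, radius=2):
    # Soften mask edges with repeated box blurs (a stand-in for edge feathering).
    out = mask.astype(float)
    for _ in range(radius):
        padded = np.pad(out, 1, mode="edge")
        out = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] +
               padded[1:-1, 1:-1]) / 5.0
    return out

def fuse(foreground, background, mask, radius=2):
    # Alpha-blend the foreground over the background using a feathered mask;
    # images are H x W x 3 arrays, mask is H x W with values in [0, 1].
    alpha = feather_mask(mask, radius)[..., None]
    return alpha * foreground + (1.0 - alpha) * background
```

Brightness and tone adjustment, also mentioned above, would be applied to `foreground` before blending.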
In some embodiments, the image processing apparatus may determine a target position of the first foreground image relative to the first background image before performing step 103; the implementation of step 103 is as follows: and performing image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image to obtain the first target image. That is, the image processing apparatus may correspondingly adjust the position and/or size of the first foreground image relative to the first background image based on the target position of the first foreground image relative to the first background image, and perform fusion processing on the adjusted first foreground image and the first background image to obtain the first target image.
One possible example of determining the target position of the first foreground image relative to the first background image is as follows: performing ground identification processing on the first background image to obtain position information of a ground area in the first background image; and determining the target position of the first foreground image relative to the first background image based on the position information of the ground area. The target position is determined such that, relative to the first background image, a target area of the foreground is located within the ground area, where the target area includes the area that is in contact with the ground.
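The ground-based placement above can be sketched as follows; the layout rule (centre horizontally over the ground area, rest the bottom edge at its vertical midpoint) and all names are illustrative assumptions.

```python
def place_foreground(fg_w, fg_h, ground_box):
    # ground_box = (x0, y0, x1, y1): bounding box of the ground area
    # detected in the background image. Returns the top-left target
    # position of the foreground so that its ground-contact (bottom)
    # edge lies inside the ground area.
    x0, y0, x1, y1 = ground_box
    contact_y = (y0 + y1) // 2          # a row inside the ground area
    x = x0 + ((x1 - x0) - fg_w) // 2    # centred horizontally on the ground area
    y = contact_y - fg_h                # top-left y so the bottom edge touches contact_y
    return x, y
```

The returned position would then drive the position/size adjustment of the foreground described in the fusion step.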
Another possible example of determining the target position of the first foreground image relative to the first background image is as follows: processing the first foreground image to obtain position information of a first subject area in the first foreground image; and obtaining the target position of the first foreground image relative to the first background image based on the position information of the first subject area. The target position is determined such that a specific position of the first subject area is located within a head-up region of the first background image; the head-up region contains a vanishing line obtained from a plurality of vanishing points, where any one of the vanishing points is a point at which two lines that are parallel in the real world intersect in the image. In some embodiments, processing the first foreground image to obtain the position information of the first subject area may be: obtaining the position information of the first subject area based on the transparency (alpha) channel of the first foreground image. In some embodiments, it may be: performing image recognition processing on the first foreground image to obtain position information of the first subject area in which a target object (e.g., a person) is located. In some embodiments, it may be: performing image recognition processing on the first foreground image to obtain position information of the first subject area in which the target object occupying the largest area is located (i.e., the area occupied by that target object).
In the embodiments of the present application, the image processing apparatus can select the first background image matching the first image based on the background image selection instruction from the user, without requiring the user to pick out a matching background image manually; this improves the efficiency of replacing the background of a video or an image and makes the operation convenient.
Some possible implementations of step 102 are described below.
In some embodiments, one possible implementation of step 102 is as follows: the image processing apparatus performs image recognition processing on the first image to obtain image identification information, where the image identification information includes information describing at least one characteristic of the first image; determines the matching degree between the first image and each of at least one background image among the plurality of background images based on the annotation information of the at least one background image and the image identification information; and selects the first background image from the at least one background image based on the matching degrees. The image processing apparatus may store or obtain the annotation information of each of the plurality of background images. In some embodiments, the annotation information of any background image may be annotation information, such as the shooting orientation (i.e., shooting angle), shooting height, and scene (corresponding to the shooting distance) of that background image, added manually or automatically when or after the background image is shot. In the present application, shooting angle and shooting orientation may be treated as the same concept. For example, the photographer records the camera position (including the shooting orientation, shooting height, and shooting distance) when shooting a background image and, based on it, adds the annotation information (i.e., the camera position) to the background image. The camera position is the spatial position of the camera relative to the subject, and is commonly described in three dimensions: shooting orientation, shooting height, and shooting distance.
In some embodiments, the image processing apparatus performs image recognition processing on the first image to obtain the image identification information as follows: the image identification information is extracted from the first image, or is acquired from attribute information and/or shooting data of the first image. The image identification information may be the camera position of the first image, such as the shooting orientation, shooting height, and shooting distance. For example, a camera records the camera position (including the shooting orientation, shooting height, and shooting distance) when the first image is shot and stores it as attribute information or shooting data of the first image; the image processing apparatus then performs image recognition processing on the first image to obtain the image identification information (i.e., the camera position). It should be understood that, in this embodiment, the image identification information of the first image may be understood as annotation information of the first image, and the image processing apparatus obtains it in the same manner as the annotation information of a background image.
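The three-dimensional camera-position representation above can be sketched as a small structure; the field names, the default values, and the attribute keys are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class CameraPosition:
    orientation: str  # shooting orientation, e.g. "front", "side", "back"
    height: str       # shooting height, e.g. "level", "low", "high"
    distance: str     # shooting distance, expressed as a scene, e.g. "close-up"

def camera_position_from_attributes(attrs):
    # Read the camera position recorded in an image's attribute
    # information / shooting data; missing fields fall back to
    # illustrative defaults.
    return CameraPosition(
        orientation=attrs.get("orientation", "front"),
        height=attrs.get("height", "level"),
        distance=attrs.get("distance", "medium"),
    )
```

Both the annotation information of a background image and the identification information of the first image could then be compared field by field in the matching step.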
The shooting orientation refers to the shooting point selected in front of, behind, or to either side of the subject. The modeling rules of different shooting orientations can be grasped through the following typical orientations: (1) front shot: shooting from directly in front of the subject; (2) side shot: shooting from the direct side of the subject; (3) oblique shot: shooting from between the front and the side (or the back and the side) of the subject; (4) back shot: shooting from behind the subject.
The shooting height refers to the shooting point selected at different heights relative to the subject. The modeling rules of different shooting heights can be grasped through the following typical heights: (1) level shot: the camera position is kept roughly level with the subject; (2) upward shot: the subject is shot from a low position; (3) downward shot: the subject is shot from a high position.
The shooting distance refers to the shooting point selected at different distances from the subject. When the same subject is shot at different distances, the area it occupies in the picture differs, and the resulting modeling differences are generally characterized by the scene. The modeling rules of different shooting distances can be grasped through the following typical distances: (1) long shot: mainly expresses overall visual information of the surrounding environment, such as spatial extent, quantity and scale, and spatial relationships, conveying the overall impression of the scene, such as its atmosphere and momentum; (2) panorama: mainly expresses the overall visual information of the subject while retaining a certain range of environmental information closely related to it, with the visual appearance of the subject being the content center and structural center of the picture; (3) medium shot: mainly expresses visual information of most of the subject; (4) close shot: mainly expresses visual information of the main part of the subject; (5) close-up: mainly emphasizes the expression, texture, and local details of the subject.
In some embodiments, the annotation information of each background image may be image identification information of each background image obtained by the image processing apparatus performing image identification processing on each background image in advance. In some embodiments, the image processing apparatus obtains image recognition information obtained by performing image recognition processing on each background image from another device. It should be understood that the annotation information of any one of the background images may be understood as image identification information obtained by performing image identification processing on the any one of the background images. Optionally, the image identification information of the first image (i.e. the image identification information obtained by performing the image identification process on the first image) includes at least one of the following items: a scene of the first image, a photographing angle of the first image, and a photographing height of the first image; the annotation information of any one of the background images (i.e., the image recognition information obtained by performing the image recognition processing on any one of the images) includes: the scene of any image, the shooting angle of any image and the shooting height of any image. A manner of obtaining the shooting angle of the first image by performing the image recognition processing on the first image will be described below.
In some embodiments, the image processing apparatus may process the first image with a neural network to obtain facial feature points (landmarks). Using the two pupil points and the nose-tip point, the apparatus computes the projection of the nose-tip coordinates onto the straight line through the pupils. It then computes a first length between the projection and the left pupil and a second length between the projection and the right pupil, and compares them: if the second length is greater than the first, the face is oriented to the right (left and right being from the perspective of the foreground person); otherwise, it is oriented to the left. Once the direction is obtained, a ratio is computed, and the image processing apparatus determines the shooting angle from a correspondence between ratio and angle. It should be noted that the foregoing operations may be performed over the images within one shot and averaged within the shot to obtain a stable result. Under the assumption that the foreground person does not move, the resulting angle can also be used to estimate the camera position. Table 1 is a correspondence table between ratio and angle provided in an embodiment of the present application.
TABLE 1
[Table 1 appears as an image in the original publication; its ratio-to-angle correspondences are described in the following paragraph.]
In table 1, if the ratio is less than −1, the corresponding shooting angle is 0; if the ratio is between −1 and −0.8, the corresponding shooting angle is 0; if the ratio is between −0.8 and −0.4, the corresponding shooting angle is −30; if the ratio is between −0.4 and −0.25, the corresponding shooting angle is −30; and so on. The image processing apparatus may also perform image recognition processing on the first image to obtain the shooting height of the first image, the scene of the first image, and so on, which is not elaborated here.
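The projection-and-ratio computation described above can be sketched as follows. The ratio-to-angle breakpoints here are illustrative assumptions and do not reproduce Table 1; all function names are likewise assumptions.

```python
def project_onto_line(p, a, b):
    # Orthogonal projection of point p onto the line through a and b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

def facing_ratio(left_pupil, right_pupil, nose_tip):
    # Signed ratio in (-1, 1); 0 when the nose tip projects midway
    # between the pupils (first length == second length).
    proj = project_onto_line(nose_tip, left_pupil, right_pupil)
    d_left = ((proj[0] - left_pupil[0]) ** 2 + (proj[1] - left_pupil[1]) ** 2) ** 0.5
    d_right = ((proj[0] - right_pupil[0]) ** 2 + (proj[1] - right_pupil[1]) ** 2) ** 0.5
    return (d_right - d_left) / (d_right + d_left)

def ratio_to_angle(r):
    # Map the ratio to a shooting angle via illustrative breakpoints
    # (NOT the breakpoints of Table 1).
    for upper, angle in [(-0.8, -60), (-0.4, -30), (0.4, 0), (0.8, 30)]:
        if r < upper:
            return angle
    return 60
```

In practice, `facing_ratio` would be averaged over the frames of one shot before `ratio_to_angle` is applied, matching the per-shot averaging described above.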
The foregoing embodiment describes a scheme of selecting, from a plurality of background images, a first background image matching a first image, and fusing the first background image with the first foreground image of the first image to obtain a first target image. It should be understood that this is a background-replacement scheme for a single image. A background-replacement scheme for video is introduced below, taking a first video segment as an example. Fig. 3 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 3, the method includes:
301. the image processing apparatus acquires a first video segment.
The first video segment comprises a plurality of images belonging to the same shot. The plurality of images belonging to the same shot may be the continuous footage (i.e., multiple frames of images) captured by a video capture device such as a video camera from the time it starts capturing to the time it stops, or a segment between two cut points in a video. One possible implementation of step 301 is as follows: the image processing apparatus receives the first video segment uploaded by the user. For example, the user starts certain image processing software on the image processing apparatus and uploads the first video segment to the image processing software. As another example, the image processing apparatus logs in to a web page, and the user uploads the first video segment through the web page to a background server supporting the web page. Another possible implementation of step 301 is as follows: the image processing apparatus receives a selection instruction for the first video segment. For example, after the user starts certain image processing software on the image processing apparatus or logs in to a certain web page, the user selects the first video segment from a library that includes a plurality of video segments. Another possible implementation of step 301 is as follows: the image processing apparatus acquires the first video segment from another device (e.g., a server) via a network.
302. The image processing apparatus receives a background image selection instruction from a user.
The implementation of step 302 may be the same as the implementation of step 101. The embodiment of the present application does not limit the sequence of executing step 301 and step 302. That is, the image processing apparatus may execute step 301 and then step 302, or may execute step 302 and then step 301.
303. And selecting a first background image matched with the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
In some embodiments, after receiving the background image selection instruction, the image processing apparatus selects the first background image matching the first image from the plurality of background images without inputting an instruction by the user. In some embodiments, after the image processing apparatus receives the image fusion instruction from the user, step 303 and step 304 are performed.
One possible implementation of step 303 is as follows: selecting a target image from a plurality of images of the first video clip; selecting the first background image matching the target image from the plurality of background images. Optionally, the target image is any one of the plurality of images. Optionally, the target image is a key frame image of the plurality of images, and for example, an image with better quality is determined in a certain manner. The plurality of images includes the first image and at least one second image.
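The key-frame option above leaves the quality measure open. One plausible sketch, with the measure (variance of a Laplacian, a common sharpness proxy) and all names chosen for illustration only, is:

```python
import numpy as np

def sharpness(gray):
    # Variance of a 4-neighbour Laplacian of a 2-D grayscale array;
    # higher values indicate a sharper (better-focused) frame.
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

def select_target_image(frames):
    # Pick the frame with the highest sharpness score as the target image
    # used to select the first background image for the whole clip.
    return max(frames, key=sharpness)
```

Any other quality heuristic (exposure, face visibility, etc.) could be substituted for `sharpness` without changing the surrounding flow.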
In some embodiments, the image processing apparatus may display the first background image after performing step 303. That is, the user can preview the first background image.
304. And the image processing device performs image fusion processing on the first background image and the foreground image of at least one image in the first video clip to obtain a background replacement result of the first video clip.
In some embodiments, the plurality of images of the first video segment includes the first image and at least one second image; the background replacement result of the first video segment includes the first target image and at least one second target image, where each second target image may be obtained by the image processing apparatus performing image fusion processing on the first background image and the second foreground image of the corresponding second image, and the first target image is obtained by performing image fusion on the first foreground image of the first image and the first background image. The method flow of fig. 3 may be understood as a scheme for replacing the background of images in the first video segment.
In this implementation, the user can replace the background of at least two images in the first video segment by inputting a single background image selection instruction, which is convenient and fast.
A scheme for replacing the background of a whole video is described below, taking a target video as an example. Fig. 4 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 4, the method includes:
401. the image processing apparatus acquires a target video.
The target video may be divided into a plurality of video segments. One possible implementation of step 401 is as follows: and the image processing device receives the target video uploaded by the user. For example, after a user starts certain image processing software on the image processing device, the target video is uploaded to the image processing software. For another example, after the image processing apparatus logs in a web page, the target video is uploaded to a server supporting the web page through the web page. One possible implementation of step 401 is as follows: the image processing apparatus receives a selection instruction for a target video. For example, after a user starts certain image processing software on the image processing device or logs in a certain webpage, a target video in a background library is selected; the background library includes a plurality of videos. One possible implementation of step 401 is as follows: the image processing apparatus acquires a target video from another device (e.g., a server) via a network.
402. The image processing device divides a target video into a plurality of video segments.
The plurality of video segments includes the first video segment and at least one second video segment. In some embodiments, the image processing apparatus may perform shot segmentation on the target video in any manner, which is not limited in this application. For example, the image processing apparatus performs shot division according to the background of each frame image in the target video, that is, the background of each frame image in each video clip is the same.
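Since the disclosure leaves the shot-segmentation method open, the following is one minimal sketch: cut wherever the mean absolute difference between consecutive frames exceeds a threshold. The threshold and all names are assumptions for illustration.

```python
import numpy as np

def split_into_shots(frames, threshold=30.0):
    # Return lists of frame indices, one list per detected shot.
    # A cut is declared where consecutive frames differ strongly,
    # approximating "the background of each frame in a segment is the same".
    shots, current = [], [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            shots.append(current)
            current = []
        current.append(i)
    shots.append(current)
    return shots
```

Each returned index list would correspond to one video segment (the first video segment, a second video segment, and so on) in steps 403-405.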
403. And selecting a first background image matched with the first video clip and a second background image matched with each second video clip in the at least one second video clip from the plurality of background images based on the background image selection instruction.
In some embodiments, after the image processing apparatus receives the background image selection instruction, step 403 is executed, without the user inputting an instruction. In some embodiments, after the image processing apparatus receives the image fusion instruction from the user, steps 403 and 404 are performed.
In some embodiments, the image processing apparatus may select the background image matching each video clip from the plurality of background images in the same manner as the implementation manner of step 303.
404. The image processing device performs image fusion processing on the first background image and a foreground image of at least one image in the first video segment to obtain a background replacement result of the first video segment, and performs image fusion processing on a third foreground image in at least one third image included in each second video segment and a second background image corresponding to each second video segment to obtain a background replacement result of each second video segment.
405. And obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips.
The background replacement result of the target video includes the background replacement result of each video segment. The method flow of fig. 4 may be understood as a scheme for replacing the background of at least one image of the first video segment in the target video.
In this implementation, images in different video segments can be fused with different background images, so that more appropriate backgrounds can be replaced for different video segments.
Fig. 1, fig. 3, and fig. 4 describe flows in which the image processing apparatus alone performs image background replacement and video background replacement. The following describes a method flow in which the image processing apparatus and a server jointly perform image background replacement.
Fig. 5 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 5, the method includes:
501. the image processing apparatus receives a background image selection instruction from a user.
The implementation of step 501 may be the same as that of step 101. In some embodiments, before performing step 501, the image processing apparatus may perform the following operations: receiving from a server a mode selection interface that includes a first option and a second option, the first option being an option for manually selecting a background image and the second option being an option for automatically selecting a background image; sending the user's selection instruction for the second option to the server; and receiving from the server a background selection interface that includes options for a plurality of background libraries. The image processing apparatus sending the background image selection instruction to the server then includes: sending to the server the background image selection instruction for a target background library among the plurality of background image libraries, where the target background library includes the plurality of background images. For example, after starting the image processing software or logging in to a web page, the image processing apparatus receives the mode selection interface from the server and displays it.
502. The image processing apparatus transmits the first image or an instruction to select the first image to the server.
In some embodiments, the image processing apparatus may upload (i.e., send) its locally stored first image to the server over a network. In some embodiments, the image processing apparatus selects the first image through a web page it has logged in to; that is, the first image is already stored on the server.
503. The image processing apparatus transmits a background image selection instruction to the server.
The background image selection instruction instructs the server to select a background image matching the first image from the plurality of background images. The sequence of step 501, step 502 and step 503 is not limited.
504. The server selects a first background image matching the first image from the plurality of background images based on the background image selection instruction.
In some embodiments, the server executes step 504 as soon as it receives the background image selection instruction, without the image processing apparatus sending any other instruction. In some embodiments, the server executes steps 504 and 505 after receiving an image fusion instruction from the image processing apparatus.
505. The server performs image fusion processing on the first background image and the first foreground image in the first image to obtain a first target image.
506. The server transmits the first target image to the image processing apparatus.
507. The image processing device displays the first target image.
In this embodiment of the application, the server performs the image matching operation and the image fusion processing, which reduces the computational load on the image processing apparatus and saves time.
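For illustration, the image fusion processing of step 505 can be sketched as alpha compositing of the foreground over the matched background; the function name and the matte representation below are assumptions for this sketch, not details fixed by the application:

```python
import numpy as np

def fuse(background, foreground, alpha):
    """Alpha-composite a foreground over a background.

    background, foreground: float arrays of shape (H, W, 3), values in [0, 1].
    alpha: float array of shape (H, W, 1); 1 keeps the foreground pixel,
    0 keeps the background pixel, and fractional values blend the two.
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy 1x2 example: left pixel is foreground (white), right is background (black).
bg = np.zeros((1, 2, 3))
fg = np.ones((1, 2, 3))
alpha = np.array([[[1.0], [0.0]]])
fused = fuse(bg, fg, alpha)
```

Fractional alpha values along the subject boundary blend the two images and avoid hard cut-out edges.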
The following describes a video background replacement scheme, taking the first video segment as an example. Fig. 6 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 6, the method includes:
601. the image processing apparatus receives a background image selection instruction from a user.
The implementation of step 601 may be the same as the implementation of step 501.
602. The image processing apparatus transmits the first video clip or an instruction to select the first video clip to the server.
In some embodiments, the image processing apparatus may upload (i.e., send) its locally stored first video clip to the server over a network. In some embodiments, the image processing apparatus selects the first video clip through a web page it has logged in to; that is, the first video clip is already stored on the server.
603. The image processing apparatus transmits a background image selection instruction to the server.
The background image selection instruction instructs the server to select a background image matching the first video clip from the plurality of background images. The sequence of step 601, step 602, and step 603 is not limited.
604. The server selects a first background image matching the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
In some embodiments, the server executes step 604 as soon as it receives the background image selection instruction, without the image processing apparatus sending any other instruction. In some embodiments, steps 604 and 605 are performed after the image processing apparatus receives an image fusion instruction from the user.
The implementation of step 604 may be the same as the implementation of step 303.
605. The server performs image fusion processing on the first background image and the foreground image of at least one image in the first video clip to obtain a background replacement result of the first video clip.
606. The server sends the background replacement result of the first video clip to the image processing apparatus.
607. The image processing apparatus displays a background replacement result of the first video segment.
In this embodiment of the application, the server performs the image matching operation and the image fusion processing, which reduces the computational load on the image processing apparatus and saves time.
The following describes a video background replacement scheme, taking a target video as an example. Fig. 7 is a flowchart of another image processing method according to an embodiment of the present application. As shown in fig. 7, the method includes:
701. the image processing apparatus receives a background image selection instruction from a user.
The implementation of step 701 may be the same as the implementation of step 501.
702. The image processing apparatus transmits the target video or an instruction to select the target video to the server.
In some embodiments, the image processing apparatus may upload (i.e., send) its locally stored target video to the server over a network. In some embodiments, the image processing apparatus selects the target video through a web page it has logged in to; that is, the target video is already stored on the server.
703. The image processing apparatus transmits a background image selection instruction to the server.
The background image selection instruction instructs the server to select a background image matching each video clip in the target video from the plurality of background images. The sequence of step 701, step 702 and step 703 is not limited.
704. The server divides the target video into a plurality of video clips.
The plurality of video segments includes the first video segment and at least one second video segment. In some embodiments, the server may perform shot segmentation on the target video in any manner, which is not limited in this application.
705. The server selects a first background image matched with the first video clip and a second background image matched with each second video clip in at least one second video clip from the plurality of background images based on the background image selection instruction.
In some embodiments, the server executes step 705 as soon as it receives the background image selection instruction, without the image processing apparatus sending any other instruction. In some embodiments, step 705 is performed after the image processing apparatus receives an image fusion instruction from the user.
706. The server performs image fusion processing on the first background image and a foreground image of at least one image in the first video clip to obtain a background replacement result of the first video clip, and performs image fusion processing on a third foreground image in at least one third image included in each second video clip and a second background image corresponding to each second video clip to obtain a background replacement result of each second video clip.
707. The server obtains the background replacement result of the target video based on the background replacement result of each of the plurality of video clips.
708. The server sends the background replacement result of the target video to the image processing device.
709. The image processing apparatus displays a background replacement result of the target video.
In this embodiment of the application, the server performs the image matching operation and the image fusion processing, which reduces the computational load on the image processing apparatus and saves time.
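The shot segmentation used in step 704 is expressly not limited by the application; one minimal sketch, assuming grayscale frames and an illustrative threshold, detects cuts from histogram differences between consecutive frames:

```python
import numpy as np

def segment_shots(frames, threshold=0.5, bins=16):
    """Split a list of grayscale frames (uint8 arrays) into shots.

    A shot boundary is declared whenever the L1 distance between the
    normalized histograms of two consecutive frames exceeds `threshold`.
    Returns a list of (start, end) index pairs, end exclusive.
    """
    boundaries = [0]
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / max(hist.sum(), 1)
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            boundaries.append(i)   # abrupt histogram change -> cut before frame i
        prev_hist = hist
    boundaries.append(len(frames))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Three dark frames followed by three bright frames -> two shots.
frames = [np.zeros((4, 4), dtype=np.uint8)] * 3 + \
         [np.full((4, 4), 255, dtype=np.uint8)] * 3
shots = segment_shots(frames)
```

Production systems typically use more robust detectors (e.g., learned features or edge-change ratios); the histogram difference only illustrates the interface of step 704.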
The image processing methods provided in the embodiments of the present application have been described above. The following describes the functions of the components of an image processing apparatus that can perform these methods. Fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 8, the image processing apparatus may include:
an instruction receiving unit 801 configured to receive a background image selection instruction from a user;
an image matching unit 802 configured to select a first background image matching the first image from the plurality of background images based on the background image selection instruction;
an image fusion unit 803, configured to perform image fusion processing on the first background image and the first foreground image in the first image to obtain a first target image.
In one possible implementation, the background image selection instruction is used to indicate a target background library of at least one selectable background library, and the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one other background image among the plurality of background images.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image of the plurality of background images other than the first background image.
In a possible implementation manner, the image matching unit 802 is specifically configured to perform image recognition processing on the first image to obtain image recognition information; the image recognition information includes information describing at least one characteristic of the first image; determining matching degrees of the at least one background image and the first image respectively based on the annotation information and the image identification information of the at least one background image in the plurality of background images; and selecting the first background image from the at least one background image based on the matching degree between the at least one background image and the first image.
In one possible implementation, the image recognition information includes at least one of: a scene of the first image, a camera position of the first image, and a shooting angle of the first image.
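As a hedged sketch of how the matching degree of the image matching unit 802 might combine such recognition information with annotation information — the attribute names, weights, and library entries below are illustrative assumptions, not details from the application:

```python
def matching_degree(recognition, annotation, weights=None):
    """Score how well a background image's annotation matches the recognized
    attributes (scene, camera height, shooting angle) of the input image.
    Each attribute contributes its weight when the two values agree."""
    weights = weights or {"scene": 0.6, "camera_height": 0.2, "shooting_angle": 0.2}
    score = 0.0
    for key, w in weights.items():
        if recognition.get(key) is not None and recognition.get(key) == annotation.get(key):
            score += w
    return score

def select_background(recognition, backgrounds):
    """Pick the background whose annotation has the highest matching degree."""
    return max(backgrounds, key=lambda b: matching_degree(recognition, b["annotation"]))

rec = {"scene": "beach", "camera_height": "eye-level", "shooting_angle": "frontal"}
library = [
    {"name": "office", "annotation": {"scene": "office", "camera_height": "eye-level"}},
    {"name": "sunset", "annotation": {"scene": "beach",  "camera_height": "eye-level"}},
]
best = select_background(rec, library)
```

The background with the highest score is returned as the first background image; ties could be broken by any secondary criterion.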
In one possible implementation manner, the image processing apparatus further includes: a determining unit 804, configured to determine a target position of the first foreground image relative to the first background image;
the image fusion unit 803 is specifically configured to perform image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image with respect to the first background image, so as to obtain the first target image.
In a possible implementation manner, the determining unit 804 is specifically configured to perform ground identification processing on the first background image to obtain location information of a ground area in the first background image;
determining the target position of the first foreground image relative to the first background image based on the position information of the ground area; the target position is determined such that a target area in the second image relative to the first image is located within the ground area, where the target area includes the area in contact with the ground in the first image.
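Assuming the ground area is summarized by an axis-aligned box (x0, y0, x1, y1) in background coordinates, the placement constraint above can be sketched as follows; the centering policy and all names are illustrative assumptions:

```python
def place_on_ground(fg_size, ground_box):
    """Return a top-left (x, y) for pasting a foreground of size
    (width, height) so that its bottom edge, i.e. the area in contact
    with the ground, falls inside the ground region of the background."""
    fg_w, fg_h = fg_size
    x0, y0, x1, y1 = ground_box
    x = x0 + ((x1 - x0) - fg_w) // 2   # center horizontally over the ground
    y = (y0 + y1) // 2 - fg_h          # bottom edge sits mid-way down the ground area
    return x, y

# A 10x40 foreground placed over a ground region spanning rows 60..100.
pos = place_on_ground((10, 40), (0, 60, 100, 100))
```

Any policy that keeps the contact area inside the ground region would satisfy the constraint; centering is only one convenient choice.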
In a possible implementation manner, the determining unit 804 is specifically configured to process the first foreground image to obtain position information of a first subject region in the first foreground image;
obtaining the target position of the first foreground image relative to the first background image based on the position information of the first subject region; the target position is determined such that a specific position of the first subject region of the second image is located within a head-up region of the first image; the head-up region includes a vanishing line obtained from a plurality of vanishing points, where each vanishing point is a point at which two lines that are parallel in the real world intersect in the first image.
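A vanishing point as described above can be computed in homogeneous coordinates, where the image line through two points is their cross product and the intersection of two lines is the cross product of the lines; the corridor coordinates below are an illustrative assumption:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous coordinates of the image line through points p and q."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(line_a, line_b):
    """Intersection of two homogeneous lines, e.g. the vanishing point of
    two image lines that are parallel in the real world."""
    pt = np.cross(line_a, line_b)
    return pt[:2] / pt[2]

# Two corridor edges, parallel in the world, converging toward (50, 20).
vp = intersection(line_through((0, 100), (50, 20)),
                  line_through((100, 100), (50, 20)))
```

Fitting a line through several such vanishing points yields the vanishing line, and the head-up region can then be taken as a band around it.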
In one possible implementation manner, the image processing apparatus further includes: a display unit 805 for displaying a mode selection interface; the mode selection interface comprises a first option and a second option; the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting a background image; the instruction receiving unit is further configured to receive a selection instruction for the second option.
Optionally, the display unit 805 is further configured to display a plurality of background image libraries in response to a selection instruction for the second option; an instruction receiving unit 801, configured to specifically receive the background image selection instruction from the user for a target background library of the plurality of background image libraries; the target background library includes the plurality of background images.
In one possible implementation manner, the image processing apparatus further includes: an obtaining unit 806, configured to obtain a first video clip, where the first video clip includes a plurality of images belonging to a same shot, and the plurality of images include the first image; the image matching unit 802 is specifically configured to select, based on the background image selection instruction, a first background image that matches the plurality of images of the first video clip from the plurality of background images.
In one possible implementation, the plurality of images includes the first image and at least one second image; the image fusion unit 803 is further configured to perform image fusion processing on the first background image and the second foreground image of each of the at least one second image to obtain at least one second target image, where the background replacement result of the first video segment includes the first target image and the at least one second target image.
In one possible implementation manner, the image processing apparatus further includes: a video dividing unit 807, configured to perform shot division on a target video to obtain a plurality of video segments, where the plurality of video segments include the first video segment and at least one second video segment;
an image matching unit 802, further configured to select, based on the background image selection instruction, a second background image that matches each of the at least one second video segment from the plurality of background images;
the image fusion unit 803 is further configured to perform image fusion processing on a third foreground image in at least one third image included in each second video segment and a second background image corresponding to each second video segment, so as to obtain a background replacement result of each second video segment;
an image processing unit 808, configured to obtain a background replacement result of the target video based on the background replacement result of each of the plurality of video segments.
The functions of the components of the server of the image processing method that can be provided by the embodiments of the present application are described below. Fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 9, the server may include:
a receiving unit 901 configured to receive a background image selection instruction from the image processing apparatus;
a processing unit 902, configured to select, based on the background image selection instruction, a first background image matching the first image from the plurality of background images; performing image fusion processing on the first background image and a first foreground image in the first image to obtain a first target image;
a sending unit 903, configured to send the first target image to the image processing apparatus.
In one possible implementation, the background image selection instruction is used to indicate a target background library of at least one selectable background library, and the plurality of background images are included in the target background library; or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and at least one other background image among the plurality of background images.
In one possible implementation manner, the matching degree between the first background image and the first image is not lower than the matching degree between the first image and any background image of the plurality of background images other than the first background image.
In a possible implementation manner, the processing unit 902 is specifically configured to perform image recognition processing on the first image to obtain image recognition information; the image recognition information includes information describing at least one characteristic of the first image; determining matching degrees of the at least one background image and the first image respectively based on the annotation information and the image identification information of the at least one background image in the plurality of background images; and selecting the first background image from the at least one background image based on the matching degree between the at least one background image and the first image.
In one possible implementation, the image recognition information includes at least one of: a scene of the first image, an imaging angle of the first image, and an imaging height of the first image.
In a possible implementation manner, the processing unit 902 is further configured to determine a target position and/or size of the first foreground image relative to the first background image; the processing unit 902 is specifically configured to perform image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image, so as to obtain the first target image.
In a possible implementation manner, the processing unit 902 is specifically configured to perform ground identification processing on the first background image to obtain location information of a ground area in the first background image;
determining the target position of the first foreground image relative to the first background image based on the position information of the ground area; the target position is determined such that a target area in the second image relative to the first image is located within the ground area, where the target area includes the area in contact with the ground in the first image.
In a possible implementation manner, the processing unit 902 is specifically configured to process the first foreground image to obtain position information of a first main body region in the first foreground image;
obtaining the target position of the first foreground image relative to the first background image based on the position information of the first main body region; the target position is determined such that a specific position of the first subject region of the second image is located within a head-up region of the first image; the head-up region includes a vanishing line obtained from a plurality of vanishing points, where each vanishing point is a point at which two lines that are parallel in the real world intersect in the first image.
In a possible implementation manner, the sending unit 903 is further configured to send a mode selection interface to the image processing apparatus; the mode selection interface comprises a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image; the receiving unit 901 is further configured to receive a selection instruction for the second option sent by the image processing apparatus. Optionally, the sending unit 903 is further configured to send a background selection interface including options of a plurality of background libraries to the image processing apparatus in response to a selection instruction for the second option; a receiving unit 901, configured to specifically receive the background image selection instruction from the image processing apparatus for a target background library of the plurality of background image libraries; the target background library includes the plurality of background images.
In one possible implementation manner, the server further includes: an acquiring unit 904, configured to acquire a first video clip, where the first video clip includes a plurality of images belonging to a same shot, and the plurality of images includes the first image; the processing unit 902 is specifically configured to select the first background image matching with the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
In a possible implementation manner, the processing unit 902 is specifically configured to select a target image from the plurality of images of the first video segment, and to select the first background image matching the target image from the plurality of background images. Optionally, the target image is any one of the plurality of images. Optionally, the target image is a key frame of the plurality of images, for example an image determined in some manner to have better quality.
In one possible implementation, the plurality of images includes the first image and at least one second image; a processing unit 902, further configured to perform image fusion processing on the first background image and a second foreground image of each of the at least one second image to obtain at least one second target image, where a background replacement result of the first video segment includes the first target image and the at least one second target image; the sending unit 903 is further configured to send a background replacement result of the first video segment to the image processing apparatus.
In a possible implementation manner, the processing unit 902 is further configured to perform shot segmentation on the target video to obtain a plurality of video segments, where the plurality of video segments include the first video segment and at least one second video segment; select, based on the background image selection instruction, a second background image matching each of the at least one second video segment from the plurality of background images; perform image fusion processing on a third foreground image in at least one third image included in each second video segment and the second background image corresponding to that second video segment to obtain a background replacement result of each second video segment; and obtain a background replacement result of the target video based on the background replacement result of each of the plurality of video segments. The sending unit 903 is further configured to send the background replacement result of the target video to the image processing apparatus.
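The per-shot flow of this implementation — segment the target video, match one background per shot, fuse every frame, and concatenate — can be sketched with the concrete operations injected as callables; all names below are assumptions for illustration:

```python
def replace_video_background(frames, split_shots, match_background,
                             extract_foreground, fuse):
    """Replace the background of a whole video, shot by shot.

    split_shots(frames)       -> list of (start, end) index pairs, end exclusive
    match_background(shot)    -> the background matched to one shot's frames
    extract_foreground(frame) -> (foreground, alpha) for one frame
    fuse(bg, fg, alpha)       -> one fused output frame
    """
    output = []
    for start, end in split_shots(frames):
        shot = frames[start:end]
        background = match_background(shot)   # one background per shot
        for frame in shot:
            foreground, alpha = extract_foreground(frame)
            output.append(fuse(background, foreground, alpha))
    return output

# Toy run with stand-in helpers: two shots of three "frames" each.
result = replace_video_background(
    list(range(6)),
    split_shots=lambda f: [(0, 3), (3, 6)],
    match_background=lambda shot: shot[0],
    extract_foreground=lambda fr: (fr, None),
    fuse=lambda bg, fg, alpha: (bg, fg),
)
```

Because the background is chosen once per shot, all frames within a shot share a consistent replacement, which is the behavior the method flow describes.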
In a possible implementation manner, the receiving unit 901 is further configured to receive the first image from the image processing apparatus, or receive a selection instruction for the first image from the image processing apparatus.
In a possible implementation manner, the receiving unit 901 is further configured to receive the first video segment from the image processing apparatus, or receive a selection instruction for the first video segment from the image processing apparatus.
In a possible implementation manner, the receiving unit 901 is further configured to receive the target video from the image processing apparatus, or receive a selection instruction for the target video from the image processing apparatus.
Fig. 10 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application. As shown in fig. 10, the image processing apparatus includes:
a sending unit 1001 configured to send a background image selection instruction to a server, where the background image selection instruction instructs the server to select a background image matching a first image from a plurality of background images;
a receiving unit 1002, configured to receive a first target image from the server;
a display unit 1003 for displaying the first target image; the first target image is obtained by the server performing image fusion processing on a first background image and a first foreground image in the first image, and the first background image is a background image selected by the server from the plurality of background images and matched with the first image.
In a possible implementation manner, the sending unit 1001 is further configured to send the first image to the server, or send a selection instruction for the first image to the server.
In a possible implementation manner, the sending unit 1001 is further configured to send the first video segment to the server, or send a selection instruction for the first video segment to the server; the first video clip includes a plurality of images belonging to the same shot, and the plurality of images include the first image.
In one possible implementation, the plurality of images includes the first image and at least one second image; a receiving unit 1002, further configured to receive a background replacement result of the first video segment from the server; the background replacement result of the first video clip comprises the first target image and at least one second target image; the at least one second target image is obtained by performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image.
In a possible implementation manner, the sending unit 1001 is further configured to send the target video to the server, or send a selection instruction for the target video to the server; the target video comprises the first video segment and at least one second video segment; a receiving unit 1002, further configured to receive a background replacement result of the target video from the server; the background replacement result of the target video includes the background replacement result of the first video segment and the background replacement result of each second video segment.
In a possible implementation manner, the receiving unit 1002 is further configured to receive a mode selection interface from the server, where the mode selection interface includes a first option and a second option, the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting a background image; a sending unit 1001, configured to send a selection instruction for the second option to the server; a receiving unit 1002, further configured to receive a background selection interface including options of a plurality of background libraries from the server; a transmitting unit 1001 configured to transmit the background image selection instruction for a target background library among the plurality of background image libraries to the server; the target background library includes the plurality of background images.
It should be understood that the above division of the units of the image processing apparatus and the server is merely a division of logical functions; in actual implementation, the units may be wholly or partially integrated into one physical entity, or may be physically separate. For example, the above units may be separate processing elements, may be integrated in a chip, or may be stored in a storage element of the controller in the form of program code that a processing element of the processor calls to execute the functions of the above units. In addition, the units may be integrated together or implemented independently. The processing element may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method or units may be completed by hardware integrated logic circuits in a processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field-programmable gate arrays (FPGAs), and so on.
Fig. 11 is a schematic structural diagram of another server according to an embodiment of the present disclosure. The server 1100 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 1122 (e.g., one or more processors), a memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing an application program 1142 or data 1144. The memory 1132 and the storage medium 1130 may be transient storage or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Further, the central processing unit 1122 may communicate with the storage medium 1130 to execute the series of instruction operations in the storage medium 1130 on the server 1100. The server 1100 may perform the image processing methods provided by the present application.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
The steps performed by the server in the above-described embodiments may be based on the server structure shown in fig. 11. Specifically, the central processing unit 1122 may implement the functions of the processing unit 902 in fig. 9, and the input/output interface 1158 may implement the functions of the receiving unit 901 and the sending unit 903 in fig. 9.
Fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device 120 includes a processor 1201, a memory 1202, a communication interface 1203, and an input-output device 1204; the processor 1201, the memory 1202, and the communication interface 1203 are connected to each other by a bus. The terminal device in fig. 12 may be the image processing apparatus in the foregoing embodiment.
The memory 1202 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and is used for storing related instructions and data. The communication interface 1203 is used for receiving and sending data. The input/output device 1204 may include input devices such as a keyboard, a mouse, or a touch screen, and output devices such as a display screen. The user may input instructions, such as the background image selection instruction, to the terminal device through the input device. The output device may display an application interface of the image processing software and other content.
The processor 1201 may be one or more Central Processing Units (CPUs); where the processor 1201 is a single CPU, the CPU may be a single-core CPU or a multi-core CPU. The steps performed by the image processing apparatus in the foregoing embodiments may be implemented based on the terminal device structure shown in Fig. 12. Specifically, the input/output device 1204 may implement the functions of the instruction receiving unit 801, the display unit 805, the acquisition unit 806, and the display unit 1003; the processor 1201 may implement the functions of the image matching unit 802, the image fusion unit 803, the determination unit 804, the video division unit 807, and the image processing unit 808; and the communication interface 1203 may implement the functions of the transmitting unit 1001 and the receiving unit 1002.
In an embodiment of the present application, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the image processing method provided by the foregoing embodiment.
The present application provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the image processing method provided by the foregoing embodiments.
While the present application has been described with reference to specific embodiments, the scope of protection is not limited thereto, and any person skilled in the art can readily conceive of equivalent modifications or substitutions within the technical scope disclosed herein. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
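As a non-authoritative illustration only, the core flow referenced above (select the background image that best matches the input image, then fuse it with the foreground) can be sketched as follows. All function names, dictionary fields, and the simple attribute-counting score are hypothetical stand-ins, not the claimed implementation:

```python
# Hypothetical sketch: pick the background whose annotation information best
# matches the image recognition information of the input image, then fuse the
# foreground onto it. All names and the scoring rule are illustrative only.

def match_score(recognition_info: dict, annotation: dict) -> int:
    """Count how many annotated attributes (scene, shooting angle,
    shooting height, ...) agree with the recognized attributes."""
    return sum(1 for k, v in recognition_info.items() if annotation.get(k) == v)

def select_background(recognition_info: dict, backgrounds: list) -> dict:
    """Return the background with the highest matching degree with the image."""
    return max(backgrounds, key=lambda bg: match_score(recognition_info, bg["annotation"]))

def fuse(foreground, background: dict, position: tuple) -> dict:
    """Placeholder for image fusion: overlay the foreground on the
    background at the given target position."""
    return {"background": background["name"], "foreground": foreground, "at": position}

# Toy usage: two annotated backgrounds, one recognized indoor image.
backgrounds = [
    {"name": "beach",  "annotation": {"scene": "outdoor", "angle": "eye-level"}},
    {"name": "office", "annotation": {"scene": "indoor",  "angle": "eye-level"}},
]
recognition_info = {"scene": "indoor", "angle": "eye-level"}  # from image recognition
best = select_background(recognition_info, backgrounds)
result = fuse("person_mask", best, position=(40, 80))
```

In a real system the score would come from scene classification and annotation similarity, and `fuse` would be an actual compositing step (e.g., alpha blending or Poisson blending) rather than a dictionary.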

Claims (16)

1. An image processing method, comprising:
receiving a background image selection instruction from a user;
selecting a first background image matching a first image from a plurality of background images based on the background image selection instruction;
and carrying out image fusion processing on the first background image and the first foreground image in the first image to obtain a first target image.
2. The method of claim 1, wherein the background image selection instruction is used to indicate a target background library of at least one selectable background library, the plurality of background images being included in the target background library;
or, the background image selection instruction is used to indicate target scene information, and the scene of the first background image matches with the target scene information.
3. The method according to claim 1 or 2, wherein a degree of matching between the first background image and the first image is not lower than a degree of matching between the first image and at least one background image of the plurality of background images other than the first background image.
4. The method according to any one of claims 1 to 3, wherein the degree of matching between the first background image and the first image is not lower than the degree of matching between the first image and any background image of the plurality of background images other than the first background image.
5. The method according to any one of claims 1 to 4, wherein the selecting a first background image matching the first image from a plurality of background images based on the background image selection instruction comprises:
performing image recognition processing on the first image to obtain image recognition information, wherein the image recognition information comprises information describing at least one feature of the first image;
determining a degree of matching between at least one background image of the plurality of background images and the first image based on annotation information of the at least one background image and the image recognition information;
and selecting the first background image from the at least one background image based on the degree of matching between each of the at least one background image and the first image.
6. The method of claim 5, wherein the image recognition information comprises at least one of: the scene of the first image, the shooting angle of the first image and the shooting height of the first image.
7. The method according to any one of claims 1 to 6, wherein before performing image fusion processing on the first background image and the first foreground image in the first image to obtain the first target image, the method further comprises:
determining a target position of the first foreground image relative to the first background image;
the image fusion processing of the first background image and the first foreground image in the first image to obtain a first target image includes:
and performing image fusion processing on the first background image and the first foreground image based on the target position of the first foreground image relative to the first background image to obtain the first target image.
8. The method of claim 7, wherein determining the target position of the first foreground image relative to the first background image comprises:
performing ground recognition processing on the first background image to obtain position information of a ground area in the first background image;
and determining the target position of the first foreground image relative to the first background image based on the position information of the ground area; wherein the target position is determined such that a target area in the first target image is located within the ground area, the target area comprising an area in contact with the ground in the first image.
9. The method of claim 7, wherein determining the target position of the first foreground image relative to the first background image comprises:
processing the first foreground image to obtain position information of a first subject region in the first foreground image;
and obtaining the target position of the first foreground image relative to the first background image based on the position information of the first subject region; wherein the target position is determined such that a specific position of the first subject region in the first target image is located within a head-up region of the first image; the head-up region comprises a vanishing line obtained from a plurality of vanishing points, any one of which is a point at which two lines that are parallel in the real world intersect in the first image.
10. The method of any one of claims 1 to 9, wherein prior to receiving a background image selection instruction from a user, the method further comprises:
displaying a mode selection interface; the mode selection interface comprises a first option and a second option; the first option is an option for manually selecting a background image, and the second option is an option for automatically selecting the background image;
receiving a selection instruction for the second option.
11. The method according to any one of claims 1 to 10, further comprising:
acquiring a first video clip, wherein the first video clip comprises a plurality of images belonging to the same shot, and the plurality of images comprise the first image;
the selecting, based on the background image selection instruction, a first background image matching the first image from a plurality of background images includes:
and selecting a first background image matched with the plurality of images of the first video clip from the plurality of background images based on the background image selection instruction.
12. The method of claim 11, wherein the plurality of images includes the first image and at least one second image; the method further comprises the following steps:
and performing image fusion processing on the first background image and a second foreground image of each second image in the at least one second image to obtain at least one second target image, wherein the background replacement result of the first video clip comprises the first target image and the at least one second target image.
13. The method according to claim 11 or 12, characterized in that the method further comprises:
performing shot division on a target video to obtain a plurality of video clips, wherein the plurality of video clips comprise the first video clip and at least one second video clip;
selecting a second background image matched with each second video clip in the at least one second video clip from the plurality of background images based on the background image selection instruction;
performing image fusion processing on a third foreground image in at least one third image included in each second video clip and a second background image corresponding to each second video clip to obtain a background replacement result of each second video clip;
and obtaining a background replacement result of the target video based on the background replacement result of each video clip in the plurality of video clips.
14. An image processing apparatus characterized by comprising:
an instruction receiving unit, configured to receive a background image selection instruction from a user;
an image matching unit, configured to select a first background image matching a first image from a plurality of background images based on the background image selection instruction;
and an image fusion unit, configured to perform image fusion processing on the first background image and a first foreground image in the first image to obtain a first target image.
15. An electronic device comprising a memory and a processor, wherein the memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory, such that the processor performs the method of any one of claims 1 to 13.
16. A computer-readable storage medium, in which a computer program is stored, the computer program comprising program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 13.
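The video-level flow of claims 11 to 13 (divide the target video into single-shot clips, select one background per clip, and replace the background of every frame in that clip) can be sketched as follows. This is a toy illustration under stated assumptions: the frame-difference shot test and all names are hypothetical, not the claimed shot-division or fusion method:

```python
# Illustrative sketch of claims 11-13: split a video into shots, choose one
# background per shot, then fuse that background with every frame's foreground.
# Frames are modeled as numbers; a large jump stands in for a scene change.

def split_into_shots(frames, threshold=100):
    """Group consecutive frames into clips; start a new clip when the
    difference between adjacent frames exceeds `threshold` (a stand-in
    for a real shot-boundary detector)."""
    clips, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if abs(cur - prev) > threshold:  # toy "shot boundary" test
            clips.append(current)
            current = []
        current.append(cur)
    clips.append(current)
    return clips

def replace_background(clip, background):
    """Fuse the chosen background with the foreground of each frame in the
    clip, yielding one target image per frame (claim 12)."""
    return [("fused", frame, background) for frame in clip]

# Toy usage: two shots, each assigned its own matched background.
frames = [10, 12, 11, 300, 305, 303]
clips = split_into_shots(frames)
result = [replace_background(c, bg) for c, bg in zip(clips, ["park", "office"])]
```

Using one background per clip keeps the replacement temporally consistent within a shot, which is the point of dividing the video by shot before matching.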
CN202011066149.1A 2020-09-30 2020-09-30 Image processing method and related product Pending CN112261320A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011066149.1A CN112261320A (en) 2020-09-30 2020-09-30 Image processing method and related product

Publications (1)

Publication Number Publication Date
CN112261320A (en) 2021-01-22

Family

ID=74233572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011066149.1A Pending CN112261320A (en) 2020-09-30 2020-09-30 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN112261320A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
CN105893419A (en) * 2015-11-30 2016-08-24 乐视致新电子科技(天津)有限公司 Generation device, device and equipment of multimedia photo, and mobile phone
CN107529096A (en) * 2017-09-11 2017-12-29 广东欧珀移动通信有限公司 Image processing method and device
US20190205650A1 (en) * 2017-12-28 2019-07-04 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and medium
CN110209861A (en) * 2019-05-21 2019-09-06 北京字节跳动网络技术有限公司 Image processing method, device, electronic equipment and computer readable storage medium
US20190355172A1 (en) * 2018-05-15 2019-11-21 Adobe Inc. Synthesis of composite images having virtual backgrounds
US20200234451A1 (en) * 2019-01-22 2020-07-23 Fyusion, Inc. Automatic background replacement for single-image and multi-view captures
US20200258236A1 (en) * 2017-10-24 2020-08-13 Hewlett-Packard Development Company, L.P. Person segmentations for background replacements
CN111724407A (en) * 2020-05-25 2020-09-29 北京市商汤科技开发有限公司 Image processing method and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Benbin (刘本斌): "Design of an Automatic Batch Processing System for Image Background Replacement" (一种图像换背景的自动批处理系统设计), Fujian Computer (《福建电脑》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112689196A (en) * 2021-03-09 2021-04-20 北京世纪好未来教育科技有限公司 Interactive video playing method, player, equipment and storage medium
CN112689196B (en) * 2021-03-09 2021-06-11 北京世纪好未来教育科技有限公司 Interactive video playing method, player, equipment and storage medium
WO2022199710A1 (en) * 2021-03-23 2022-09-29 影石创新科技股份有限公司 Image fusion method and apparatus, computer device, and storage medium
CN113284080A (en) * 2021-06-17 2021-08-20 Oppo广东移动通信有限公司 Image processing method and device, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN112261320A (en) Image processing method and related product
CN107980221B (en) Compositing and scaling angularly separated sub-scenes
TWI543610B (en) Electronic device and image selection method thereof
CN110300264B (en) Image processing method, image processing device, mobile terminal and storage medium
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN111541907B (en) Article display method, apparatus, device and storage medium
CN110072046B (en) Image synthesis method and device
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN105096354A (en) Image processing method and device
US10748000B2 (en) Method, electronic device, and recording medium for notifying of surrounding situation information
CN112532882B (en) Image display method and device
WO2022242397A1 (en) Image processing method and apparatus, and computer-readable storage medium
US9141191B2 (en) Capturing photos without a camera
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN111629242B (en) Image rendering method, device, system, equipment and storage medium
JP2017016166A (en) Image processing apparatus and image processing method
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112887603B (en) Shooting preview method and device and electronic equipment
CN115278084A (en) Image processing method, image processing device, electronic equipment and storage medium
JP7293362B2 (en) Imaging method, device, electronic equipment and storage medium
CN114387157A (en) Image processing method and device and computer readable storage medium
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN112258435A (en) Image processing method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210122