CN112991157B - Image processing method, image processing device, electronic equipment and storage medium - Google Patents
Image processing method, image processing device, electronic equipment and storage medium
- Publication number: CN112991157B (application CN202110339447.1A)
- Authority: CN (China)
- Prior art keywords: image, target, background, group, target object
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/04: Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
- G06T5/77: Image enhancement or restoration; retouching, inpainting, scratch removal
- G06T7/11: Image analysis; region-based segmentation
- G06T7/194: Image analysis; segmentation involving foreground-background segmentation
- G06T2207/10004: Image acquisition modality; still image, photographic image
- G06T2207/10024: Image acquisition modality; color image
- G06T2207/30196: Subject of image; human being, person
Abstract
The present disclosure provides an image processing method, an apparatus, an electronic device, and a storage medium, the method including: acquiring first images shot by a plurality of shooting devices in different shooting areas respectively; determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different designated positions of different shooting areas respectively; and generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of the mobile internet and image processing technology, photographing software has become increasingly powerful and now supports special-effect processing such as face beautification. However, current shooting scenarios mostly involve users photographing themselves or being photographed by others; it remains difficult for users located in different places to take a group photo together.
Disclosure of Invention
In view of the above, the present disclosure provides at least an image processing method, an image processing apparatus, an electronic device, and a storage medium, so as to enable group photos for users located in different places.
In a first aspect, the present disclosure provides an image processing method, including:
acquiring first images respectively shot by a plurality of shooting devices in different shooting areas;
determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different designated positions of different shooting areas respectively;
and generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background.
In the embodiments of the present disclosure, after a plurality of target objects located in different shooting areas are photographed, a group photo of the target objects is generated by extracting the region image where each target object is located and fusing it with a predetermined target background. A group photo effect can therefore be achieved even when the target objects are in different places, meeting users' group photo needs.
In a possible embodiment, the determining, based on the first images, the target images corresponding to the target objects in the first images respectively includes:
performing image segmentation processing on any first image to obtain a first image area where a target object is located and a second image area except the target object;
and performing first processing on the first image area, and/or performing second processing on the second image area to obtain a target image corresponding to a target object in the first image.
In a possible implementation, the performing the first processing on the first image region includes:
adding a filter effect with a set style to the first image area; and/or performing beautification processing on the face of the target object in the first image area, wherein the beautification processing comprises beautifying and/or deforming processing;
the second processing of the second image area includes:
adjusting pixel values of the second image area; and/or adjusting the transparency of the second image area.
In this method, adding a filter effect of a set style to the first image area where the target object is located, and/or beautifying the target object, adjusts how the target object from the original first image is displayed, optimizing the appearance of each target object in the group photo. In addition, adjusting the pixel values and/or transparency of the image area other than the target object allows each target object to be fused better into the image with the target background.
In a possible implementation, the size of each target image is the same as the size of the background image;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and performing fusion processing on each target image and the background image to obtain a group photo image containing the target background and each target object.
Here, by setting the size of each target image to be the same as the size of the background image, better fusion of each target object in the image with the target background is facilitated.
In a possible implementation manner, the fusing the target images and the background image to obtain a group image including the target background and the target objects includes:
traversing pixel points at a first position in the target image;
if the pixel value of the pixel point at the first position is a first pixel value, determining the pixel value at the same position as the first position in the group image as the first pixel value;
if the pixel value of the pixel point at the first position is the second pixel value, identifying a third pixel value of the pixel point at the first position in the background image, and determining the pixel value at the same position as the first position in the group image as the third pixel value.
By the above method, the position of each target object in the generated group image is consistent with its position in the corresponding target image, which improves the coordination and display effect of the group photo.
In one possible implementation, the fusing the target images and the background image to obtain a group image including the target background and the target objects includes:
and determining the pixel value at the same position in the group image based on the transparency and the pixel value at the same position in each target image and the background image.
In a possible embodiment, the method further comprises:
acquiring, in response to a selection operation on a first target object among the target objects, a first background image corresponding to the selection operation;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and generating a first group image including the first background image and each target object based on each target image and the first background image.
Here, the user can customize the background image as needed and thus obtain a group image containing the customized background, which enriches the variety of group photo effects and meets the group photo needs of different users.
The following descriptions of the effects of the apparatus, the electronic device, and the like refer to the description of the above method, and are not repeated here.
In a second aspect, the present disclosure provides an image processing apparatus comprising:
an acquisition module, configured to acquire first images respectively shot by a plurality of shooting devices in different shooting areas;
the determining module is used for determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different appointed positions of different shooting areas respectively;
and the generation module is used for generating a group image comprising the target background and each target object based on each target image and a predetermined background image of the target background.
In a possible embodiment, when performing the determining, based on the first images, target images corresponding to the target objects in the first images, the determining module is specifically configured to:
performing image segmentation processing on any first image to obtain a first image area where a target object is located and a second image area except the target object; and performing first processing on the first image area, and/or performing second processing on the second image area to obtain a target image corresponding to a target object in the first image.
In a possible implementation manner, when performing the first processing on the first image region, the determining module is specifically configured to:
adding a filter effect with a set style to the first image area; and/or performing beautification processing on the face of the target object in the first image area, wherein the beautification processing comprises beautifying and/or deforming processing;
the determining module, when performing the second processing on the second image region, is specifically configured to:
adjusting pixel values of the second image area; and/or adjusting the transparency of the second image area.
In a possible implementation, the size of each target image is the same as the size of the background image;
the generating module is specifically configured to, when executing the background image based on each target image and a predetermined target background to generate a group image including the target background and each target object:
and performing fusion processing on each target image and the background image to obtain a group photo image containing the target background and each target object.
In a possible implementation manner, when the generating module performs the fusion processing on each target image and the background image to obtain a group photo image including the target background and each target object, the generating module is specifically configured to:
traversing pixel points at a first position in the target image; if the pixel value of the pixel point at the first position is a first pixel value, determining the pixel value at the same position as the first position in the group image as the first pixel value; if the pixel value of the pixel point at the first position is the second pixel value, identifying a third pixel value of the pixel point at the first position in the background image, and determining the pixel value at the same position as the first position in the group image as the third pixel value.
In a possible implementation manner, when the generating module performs the fusion processing on each target image and the background image to obtain a group photo image including the target background and each target object, the generating module is specifically configured to:
and determining the pixel value at the same position in the group image based on the transparency and the pixel value at the same position in each target image and the background image.
In a possible embodiment, the device further comprises: a processing module, configured to acquire, in response to a selection operation on a first target object among the target objects, a first background image corresponding to the selection operation;
the generating module is specifically configured to, when executing the background image based on each target image and a predetermined target background to generate a group image including the target background and each target object:
and generating a first group image including the first background image and each target object based on each target image and the first background image.
In a third aspect, the present disclosure provides an electronic device comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the image processing method according to the first aspect or any one of the embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image processing method according to the first aspect or any one of the embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It is to be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; for those of ordinary skill in the art, other related drawings may be derived from these drawings without inventive effort.
Fig. 1 is a schematic flowchart illustrating an image processing method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a selection interface for group-photo formations and group-photo poses provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a display interface indicating a user's group-photo formation provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a display interface for occupied-position prompt information provided by an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a group photo image display interface provided by an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating an architecture of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 7 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In order to solve the problem that users located in different places can hardly take a group photo together, embodiments of the present disclosure provide the following method: after first images of target objects located in different shooting areas are acquired, the region image where each target object is located is extracted from its first image and fused with a predetermined target background to generate a group photo image containing the target objects. In this way a group photo effect is achieved even though the target objects are in different places, meeting the group photo needs of users in different locations.
The drawbacks described above were identified by the inventor after practical and careful study; therefore, both the discovery of these problems and the solutions proposed below should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
For the purpose of understanding the embodiments of the present disclosure, the image processing method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the image processing method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, such as a terminal device or a server or other processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the image processing method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a schematic flow chart of an image processing method provided in the embodiment of the present disclosure is shown, where the method includes S101 to S103, specifically:
S101: acquiring first images respectively shot in different shooting areas by a plurality of shooting devices.
Here, the plurality of photographing apparatuses may include at least one of photographing apparatuses deployed at different photographing places and users' terminal devices. A photographing apparatus may include at least one of various electronic devices with a photographing function, such as a single-lens reflex camera, a digital camera, a mobile phone, a tablet computer, a video camera, and a monitoring device. The different photographing areas may include at least one of an arbitrary place where a user is currently located and a fixed place where a photographing apparatus is deployed. For example, a photo studio in each city may be deployed with a photographing apparatus and a green screen or a position marker, the position marker indicating where the user should stand when shooting; the position marker may include at least one of an icon, a symbol, or a sign, such as a footprint, indicating the user's shooting position. The first image may include the target object itself and the scene of the place where the target object is located; the target object may include at least one of a person, an animal, a plant, or another object for which a group photo is desired.
Here, the image of the target object itself contained in the first image may include at least one of: a whole-body image, a half-body image, and a face image of the target object.
In a specific implementation, when a plurality of users located in different places need a group photo, an applet may be installed on each user's terminal device. Each user may capture a first image containing the user's own image with the respective terminal device and send it to the server through the installed applet, so that the server processes the first images by the method shown in steps S102 to S103 below.
Alternatively, at a shooting place where a large-screen electronic device is deployed, each user may capture a first image containing the user's own image with the camera of the large-screen electronic device, which then performs the processing of S102 to S103 below on that image.
In the embodiments of the present disclosure, in order to enrich the effect of the group photo image, a plurality of group-photo formations and/or group-photo poses may be provided for users to select from, so that when shooting the first image each user stands at the position indicated by the confirmed group-photo formation and/or adjusts their body posture according to the confirmed group-photo pose. For example, a group-photo formation may specify the number of participants and where each person stands, and may include at least one of a heart shape, a triangle, a circle, and the like; a group-photo pose may include at least one of various body poses such as squatting, holding hands, and making a heart with the hands.
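To make the formation concept concrete, the following is a minimal Python sketch (not part of the patent) of how a group-photo formation and its occupied-position check could be represented; the class name, fields, and normalized slot coordinates are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class GroupPhotoFormation:
    """Hypothetical representation of a group-photo formation: a name,
    plus one normalized (x, y) slot per participant."""
    name: str
    slots: List[Tuple[float, float]]

    @property
    def capacity(self) -> int:
        return len(self.slots)

    def claim_slot(self, taken: Set[int], index: int) -> bool:
        """Mark slot `index` as taken; return False if it is already
        occupied, which would trigger the occupied-position prompt."""
        if index >= self.capacity or index in taken:
            return False
        taken.add(index)
        return True

# Example: a two-person side-by-side formation.
side_by_side = GroupPhotoFormation(
    name="two-person side-by-side",
    slots=[(0.35, 0.5), (0.65, 0.5)])

taken: Set[int] = set()
assert side_by_side.claim_slot(taken, 0)       # first user takes the left slot
assert not side_by_side.claim_slot(taken, 0)   # second user: position occupied
```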
In a specific implementation, an applet may be installed on each user's terminal device. When a plurality of users located in different places need a group photo, any user may select a group-photo formation and/or pose through the installed applet. After that user selects a target group-photo formation and/or target group-photo pose from the available options, the selection is sent to the other users participating in the group photo. Once the other users confirm, each user stands at the position indicated by the target formation displayed on their own terminal device and moves according to the body posture indicated by the target pose. Each user then uses their own terminal device to shoot, at their current location, a first image containing their own image and sends it to the server through the installed applet; the server receives the first images sent by the terminal devices and performs the processing of S102 to S103 described below on them.
Alternatively, any user participating in the group photo may select a group-photo formation and/or pose on a large-screen electronic device deployed at a shooting place. When that user selects a target formation and/or pose, the selection is sent to the other participating users. After the others confirm, each user stands at the position indicated by the target formation displayed on the large-screen electronic device at their shooting place and moves according to the indicated body posture; the large-screen electronic device then shoots a first image containing the user's own image and performs the processing of S102 to S103 below on it.
When a user moves to the position indicated by the target formation displayed on their terminal device or on the large-screen electronic device at the shooting place, if that position is already occupied by another participant, occupied-position prompt information is sent to the user's terminal device or to the large-screen electronic device, which displays it to the user to indicate that the position is taken. The occupied-position prompt information may include at least one of text, image-and-text, symbol, and voice prompts; for example, it may be the text prompt "The position is occupied, please reselect a position".
For example, suppose user A in city 1 and user B in city 2 want to take a group photo. After user A taps the group-photo control in the applet interface on user A's terminal device, a selection interface containing a plurality of group-photo formations and group-photo poses may be displayed to user A. A specific selection interface may be as shown in FIG. 2; taking a mobile phone as the terminal device, the interface contains three group-photo formations (two people standing side by side, ten people standing in a circle, and three people standing in a triangle), three group-photo poses (making a heart, squatting, and holding hands), a selection touch area for each formation and pose, a touch area with the prompt "Confirm" for the confirmation operation, a touch area with the prompt "Return" for the return operation, and a touch area with the prompt "More" for viewing more options. The selection touch areas may take various shapes and styles; in FIG. 2 they are drawn as a square inside a circle. When user A selects the two-person side-by-side formation as the target group-photo formation and the holding-hands pose as the target group-photo pose from the three formations and three poses shown in FIG. 2, the selected formation and pose are sent to user B (the display interface indicating the user's group-photo formation shown on user B's terminal screen may be as in FIG. 3, again taking a mobile phone as the terminal device). After user B confirms, user A and user B each take up the position indicated by the two-person side-by-side formation and strike the body posture indicated by the holding-hands pose. User A and user B then each use their own terminal device to shoot a first image containing their own image and send it to the server through the installed applet, so that the server obtains two first images in which user A and user B have posed according to the target group-photo formation and target group-photo pose.
Here, when user A and user B each strike the corresponding group-photo pose according to the target formation and pose, if the position currently occupied by user A is already taken by user B, the occupied-position prompt information is displayed to user A so that user A changes position. A specific display interface for the occupied-position prompt on user A's terminal screen may be as shown in FIG. 4; taking a mobile phone as the terminal device, the display interface may contain the user's group-photo pose of FIG. 3 together with the text prompt "The position is occupied, please reselect a position".
Alternatively, each user participating in the group photo may go to a shooting place in their city where a photographing apparatus is deployed (for example, a photo studio), stand at the fixed shooting position indicated at that place, shoot a first image containing their own image with the photographing apparatus there, and send it to the server, so that the server processes the first images by the method shown in S102 to S103 below.
S102: determining target images respectively corresponding to the target objects in the first images based on the first images, where the target objects are respectively located at different designated positions of different shooting areas.
Wherein the target image is used to characterize the image of the target object itself in the first image.
Here, based on the method of S101, the position that each user selects according to the target formation can serve as that user's designated position within the corresponding shooting area; the fixed shooting position indicated at a shooting place where a photographing apparatus is deployed can also serve as a designated position. It should be noted that the fixed shooting positions indicated at shooting places in different cities are generally different, and each shooting place where a photographing apparatus is deployed is generally provided with at least one fixed shooting position.
In a specific implementation, after acquiring a plurality of first images, the server may perform image segmentation processing on each first image to obtain a first image area where a target object is located in each first image and a second image area other than the target object; and performing first processing on the first image area and/or performing second processing on the second image area to obtain a target image corresponding to the target object in each first image.
Wherein the second image area comprises other image areas in the first image except the target object.
Here, an image segmentation algorithm may be used to process each first image to obtain the first image area where the target object is located and the second image area other than the target object in each first image. The image segmentation algorithm may include at least one of a threshold-based segmentation algorithm, a region-based segmentation algorithm, an edge-based segmentation algorithm, and a segmentation algorithm based on a specific theory.
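As an illustration only, the following Python sketch separates a first image into these two areas. GrabCut is used purely as a stand-in for the segmentation algorithms named above, and the rough bounding box `rect` around the target object is an assumed input.

```python
import cv2
import numpy as np

def split_first_image(first_image: np.ndarray, rect: tuple):
    """Split a first image into the first image area (target object)
    and the second image area (everything else). GrabCut stands in for
    the segmentation algorithms named above; `rect` = (x, y, w, h) is
    an assumed rough bounding box around the target object."""
    mask = np.zeros(first_image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(first_image, mask, rect, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_RECT)
    # Definite or probable foreground pixels form the first image area.
    fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                       1, 0).astype(np.uint8)
    first_area = first_image * fg_mask[:, :, None]
    second_area = first_image * (1 - fg_mask)[:, :, None]
    return first_area, second_area, fg_mask
```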
Here, the first processing may be performed on the first image area where the target object is located as follows: a filter effect of a set style is added to the first image area, and/or beautification processing is performed on the face of the target object in the first image area.
The beautification processing is used to adjust the brightness, degree of whitening, and the like of the face image of the target object, and may include beautifying and/or deformation processing, for example at least one of eye enlargement, face slimming, and skin whitening. The filter effects of a set style may include at least one of a variety of filter effects, such as a fresh-style filter, an aesthetic-style filter, and a sweet-style filter.
Specifically, after the first image area containing the target object is obtained, the target style corresponding to the first image area can be determined by analyzing the area, and a filter effect matching the target style can be added to it. The face area of the target object in the first image area can also be identified by face recognition, and parameters of the face area image such as its RGB values and transparency can be adjusted, so as to apply whitening, eye enlargement, face slimming, and similar beautifying and deformation processing to the face of the target object.
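As a hedged illustration of one such adjustment, the sketch below brightens (whitens) an assumed face rectangle by blending it toward white; it is not the disclosure's own beautification algorithm, and `face_rect` is assumed to come from a separate face detector.

```python
import cv2
import numpy as np

def whiten_face(image: np.ndarray, face_rect: tuple,
                strength: float = 0.25) -> np.ndarray:
    """Blend the face region toward white by `strength` (0..1).
    `face_rect` = (x, y, w, h) is assumed to come from a separate
    face detector."""
    x, y, w, h = face_rect
    out = image.copy()
    roi = out[y:y + h, x:x + w].astype(np.float32)
    white = np.full_like(roi, 255.0)
    blended = cv2.addWeighted(roi, 1.0 - strength, white, strength, 0.0)
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out
```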
In a specific implementation, in order to highlight the target object in the first image, the second image area except for the first image area where the target object is located in the first image may be processed by the following method, which is specifically described as follows: adjusting the pixel value of the second image area; and/or adjusting the transparency of the second image area.
Here, in order to highlight the first image area where the target object is located in the first image and fuse each target object better into the image with the target background, the pixel values of the second image area (the area other than the target object) may be adjusted to 0; or the transparency of the second image area may be adjusted to fully transparent; or both the pixel values may be adjusted to 0 and the transparency to fully transparent.
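A minimal sketch of this second processing might look as follows, assuming the foreground mask produced by a segmentation step like the sketch above; the RGBA layout and the helper name are illustrative assumptions.

```python
import numpy as np

def suppress_second_area(target_rgb: np.ndarray,
                         fg_mask: np.ndarray) -> np.ndarray:
    """Zero out the pixel values of the second image area and make it
    fully transparent, keeping the target object untouched. `fg_mask`
    is a 0/1 mask of the first image area (e.g. from the segmentation
    sketch earlier)."""
    h, w = fg_mask.shape
    target_rgba = np.zeros((h, w, 4), dtype=np.uint8)
    target_rgba[..., :3] = target_rgb * fg_mask[..., None]  # pixel values -> 0
    target_rgba[..., 3] = fg_mask * 255                     # alpha: 0 = transparent
    return target_rgba
```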
In a specific implementation, based on S102 above, image segmentation is performed on each first image to obtain the first image area where each target object is located and the remaining image area, and corresponding processing is applied to each area, yielding the target image corresponding to the target object in each first image. Each target object can then be fused with the preset background image through S103 below to generate the group photo of the target objects.
S103: generating a group photo image including the target background and each target object based on each target image and a background image of a predetermined target background.
The predetermined background image of the target background may include an image customized by a user according to a requirement, or may include an image set by a server as a default; the background image may include at least one of a plurality of types of images such as an expression sticker image, a landscape image, a building image, a food image, and the like.
In a specific implementation, the target images and the background image may be fused by an image fusion technique to obtain a group image including the target background and the target objects.
Here, in generating the group image, in order to better fuse each target object in the image having the target background, the size of the target image of each target object may be set to be the same as the size of the background image.
In a specific implementation, after the pixel values of the second image area (the area other than the first image area where the target object is located) are adjusted to 0 based on S102 above, in order to ensure that the position of each target object in the group photo image is the same as its position in the target image, the target images and the background image may be fused as follows: traverse the pixel points at each first position in the target image; if the pixel value at the first position is the first pixel value, determine the pixel value at the same position in the group photo image to be that first pixel value; if the pixel value at the first position is the second pixel value, identify the third pixel value of the pixel point at the same position in the background image, and determine the pixel value at that position in the group photo image to be the third pixel value.
Wherein the first position is used for representing any position in the target image; here, the first pixel value is used to represent a pixel value of a pixel point at any position in the region where the target object is located in the target image, and for example, the first pixel value may be set to be not equal to 0; the second pixel value is used to represent the pixel value of the pixel point in any position in the other region except the region where the target object is located in the target image, and for example, the second pixel value may be set to be equal to 0; the third pixel value is used for representing the pixel value of the pixel point at any position in the background image and can be set according to actual requirements.
Here, the first portrait area denotes the area where the target object is located in the target image; the first background area denotes the remaining area of the target image; the second portrait area denotes the area in the group photo image at the same position as the target object's area in the target image; and the second background area denotes the areas in the group photo image at the same positions as the remaining areas of the target image.
For example, since the pixel values of the second image area were adjusted to 0 based on S102 above, the area of the target image whose pixel values are not 0 can be determined to be the area where the target object is located (i.e., the first portrait area). The pixel value of each pixel point in the first portrait area of the target image is taken as the pixel value of the corresponding pixel point in the second portrait area of the group photo image, and the pixel value of each pixel point in the background image at the positions of the second image area is taken as the pixel value of the corresponding pixel point in the second background area of the group photo image. In this way the pixel value at every position of the group photo image is determined, and the group photo image is obtained.
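The traversal above can be sketched in vectorized form as follows; this is an illustrative reading of the rule (non-zero pixel means target object, zero pixel means take the background), not a verbatim implementation from the patent.

```python
import numpy as np

def fuse_by_pixel_value(target_rgb: np.ndarray,
                        background_rgb: np.ndarray) -> np.ndarray:
    """Vectorized form of the traversal: wherever the target image has
    a non-zero pixel (the target object), keep it; wherever it was
    zeroed out, take the background pixel at the same position. Both
    images are assumed to already have the same size, as required."""
    is_object = np.any(target_rgb != 0, axis=-1, keepdims=True)
    return np.where(is_object, target_rgb, background_rgb)
```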
In an optional embodiment, in order to ensure that the position of each target object in the target image is the same as its position in the group photo image, the target images and the background image may also be fused as follows: determine the pixel value at each position in the group photo image based on the transparencies and pixel values at the same position in each target image and the background image.
Here, the transparency of the pixel point at each position may be used as a weight of the pixel point, and the pixel values of the pixel points at the same position in each target image and the background image are subjected to weighted summation to obtain the pixel value of the pixel point at each position in the group image.
Here, the third portrait area denotes the area in the background image at the same position as the target object's area in the target image, and the third background area denotes the areas in the background image at the same positions as the remaining areas of the target image. The first transparency of the first portrait area in each target image may be set to 1 and the second transparency of the first background area to 0; the third transparency of the third portrait area in the background image may be set to 0 and the fourth transparency of the third background area to 1.
Specifically, the pixel value of each pixel point in the second portrait area of the group photo image is obtained by multiplying the pixel value of the corresponding pixel point in the first portrait area of the target image by the first transparency and adding the product of the pixel value of the corresponding pixel point in the third portrait area of the background image and the third transparency (that is, the pixel values of the first portrait area of the target image are used directly, since the third transparency is 0). The pixel value of each pixel point in the second background area of the group photo image is obtained by multiplying the pixel value of the corresponding pixel point in the first background area of the target image by the second transparency and adding the product of the pixel value of the corresponding pixel point in the third background area of the background image and the fourth transparency (that is, the pixel values of the third background area of the background image are used directly, since the second transparency is 0). The pixel value at every position of the group photo image is thereby determined, and the group photo image is obtained.
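Under these transparency settings, the weighted summation reduces to out(x, y) = a(x, y) * target(x, y) + (1 - a(x, y)) * background(x, y), with a = 1 on the target object and a = 0 elsewhere. The sketch below assumes the two complementary transparencies are folded into a single alpha channel on the target image, which is an assumption of this illustration, not the patent's own formulation.

```python
import numpy as np

def fuse_by_transparency(target_rgba: np.ndarray,
                         background_rgb: np.ndarray) -> np.ndarray:
    """Weighted summation: out = a * target + (1 - a) * background,
    where a = 1 on the target object and a = 0 elsewhere. The two
    complementary transparencies described above are folded here into
    the target image's single alpha channel."""
    alpha = target_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = target_rgba[..., :3].astype(np.float32)
    bg = background_rgb.astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)
```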
In a specific implementation, in order to meet the group photo needs of different users, each user participating in the group photo can customize the background image as needed and thereby obtain a group photo image meeting their own needs. Specifically: in response to a selection operation on a first target object among the target objects, a first background image corresponding to the selection operation is acquired; based on each target image and the first background image, a first group photo image including the first background image and each target object is generated.
Here, the first target object denotes any one of the plurality of target objects participating in the group photo; the first background image denotes a target background image customized by the user, or a target background image selected by the user from a plurality of provided background images.
Specifically, in the process of generating the group photo image, after any one of the plurality of target objects participating in the group photo sets a first background image as needed (or selects a preferred first background image from a plurality of provided background images) on their terminal device, on a large-screen electronic device deployed at a fixed shooting place, or on a photographing apparatus deployed at a fixed shooting place, the first background image is sent to the server. The server then fuses the acquired first background image with the target images corresponding to the target objects to obtain a group photo image including the first background image and each target object.
Here, after the group photo image including the first background image and each target object is obtained, it may be sent to every user participating in the group photo, or only to the target user who customized or selected the first background image.
For example, suppose the first background image customized by user A is a landscape image. The two first images in which user A and user B have each posed according to the target group-photo formation and target group-photo pose undergo the image segmentation, processing, and fusion described in S102 to S103 to obtain a group photo image containing user A, user B, and the first background image. When the group photo image is sent to user A's terminal device, the terminal device displays it to user A; a specific display interface may be as shown in FIG. 5. Taking a mobile phone as the terminal device, the interface contains user A and user B, posed according to the target group-photo formation and pose, against the landscape image. The target group-photo formation and pose in FIG. 5 may be the same as those shown in FIG. 3.
In the embodiments of the present disclosure, after first images of target objects located in different shooting areas are acquired, the region image where each target object is located is extracted from its first image and fused with a predetermined target background to generate a group photo image containing the target objects. A group photo effect is thus achieved even though the target objects are in different places, meeting the group photo needs of users in different locations.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides an image processing apparatus, as shown in fig. 6, an architecture diagram of an image processing apparatus 600 provided in the embodiment of the present disclosure includes an obtaining module 601, a determining module 602, and a generating module 603, specifically:
the acquiring module 601 is configured to acquire first images captured by a plurality of capturing devices in different capturing areas respectively.
A determining module 602, configured to determine, based on each first image, target images corresponding to target objects in the first images, where the target objects are located at different designated positions of different shooting areas.
A generating module 603, configured to generate a group image including the target background and each target object based on each target image and a background image of a predetermined target background.
In a possible implementation manner, when performing the determining, based on each first image, a target image corresponding to a target object in each first image, the determining module 602 is specifically configured to: performing image segmentation processing on any first image to obtain a first image area where a target object is located and a second image area except the target object; and performing first processing on the first image area, and/or performing second processing on the second image area to obtain a target image corresponding to a target object in the first image.
In a possible implementation manner, when performing the first processing on the first image region, the determining module 602 is specifically configured to: adding a filter effect with a set style to the first image area; and/or performing beautification processing on the face of the target object in the first image area, wherein the beautification processing comprises beautifying and/or deforming processing;
when the determining module 602 performs the second processing on the second image region, specifically configured to: adjusting pixel values of the second image area; and/or adjusting the transparency of the second image area.
In a possible implementation, the size of each target image is the same as the size of the background image;
the generating module 603, when executing the background image based on each target image and the predetermined target background to generate a group image including the target background and each target object, is specifically configured to: and performing fusion processing on each target image and the background image to obtain a group photo image containing the target background and each target object.
In a possible implementation manner, when the fusion processing is performed on each target image and the background image to obtain a group image including the target background and each target object, the generating module 603 is specifically configured to: traversing pixel points at a first position in the target image; if the pixel value of the pixel point at the first position is a first pixel value, determining the pixel value at the same position as the first position in the group image as the first pixel value; if the pixel value of the pixel point at the first position is the second pixel value, identifying a third pixel value of the pixel point at the first position in the background image, and determining the pixel value at the same position as the first position in the group image as the third pixel value.
In a possible implementation manner, when the fusion processing is performed on each target image and the background image to obtain a group photo image including the target background and each target object, the generating module 603 is specifically configured to: and determining the pixel value at the same position in the group image based on the transparency and the pixel value at the same position in each target image and the background image.
In a possible embodiment, the apparatus further comprises: a processing module, configured to acquire, in response to a selection operation on a first target object among the target objects, a first background image corresponding to the selection operation;
the generating module 603, when executing the background image based on each target image and the predetermined target background to generate a group image including the target background and each target object, is specifically configured to:
and generating a first group image including the first background image and each target object based on each target image and the first background image.
In some embodiments, the functions or modules of the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the method embodiments above, and details are not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device 700. Referring to FIG. 7, a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, the electronic device includes a processor 701, a memory 702, and a bus 703. The memory 702 is used to store execution instructions and includes an internal memory 7021 and an external memory 7022. The internal memory 7021 temporarily stores operation data for the processor 701 and data exchanged with the external memory 7022, such as a hard disk; the processor 701 exchanges data with the external memory 7022 through the internal memory 7021. When the electronic device 700 runs, the processor 701 and the memory 702 communicate via the bus 703, causing the processor 701 to execute the following instructions:
acquiring first images respectively shot by a plurality of shooting devices in different shooting areas;
determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different designated positions of different shooting areas respectively;
and generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background.
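Taken together, these three instructions form a segment-then-composite pipeline. The sketch below strings together the hypothetical helpers used in this section (fuse_mask above, segment_target sketched further below) and assumes one pre-computed person mask per first image:

```python
def generate_group_photo(first_images, person_masks, background_img):
    """Composite each segmented target object onto the predetermined
    background, one first image at a time."""
    group = background_img.copy()
    for img, mask in zip(first_images, person_masks):
        target = segment_target(img, mask)  # target image for this first image
        group = fuse_mask(target, group)    # composite onto the running result
    return group
```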
In one possible design, in the instructions executed by the processor 701, the determining, based on each first image, a target image corresponding to the target object in each first image includes:
performing image segmentation processing on any first image to obtain a first image area where a target object is located and a second image area except the target object;
and performing first processing on the first image area, and/or performing second processing on the second image area to obtain a target image corresponding to a target object in the first image.
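The split into a first and second image area can be sketched as follows. The binary person mask is assumed to come from any off-the-shelf portrait-segmentation model, which this disclosure does not prescribe; the zero sentinel matches the fuse_mask sketch above:

```python
import numpy as np

def segment_target(first_image, person_mask):
    """Given a binary mask of the target object, keep the first image
    area (the person) and zero out the second image area (the rest)."""
    mask3 = np.repeat(person_mask[..., None].astype(bool), 3, axis=-1)
    target = first_image.copy()
    # Second region set to the sentinel value 0; first region kept as-is.
    target[~mask3] = 0
    return target
```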
In one possible design, in the instructions executed by the processor 701, the performing first processing on the first image area includes:
adding a filter effect with a set style to the first image area; and/or performing beautification processing on the face of the target object in the first image area, wherein the beautification processing comprises beautifying and/or deforming processing;
the second processing of the second image area includes:
adjusting pixel values of the second image area; and/or adjusting the transparency of the second image area.
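As an illustration only, these two optional processing steps might look like the sketch below: a toy warm-tint "filter" on the first region, and a pixel-value plus transparency adjustment on the second region. All factors, the RGB channel order, and the function name are assumptions, not values given in this disclosure:

```python
import numpy as np

def process_regions(image, person_mask, dim_factor=0.4, bg_alpha=64):
    """First processing: tint the target object (a stand-in for a filter
    or beautification step). Second processing: darken the non-target
    region and write reduced transparency into an alpha channel."""
    img = image.astype(np.float32)
    fg = person_mask.astype(bool)
    # First processing: simple warm tint on the target object (assumes RGB order).
    img[fg] = np.clip(img[fg] * np.array([1.05, 1.0, 0.95]), 0, 255)
    # Second processing: adjust pixel values of the second image area.
    img[~fg] *= dim_factor
    # Second processing: adjust transparency of the second image area.
    alpha = np.where(fg, 255, bg_alpha).astype(np.uint8)
    return np.dstack([img.astype(np.uint8), alpha])   # RGBA result
```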
In one possible design, in the instructions executed by the processor 701, the size of each target image is the same as the size of the background image;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and performing fusion processing on each target image and the background image to obtain a group photo image containing the target background and each target object.
In a possible design, in the instructions executed by the processor 701, the performing fusion processing on each target image and the background image to obtain a group photo image including the target background and each target object includes:
traversing pixel points at a first position in the target image;
if the pixel value of the pixel point at the first position is a first pixel value, determining the pixel value at the same position as the first position in the group image as the first pixel value;
if the pixel value of the pixel point at the first position is the second pixel value, identifying a third pixel value of the pixel point at the first position in the background image, and determining the pixel value at the same position as the first position in the group image as the third pixel value.
In a possible design, in the instructions executed by the processor 701, the performing fusion processing on each target image and the background image to obtain a group photo image including the target background and each target object includes:
and determining a pixel value at the same position in the group image based on the transparency and the pixel value at the same position in each of the target image and the background image.
In one possible design, the instructions executed by the processor 701 further include:
responding to selection operation of a first target object in each target object, and acquiring a first background image corresponding to the selection operation;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and generating a first group image including the first background image and each target object based on each target image and the first background image.
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the image processing method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the image processing method in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK) or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other division manners are possible in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes: various media capable of storing program codes, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any changes or substitutions that a person skilled in the art can easily conceive of within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. An image processing method, comprising:
acquiring first images respectively shot by a plurality of shooting devices in different shooting areas; wherein the first image comprises a target object occupying a position based on the position indicated by the group photo formation and/or adjusting a limb posture based on the group photo posture;
determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different designated positions of different shooting areas respectively;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background;
the acquiring of the first images respectively shot by the plurality of shooting devices in different shooting areas comprises:
responding to the trigger of any user in the plurality of users to select a group photo formation and/or a group photo gesture, and sending the selected target group photo formation and/or the selected target group photo gesture to other users for confirmation;
and after the other users confirm, displaying a target group photo formation and/or a target group photo gesture so that a plurality of users occupy the position indicated by the displayed target group photo formation, and/or making corresponding actions according to the limb gesture indicated by the target group photo gesture displayed by the shooting equipment, and shooting the first image containing the user image in the shooting area by using the shooting equipment.
2. The method according to claim 1, wherein the determining, based on each first image, a target image corresponding to the target object in each first image comprises:
performing image segmentation processing on any first image to obtain a first image area where a target object is located and a second image area except the target object;
and performing first processing on the first image area, and/or performing second processing on the second image area to obtain a target image corresponding to a target object in the first image.
3. The method of claim 2, wherein the first processing the first image region comprises:
adding a filter effect with a set style to the first image area; and/or performing beautification processing on the face of the target object in the first image area, wherein the beautification processing comprises beautifying and/or deforming processing;
the second processing of the second image area includes:
adjusting pixel values of the second image area; and/or adjusting the transparency of the second image area.
4. A method according to any one of claims 1 to 3, wherein the size of each of the target images is the same as the size of the background image;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and performing fusion processing on each target image and the background image to obtain a group photo image containing the target background and each target object.
5. The method according to claim 4, wherein the fusing the target images and the background image to obtain a group image including the target background and the target objects comprises:
traversing pixel points at a first position in the target image;
if the pixel value of the pixel point at the first position is a first pixel value, determining the pixel value at the same position as the first position in the group image as the first pixel value;
if the pixel value of the pixel point at the first position is the second pixel value, identifying a third pixel value of the pixel point at the first position in the background image, and determining the pixel value at the same position as the first position in the group image as the third pixel value.
6. The method according to claim 4, wherein the fusing the target images and the background image to obtain a group image including the target background and the target objects comprises:
and determining a pixel value at the same position in the group image based on the transparency and the pixel value at the same position in each of the target image and the background image.
7. The method of any of claims 1 to 6, further comprising:
responding to selection operation of a first target object in each target object, and acquiring a first background image corresponding to the selection operation;
generating a group image including the target background and each target object based on each target image and a background image of a predetermined target background, including:
and generating a first group image including the first background image and each target object based on each target image and the first background image.
8. An image processing apparatus characterized by comprising:
an acquisition module, configured to acquire first images respectively shot by a plurality of shooting devices in different shooting areas; wherein the first image comprises a target object occupying a position based on the position indicated by the group photo formation and/or adjusting a limb posture based on the group photo posture;
the determining module is used for determining target images corresponding to target objects in the first images respectively based on the first images, wherein the target objects are located at different appointed positions of different shooting areas respectively;
a generation module, configured to generate a group image including the target background and each target object based on each target image and a background image of a predetermined target background;
the acquiring module, when acquiring first images respectively captured in different capturing areas by a plurality of capturing devices, is configured to:
responding to the trigger of any user in the plurality of users to select a group photo formation and/or a group photo gesture, and sending the selected target group photo formation and/or the target group photo gesture to other users for confirmation;
and after the other users confirm, displaying a target group photo formation and/or a target group photo gesture so that a plurality of users occupy the position indicated by the displayed target group photo formation, and/or making corresponding actions according to the limb gesture indicated by the target group photo gesture displayed by the shooting equipment, and shooting the first image containing the user image in the shooting area by using the shooting equipment.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs the steps of the image processing method according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110339447.1A | 2021-03-30 | 2021-03-30 | Image processing method, image processing device, electronic equipment and storage medium |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN112991157A | 2021-06-18 |
| CN112991157B | 2023-04-07 |
Family: ID=76339141
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |