CN116320721A - Shooting method, shooting device, terminal and storage medium - Google Patents

Shooting method, shooting device, terminal and storage medium Download PDF

Info

Publication number
CN116320721A
Authority
CN
China
Prior art keywords
curve
marking
image
face
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310312887.7A
Other languages
Chinese (zh)
Inventor
汤海燕
郑兆廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310312887.7A priority Critical patent/CN116320721A/en
Publication of CN116320721A publication Critical patent/CN116320721A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The present invention relates to the field of photography technologies, and in particular, to a photographing method, a photographing device, a terminal, and a storage medium. The method comprises the following steps: determining a target template image, wherein the target template image comprises a first face image; capturing a current image, wherein the current image comprises a second face image; determining a first marking curve pointing to the first face image, wherein the first marking curve is used for marking the angle of the face in the first face image in the target template image; generating a second marking curve according to the second face image and a preset marking curve construction method, wherein the second marking curve is used for marking the angle of the face in the second face image in the current image; and performing shooting processing when the first marking curve and the second marking curve are matched. The scheme of the invention not only simplifies the operation of imitation photographing but also improves its effect, providing a better user experience.

Description

Shooting method, shooting device, terminal and storage medium
Technical Field
The present invention relates to the field of photography technologies, and in particular, to a photographing method, a photographing device, a terminal, and a storage medium.
Background
With the development of mobile communication technology and the popularization of intelligent terminal devices, such devices have become indispensable tools in daily life. Thanks to improved front-camera resolution and the diversification of social products and platforms, users increasingly like to take selfies with their terminal devices and share them on social platforms. When taking a selfie, the user usually does not only shoot from the front, but also from multiple angles, for example lifting the terminal device to shoot from above, or photographing only one side of the face. However, to obtain a selfie with better expressiveness, the user often has to repeatedly adjust the shooting angle and expression, which wastes time, and even after multiple adjustments a satisfactory result is not always obtained.
Disclosure of Invention
The invention provides a shooting method, a shooting device, a terminal and a storage medium, which can improve the convenience of shooting operation and optimize shooting effect.
In one aspect, the present invention provides a photographing method, including:
determining a target template image, wherein the target template image comprises a first face image;
capturing a current image, wherein the current image comprises a second face image;
determining a first marking curve pointing to the first face image, wherein the first marking curve is used for identifying the angle of the face in the first face image in the target template image;
generating a second marking curve according to the second face image and a preset marking curve construction method, wherein the second marking curve is used for marking the angle of the face in the second face image in the current image;
and when the first marking curve and the second marking curve are matched, shooting processing is carried out.
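The claimed steps amount to a real-time loop: build a marking curve for each captured frame and trigger the shutter once it matches the template's curve. The following is a minimal, hypothetical Python sketch of that control flow only; the curve builder and matcher are injected stand-ins for the construction and matching methods described later, not implementations from the patent.

```python
from typing import Callable, Iterable, List, Optional, Tuple

Point = Tuple[float, float]
Curve = List[Point]  # marker points summarizing a marking curve

def shoot_when_matched(
    frames: Iterable[object],
    build_curve: Callable[[object], Curve],
    template_curve: Curve,
    matches: Callable[[Curve, Curve], bool],
) -> Optional[int]:
    """Return the index of the first frame whose marking curve matches the
    template's first marking curve, or None if no frame ever matches."""
    for i, frame in enumerate(frames):
        second_curve = build_curve(frame)          # S207: curve for current image
        if matches(template_curve, second_curve):  # S211: trigger the shutter
            return i
    return None
```

In a real terminal the frame source would be the live camera feed and the return would fire the capture pipeline rather than return an index.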
In another aspect, the present invention provides a photographing apparatus, including:
the target template image determining module is used for determining a target template image, wherein the target template image comprises a first face image;
the current image acquisition module is used for capturing a current image, wherein the current image comprises a second face image;
A first marker curve determining module, configured to determine a first marker curve pointing to the first face image, where the first marker curve is used to identify an angle of a face in the first face image in the target template image;
the second marking curve generating module is used for generating a second marking curve according to the second face image and a preset marking curve construction method, and the second marking curve is used for marking the angle of the face in the second face image in the current image;
and the shooting processing module is used for carrying out shooting processing when the first marking curve and the second marking curve are matched.
In another aspect, the present invention provides a terminal, where the terminal includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above-mentioned photographing method.
In another aspect, the present invention provides a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the photographing method described above.
The shooting method, the shooting device, the terminal and the storage medium provided by the invention have the following beneficial effects:
in the shooting process, a second marking curve is generated for the current image in real time and compared with the first marking curve of the target template image, and shooting processing is performed when the two curves match. Because the second marking curve marks the angle of the face in the second face image in the current image, and the first marking curve marks the angle of the face in the first face image in the target template image, performing the shooting processing at the moment of matching makes the face angle in the captured photo close to or consistent with the face angle in the target template image, which improves the effect of imitation shooting.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an interaction scenario between a user and a terminal provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a shooting method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for constructing a marker curve according to an embodiment of the present invention;
fig. 4 is a flowchart of a method of shooting processing provided in an embodiment of the present invention;
FIG. 5 is a flowchart of a method for determining whether two curves match according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of determining whether two curves match according to an embodiment of the present invention;
fig. 7 is a schematic view of an application scenario of a shooting method provided by an embodiment of the present invention;
fig. 8 is a schematic flow chart of a front end interacting with a background to implement a shooting method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a process of taking a photograph by using the photographing method according to the embodiment of the present invention;
fig. 10 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention;
fig. 11 is a block diagram of a hardware structure of a terminal according to an embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the present invention, the following provides a clear and complete description of the technical solutions in the embodiments of the present invention with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. First, description will be made of the prior art and related concepts related to the embodiments of the present invention:
APP: an abbreviation of Application, mainly referring to software installed on a smartphone to supplement the original system and provide a richer user experience.
Mask: an image layer superimposed on an upper layer of the screen.
Self-photographing (selfie): a user taking a picture of himself or herself with the front camera of a mobile phone.
Face point location: when recognizing a portrait photo or image, the system automatically identifies the facial contour and facial features and marks their edges with points, typically 88 of them, producing a point map of the face and facial features.
To facilitate explanation of the advantages of the method in the embodiments of the present invention, the related prior art is first described in detail:
with the diversification of social products and platforms, more and more users like to record daily life with selfies and share them on social platforms. However, the user's expression and face angle in selfies tend to be monotonous, leaving the selfies lacking in expressiveness. To help users find more selfie expressions and angles and display more diversified records of life scenes, so that users discover more attractive and varied selfies, some photographing software has gradually added functions for assisting self-photographing.
For example, one APP provides a "composition" function in its selfie mode with two imitation-photographing modes, a mask mode and a wire frame mode: when the user shoots, the composition template is superimposed in the viewfinder either as a semi-transparent mask or as a wire frame, so the user can imitate a shot by aligning with the template. However, neither mode lets the user align the face well or obtain a selfie with good expressiveness. In the mask mode, the composition template, shown as a semi-transparent mask, seriously occludes the user's face, so the user cannot see his or her own face angle and expression details. In the wire frame mode, the composition template appears only as a wire frame that roughly marks a straight cross at the center of the face; it cannot reflect the angle differences between looking up, looking down, turning slightly left, or turning slightly right, so the pose cannot be made fully consistent with the composition template. Moreover, when confirming the shot, the user must look at the screen to find the shutter button, and thus cannot hold the pose and expression being imitated.
In view of the defects of the prior art, the embodiment of the invention provides a shooting scheme, which can generate two sets of five-sense organ positioning marks according to the face in the target template image and the face in the current image, automatically perform shooting processing when the two sets of five-sense organ positioning marks are matched, and improve the expressive force of a photo by enabling the angle between the face in the current image obtained by shooting and the face in the target template image to be consistent. The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an interaction scenario between a user 10 and a terminal 20 provided in an embodiment of the present application is shown. The terminal 20 may interact with the user 10. The terminal 20 may be a smart phone, a camera, a desktop computer, a tablet computer, a notebook computer, a digital assistant, a smart wearable device, or another type of physical device; the smart wearable device may include a smart bracelet, a smart watch, smart glasses, a smart helmet, and the like. Of course, the terminal 20 is not limited to an electronic device with a physical entity and may also be software running in an electronic device. Specifically, the terminal 20 may be, for example, a web page provided by a service provider such as WeChat or Weibo, or an application provided by such a service provider to the user.
The terminal 20 may include a camera, a display screen, a storage device, and a processor connected via a data bus. The display screen is used for displaying data such as self-timer images, and may be the touch screen of a mobile phone, tablet computer, or the like. The storage device is used for storing the program code and data of the photographing apparatus; it may be the memory of the terminal 20, or a storage device such as a smart media card, a secure digital card, or a flash memory card. The processor may be a single-core or multi-core processor.
Fig. 2 is a schematic flow chart of a photographing method provided in an embodiment of the present invention; the method may be implemented by the terminal 20. The present specification provides the method steps as described in the embodiment or the flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only one. When implemented in a real system or client product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment). Referring to fig. 2, the photographing method includes:
s201, determining a target template image, wherein the target template image comprises a first face image.
The embodiment of the invention provides a scheme for imitating a target template image to take a picture, and provides a target template for a user to refer to a shooting gesture and an expression, wherein the target template image can comprise a background image and a first face image, and the user imitates the first face image to adjust a shooting angle and the expression. In actual operation, the target template image may be an image automatically configured by application software, or may be an image determined by user selection. In one possible embodiment, the background automatically allocates a template image for the user to imitate photographing, and the template image is the target template image. Specifically, the background can collect the basic information of the user such as the gender, character, favorite shooting style and other personalized data, and configure the target template image for the user according to the user personalized data.
In another possible embodiment, a plurality of template images may be provided for selection by a user, and the template image selected by the user is used as the target template image. Specifically, the method comprises the following steps: displaying the template image; acquiring a selection operation of a user on a template image; and taking the template image corresponding to the selection operation as the target template image. For example, a plurality of template images are displayed on a screen, a touch operation of a user on the screen is received, and the template image corresponding to the touch operation is used as a target template image selected by the user.
S203, capturing a current image, wherein the current image comprises a second face image.
The current image is a real-time image of a shooting object acquired by a camera device, and the camera device can be a camera, a camera integrated with a smart phone and other equipment with a shooting function. The current image can comprise a background image and a second face image, and a user corresponding to the second face image imitates the first face image in the target template image to adjust shooting angles and expressions, so that an image similar to the target template image in expression effect can be obtained.
S205, determining a first marking curve pointing to the first face image, wherein the first marking curve is used for marking the angle of the face in the first face image in the target template image.
The imitation face image shooting can involve face angles and facial expressions, and the embodiment of the invention adopts a marking curve to mark the angles of the faces in the image. And particularly, the angle of the face in the first face image in the target template image is marked by adopting a first marking curve. Wherein, the method for determining the first marking curve may comprise: acquiring a first mark curve corresponding to a first face image in the target template image according to the mapping relation between the target template image and the first mark curve; or generating a first marking curve according to the first face image and the marking curve construction method.
The embodiment of the invention can generate the first marking curve in real time according to the first face image; or generating a first marking curve according to the first face image in advance, then establishing a mapping relation between the first marking curve and a target template image to which the first face image belongs, and then directly acquiring the first marking curve corresponding to the target template image according to the mapping relation. The first marker curve can be obtained by a marker curve construction method whether generated in real time or in advance, and the marker curve construction method is described in detail below.
Fig. 3 is a flowchart of a method for constructing a marker curve according to an embodiment of the present invention, please refer to fig. 3, wherein the method for constructing a marker curve may include:
s301, acquiring a target face image of a marking curve to be determined.
S303, analyzing the characteristic points of the target face image to determine a target characteristic point sequence of the target face image.
Specifically, a face feature point detection technology can be adopted to perform face recognition on the target face image and determine the positions of key areas of the face in it. Face feature point detection, also called face key point detection, positioning, or face alignment, refers to locating the key areas of a face, including the eyebrows, eyes, nose, mouth, face contour, and the like, given a face image. Face feature point detection methods fall roughly into three types: model-based methods such as ASM (Active Shape Model) and AAM (Active Appearance Model), cascaded-shape-regression methods such as CPR (Cascaded Pose Regression), and deep-learning-based methods. The embodiment of the invention does not limit the specific face feature point recognition method; any of these methods may be adopted.
In one possible embodiment, the step may include: carrying out face recognition on the target face image by adopting a face feature point detection algorithm to obtain a feature point set corresponding to a face in the target face image, wherein the feature point set comprises feature points corresponding to eyebrows, noses and mouths respectively; acquiring characteristic points corresponding to the tail ends of eyebrows, the middle points of eyebrows, the nose tips and the middle points of lips in the characteristic point set, and taking the acquired characteristic points as target characteristic points; transversely sequencing target characteristic points corresponding to the tail ends of the eyebrows and the middle points of the eyebrows to obtain a transverse curve characteristic point sequence, and longitudinally sequencing target characteristic points corresponding to the middle points of the nose tips and the lips to obtain a longitudinal curve characteristic point sequence; and taking the transverse curve characteristic point sequence and the longitudinal curve characteristic point sequence as the target characteristic point sequence.
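The selection and sorting of target feature points described above can be sketched as follows. This is a hypothetical Python fragment: the landmark names are illustrative placeholders, not keys from any specific face detector's output.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def build_target_sequences(landmarks: Dict[str, Point]) -> Tuple[List[Point], List[Point]]:
    """Pick the brow-end, brow-midpoint, nose-tip and lip-midpoint feature
    points named in the text, then sort the brow points transversely (by x)
    and the nose/lip points longitudinally (by y)."""
    transverse_names = ["left_brow_end", "left_brow_mid",
                        "right_brow_mid", "right_brow_end"]
    longitudinal_names = ["nose_tip", "lip_mid"]
    transverse = sorted((landmarks[n] for n in transverse_names), key=lambda p: p[0])
    longitudinal = sorted((landmarks[n] for n in longitudinal_names), key=lambda p: p[1])
    return transverse, longitudinal
```

The two returned sequences together form the target feature point sequence used in S305.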
S305, generating a marking curve of the target face image according to the target feature point sequence.
The target feature point sequence may include a first feature point sequence that may include feature points corresponding to eyebrows of a face in the target face image and a second feature point sequence that may include feature points corresponding to noses and mouths of the face in the target face image. The generating the marking curve of the target face image according to the target feature point sequence may include: generating a transverse curve according to the first characteristic point sequence; generating a longitudinal curve according to the second characteristic point sequence; and combining the transverse curve and the longitudinal curve to obtain the marking curve. In one possible embodiment, combining the transverse curve with the longitudinal curve may include: the longitudinal curve is extended to intersect the transverse curve.
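The "extend the longitudinal curve to intersect the transverse curve" step can be sketched in Python. This is a simplified assumption-laden illustration: the transverse (brow) curve is approximated locally by the mean y of its points, and the longitudinal curve is treated as the straight segment from nose tip to lip midpoint that gets extended upward to that height.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def extend_to_intersect(longitudinal: List[Point], transverse: List[Point]) -> List[Point]:
    """Extend the longitudinal curve (nose tip -> lip midpoint) upward so it
    meets the transverse brow curve, approximated here by the mean brow y.
    Returns the longitudinal sequence with the intersection point prepended."""
    (x1, y1), (x2, y2) = longitudinal[0], longitudinal[-1]
    brow_y = sum(p[1] for p in transverse) / len(transverse)
    if y2 == y1:  # degenerate: a horizontal segment cannot be extended vertically
        return longitudinal
    t = (brow_y - y1) / (y2 - y1)              # line parameter at the brow height
    cross = (x1 + t * (x2 - x1), brow_y)       # intersection with the brow level
    return [cross] + longitudinal
```

A fuller implementation would intersect against the actual brow polyline rather than its mean height.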
The generating a first marker curve from the first face image may include: performing feature point analysis on the first face image to determine a target feature point sequence of the first face image; and generating a marking curve of the first face image according to the target feature point sequence of the first face image.
S207, generating a second marking curve according to the second face image and a preset marking curve construction method, wherein the second marking curve is used for marking the angle of the face in the second face image in the current image.
The marking curve construction method here is the same as that described in step S205 and will not be repeated.
The generating a second marker curve according to the second face image and a preset marker curve construction method may include: performing feature point analysis on the second face image to determine a target feature point sequence of the second face image; and generating a marking curve of the second face image according to the target feature point sequence of the second face image.
S211, when the first marking curve and the second marking curve are matched, shooting processing is carried out.
Fig. 5 is a flowchart of a method for determining whether two curves match, and referring to fig. 5, determining whether a first marked curve and a second marked curve match may include:
S501, determining a first marking point set of the first marking curve and a second marking point set of the second marking curve, wherein the first marking points in the first marking point set are in one-to-one correspondence with the second marking points in the second marking point set.
S503, calculating the distance between the second mark point and the corresponding first mark point, and judging whether the distance is smaller than a preset distance threshold value.
S505, when the distance between each second mark point in the second mark point set and the corresponding first mark point is smaller than the distance threshold value, judging that the second mark curve is matched with the first mark curve.
Specifically, the first marking curve includes a transverse curve and a longitudinal curve, and the first marking point set may include the two end points and the midpoint of the transverse curve and the two end points and the midpoint of the longitudinal curve; likewise, the second marking curve includes a transverse curve and a longitudinal curve, and the second marking point set may include the two end points and the midpoint of each. When determining whether the first marking curve and the second marking curve match, the transverse curve of the first marking curve may be compared with the transverse curve of the second marking curve, and the longitudinal curve of the first with the longitudinal curve of the second; when both comparison results indicate a match, the two marking curves are determined to match.
Fig. 6 is a schematic diagram of determining whether two curves match according to an embodiment of the present invention, and an exemplary method for determining whether two curves match is described below.
As shown in fig. 6, the marking points of one curve include an end point A, an end point B, and a midpoint C, and the marking points of the other curve include an end point A', an end point B', and a midpoint C', where A' corresponds to A, B' corresponds to B, and C' corresponds to C. The distance d1 between A' and A, the distance d2 between B' and B, and the distance d3 between C' and C are calculated; each of d1, d2, and d3 is then compared with the distance threshold d. If d1, d2, and d3 are all smaller than d, the two curves are judged to match; otherwise, the two curves are judged not to match.
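The Fig. 6 check (all of d1, d2, d3 below the threshold d) is straightforward to express in Python. A minimal sketch, where each curve is summarized by its two endpoints and midpoint as described above:

```python
import math
from typing import Tuple

Point = Tuple[float, float]
# A curve summarized by its marking points: (endpoint_A, endpoint_B, midpoint_C)
CurveMarks = Tuple[Point, Point, Point]

def curves_match(a: CurveMarks, b: CurveMarks, d: float) -> bool:
    """Return True only when each of d1, d2, d3 (Euclidean distances between
    corresponding endpoints and midpoints) is smaller than the threshold d."""
    return all(math.dist(p, q) < d for p, q in zip(a, b))
```

In practice the same check would be run once for the transverse curves and once for the longitudinal curves, requiring both to pass.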
In practical application, the position of the first mark curve in the screen is unchanged, and the second mark curve changes along with the change of the face angle of the shooting object, so that the second mark curve can be gradually close to the first mark curve until the second mark curve is matched with the first mark curve by adjusting the face angle of the shooting object.
When the first marker curve and the second marker curve are matched, angle matching information can be generated, and the angle matching information can be displayed. The angle matching information is used for identifying that the angle of the face in the second face image in the current image is matched with the angle of the face in the first face image in the target template image. In a possible embodiment, when the first marker curve and the second marker curve are matched, the first marker curve and the second marker curve may be subjected to fusion processing, and a fusion processing result is displayed, where the fusion processing may be to combine the first marker curve and the second marker curve and/or change a display color. In another possible embodiment, when the first marking curve and the second marking curve are matched, a voice prompt is sent out to remind the user that the current shooting angle is basically consistent with the face angle in the target template image, and the current shooting angle is kept.
Preferably, S209 may be further included before S211.
S209, displaying the current image, the first marked curve and the second marked curve.
Specifically, the current image, the first marking curve and the second marking curve can be displayed in a view-finding frame, the first marking curve and the second marking curve are displayed on the current image in a superimposed mode, the second marking curve changes in real time along with the second face image in the current image, the first marking curve does not change, a user can conveniently adjust the shooting angle of the face by referring to the second marking curve until the second marking curve presented in real time is aligned with the first marking curve. When the present embodiment is applied to an electronic device having a screen, the viewfinder may be the screen of the electronic device.
In a possible embodiment, the target template image, the current image, the first marking curve and the second marking curve may also be displayed on a screen at the same time. The screen comprises a viewfinder and a reference frame; the reference frame may be disposed outside the viewfinder or superimposed on it, for example in the upper left, upper right, lower left or lower right corner of the viewfinder. The target template image can be displayed in the reference frame and the current image in the viewfinder, with the first marking curve and the second marking curve superimposed on the current image. The second marking curve is fitted to the face of the second face image in the current image, and when the angle of the corresponding face changes, the second marking curve follows, so the user can check in real time whether the face shooting angle is consistent with the target template image. The position of the first marking curve in the viewfinder can be determined as follows: superimpose the first marking curve on the target template image in the reference frame, fitted to the face of the first face image; acquire the size ratio of the reference frame to the viewfinder; enlarge the first marking curve according to this ratio; the enlarged first marking curve then falls into the viewfinder.
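The enlargement of the first marking curve from the reference frame into the viewfinder can be sketched as follows. This is a minimal illustration; the point format and the frame sizes are assumptions, not part of the embodiment:

```python
def scale_curve_to_viewfinder(curve_points, ref_size, view_size):
    """Scale marking-curve points from reference-frame coordinates
    into viewfinder coordinates.

    curve_points: list of (x, y) tuples in the reference frame.
    ref_size, view_size: (width, height) of the reference frame / viewfinder.
    """
    sx = view_size[0] / ref_size[0]  # horizontal size ratio
    sy = view_size[1] / ref_size[1]  # vertical size ratio
    return [(x * sx, y * sy) for x, y in curve_points]

# A curve drawn in a 180x240 reference frame, enlarged into a 720x960 viewfinder.
enlarged = scale_curve_to_viewfinder([(90, 60), (90, 120)], (180, 240), (720, 960))
# -> [(360.0, 240.0), (360.0, 480.0)]
```

In a real implementation, the same ratio-based mapping would be applied to every point of the curve before it is drawn over the live preview.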
In addition, a face wire frame can be superimposed on the current image. The face wire frame corresponds to the face of the first face image in the target template image and shows its approximate area, so during shooting the user can first adjust the head position by referring to the face wire frame and then fine-tune the face angle by referring to the first marking curve, which improves the speed of imitation shooting.
Fig. 4 is a flowchart of a shooting processing method provided in an embodiment of the present invention. Referring to fig. 4, the shooting processing may include:
S401, when the first marking curve matches the second marking curve, obtaining expression prompt information generated according to the expression of the face in the first face image;
S403, displaying the expression prompt information and starting photographing timing;
S405, recording the current image when the photographing timing reaches a preset duration.
Specifically, the expression prompt information may describe a facial expression extracted in advance from the first face image; it may be obtained through manual annotation or by performing expression recognition on the face in the first face image. The expression prompt information may include the expression type and the eye gaze direction. It can be presented by voice, so that the user does not need to change the current head posture and only adjusts the expression according to the voice prompt; this maintains consistency with the face angle and expression of the target template image to the maximum extent and is convenient to operate. While the expression prompt information is displayed, photographing timing starts; when the timing reaches the preset duration, the current image is recorded automatically, without the user manually pressing the shutter. The photographing timing can be presented by voice and/or image, for example playing a voice count while synchronously displaying timing pictures on the screen, and may take the form of a countdown.
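The photographing-timing step can be sketched as a simple countdown loop. The helper names (`capture_frame`, `show_tick`) are hypothetical; a real implementation would hook into the device's camera and UI/voice output:

```python
import time

def auto_capture(capture_frame, show_tick, duration_s=3, sleep=time.sleep):
    """Count down for duration_s seconds, announcing each tick,
    then record the current image automatically (no shutter press)."""
    for remaining in range(duration_s, 0, -1):
        show_tick(remaining)   # display and/or voice the countdown number
        sleep(1)
    return capture_frame()     # automatic capture when timing completes

# Example run with the real clock replaced by a no-op for brevity.
ticks = []
photo = auto_capture(lambda: "current image", ticks.append,
                     duration_s=3, sleep=lambda _: None)
```

Injecting the `sleep` function keeps the countdown testable without real delays; on a device it would default to the real clock.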
Fig. 7 is a schematic diagram of an application scenario of a shooting method provided by an embodiment of the present invention. Referring to fig. 7, fig. 7 illustrates self-photographing on a smartphone using the method of this embodiment: the current picture captured by the camera (i.e., an image of the user) is displayed on the screen, the target template image is displayed in the lower left corner of the screen, and a face wire frame, a second marking curve corresponding to the current image, and a first marking curve corresponding to the target template image are superimposed on the current picture. The user first refers to the target template image for preliminary face-angle adjustment so that the face in the current picture completely enters the face wire frame, and then fine-tunes the face angle according to the first marking curve until the second marking curve matches the first marking curve, completing the shooting-angle imitation. In the self-photographing process, the embodiment of the invention adds a function of imitating the target template image, automatically generating the face position and face-angle positioning from the target template image, so that the user can more conveniently imitate different self-photographing angles and expressions; this enhances the expressiveness of self-photographing and increases the user's confidence and activity in social networks.
The embodiment of the invention has the following beneficial effects. During shooting, a second marking curve is generated for the current image in real time and compared with the first marking curve of the target template image, and shooting processing is performed when they match. Because the second marking curve identifies the angle of the face in the second face image in the current image, and the first marking curve identifies the angle of the face in the first face image in the target template image, performing shooting processing at the moment of matching ensures that the angle of the face in the resulting photo is close to or consistent with that in the target template image, improving the imitation-shooting effect.
In addition, expression prompt information is generated according to the target template image, and the user can further adjust his or her expression accordingly, so that a photo with higher angle and expression similarity to the face in the target template image can be obtained. According to the embodiment of the invention, the photo is taken automatically after the user has adjusted the shooting angle and expression, which simplifies operation and avoids the change of shooting angle and expression that movement of the user's head or eyes would otherwise cause.
Fig. 8 is a flowchart illustrating interaction between a front end and a background to implement a shooting method according to an embodiment of the present invention. Referring to fig. 8, the shooting method of the embodiment of the present invention may be implemented by interaction between a front end and a background of an intelligent device, where the front end includes a screen and a loudspeaker, the screen is used for displaying contents such as images and information, and the loudspeaker is used for outputting voice.
The tasks executed by the background comprise: configuring a target template image, and determining the first marking curve and the expression prompt information corresponding to the first face in the target template image; generating a second marking curve according to the second face image in the current image; judging whether the second marking curve matches the first marking curve, and generating angle matching information when they match; starting photographing timing when display of the angle matching information is finished, and taking a photo when the photographing timing reaches a preset duration.
The tasks performed by the front end include: displaying the current image; acquiring a first marking curve and a second marking curve from the background and displaying the first marking curve and the second marking curve; obtaining angle matching information from a background and displaying the angle matching information; acquiring expression prompt information from the background and displaying the expression prompt information through a screen and/or a loudspeaker; displaying the photographing timing through a screen and/or a loudspeaker; and obtaining and displaying the photographed picture from the background.
Fig. 9 is a schematic diagram of the process of taking a photograph using the shooting method provided by the embodiment of the present invention. Referring to fig. 9, fig. 9 mainly shows the front-end content that directly interacts with the user, in four states from left to right: (1) the current image, the first marking curve and the second marking curve are displayed on the screen, the first marking curve in white and the second marking curve in yellow, and the user adjusts the face angle so that the second marking curve matches the first marking curve; (2) when the second marking curve matches the first marking curve, the two curves are combined and the curve display color changes to green, while the expression prompt message "please look at the lower right corner and keep smiling" is played by voice; (3) photographing timing runs, with countdown numbers superimposed on the current image; (4) the photographed photo is displayed.
According to the embodiment of the invention, two sets of facial-feature positioning marks (namely the first marking curve and the second marking curve) can be automatically generated from the target template image and the user's face. The user adjusts the face angle to align the two sets of marks and then adjusts the expression according to the expression prompt information, thereby achieving a face angle and expression consistent with the target template image. When the user's angle and expression are detected to be consistent with the target template image, the photographing countdown is triggered automatically, so that a more expressive self-photo can be obtained.
Fig. 10 is a schematic structural diagram of a photographing device according to an embodiment of the present invention. As shown in fig. 10, the photographing device includes a target template image determining module 1010, a current image obtaining module 1020, a first marking curve determining module 1030, a second marking curve generating module 1040, and a photographing processing module 1060. Wherein:
a target template image determination module 1010 configured to determine a target template image, where the target template image includes a first face image;
a current image acquisition module 1020 for capturing a current image, the current image comprising a second face image;
a first marker curve determining module 1030, configured to determine a first marker curve pointing to the first face image, where the first marker curve is used to identify the angle of the face in the first face image in the target template image;
a second marking curve generating module 1040, configured to generate a second marking curve according to the second face image and a preset marking curve construction method, where the second marking curve is used to identify an angle of a face in the second face image in the current image;
and a shooting processing module 1060, configured to perform shooting processing when the first marker curve and the second marker curve match.
A display module 1050 may also be included for displaying the current image, the first marker curve, and the second marker curve.
In a possible embodiment, the marker curve construction method includes: acquiring a target face image of a marking curve to be determined; performing feature point analysis on the target face image to determine a target feature point sequence of the target face image; and generating a marking curve of the target face image according to the target characteristic point sequence.
Specifically, the target feature point sequence includes a first feature point sequence including feature points corresponding to eyebrows of a face in the target face image and a second feature point sequence including feature points corresponding to noses and mouths of the face in the target face image. The generating a marking curve of the target face image according to the target feature point sequence comprises the following steps: generating a transverse curve according to the first characteristic point sequence; generating a longitudinal curve according to the second characteristic point sequence; and combining the transverse curve and the longitudinal curve to obtain the marking curve.
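A minimal sketch of this marking-curve construction, assuming the eyebrow and nose/mouth feature points have already been extracted by a face-landmark detector (the point grouping and coordinate format are illustrative assumptions, not the patent's exact data layout):

```python
def build_marking_curve(eyebrow_points, nose_mouth_points):
    """Combine a transverse curve (from the eyebrow feature points) and a
    longitudinal curve (from the nose and mouth feature points) into one
    marking curve for a face image.

    Each input is a list of (x, y) feature points; the "curve" here is
    simply the ordered polyline through the points.
    """
    # Transverse curve: order eyebrow points left-to-right across the face.
    transverse = sorted(eyebrow_points)
    # Longitudinal curve: order nose/mouth points top-to-bottom.
    longitudinal = sorted(nose_mouth_points, key=lambda p: p[1])
    return {"transverse": transverse, "longitudinal": longitudinal}

curve = build_marking_curve(
    eyebrow_points=[(40, 50), (20, 52), (60, 52)],
    nose_mouth_points=[(40, 80), (40, 110), (40, 95)],
)
```

A production version would fit smooth curves (e.g. splines) through the landmark points rather than a raw polyline, but the transverse/longitudinal split follows the two feature-point sequences described above.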
The first marker curve determination module 1030 includes a first determination unit and a second determination unit. The first determining unit is used for obtaining a first marking curve corresponding to a first face image in the target template image according to the mapping relation between the target template image and the first marking curve; the second determining unit is configured to generate a first marker curve according to the first face image and the marker curve construction method.
The shooting device further comprises a marking curve matching module. The marking curve matching module is used for: determining a first marking point set of the first marking curve and a second marking point set of the second marking curve, wherein the first marking points in the first marking point set are in one-to-one correspondence with the second marking points in the second marking point set; calculating the distance between the second mark point and the corresponding first mark point, and judging whether the distance is smaller than a preset distance threshold value or not; and when the distance between each second mark point in the second mark point set and the corresponding first mark point is smaller than the distance threshold value, judging that the second mark curve is matched with the first mark curve.
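The matching rule implemented by the marking curve matching module — every second marking point must lie within a preset distance threshold of its corresponding first marking point — can be sketched as follows (the point format and threshold value are assumptions):

```python
import math

def curves_match(first_points, second_points, threshold=5.0):
    """Return True when every second marking point is within `threshold`
    pixels of its one-to-one corresponding first marking point."""
    return all(
        math.dist(p1, p2) < threshold
        for p1, p2 in zip(first_points, second_points)
    )

aligned = curves_match([(0, 0), (10, 0)], [(1, 1), (10, 2)], threshold=5.0)
far_off = curves_match([(0, 0), (10, 0)], [(9, 9), (10, 2)], threshold=5.0)
```

Because `all(...)` short-circuits, the check stops at the first pair of points that exceeds the threshold, which suits a per-frame real-time comparison.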
The shooting processing module 1060 is further configured to: when the first mark curve is matched with the second mark curve, expression prompt information generated according to the expression of the face in the first face image is obtained; displaying the expression prompt information, and starting photographing timing; and when the photographing timing reaches the preset duration, recording the current image.
The shooting device further comprises an angle matching module. The angle matching module is used for: when the first mark curve and the second mark curve are matched, angle matching information is generated, wherein the angle matching information is used for identifying that the angle of the face in the second face image in the current image is matched with the angle of the face in the first face image in the target template image; and displaying the angle matching information.
The described photographing apparatus and method embodiments are based on the same inventive concept.
In the shooting process of the embodiment of the invention, the second marking curve is generated for the current image in real time and compared with the first marking curve of the target template image, and shooting processing is performed when they match, so the angle of the face in the shot photo is similar or identical to the angle of the face in the target template image, improving the imitation-shooting effect. Meanwhile, during shooting, the target template image provides the user with a reference for adjusting the face angle, and the face angle is analyzed and matched automatically, which greatly simplifies the imitation-shooting operation, reduces its difficulty, and provides a better user experience.
The embodiment of the invention provides a terminal, which comprises a processor and a memory, wherein at least one instruction, at least one section of program, a code set or an instruction set is stored in the memory, and the at least one instruction, the at least one section of program, the code set or the instruction set is loaded and executed by the processor to realize the shooting method provided by the embodiment of the method.
The memory may be used to store software programs and modules; the processor performs various functional applications and data processing by executing the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required for functions, and the like, while the data storage area may store data created according to the use of the terminal. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The embodiment of the invention also provides a schematic structural diagram of a terminal, as shown in fig. 11, which can be used for implementing the shooting method provided in the above embodiments. Specifically:
The client may include an RF (Radio Frequency) circuit 1110, a memory 1120 including one or more computer-readable storage media, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a WiFi (Wireless Fidelity) module 1170, a processor 1180 including one or more processing cores, and a power supply 1190. Those skilled in the art will appreciate that the client structure shown in fig. 11 does not limit the client, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
the RF circuit 1110 may be used for receiving and transmitting signals during messaging or a call. In particular, after receiving downlink information from a base station, it passes the information to one or more processors 1180 for processing; it also transmits uplink data to the base station. Typically, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 1110 may also communicate with networks and other clients via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 performs various functional applications and data processing by executing the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required for functions, and the like, while the data storage area may store data created according to the use of the client. In addition, the memory 1120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 1120 may also include a memory controller to provide the processor 1180 and the input unit 1130 with access to the memory 1120.
The input unit 1130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit 1130 may include a touch-sensitive surface 1131 and other input devices 1132. The touch-sensitive surface 1131, also referred to as a touch display screen or touch pad, may collect touch operations by a user on or near it (e.g., operations using a finger, stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Alternatively, the touch-sensitive surface 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends these to the processor 1180, and can receive and execute commands from the processor 1180. The touch-sensitive surface 1131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface 1131, the input unit 1130 may also include other input devices 1132, which may include but are not limited to one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys), a trackball, a mouse, and a joystick.
The display unit 1140 may be used to display information input by or provided to the user and the various graphical user interfaces of the client, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 1131 may overlay the display panel 1141; when the touch-sensitive surface 1131 detects a touch operation on or near it, the operation is passed to the processor 1180 to determine the type of touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of touch event. The touch-sensitive surface 1131 and the display panel 1141 may be two separate components implementing the input and output functions, but in some embodiments they may be integrated to implement the input and output functions.
The client may also include at least one sensor 1150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor, which may adjust the brightness of the display panel 1141 according to the brightness of ambient light, and a proximity sensor, which may turn off the display panel 1141 and/or the backlight when the client moves to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the client (such as horizontal/vertical screen switching, related games, magnetometer posture calibration), vibration-recognition functions (such as pedometer, tapping), and the like. Other sensors that may be configured on the client, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, are not described in detail here.
The audio circuit 1160, speaker 1161, and microphone 1162 may provide an audio interface between the user and the client. The audio circuit 1160 may transmit an electrical signal converted from received audio data to the speaker 1161, which converts it into a sound signal for output; conversely, the microphone 1162 converts collected sound signals into electrical signals, which the audio circuit 1160 receives and converts into audio data. The audio data are processed by the processor 1180 and then transmitted, for example, to another client via the RF circuit 1110, or output to the memory 1120 for further processing. The audio circuit 1160 may also include an earphone jack to provide communication between a peripheral earphone and the client.
WiFi belongs to a short-distance wireless transmission technology, and the client can help a user to send and receive e-mails, browse web pages, access streaming media and the like through the WiFi module 1170, so that wireless broadband Internet access is provided for the user. Although fig. 11 shows a WiFi module 1170, it is understood that it does not belong to the essential constitution of the client, and can be omitted entirely as required within the scope of not changing the essence of the invention.
The processor 1180 is a control center of the client, and connects various parts of the entire client using various interfaces and lines, and performs various functions and processes of the client by running or executing software programs and/or modules stored in the memory 1120, and calling data stored in the memory 1120, thereby performing overall monitoring of the client. Optionally, the processor 1180 may include one or more processing cores; preferably, the processor 1180 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1180.
The client also includes a power supply 1190 (e.g., a battery) for powering the various components, which may be logically connected to the processor 1180 via a power management system to perform charge, discharge, and power management functions. The power supply 1190 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the client may further include a camera, a bluetooth module, etc., which will not be described herein. In particular, in this embodiment, the display unit of the client is a touch screen display, and the client further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors.
The embodiment of the invention also provides a storage medium, which can be set in a terminal to store at least one instruction, at least one section of program, code set or instruction set related to a shooting method in the embodiment of the method, where the at least one instruction, the at least one section of program, the code set or instruction set is loaded and executed by the processor to implement the shooting method provided in the embodiment of the method.
Alternatively, in this embodiment, the storage medium may be located in at least one of a plurality of network clients of a computer network. Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
It should be noted that the sequence of the embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments. The foregoing description has been directed to specific embodiments of this specification; other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the device and server embodiments, which are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the corresponding parts of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (16)

1. A photographing method, comprising:
displaying a target template image and a captured current image, wherein the target template image comprises a first face image; the current image comprises a second face image; the second face image is a real-time image of a shooting object imitating a first face image in the target template image;
superposing and displaying a first marking curve of the target template image and a second marking curve of the current image in the current image; the first marking curve is used for marking the angle of the face in the first face image in the target template image; the second marking curve is used for marking the angle of the face in the second face image in the current image; wherein the position of the first marking curve in the current image is unchanged; the second marking curve changes along with the change of the face angle of the shooting object;
when the second marking curve matches the first marking curve, displaying expression prompt information and performing shooting processing; the expression prompt information is generated according to the expression of the face in the first face image.
2. The method of claim 1, wherein the screen includes a viewfinder frame and a reference frame, and the displaying the target template image and the captured current image comprises:
displaying the current image in the viewfinder frame;
displaying the target template image in the reference frame;
wherein the positional relationship between the viewfinder frame and the reference frame includes either of the following: the reference frame is arranged outside the viewfinder frame; or the reference frame is superimposed on the viewfinder frame.
3. The method of claim 2, wherein the superimposing and displaying the first marking curve of the target template image and the second marking curve of the current image on the current image comprises:
displaying the second marking curve fitted to the face of the second face image in the current image within the viewfinder frame;
superimposing and displaying the first marking curve on the target template image in the reference frame;
acquiring the size ratio of the reference frame to the viewfinder frame; and
enlarging the first marking curve according to the size ratio and displaying it in the viewfinder frame.
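The enlargement step in claim 3 can be sketched as a simple coordinate scaling. The following is a minimal illustration, not the patent's implementation: the function name, the (width, height) tuples for frame sizes, and the (x, y) point representation are all assumptions introduced here.

```python
# Hypothetical sketch of claim 3's enlargement step: scale the first marking
# curve's points by the viewfinder-to-reference size ratio so the curve drawn
# for the small reference frame can be displayed in the larger viewfinder frame.

def scale_curve(curve, reference_size, viewfinder_size):
    """Scale each (x, y) point by the per-axis viewfinder/reference ratio."""
    sx = viewfinder_size[0] / reference_size[0]
    sy = viewfinder_size[1] / reference_size[1]
    return [(x * sx, y * sy) for (x, y) in curve]

reference_curve = [(10.0, 20.0), (30.0, 40.0)]
enlarged = scale_curve(reference_curve,
                       reference_size=(100, 100),
                       viewfinder_size=(400, 400))
# each point is enlarged 4x along both axes
```

Scaling per axis (rather than by a single factor) keeps the curve aligned even when the two frames have different aspect ratios.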
4. The method of claim 1, further comprising:
determining a first marking curve corresponding to the first face image; and
generating a second marking curve according to the second face image and a preset marking curve construction method.
5. The method of claim 4, wherein the marking curve construction method comprises:
acquiring a target face image for which a marking curve is to be determined;
performing feature point analysis on the target face image to determine a target feature point sequence of the target face image; and
generating a marking curve of the target face image according to the target feature point sequence.
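The feature point analysis in claim 5 can be illustrated with a common 68-point facial landmark layout (as produced by detectors such as dlib). The patent names no particular detector or layout, so the index ranges and function name below are assumptions, not the claimed method:

```python
# A minimal sketch of claim 5's feature-point analysis, assuming a dlib-style
# 68-point landmark layout: indices 17-26 cover the eyebrows, 27-35 the nose,
# and 48-67 the mouth. How the landmarks themselves are detected is out of
# scope here; `landmarks` is just a list of 68 (x, y) points.

EYEBROW_IDX = list(range(17, 27))                            # first sequence
NOSE_MOUTH_IDX = list(range(27, 36)) + list(range(48, 68))   # second sequence

def target_feature_sequences(landmarks):
    """Split detected landmarks into the two sequences used for the curve."""
    first_seq = [landmarks[i] for i in EYEBROW_IDX]
    second_seq = [landmarks[i] for i in NOSE_MOUTH_IDX]
    return first_seq, second_seq
```

The two sequences then feed the transverse/longitudinal curve generation described in claim 7.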
6. The method of claim 4, wherein the determining the first marking curve corresponding to the first face image comprises:
acquiring the first marking curve corresponding to the first face image in the target template image according to a mapping relation between the target template image and the first marking curve; or
generating the first marking curve according to the first face image and the marking curve construction method.
7. The method of claim 5, wherein the target feature point sequence comprises a first feature point sequence and a second feature point sequence, the first feature point sequence comprising feature points corresponding to the eyebrows of the face in the target face image, and the second feature point sequence comprising feature points corresponding to the nose and mouth of the face in the target face image;
the generating the marking curve of the target face image according to the target feature point sequence comprises:
generating a transverse curve according to the first feature point sequence, and generating a longitudinal curve according to the second feature point sequence; and
combining the transverse curve and the longitudinal curve to obtain the marking curve;
wherein the combining comprises: extending the longitudinal curve to intersect the transverse curve.
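The combination step of claim 7 can be sketched by treating each curve as a polyline and extending the longitudinal polyline along its top segment until it reaches the transverse curve's height. Straight-line extrapolation and the polyline representation are assumptions here; the patent does not fix a curve model:

```python
# Hedged sketch of claim 7's combination step: extend the longitudinal
# polyline (nose/mouth points, top to bottom) upward along its first segment
# until it reaches the vertical level of the transverse (eyebrow) curve.

def extend_to_intersect(longitudinal, transverse_y):
    """Prepend an extrapolated point so the polyline reaches transverse_y."""
    (x0, y0), (x1, y1) = longitudinal[0], longitudinal[1]
    if y1 == y0:                          # degenerate segment: cannot extend
        return longitudinal
    t = (transverse_y - y0) / (y1 - y0)   # linear extrapolation parameter
    top = (x0 + t * (x1 - x0), transverse_y)
    return [top] + longitudinal

transverse = [(40.0, 100.0), (80.0, 95.0), (120.0, 100.0)]    # eyebrow line
longitudinal = [(80.0, 120.0), (80.0, 160.0), (80.0, 200.0)]  # nose-mouth line
marking_curve = {"transverse": transverse,
                 "longitudinal": extend_to_intersect(longitudinal, 100.0)}
```

Extending the longitudinal curve this way gives the two curves a shared intersection point, which makes the combined mark read as a single cross-shaped guide on the face.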
8. The method of claim 1, wherein the first marking curve comprises a transverse curve and a longitudinal curve, and the second marking curve comprises a transverse curve and a longitudinal curve; the method further comprising:
comparing the transverse curve of the first marking curve with the transverse curve of the second marking curve;
comparing the longitudinal curve of the first marking curve with the longitudinal curve of the second marking curve; and
if the transverse curve of the first marking curve matches the transverse curve of the second marking curve, and the longitudinal curve of the first marking curve matches the longitudinal curve of the second marking curve, determining that the first marking curve matches the second marking curve.
9. The method of claim 1, further comprising:
determining a first marking point set of the first marking curve and a second marking point set of the second marking curve, wherein the first marking points in the first marking point set correspond one-to-one to the second marking points in the second marking point set;
calculating the distance between each second marking point and its corresponding first marking point, and determining whether the distance is smaller than a preset distance threshold; and
when the distance between each second marking point in the second marking point set and its corresponding first marking point is smaller than the distance threshold, determining that the second marking curve matches the first marking curve.
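The matching rule of claim 9 reduces to an all-pairs distance check. A minimal sketch follows; the function name and the illustrative 10-pixel threshold are assumptions, not values from the patent:

```python
import math

# Sketch of claim 9's matching rule: the second marking curve matches the
# first when every corresponding point pair lies closer than a preset
# distance threshold. Points are (x, y) pairs in image coordinates.

def curves_match(first_points, second_points, threshold=10.0):
    """Return True iff every corresponding point pair is within threshold."""
    return all(
        math.dist(p1, p2) < threshold
        for p1, p2 in zip(first_points, second_points)
    )

first = [(0.0, 0.0), (10.0, 10.0)]
second = [(3.0, 4.0), (10.0, 16.0)]   # distances 5.0 and 6.0
curves_match(first, second)           # both under threshold, so a match
```

Requiring every pair (rather than, say, the average distance) to fall under the threshold means a single badly misaligned facial region is enough to reject the match.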
10. The method of claim 1, wherein the displaying the expression prompt information and performing shooting processing when the second marking curve matches the first marking curve comprises:
when the first marking curve matches the second marking curve, acquiring the expression prompt information according to the expression;
displaying the expression prompt information and starting a photographing timer; and
when the photographing timer reaches a preset duration, capturing the current image.
11. The method of claim 1, further comprising:
when the first marking curve matches the second marking curve, generating angle matching information indicating that the angle of the face in the second face image in the current image matches the angle of the face in the first face image in the target template image; and
displaying the angle matching information.
12. The method of claim 1, further comprising:
when the first marking curve matches the second marking curve, performing fusion processing on the first marking curve and the second marking curve; and
displaying the fusion processing result;
wherein the fusion processing comprises at least one of: merging the first marking curve and the second marking curve; and changing the color of the first marking curve and/or the second marking curve.
13. The method of claim 1, further comprising:
outputting a voice prompt when the first marking curve matches the second marking curve, the voice prompt reminding the shooting object that its current shooting angle is consistent with the face angle in the target template image and should be kept unchanged.
14. A photographing apparatus, comprising:
a display module configured to display a target template image and a captured current image, wherein the target template image comprises a first face image, the current image comprises a second face image, and the second face image is a real-time image of a shooting object imitating the first face image in the target template image;
the display module being further configured to superimpose and display a first marking curve of the target template image and a second marking curve of the current image on the current image, wherein the first marking curve marks the angle of the face in the first face image in the target template image, and the second marking curve marks the angle of the face in the second face image in the current image; the position of the first marking curve in the current image remains fixed, while the second marking curve changes as the face angle of the shooting object changes; and
a shooting processing module configured to display expression prompt information and perform shooting processing when the second marking curve matches the first marking curve, wherein the expression prompt information is generated according to the expression of the face in the first face image.
15. A terminal comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the photographing method of any one of claims 1-13.
16. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the photographing method of any one of claims 1-13.
CN202310312887.7A 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium Pending CN116320721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310312887.7A CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910809688.0A CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium
CN202310312887.7A CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910809688.0A Division CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN116320721A true CN116320721A (en) 2023-06-23

Family

ID=74741556

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310312887.7A Pending CN116320721A (en) 2019-08-29 2019-08-29 Shooting method, shooting device, terminal and storage medium
CN201910809688.0A Active CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910809688.0A Active CN112449098B (en) 2019-08-29 2019-08-29 Shooting method, device, terminal and storage medium

Country Status (1)

Country Link
CN (2) CN116320721A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727024B (en) * 2021-08-30 2023-07-25 北京达佳互联信息技术有限公司 Method, device, electronic equipment and storage medium for generating multimedia information
WO2024079893A1 (en) * 2022-10-14 2024-04-18 日本電気株式会社 Information processing system, information processing method, and recording medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125396B (en) * 2014-06-24 2018-06-08 小米科技有限责任公司 Image capturing method and device
CN105407285A (en) * 2015-12-01 2016-03-16 小米科技有限责任公司 Photographing control method and device
CN106210526A (en) * 2016-07-29 2016-12-07 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108229369B (en) * 2017-12-28 2020-06-02 Oppo广东移动通信有限公司 Image shooting method and device, storage medium and electronic equipment
CN108062404A (en) * 2017-12-28 2018-05-22 奇酷互联网络科技(深圳)有限公司 Processing method, device, readable storage medium storing program for executing and the terminal of facial image
CN108769537A (en) * 2018-07-25 2018-11-06 珠海格力电器股份有限公司 A kind of photographic method, device, terminal and readable storage medium storing program for executing
CN109218615A (en) * 2018-09-27 2019-01-15 百度在线网络技术(北京)有限公司 Image taking householder method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN112449098B (en) 2023-04-07
CN112449098A (en) 2021-03-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40087289

Country of ref document: HK