CN112351185A - Photographing method and mobile terminal - Google Patents

Info

Publication number
CN112351185A
CN112351185A (application CN201910727319.7A)
Authority
CN
China
Prior art keywords
image
display area
scene
shooting
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910727319.7A
Other languages
Chinese (zh)
Inventor
戴同武
勾军委
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910727319.7A
Priority to PCT/CN2020/105144 (WO2021023059A1)
Publication of CN112351185A
Legal status: Pending

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N23/62 — Control of parameters via user interfaces
    • H04N23/63 — Control of cameras or camera modules by using electronic viewfinders
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 — GUI interaction based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484 — GUI interaction for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 — Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/0487 — GUI interaction using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 — GUI interaction using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The application belongs to the technical field of terminals, and particularly relates to a photographing method based on artificial intelligence (AI) and a mobile terminal. The photographing method comprises the following steps: acquiring a first image corresponding to a first display area; identifying target features of the photographic subject in the first image and scene features of the scene where the subject is located; acquiring a posture image to be recommended according to the target features and the scene features, and displaying the posture image in a second display area to prompt the subject to adjust its posture according to the displayed image; and photographing the subject to obtain a photographed image. Because the posture image is obtained by combining the subject's target features and scene features, the accuracy and effectiveness of posture-image acquisition are improved. Displaying the posture image in the second display area makes it convenient for the subject to adjust its posture, or for the photographer to guide the subject in doing so, which improves the shooting effect.

Description

Photographing method and mobile terminal
Technical Field
The application belongs to the technical field of terminals, and particularly relates to a photographing method based on an Artificial Intelligence (AI) technology and a mobile terminal.
Background
With the continuous development of photographing technology and the wide popularization of mobile terminals, the photographing function of mobile terminals is widely used: more and more users take photos with mobile phones, tablet computers, and other mobile terminals, which greatly improves the convenience of photographing.
The existing mobile terminal can identify the features of a photographic subject and/or of the scene where the subject is located through intelligent recognition technology, and can then automatically adjust camera parameters such as light and shadow brightness and exposure according to those features, so as to obtain an image with a good imaging effect.
However, adjusting camera parameters according to the recognized features can only beautify the subject and/or the scene; it cannot change the subject's shooting position or shooting posture. When the user lacks photographic skill, it is therefore often difficult to obtain an image that satisfies the subject, which affects the shooting effect.
Disclosure of Invention
The embodiments of the present application provide a photographing method and a mobile terminal that can recommend a standing position and/or a shooting posture during photographing, thereby improving the shooting effect of the image.
In a first aspect, an embodiment of the present application provides a photographing method, including:
acquiring a first image corresponding to a first display area;
identifying target characteristics of a shooting object in the first image and scene characteristics of a scene where the shooting object is located;
acquiring a posture image to be recommended according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located, and displaying the posture image in a second display area to prompt the shooting object to perform posture adjustment according to the posture image displayed in the second display area;
and photographing the photographed object to obtain a photographed image.
It should be noted that the first image may be a first frame preview image previewed by the first display area, or an nth frame preview image after the first frame, or a first frame image acquired by a camera, or an nth frame image after the first frame, where N is an integer greater than 1; the characteristics of the shooting object include but are not limited to facial characteristics, body characteristics and the like of the shooting object; the pose image to be recommended may be obtained locally or from a network.
It should be understood that the identifying the target feature of the photographic subject in the first image and the scene feature of the scene in which the photographic subject is located may be identifying the target feature of the photographic subject in the first image and the scene feature of the scene in which the photographic subject is located by using an AI identification technology and an object detection technology.
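The four steps of the first aspect can be sketched as a small pipeline. Everything below is illustrative only: the callables stand in for hypothetical terminal components (camera preview, AI recognizer, recommender, second-area display), none of which are specified by the application.

```python
def photograph_with_pose_recommendation(preview, recognize, recommend, show, capture):
    """Sketch of the claimed method; every callable is a hypothetical
    stand-in for a terminal component."""
    first_image = preview()                 # image corresponding to the first display area
    target, scene = recognize(first_image)  # target features + scene features
    pose_image = recommend(target, scene)   # posture image to be recommended
    show(pose_image)                        # display it in the second display area
    return capture()                        # photograph the subject after adjustment
```

With stub components, the recognized target and scene features flow into the recommender, the recommended posture image is shown in the second display area, and the captured image is returned.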
In a possible implementation manner of the first aspect, the obtaining an attitude image to be recommended according to a target feature of the photographic object and a scene feature of a scene where the photographic object is located includes:
acquiring at least one sample image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and selecting a preset number of target sample images from the sample images, and determining the selected target sample images as the attitude images to be recommended.
It should be understood that, when selecting the target sample images from the sample images, the personalized features of the photographic subject may also be taken into account. These personalized features may be shooting preferences preset by the subject, such as a preferred shooting action, or shooting preferences obtained by analyzing the subject's behavior data. Selecting target sample images with the subject's personalized features helps improve the subject's satisfaction with the posture image and the user experience.
Optionally, the selecting a preset number of target sample images from the sample images includes:
and acquiring a recommended value of each sample image, and selecting the first N sample images with high recommended values as target sample images, wherein N is an integer greater than or equal to 1.
It should be understood that the recommendation value of a sample image may be determined according to the number of times the image has been recommended within a preset time, the number of times users have selected it within a preset time, or the user's personalized settings (for example, a favorite tag set by the user). Recommending posture images according to the recommendation value helps improve recommendation accuracy and efficiency.
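A minimal sketch of the top-N selection described above, assuming each sample image carries a precomputed recommendation score (the `score` field is illustrative, not from the application):

```python
def select_top_n(sample_images, n):
    """Return the n sample images with the highest recommendation value.

    The score might come from historical recommendation counts, user
    selection counts, or preference tags; here it is just a number.
    """
    ranked = sorted(sample_images, key=lambda s: s["score"], reverse=True)
    return ranked[:n]
```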
It should be noted that, after the preset number of target sample images are selected, they can be displayed in sequence in the second display area as posture images to be recommended, so that the subject or the photographer can select, from the dynamically displayed images, the target posture image to use for the current shot. The selected target posture image can then be displayed persistently in the second display area as a posture reference for the current shot, which improves satisfaction with the shot and the user experience.
For example, the display order of the posture images in the second display area may be determined according to each image's recommendation value: posture images with higher recommendation values are displayed first, and those with lower values later.
In a possible implementation manner of the first aspect, when a photographic subject exists in the first image, the obtaining at least one sample image according to a target feature of the photographic subject and a scene feature of a scene where the photographic subject is located includes:
determining a shooting category corresponding to the first image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and acquiring at least one sample image corresponding to the shooting category.
It should be noted that the shooting categories may be obtained by classification according to scene features and target features, and multiple sample images may be provided under each shooting category, that is, multiple sample images whose scene features and target features match those of the category. Organizing sample images by shooting category can greatly improve the accuracy and efficiency of determining the sample images and improve the user experience.
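The category lookup might be implemented as a two-stage index, mapping a (scene feature, target feature) pair to a shooting category and each category to its pre-stored sample images. All names and entries below are invented for illustration:

```python
# Hypothetical indexes; real feature labels would come from the AI recognizer.
CATEGORY_INDEX = {
    ("beach", "full_body"): "beach_portrait",
    ("indoor", "face"): "indoor_headshot",
}
SAMPLE_LIBRARY = {
    "beach_portrait": ["beach_pose_1.jpg", "beach_pose_2.jpg"],
    "indoor_headshot": ["indoor_pose_1.jpg"],
}

def samples_for(scene_feature, target_feature):
    """Look up the shooting category, then return its sample images."""
    category = CATEGORY_INDEX.get((scene_feature, target_feature))
    return SAMPLE_LIBRARY.get(category, [])
```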
In a possible implementation manner of the first aspect, when there are multiple photographic subjects in the first image, the identifying a target feature of the photographic subject in the first image includes:
identifying a target feature of each shooting object in the first image;
correspondingly, the obtaining at least one sample image according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located includes:
determining the group characteristics of the plurality of photographic objects in the first image according to the target characteristics of each photographic object;
and acquiring at least one sample image according to the group characteristics and the scene characteristics.
It should be understood that, if the photographing scene is a multi-person photographing scene, the target feature of each target object in the first image may be identified, and the group features of the plurality of photographing objects in the first image may be determined according to the target feature of each photographing object, where the group features may include the total number of the photographing objects, the proportion of different facial features, and/or the proportion of different physical features.
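Aggregating per-subject features into group features could look like the sketch below, which computes the total number of subjects and the proportion of each body-type label (the field names are illustrative, not the application's):

```python
from collections import Counter

def group_features(subjects):
    """Total subject count plus the proportion of each body-type label."""
    total = len(subjects)
    counts = Counter(s["body_type"] for s in subjects)
    return {
        "total": total,
        "body_type_ratio": {label: n / total for label, n in counts.items()},
    }
```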
Optionally, the acquiring at least one sample image according to the group feature and the scene feature includes:
and acquiring at least one sample image with the matching degree of the group characteristics and the scene characteristics larger than a first threshold value from a pre-stored sample image library according to the group characteristics and the scene characteristics.
It should be noted that, for a multi-person photographing scene, the sample image library stores multiple sample images, each marked with its corresponding scene feature and group feature. After the group feature and scene feature corresponding to the first image are obtained, the matching degree between the first image's group feature and each sample image's group feature, and the matching degree between the first image's scene feature and each sample image's scene feature, may be calculated separately. The matching degree between the first image and each sample image is then determined from the calculated scene-feature and group-feature matching degrees, and at least one sample image whose matching degree is greater than the first threshold, or the first M sample images with the highest matching degrees, may be obtained, where M is an integer greater than or equal to 1; this is not limited in this application.
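One way to realize the matching described above is a weighted combination of a scene-feature similarity and a group-feature similarity, filtered by the first threshold. The similarity measures and weights below are toy assumptions, not the application's:

```python
def match_degree(query, sample, w_scene=0.5, w_group=0.5):
    """Toy matching degree in [0, 1]: exact-match scene similarity plus
    a group similarity based on relative subject-count difference."""
    scene_sim = 1.0 if query["scene"] == sample["scene"] else 0.0
    qn, sn = query["count"], sample["count"]
    group_sim = 1.0 - abs(qn - sn) / max(qn, sn)
    return w_scene * scene_sim + w_group * group_sim

def filter_samples(query, library, first_threshold):
    """Keep the sample images whose matching degree exceeds the threshold."""
    return [s for s in library if match_degree(query, s) > first_threshold]
```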
It should be understood that, in a multi-person photographing scene, the posture adjustment may further include adjusting the standing positions of the subjects. That is, during posture adjustment, the height feature, body-type feature, and/or face-type feature of each subject may additionally be obtained through AI recognition and image segmentation, and a standing-position adjustment scheme between the subjects determined from those features. Adjusting the subjects' standing positions according to this scheme prevents occlusion in the multi-person scene, which helps improve the shooting effect and the user experience.
In a possible implementation manner of the first aspect, the photographing the photographic object to obtain a photographed image includes:
and after determining that the shooting object carries out posture adjustment according to the posture image displayed in the second display area, shooting the shooting object to obtain a shooting image.
It should be noted that, in order to ensure the shooting effect, improve satisfaction with the shot, and improve the user experience, the embodiment of the present application may photograph the subject only after determining that the subject has adjusted its shooting posture according to the posture image displayed in the second display area.
Optionally, after determining that the gesture of the photographic object is adjusted according to the gesture image displayed in the second display area, photographing the photographic object to obtain a photographic image, including:
acquiring a second image corresponding to the first display area;
and when the matching degree of the second image and the posture image displayed in the second display area is greater than a second threshold value, determining that the posture adjustment of the shot object is finished, and shooting the shot object to obtain a shot image.
It should be understood that, during the subject's posture adjustment, a second image corresponding to the first display area may be obtained in real time, for example the image currently previewed in the first display area, or an image collected by the camera that is about to be displayed there. Whether the subject has finished adjusting can be determined by comparing the second image with the posture image displayed in the second display area; if the adjustment is complete, the subject may be photographed. If not, the difference between the subject's current posture and the posture in the posture image can be identified from the comparison, and the subject prompted to adjust further, for example by a voice prompt, a text prompt, or an on-screen action cue. This further confirmation and correction of the shooting posture improves the shooting effect and the user experience.
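The "second threshold" check could, for instance, compare body keypoints extracted from the second image against those of the displayed posture image. Keypoint extraction itself (by some body-pose model) is assumed to exist; the distance-based similarity below is an invented illustration, not the application's method:

```python
import math

def pose_similarity(preview_kps, reference_kps):
    """Similarity of two equally sized lists of normalized (x, y) keypoints:
    1.0 for identical poses, decreasing toward 0 as they diverge."""
    dists = [math.dist(p, r) for p, r in zip(preview_kps, reference_kps)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def posture_adjusted(preview_kps, reference_kps, second_threshold=0.9):
    """True when the live pose matches the posture image closely enough
    to take the shot."""
    return pose_similarity(preview_kps, reference_kps) > second_threshold
```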
In one possible implementation manner of the first aspect, after the displaying the gesture image in the second display area, the method further includes:
and acquiring prompt voice and/or prompt characters corresponding to the attitude image, and prompting the shooting object to perform attitude adjustment through the prompt voice and/or the prompt characters.
It should be noted that, after the posture image is obtained, the human body posture of the preset object in the posture image can be recognized and detected through AI recognition and target detection, and a prompt voice and/or prompt text corresponding to the posture image generated from the recognition and detection results. The generated prompt voice and/or text mainly describes the specific posture of the preset object in the posture image. Therefore, when the posture image is displayed in the second display area, the corresponding prompt voice can be played, or the prompt text displayed in the second or first display area, to assist in prompting the subject to adjust its posture. Combining the posture image with the prompt voice and/or text improves the speed and efficiency of posture adjustment.
In a possible implementation manner of the first aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a first display screen and a second display screen, a display area of the first display screen is a first display area, a display area of the second display screen is a second display area, the first display area faces a photographer and the second display area faces a photographing object when photographing.
According to the embodiment of the application, the attitude image is displayed in the second display area facing the shooting object, so that the shooting object can directly check the attitude image in the shooting process, the shooting object can conveniently directly perform attitude adjustment according to the checked attitude image, and the attitude adjustment efficiency is improved.
It should be noted that, when the mobile terminal includes a first display screen and a second display screen, the posture image can be displayed simultaneously in the first display area (viewable by the photographer) and the second display area (viewable by the subject). In addition to the subject adjusting its posture directly according to the posture image, the photographer can then also guide the subject according to the image shown in the first display area, or track the subject's adjustment in real time, which improves adjustment efficiency, the shooting effect, and the user experience.
In another possible implementation manner of the first aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, a display area of the display screen is a first display area, the second display area is a partial display area of the first display area, and when photographing, the first display area and the second display area face a photographer or a photographic subject.
In a possible implementation manner of the first aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, the first display area and the second display area are located in different display areas of the display screen, and when photographing, the first display area and the second display area face a photographer or a photographic object.
It should be noted that, during the photographing process, the contents displayed in the first display area and the second display area may also be interchanged, for example, the gesture image displayed in the second display area may be switched to be displayed in the first display area, and the first image displayed in the first display area may be switched to be displayed in the second display area, so as to facilitate the photographer and/or the subject to clearly view the first image or the gesture image.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
the image acquisition module is used for acquiring a first image corresponding to the first display area;
the characteristic identification module is used for identifying the target characteristic of the shooting object in the first image and the scene characteristic of the scene where the shooting object is located;
the attitude image recommendation module is used for acquiring an attitude image to be recommended according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located, and displaying the attitude image in a second display area to prompt the shooting object to perform attitude adjustment according to the attitude image displayed in the second display area;
and the photographing module is used for photographing the photographing object to obtain a photographed image.
In a possible implementation manner of the second aspect, the pose image recommendation module includes:
the sample image acquisition unit is used for acquiring at least one sample image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and the attitude image determining unit is used for selecting a preset number of target sample images from the sample images and determining the selected target sample images as the attitude images to be recommended.
Optionally, the pose image determining unit is further configured to obtain a recommended value of each sample image, and select the top N sample images with high recommended values as target sample images, where N is an integer greater than or equal to 1.
In a possible implementation manner of the second aspect, when there is one photographic subject in the first image, the sample image acquiring unit includes:
the shooting category determining subunit is configured to determine a shooting category corresponding to the first image according to the target feature of the shooting object and the scene feature of the scene where the shooting object is located;
and the sample image acquisition first subunit is used for acquiring at least one sample image corresponding to the shooting category.
In another possible implementation manner of the second aspect, when a plurality of photographic subjects exist in the first image, the feature identification module is further configured to identify a target feature of each photographic subject in the first image;
correspondingly, the sample image obtaining unit further includes:
a group feature determination subunit configured to determine a group feature of the plurality of photographic subjects in the first image according to a target feature of each photographic subject;
and the sample image acquisition second subunit is used for acquiring at least one sample image according to the group characteristics and the scene characteristics.
Optionally, the sample image obtaining second subunit is specifically configured to obtain, according to the group feature and the scene feature, at least one sample image whose matching degree with the group feature and the scene feature is greater than a first threshold from a pre-stored sample image library.
In a possible implementation manner of the second aspect, the photographing module includes:
and the attitude adjustment determining unit is used for photographing the shooting object to obtain a photographed image after determining that the shooting object performs attitude adjustment according to the attitude image displayed in the second display area.
Optionally, the posture adjustment determining unit includes:
an image acquisition subunit, configured to acquire a second image corresponding to the first display area;
and the photographing subunit is configured to determine that the posture adjustment of the photographic object is completed when the matching degree between the second image and the posture image displayed in the second display area is greater than a second threshold, and photograph the photographic object to obtain a photographed image.
Optionally, the photographing apparatus further includes:
and the prompting module is used for acquiring prompting voice and/or prompting characters corresponding to the attitude image and prompting the shooting object to perform attitude adjustment through the prompting voice and/or the prompting characters.
In a possible implementation manner of the second aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a first display screen and a second display screen, a display area of the first display screen is a first display area, a display area of the second display screen is a second display area, the first display area faces a photographer and the second display area faces a photographic object during photographing.
In another possible implementation manner of the second aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, a display area of the display screen is a first display area, the second display area is a partial display area of the first display area, and when photographing, the first display area and the second display area face a photographer or a photographic subject.
In a possible implementation manner of the second aspect, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, the first display area and the second display area are located in different display areas of the display screen, and when photographing, the first display area and the second display area face a photographer or a photographing object.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a display screen, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program so that the terminal device implements the photographing method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the photographing method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the photographing method according to any one of the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
in the embodiment of the application, a first image corresponding to a first display area is obtained; identifying target characteristics of a shooting object in a first image and scene characteristics of a scene where the shooting object is located; acquiring a posture image to be recommended according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located, and displaying the posture image in a second display area to prompt the shooting object to perform posture adjustment according to the posture image displayed in the second display area; and photographing the photographed object to obtain a photographed image. According to the embodiment of the application, the attitude image is obtained by combining the target characteristic and the scene characteristic of the shooting object, the accuracy and the effectiveness of the attitude image obtaining can be improved, the attitude image is displayed in the second display area, so that the shooting object can be conveniently and directly subjected to attitude adjustment according to the displayed attitude image or a shooting person can conveniently guide the shooting object to be subjected to attitude adjustment according to the displayed attitude image, the image shooting effect is effectively improved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a mobile phone to which a photographing method according to an embodiment of the present application is applied;
fig. 2 is an exemplary diagram of a single-screen mobile phone to which the photographing method provided in an embodiment of the present application is applied;
fig. 3 is an exemplary diagram of a folding screen mobile phone to which the photographing method provided in an embodiment of the present application is applied;
FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the present application;
fig. 5 is a schematic diagram of previewing a first image by a photographing method according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a posture image displayed in a second display area by a photographing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a photographing method according to another embodiment of the present application displaying a posture image in a second display area;
fig. 8 is a schematic diagram of a photographing method according to another embodiment of the present application displaying a posture image in a second display area;
FIG. 9 is a schematic diagram of an application scenario provided in another embodiment of the present application;
FIG. 10 is a schematic diagram of an application scenario provided in another embodiment of the present application;
fig. 11 is a schematic diagram illustrating a photographing method determining a position adjustment according to an embodiment of the present application;
fig. 12 is a schematic diagram illustrating a photographing method according to an embodiment of the present application instructing a station position adjustment through image indication;
fig. 13 is a schematic flowchart of a photographing method according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The photographing method provided by the embodiment of the application can be applied to mobile terminals such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, Personal Digital Assistants (PDAs), and the like, and the embodiment of the application does not limit the specific types of the mobile terminals at all.
Take a mobile phone as an example of the mobile terminal. Fig. 1 is a block diagram illustrating a partial structure of a mobile phone according to an embodiment of the present application. Referring to fig. 1, the mobile phone includes: a memory 110, an input unit 120, a display unit 130, a sensor 140, an audio circuit 150, a processor 160, a power supply 170, and a camera 180. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 1 does not constitute a limitation; the mobile phone may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 1:
the memory 110 may be used to store software programs and modules, and the processor 160 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 110. The memory 110 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 110 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 120 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone 100. Specifically, the input unit 120 may include a touch panel 121 and other input devices 122. The touch panel 121, also called a touch screen, may collect a touch operation performed by a user on or near it (e.g., an operation performed by the user on or near the touch panel 121 using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a preset program. Optionally, the touch panel 121 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 160, and it can also receive and execute commands sent by the processor 160. In addition, the touch panel 121 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 121, the input unit 120 may include other input devices 122, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 130 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 130 may include a Display panel 131, and optionally, the Display panel 131 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 121 can cover the display panel 131, and when the touch panel 121 detects a touch operation on or near the touch panel 121, the touch operation is transmitted to the processor 160 to determine the type of the touch event, and then the processor 160 provides a corresponding visual output on the display panel 131 according to the type of the touch event. Although in fig. 1, the touch panel 121 and the display panel 131 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 121 and the display panel 131 may be integrated to implement the input and output functions of the mobile phone.
In some embodiments, the display unit 130 may include 1 or N display screens, N being a positive integer greater than 1.
In some embodiments, when the display panel is made of OLED, AMOLED, FLED, or the like, the display screen may be bent. Here, "may be bent" means that the display screen can be bent at any position to any angle along any axis and held at that angle; for example, the display screen may be folded left and right, or up and down, from its middle portion. In the embodiments of the present application, a display screen that can be bent is referred to as a folding screen. The folding screen may be a single screen, or a display screen formed by combining multiple screens, which is not limited herein. The display screen may also be a flexible screen, which is highly flexible and bendable, can provide the user with a new interaction mode based on its bendable characteristic, and can meet more requirements of the user for a folding-screen mobile phone. For a mobile phone configured with a folding screen, the folding screen can be switched at any time between a small screen in the folded state and a large screen in the unfolded state.
In some embodiments, the display unit 130 may include a main display screen and a secondary display screen, the two being parallel and operating independently, with the main display screen disposed on one side of the mobile phone and the secondary display screen on the other side. It can be understood that, in general, the main display screen and the secondary display screen are two parallel surfaces; when they are in this parallel relationship, the included angle between the secondary display screen and the horizontal ground is equal or complementary to the included angle between the main display screen and the horizontal plane. The dual-screen mobile phone can detect the included angle between its main display screen and the horizontal plane by means of a gravity sensor or the like. Understandably, the main display screen in a dual-screen mobile phone is the primary display device and the most frequently used screen, while the secondary display screen serves only as an auxiliary screen with a lower use frequency. However, the main display screen and the secondary display screen are not strictly distinguished: a user may set one screen as the main display screen (or the secondary display screen) in different use scenarios, whereupon the other screen is automatically set as the secondary display screen (or the main display screen).
The handset 100 may also include at least one sensor 140, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 131 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 131 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 150, the speaker 151, and the microphone 152 may provide an audio interface between the user and the mobile phone. The audio circuit 150 may transmit the electrical signal converted from received audio data to the speaker 151, which converts the electrical signal into a sound signal for output; on the other hand, the microphone 152 converts a collected sound signal into an electrical signal, which is received by the audio circuit 150 and converted into audio data; the audio data is then output to the processor 160 for processing and sent, for example, to another mobile phone, or output to the memory 110 for further processing.
The processor 160 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 110 and calling data stored in the memory 110, thereby performing overall monitoring of the mobile phone. Alternatively, processor 160 may include one or more processing units; preferably, the processor 160 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 160.
The mobile phone 100 also includes a power supply 170 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 160 via a power management system, so that the power management system manages charging, discharging, and power consumption.
The handset 100 may also include a camera 180. Optionally, the position of the camera 180 on the mobile phone 100 may be front-end or rear-end, which is not limited in this embodiment of the application.
The camera 180 may capture an optical image of the photographed object, convert the optical image into an electrical signal, convert the electrical signal into a digital signal through analog-to-digital conversion, process the digital signal through a digital signal processor (DSP), and send the processed digital signal to the processor 160, which finally converts it into an image that can be viewed on the display unit 130.
Optionally, the mobile phone 100 may include a single camera, a dual camera, or a triple camera, which is not limited in this embodiment.
For example, the mobile phone 100 may include three cameras: one main camera, one wide-angle camera, and one telephoto camera.
Optionally, when the mobile phone 100 includes a plurality of cameras, the plurality of cameras may be all front-mounted, all rear-mounted, or a part of the cameras front-mounted and another part of the cameras rear-mounted, which is not limited in this embodiment of the present application.
In addition, although not shown, the mobile phone 100 may further include a bluetooth module or the like, which is not described herein.
In order to describe the technical solution of the present application more clearly, the following will explain in detail the photographing method provided in the embodiment of the present application with reference to the drawings and application scenarios, and the photographing method can be applied to the mobile phone 100 described above by way of example and not limitation.
Scene one
This scene may be a single-person photographing scene. The photographing method in this scene may be applied to a single-screen mobile phone, a dual-screen mobile phone, or a folding-screen mobile phone, where the single-screen mobile phone may also be the single screen formed after a folding-screen mobile phone is unfolded, and the single-screen, dual-screen, or folding-screen mobile phone includes a first display area 20 and a second display area 21.
For example, when the photographing method is applied to a single-screen mobile phone as shown in fig. 2, the display screen of the single-screen mobile phone may include a first display area 20 for previewing a first image and a second display area 21 for displaying a posture image, and both the first display area 20 and the second display area 21 face a photographer or both face a subject to be photographed.
It should be understood that the positions and sizes of the first display area 20 and the second display area 21 shown in fig. 2 are only schematic illustrations and should not be construed as limiting the embodiments of the present application. In the embodiments of the present application, the first display area 20 and the second display area 21 may be located at any positions in the display screen of the single-screen mobile phone. For example, the first display area 20 and the second display area 21 may be located in different display areas of the display screen, that is, the first display area 20 and the second display area 21 do not overlap; for another example, the first display area 20 may be the entire display area of the display screen of the single-screen mobile phone and the second display area 21 may be a partial display area of the first display area 20, that is, the first display area 20 and the second display area 21 may overlap.
It should be understood that during the photographing process, the contents displayed in the first display area 20 and the second display area 21 may also be interchanged, for example, the posture image displayed in the second display area 21 may be switched to be displayed in the first display area 20, and the first image displayed in the first display area 20 may be switched to be displayed in the second display area 21, so as to facilitate the photographer and/or the subject to clearly view the first image or the posture image.
For example, when the photographing method is applied to a dual-screen mobile phone, the dual-screen mobile phone may have the display area of the main display screen as the first display area 20 and the display area of the sub display screen as the second display area 21. When taking a picture, the first display area 20 can face the photographer to be used as an operation end of the photographer, and the second display area 21 can face the shooting object to be used for the shooting object to view the attitude image, so that the shooting object can be conveniently and directly adjusted in attitude according to the attitude image displayed in the second display area 21, and the attitude adjustment efficiency is improved.
For example, when the photographing method is applied to a folding-screen mobile phone as shown in fig. 3, the folding screen may be an integrated flexible display screen, or a display screen composed of two rigid screens and a flexible screen located between them. The folding screen may include: a first screen, a second screen, and a bendable area connecting the first screen and the second screen. When the folding screen of the mobile phone is in the folded state, the display area of the first screen may be set as the first display area 20, and the display area of the second screen may be set as the second display area 21. The folded state may be a completely folded state, that is, the included angle between the first screen and the second screen is 0 degrees (which may not actually reach 0 degrees, depending on the actual angle reported by the mobile phone's sensor), or a partially folded state, that is, the included angle between the first screen and the second screen is greater than 0 degrees and less than 180 degrees. As shown in fig. 4, when photographing with the folding-screen mobile phone, the first display area 20 may face the photographer to serve as the photographer's operation end, and the second display area 21 may face the photographic subject so that the photographic subject can view the posture image.
Specifically, as shown in fig. 5 and 6, when taking a picture, the processor of the mobile phone may acquire a first image corresponding to the first display area 20 and may recognize the first image through an AI recognition technology to obtain a recognition result, where the recognition result indicates whether a photographic subject exists in the first image and, when a photographic subject exists, indicates the target feature of the photographic subject and the scene feature of the scene where the photographic subject is located. If a photographic subject exists in the first image, a posture image to be recommended is acquired according to the recognized target feature and scene feature, the posture image is displayed in the second display area 21 to prompt the photographic subject to perform posture adjustment according to it, and after the photographic subject has been prompted to perform posture adjustment, the photographic subject is photographed to obtain a photographed image. The target feature of the photographic subject may include a facial feature and/or a physical feature of the photographic subject, and the scene feature may include a scene type, such as a beach scene, a forest scene, a field scene, a grassland scene, a desert scene, a mountain peak scene, a sea scene, a lake/pond scene, a snow scene, a blue sky scene, and the like.
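The recommendation step above — selecting a posture image from the combination of target features and scene features — can be sketched as a simple lookup. This is only an illustrative sketch: the feature names, scene labels, and file names in the pose library are invented placeholders, not part of the patent.

```python
# Hypothetical pose library keyed by (gender, scene_type); all keys and
# file names here are illustrative placeholders.
POSE_LIBRARY = {
    ("female", "beach"): ["pose_beach_f_01.png", "pose_beach_f_02.png"],
    ("male", "beach"): ["pose_beach_m_01.png"],
    ("female", "snow"): ["pose_snow_f_01.png"],
}

def recommend_poses(target_features, scene_features):
    """Look up posture images to recommend from the target features of the
    photographic subject and the scene features of the scene it is in."""
    key = (target_features.get("gender"), scene_features.get("scene_type"))
    return POSE_LIBRARY.get(key, [])
```

A real implementation would likely rank many candidate poses by relevance rather than use an exact-match table, but the inputs and output are the same.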
According to the embodiment of the application, the attitude image is obtained by combining the target characteristic and the scene characteristic of the shooting object, the accuracy and the effectiveness of the attitude image obtaining can be improved, the attitude image is displayed through the second display area 21, so that the shooting object can be conveniently subjected to attitude adjustment directly according to the displayed attitude image or a shooting person can conveniently guide the shooting object to be subjected to attitude adjustment according to the displayed attitude image, the image shooting effect is effectively improved, and the user experience is improved.
It should be understood that, a photographing mode for acquiring the posture image to perform photographing may be set in advance in the mobile phone for a user to select, and when the photographing mode is selected by the user, the acquired first image may be identified in the photographing process, and the posture image may be displayed in the second display area 21 of the mobile phone according to the identification result, so as to prompt the photographing object to perform posture adjustment according to the displayed posture image; if the user does not select the photographing mode, the conventional photographing process can be directly performed.
Optionally, the mobile phone may start the camera according to the detected camera start instruction, and may preview an image acquired by the camera in real time through the first display area of the mobile phone. Correspondingly, a user can start the camera in the mobile phone by inputting a camera starting instruction, where the camera starting instruction may be an instruction for triggering generation of a preset key, a preset gesture, or a preset voice keyword, and the embodiment of the present application is not limited thereto.
For example, the mobile phone may be preset so that triggering the "volume +" key twice, or triggering the "volume +" key and the "volume -" key simultaneously, generates a camera start instruction; when the user presses the "volume +" key twice, or presses the "volume +" key and the "volume -" key simultaneously, the mobile phone generates and acquires the camera start instruction. For another example, the mobile phone may be preset to generate a camera start instruction when a preset gesture such as an "O"-shaped gesture is acquired; when the user inputs a gesture matching the preset gesture, the mobile phone generates and acquires the camera start instruction. For still another example, the mobile phone may be preset to generate a camera start instruction when a preset voice keyword such as "take a picture" is detected; when the voice input by the user includes the preset voice keyword, the mobile phone generates and acquires the camera start instruction.
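The three trigger types above (key combination, gesture, voice keyword) can be sketched as a single dispatch function. The event representation, field names, and keyword are placeholders assumed for illustration; they are not defined by the patent.

```python
def camera_start_instruction(event):
    """Return True if the input event should generate a camera start
    instruction: a preset key combination, a preset gesture, or a
    preset voice keyword (all placeholder values)."""
    if event.get("type") == "keys":
        pressed = set(event.get("pressed", []))
        return pressed == {"volume+", "volume-"} or event.get("double_press") == "volume+"
    if event.get("type") == "gesture":
        return event.get("shape") == "O"
    if event.get("type") == "voice":
        return "take a picture" in event.get("text", "").lower()
    return False
```

On a real terminal, each branch would be fed by the corresponding input subsystem (key events, touch gesture recognizer, keyword spotter).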
Optionally, the mobile phone may obtain a first image corresponding to the first display area according to a trigger event or a preset duration, where the obtaining of the first image corresponding to the first display area may be obtaining of an image previewed by the first display area, or may be obtaining of an image acquired by the camera in real time, that is, the camera may send the image acquired in real time to the processor of the mobile phone while sending the image to the first display area of the mobile phone for previewing, so as to serve as the first image acquired by the processor of the mobile phone and corresponding to the first display area, which is not limited in the embodiment of the present application.
It should be understood that the preset time duration described in the embodiment of the present application may be understood as a time duration preset in the mobile phone.
Optionally, the mobile phone may obtain an image currently previewed in the first display area or an image currently acquired by a camera according to the detected specific instruction, and may determine the currently previewed image or the currently acquired image as the first image.
For example, the mobile phone may acquire an image currently previewed by the first display area or an image currently acquired by the camera when a click operation that a user clicks any position of the first display area is detected.
Optionally, the mobile phone may obtain a currently previewed image of the first display area or a currently acquired image of the camera when the time length for starting the camera reaches the preset time length, and may determine the currently previewed image or the currently acquired image as the first image.
For example, a timer or a counter may be preset in the mobile phone and started after the camera is started, with its duration set to the preset duration (for example, 7 seconds). When the timer expires, or the counter reaches 7 seconds, the image currently previewed in the first display area or the image currently captured by the camera is acquired.
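The timer-based acquisition above can be sketched as follows. This is a simplified sketch, not the terminal's actual implementation: `get_preview_frame` stands in for the camera's preview feed, and the clock is injectable so the logic can be exercised without waiting 7 real seconds.

```python
import time

def first_image_after_timeout(get_preview_frame, preset_duration=7.0, clock=time.monotonic):
    """Keep previewing until the preset duration has elapsed since the
    camera was started, then return the currently previewed frame as
    the first image."""
    start = clock()
    frame = get_preview_frame()
    while clock() - start < preset_duration:
        frame = get_preview_frame()  # keep refreshing the preview
    return frame
```

On a device this would be event-driven (a timer callback) rather than a polling loop, but the effect is the same: the frame being previewed at timeout becomes the first image.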
Optionally, in the embodiment of the application, a scene recognition model may be previously built in the mobile phone, so as to recognize the scene feature in the first image through the scene recognition model.
In a possible implementation manner, the scene recognition model may be a convolutional neural network model based on a Residual Neural Network (ResNet) architecture, and the convolutional neural network may be obtained by training on a scene database.
Specifically, after the scene recognition model receives the first image, preprocessing such as information conversion, denoising, smoothing, and transformation may be performed on the first image to strengthen its important features; then, an image segmentation technique, such as a Mask Region-based Convolutional Neural Network (Mask R-CNN), is used to perform target detection and segmentation on the preprocessed first image to obtain a foreground part and a background part of the first image; the features of the background part can then be extracted and selected, and the selected features analyzed to identify scene features such as the scene type corresponding to the first image.
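The segment-then-classify flow above can be sketched as follows. This is only a structural sketch under stated assumptions: `segment` and `classify_background` stand in for the trained segmentation model (e.g. Mask R-CNN) and the background scene classifier, the image is flattened to a list of pixels, and `preprocess` is a placeholder for the denoising/smoothing steps.

```python
def preprocess(image):
    # Placeholder for information conversion, denoising, smoothing, etc.
    return image

def recognize_scene(first_image, segment, classify_background):
    """Sketch of the scene-recognition flow: preprocess the image, split it
    into foreground and background with an instance-segmentation model,
    then classify only the background pixels into a scene type."""
    preprocessed = preprocess(first_image)
    foreground_mask = segment(preprocessed)  # True where a pixel is foreground
    background = [px for px, is_fg in zip(preprocessed, foreground_mask) if not is_fg]
    return classify_background(background)
```

Isolating the background before classification is the key design choice: it keeps the photographic subject from biasing the scene-type prediction.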
Optionally, when the target feature of the photographic subject includes a facial feature of the photographic subject, the embodiment of the application may further build a facial feature recognition model in the mobile phone in advance, so as to recognize the facial feature of the photographic subject in the first image through the facial feature recognition model.
Specifically, the facial feature recognition model may be a Multi-task Cascaded Convolutional Neural Network (MTCNN); this embodiment does not limit the specific construction process or training process of the MTCNN, which may follow an existing construction process or an existing training process.
It should be noted that the facial features recognized by the facial feature recognition model may include an age feature, a gender feature, a facial feature, an expression feature, and so on.
Specifically, after the facial feature recognition model receives the first image, preprocessing such as information conversion, denoising, smoothing, and transformation can be performed on the first image to strengthen its important features; then a face recognition technique can be used to perform face detection on the preprocessed first image to obtain the face bounding box and the facial landmark (five-sense-organ key point) coordinates in the first image; the face in the bounding box can then be aligned according to the landmark coordinates, and feature extraction and analysis performed on the aligned face to identify the facial features in the first image. Aligning the face according to the landmark coordinates may mean adjusting the face to a predetermined size and shape by rotating, scaling, and cropping the face in the bounding box according to those coordinates.
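The alignment step above can be sketched from two eye landmarks: rotate so the eyes lie on a horizontal line, and scale so the inter-eye distance matches a canonical value. The landmark coordinates and the canonical distance are hypothetical; a real pipeline would also warp the image pixels with the resulting transform.

```python
import math

def alignment_params(left_eye, right_eye, target_eye_dist=60.0):
    # Rotation (degrees) that levels the two eye landmarks.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))
    # Scale that brings the inter-eye distance to the canonical value.
    dist = math.hypot(dx, dy)
    scale = target_eye_dist / dist
    return angle, scale

angle, scale = alignment_params((100, 100), (160, 160))
print(angle)  # 45.0 — tilted face, rotate by 45 degrees to level
```

Applying the inverse rotation and the scale to the face crop yields the predetermined size and orientation referred to above.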
Optionally, when the target feature of the photographic subject includes a physical feature of the photographic subject, the embodiment of the application may further build a physical recognition model in the mobile phone in advance, so as to recognize the physical feature of the photographic subject in the first image through the physical recognition model.
Specifically, the body recognition model may be a dense human body posture estimation model (DensePose Region-based Convolutional Neural Networks, DensePose RCNN).
Specifically, after the body recognition model receives the first image, it may first perform preprocessing such as information conversion, denoising, smoothing, and transformation on the first image to enhance its important features; then target detection can be performed on the first image through a Region-based Convolutional Neural Network (RCNN), a Faster Region-based Convolutional Neural Network (Faster RCNN), or a You Only Look Once (YOLO) algorithm, to detect the photographic subject in the first image; the human body posture features of the subject can then be extracted, and the body features of the subject in the first image identified from the extracted posture features. The identified body features may include height features, body type features, and the like.
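Once pose keypoints are available from detection and pose estimation, coarse body features can be read off the keypoint geometry. The keypoint names, the shoulder-to-height ratio, and its threshold below are illustrative assumptions, not part of the patent.

```python
def body_features(keypoints):
    # keypoints: dict of (x, y) pixel coordinates from pose estimation.
    head = keypoints["head"]
    ankle = keypoints["ankle"]
    shoulder_w = abs(keypoints["r_shoulder"][0] - keypoints["l_shoulder"][0])
    # Pixel height of the subject, head to ankle.
    height_px = abs(ankle[1] - head[1])
    # Shoulder-to-height ratio as a crude body-type cue (assumed threshold).
    ratio = shoulder_w / height_px
    build = "broad" if ratio > 0.26 else "slim"
    return {"height_px": height_px, "build": build}
```

Mapping pixel height to a real height feature would additionally need a scale reference (for example, camera distance), which the sketch omits.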
Optionally, a posture recommendation model may be built in the mobile phone in advance. The posture recommendation model may obtain the posture images to be recommended according to the scene features and the target features (i.e., the facial features and/or the body features), so that accurate and effective posture images are obtained by comprehensively considering both, to assist and guide the photographic subject during shooting, thereby improving the image shooting effect and improving the user experience.
In a possible implementation manner, the gesture recommendation model may first obtain at least one sample image according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located, select a preset number of target sample images from the sample images, and determine the selected target sample image as the gesture image to be recommended.
Specifically, the posture recommendation model may first obtain a recommendation value for each sample image, then select the N sample images with the highest recommendation values as target sample images, and determine the selected N target sample images as the posture images to be recommended, where N is an integer greater than or equal to 1. Recommending posture images through recommendation values is beneficial to improving the recommendation accuracy and recommendation efficiency of the posture images.
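The top-N selection just described can be sketched directly; the sample identifiers and recommendation values below are illustrative.

```python
import heapq

def pick_posture_images(samples, n):
    # samples: list of (image_id, recommendation_value) pairs.
    # Return the ids of the n samples with the highest values.
    return [img for img, _ in
            heapq.nlargest(n, samples, key=lambda s: s[1])]

samples = [("a", 0.4), ("b", 0.9), ("c", 0.7), ("d", 0.2)]
print(pick_posture_images(samples, 2))  # ['b', 'c']
```

`heapq.nlargest` avoids fully sorting the pool when N is much smaller than the number of candidate sample images.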
In a possible implementation manner, the pose recommendation model may first determine a shooting category corresponding to the first image according to the scene features and the target features (the facial features and/or the body features); and then, each sample image corresponding to the shooting category can be obtained, a preset number of target sample images can be selected from the sample images, and each selected target sample image is determined as an attitude image to be recommended.
It should be noted that, in the pose recommendation model, a plurality of preset shooting categories may be set according to the scene features and the target features (i.e., the facial features and/or the body features). For example, a grassland petite-girl category and a grassland tall-boy category may be set according to the scene features and the body features, and a beach beauty category and a beach funny-pose category may be set according to the scene features, the facial features, and the body features, and so on. Each preset shooting category can be configured with a plurality of sample images with good shooting effects and good postures, and the sample images in each preset shooting category have scene features and target features that match those of the category. Therefore, after the pose recommendation model receives the scene feature and the facial feature and/or body feature, the shooting category corresponding to the first image can be determined by matching against these features, that is, it can be determined which preset shooting category the first image corresponds to.
It should be understood that the preset shooting categories and the shooting category are both determined based on the scene features and the target features (i.e., the facial features and/or the body features), that is, both are classified according to the scene features and the target features; therefore, if any of these features differ, the determined shooting categories may also differ. Specifically, when the shooting category is determined by the scene feature and the facial feature together, two first images with the same scene feature but different facial features may yield two different shooting categories, and therefore different recommended posture images. For example, when both subject A and subject B are photographed on the grass, the shooting category determined from subject A and the grassland scene may be the grassland petite-girl category, while the shooting category determined from subject B and the grassland scene may be the grassland tall-girl category; posture images for subject A are therefore recommended from the grassland petite-girl category, and posture images for subject B from the grassland tall-girl category. Similarly, when subject C is photographed both on the grass and on the beach, the shooting category determined from subject C and the grassland scene may be the grassland petite-girl category, while the shooting category determined from subject C and the beach scene may be the beach beauty category.
It should be understood that when the shooting category is determined by the scene feature and the physical feature together or when the shooting category is determined by the scene feature and the facial feature and the physical feature together, if the scene feature is not the same, or if the facial feature is not the same, or if the physical feature is not the same, the determined shooting category may also be different, and the recommended posture image may also be different, and the specific content is similar to the above description, and will not be described herein again.
That is to say, in the embodiment of the present application, when the same photographic subject takes pictures in different scenes, the determined photographic categories may be different, and the recommended posture images may also be different, and similarly, when different photographic subjects take pictures in the same scene, the determined photographic categories may also be different, and the recommended posture images may also be different.
The posture recommendation model may perform classification through classifier design or through classification decision. In classifier design, a recognition rule is obtained by training, and classification is performed through that rule; in classification decision, the identified object is classified in the feature space.
For example, a first feature vector corresponding to the scene feature and the facial feature and/or body feature may be constructed first, and a second feature vector corresponding to the scene feature and the facial feature and/or body feature of each preset shooting category may be constructed respectively; the matching degree between the first feature vector and each second feature vector can then be calculated, and the shooting category to which the first image belongs determined according to the matching degrees.
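The feature-vector matching above can be sketched with cosine similarity as the matching degree. The patent does not fix a similarity measure, and the two-dimensional vectors and category names below are purely illustrative.

```python
import math

def cosine(a, b):
    # Cosine similarity between two 2-D feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_category(first_vec, category_vecs):
    # Pick the preset category whose second feature vector best
    # matches the first image's feature vector.
    return max(category_vecs,
               key=lambda c: cosine(first_vec, category_vecs[c]))

cats = {"grassland petite-girl": (1.0, 0.2), "beach beauty": (0.1, 1.0)}
print(best_category((0.9, 0.3), cats))  # grassland petite-girl
```

Any other matching degree (Euclidean distance, a learned metric) slots into the same structure by swapping the `key` function.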
Optionally, after determining the shooting category to which the first image belongs, the posture recommendation model may obtain sample images corresponding to the shooting category, so as to select a preset number of target sample images from the sample images as posture images to be recommended.
In a possible implementation manner, the pose recommendation model may determine the pose image according to the recommendation value of each sample image, so as to improve the recommendation accuracy and recommendation efficiency of the pose image.
Optionally, the recommendation value corresponding to each sample image may be determined according to the number of times the sample image was historically recommended within a preset time, for example, within the past week: the more times a sample image was historically recommended, the higher its corresponding recommendation value; the fewer times, the lower its recommendation value.
Optionally, the recommendation value corresponding to each sample image may also be determined according to the number of times the mobile phone user historically selected the sample image within a preset time, for example, within the past month: the more times a sample image was historically selected, the higher its corresponding recommendation value; the fewer times, the lower its recommendation value.
Optionally, the recommendation value corresponding to each sample image may also be determined according to the personalized settings of the mobile phone user. For example, if the mobile phone user used the photographing posture in a certain sample image in a previous photographing process and collected that posture or set a preference tag for it, the recommendation value corresponding to that sample image is relatively high; correspondingly, if the mobile phone user deleted the collected photographing posture or set a dislike tag for a certain photographing posture, the recommendation value corresponding to that sample image is relatively low.
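The three scoring signals just described can be combined into one recommendation value. The linear form and the weights below are arbitrary assumptions for illustration; the patent only states that each signal raises or lowers the value.

```python
def recommendation_value(hist_recommended, hist_selected,
                         favorited=False, disliked=False):
    # Historical recommendation count and the user's own selection
    # count both raise the value (selection weighted more heavily).
    score = 1.0 * hist_recommended + 2.0 * hist_selected
    if favorited:
        score += 10.0   # pose collected / preference tag set
    if disliked:
        score -= 10.0   # collection deleted / dislike tag set
    return score

print(recommendation_value(5, 3, favorited=True))  # 21.0
```

These scores are exactly what the top-N selection described earlier would consume.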
Optionally, the preset number of posture images recommended each time can be set in the mobile phone by the mobile phone user in advance, and the posture recommendation model can output that preset number of posture images each time for selection during photographing, thereby meeting the user's recommendation preferences and improving the user experience.
For example, when the mobile phone user selects a photographing mode for obtaining the gesture image for photographing, an input box may pop up in the mobile phone, and the mobile phone user may input a preset number recommended each time in the input box (for example, 10 may be input), so that in each photographing process, the gesture recommendation model may output 10 gesture images for selection of photographing.
It should be understood that, in the embodiment of the present application, a default setting may also be performed on the preset number recommended each time, for example, the preset number recommended each time may be set to 15 by default, and when the mobile phone user does not perform the setting of the preset number, the posture recommendation model may perform output of the posture images according to the preset number of the default setting, for example, 15 posture images are output each time for selection of taking a picture.
Specifically, after obtaining each sample image corresponding to the shooting category, the posture recommendation model can further obtain a recommendation value corresponding to each sample image, and select a preset number of target sample images as the posture images to be recommended according to the recommendation value.
For example, after obtaining the recommendation value corresponding to each sample image, the posture recommendation model may arrange the sample images in descending order of recommendation value to obtain a descending-order group, in which sample images with higher recommendation values are ranked closer to the front and those with lower values closer to the back; the top N target sample images can then be selected from the descending-order group, and the selected N target sample images determined as the posture images to be recommended.
Optionally, when determining the posture images, the posture recommendation model may also take into account the personalized features of the photographic subject. The personalized features may be photographing features, such as preferred photographing actions, preset by the subject, or photographing features analyzed from the subject's behavior data; for example, the subject's preferred photographing actions may be obtained by analyzing the subject's picture search behavior data, picture collection behavior data, picture browsing behavior data, and/or historical photographed image data.
It should be understood that when the shooting object has the personalized feature, after the gesture recommendation model obtains each sample image corresponding to the shooting category, the gesture recommendation model may first find out sample images matched with the personalized feature from the sample images, then obtain recommendation values corresponding to the found sample images, select a preset number of target sample images according to the recommendation values, and determine each target sample image as the gesture image to be recommended.
Alternatively, the pose recommendation model may be a convolutional neural network model obtained by collecting a large number of training images for label learning.
Specifically, first, a set of first training images and a set of second training images are collected, and the preset shooting category corresponding to each first training image is labeled. Second, the training scene features of the training scene in which a preset object is located in each first training image, and the training facial features and/or training body features of the preset object, are identified. Then, the training scene features and training facial features and/or training body features corresponding to each first training image are input into an initial posture recommendation model, the training shooting category corresponding to each first training image is determined through the initial posture recommendation model, and a training error is calculated from the determined training shooting categories and the labeled preset shooting categories. If the training error does not meet a preset condition, the model parameters of the posture recommendation model are adjusted, the model with adjusted parameters is taken as the initial posture recommendation model, and the steps of inputting the training features, determining the training shooting categories, and the subsequent steps are executed again. If the training error meets the preset condition, the posture recommendation model is determined to be trained, and the trained posture recommendation model is used to classify each second training image into its corresponding preset shooting category, as a sample image of that category.
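The adjust-and-retrain loop above (predict, measure error, adjust parameters, repeat until the error meets the preset condition) can be sketched with a deliberately tiny one-parameter model; the data, learning rate, and tolerance are all illustrative stand-ins for the real network and its training set.

```python
def train(features, labels, lr=0.1, tol=1e-3, max_iter=1000):
    w = 0.0  # initial model parameter
    for _ in range(max_iter):
        preds = [w * x for x in features]
        errs = [p - y for p, y in zip(preds, labels)]
        error = sum(e * e for e in errs) / len(errs)
        if error < tol:        # training error meets preset condition
            break
        # Adjust the model parameter and iterate again.
        grad = 2 * sum(e * x for e, x in zip(errs, features)) / len(errs)
        w -= lr * grad
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # converges near w = 2
```

The control flow (error check gating whether parameters are adjusted and the loop repeats) mirrors the procedure in the paragraph above; only the model is toy-sized.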
The second training image and the first training image are both images with good shooting effect and good posture in a single shooting scene, and the second training image can comprise the first training image.
It should be understood that the sample images corresponding to a preset shooting category need not include only the corresponding second training images; in the embodiment of the application, images may also be crawled from the network in real time, and the crawled images detected and analyzed to determine whether they are single-scene images with good shooting effects and good postures. If so, a crawled image can be classified into the corresponding preset shooting category; that is, the sample images of each preset shooting category are supplemented through network crawling, improving the diversity of posture image recommendation and improving the user experience.
Optionally, the pose recommendation model may also integrate functions of a scene recognition model, a facial feature recognition model, and/or a body recognition model, that is, the first image may be directly input to the pose recommendation model, and the pose recommendation model may directly perform scene recognition, facial feature recognition, and/or body feature recognition on the first image, so that a pose image to be recommended may be directly obtained according to the recognized scene features and facial features and/or body features.
For example, the pose recommendation model may be a convolutional neural network model constructed based on Mask RCNN and face recognition technology. Specifically, after receiving the first image, the pose recommendation model may first perform preprocessing such as information conversion, denoising, smoothing, and transformation on the first image to enhance important features of the image; then, target detection, target segmentation, human body posture recognition and the like can be carried out on the preprocessed first image by using Mask RCNN, so that a background part in the first image, a shooting object existing in the first image and human body posture characteristics corresponding to the shooting object are obtained; then, extracting and selecting the features of the background part, identifying the scene features corresponding to the first image by analyzing the selected features, and simultaneously carrying out face detection on the shooting object in the first image by utilizing a face identification technology to obtain the face features corresponding to the first image; finally, the body feature in the first image can be obtained according to the human body posture feature, the shooting category corresponding to the first image can be determined according to the scene feature, the face feature and/or the body feature by utilizing a convolutional neural network, and therefore the posture image to be recommended can be obtained according to the sample image corresponding to the shooting category.
It should be understood that each sample image may be an image collected in advance and stored in the mobile phone or stored in a cloud server connected to the mobile phone, or an image captured from a network in real time in the photographing process, for example, when the pose recommendation model performs target detection on the first image, the pose recommendation model may also crawl the image from the network through technologies such as web crawler and the like, and may perform target detection and analysis on the captured image, so as to classify the real-time crawled image into a corresponding preset photographing category according to the detection and analysis result, so as to serve as the sample image corresponding to the preset photographing category.
Optionally, after the attitude image to be recommended is obtained, the attitude image can be displayed in the second display area of the mobile phone, and the attitude image can be displayed to the shooting object or to the shooter, so that the shooting object can be conveniently subjected to attitude adjustment directly according to the attitude image or the shooter can conveniently guide the shooting object to be subjected to attitude adjustment according to the attitude image, the image shooting effect is improved, and the user experience is improved.
Specifically, in a dual-screen mobile phone or a folding-screen mobile phone in the folded state, after the posture image to be recommended is obtained, the posture image may also be displayed simultaneously in the first display area viewable by the photographer and the second display area viewable by the photographic subject. On the basis that the subject can directly adjust its posture according to the posture image, the photographer can also conveniently assist in guiding the subject to adjust its posture according to the posture image displayed in the first display area, or can learn the subject's posture adjustment effect in real time, thereby improving the posture adjustment efficiency, improving the image shooting effect, and improving the user experience.
Optionally, after a plurality of posture images to be recommended are acquired, a display sequence corresponding to each posture image can be determined, and each posture image is sequentially displayed in the second display area according to the display sequence, so that a posture recommendation effect is improved, a shooting object can conveniently and quickly find a satisfactory recommended posture, and user experience is improved.
Specifically, the display order corresponding to each posture image may be determined according to a recommendation value corresponding to each posture image, wherein the posture image with the larger recommendation value is displayed in the second display area first, and the posture image with the smaller recommendation value is displayed in the second display area later.
Optionally, when each posture image is displayed in the second display area, the recommendation value corresponding to each posture image may also be displayed, so that the photographic subject and/or the photographer can conveniently select a posture image according to the recommendation value, improving the user experience. The recommendation value can be a specific value as shown in fig. 6, or can be a recommendation index, such as five stars (★★★★★), four stars (★★★★), or three stars (★★★).
Optionally, after each posture image is acquired, the human body posture of the preset object in each posture image may be recognized and detected by an AI recognition technology and a target detection technology, and a prompt voice and/or a prompt text corresponding to each posture image may be generated according to the recognition and detection result, where each generated prompt voice and/or each generated prompt text is mainly used to describe the specific posture of the preset object in the corresponding posture image.
Specifically, when the posture image is displayed in the second display area, the prompt voice corresponding to the posture image can be played through an audio device such as a speaker in the mobile phone, so as to assist the photographic subject in adjusting its posture in combination with the prompt voice, thereby improving the posture adjustment speed and efficiency. When the posture image is displayed in the second display area, the prompt text corresponding to the posture image can also be displayed in the second display area or the first display area, so that the subject can adjust its posture in combination with the displayed prompt text, or the photographer can assist in guiding the subject to adjust its posture according to the displayed prompt text, improving the posture adjustment speed and efficiency.
Optionally, in the display process of the attitude image, the photographic subject or the photographer may select the attitude image, that is, the photographic subject or the photographer may select a favorite target attitude image from the plurality of attitude images, and in the photographing, the target attitude image may be fixedly displayed in the second display region for the photographic subject to perform the reference of the attitude, so that the satisfaction of image photographing is improved, and the user experience is improved.
For example, as shown in fig. 7, the central position of the second display area may dynamically display the gesture images in the display order, and the edge position of the second display area may provide a "confirm" or "OK" button for the subject or the photographer to select the gesture images, that is, if the "confirm" or "OK" button is triggered during the display of a certain gesture image, the central position of the second display area will fixedly display the gesture image during the photographing, so as to prompt the subject to perform the gesture adjustment according to the gesture image.
As shown in fig. 7, buttons such as "up" and "down" may be further provided at an edge of the second display area, so as to facilitate viewing of a previous gesture image or viewing of a next gesture image, thereby improving user experience.
It should be understood that, in the embodiment of the present application, the previous or next posture image may also be viewed through left-swipe and right-swipe operations; for example, the previous posture image may be viewed through a left swipe and the next through a right swipe, or the next posture image may be viewed through a left swipe and the previous through a right swipe. The previous or next posture image may also be viewed through up-swipe and down-swipe operations, which is not limited in this embodiment of the application.
Optionally, as shown in fig. 8, in the dual-screen mobile phone or the folding-screen mobile phone in the folded state, the central position of the second display area may be further divided into two parts, the first part 81 is configured to display a recommended posture image, and the display content of the second part 82 may be the same as the display content in the first display area, that is, the second part 82 may be configured to display an image corresponding to the current photographing posture of the photographic subject, so that the photographic subject can adjust the photographing posture thereof according to the comparison between the two, and the posture adjustment efficiency is improved.
Optionally, after prompting the shooting object to perform the adjustment of the shooting posture according to the posture image in the second display area, the processor of the mobile phone may send a shooting instruction to control the camera to shoot the shooting object, so as to obtain a shooting image.
In a possible implementation manner, the mobile phone may photograph the photographic subject to obtain the photographed image after the subject has adjusted its posture according to the posture image displayed in the second display area.
Specifically, the mobile phone may determine whether the subject has finished adjusting its photographing posture according to whether a voice confirmation instruction from the subject is received, or according to whether the posture adjustment duration has reached a preset adjustment duration, which is not limited in the embodiment of the present application.
For example, in automatic shooting without the operation of a photographer, if the subject utters a voice containing a keyword such as "OK" or "adjusted" during the posture adjustment, the mobile phone can confirm that a voice confirmation instruction from the subject has been received, and the processor of the mobile phone can then issue a shooting instruction to control the camera to shoot and obtain the photographed image. For another example, after the subject selects the target posture image, a timer in the mobile phone may be started to time the adjustment; when the timed duration reaches the preset adjustment duration of 30 seconds, the mobile phone can confirm that the subject has finished adjusting its photographing posture, and the processor can then issue a shooting instruction to control the camera to shoot.
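The two confirmation paths in the example above can be sketched together: trigger the shot when a confirmation keyword is heard, or when the adjustment timer reaches the preset duration. The keyword list and the 30-second duration come from the example; the function and parameter names are hypothetical.

```python
import time

KEYWORDS = ("ok", "adjusted")

def ready_to_shoot(voice_text, started_at, preset_seconds=30, now=None):
    now = time.monotonic() if now is None else now
    # Path 1: voice confirmation containing a keyword was received.
    if voice_text and any(k in voice_text.lower() for k in KEYWORDS):
        return True
    # Path 2: adjustment duration reached the preset duration.
    return now - started_at >= preset_seconds

print(ready_to_shoot("OK, I'm ready", started_at=0, now=5))  # True
print(ready_to_shoot("", started_at=0, now=31))              # True
```

When `ready_to_shoot` returns true, the processor would issue the shooting instruction described above.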
It should be understood that the mobile phone may also control the camera to photograph the photographic object after receiving the photographing instruction of the photographer, so as to obtain the photographed image, that is, the photographer may send the photographing instruction to the mobile phone according to the posture image adjustment condition of the photographic object, and the mobile phone may control the camera to photograph the photographic object according to the photographing instruction.
For example, when the photographer determines that the posture of the photographic object is adjusted, the photographer can send a photographing instruction to the mobile phone by clicking a photographing button in the mobile phone or triggering a voice instruction, and the mobile phone can control the camera to photograph according to the photographing instruction to obtain a photographed image.
In a possible implementation manner, whether the subject has finished adjusting its photographing posture can be determined according to the matching degree between a second image and the posture image during the posture adjustment. Specifically, the second image corresponding to the first display area can be obtained first, the current photographing posture of the subject in the second image and the target photographing posture of the preset object in the selected target posture image can be recognized through an AI recognition technology, and the matching degree between the current photographing posture and the target photographing posture can then be calculated. If the matching degree is less than a first threshold, the subject has not yet finished adjusting its posture; the differing positions between the current posture and the target posture can then be determined, and the subject prompted to adjust its posture according to those positions until the matching degree between the adjusted current posture and the target posture is greater than or equal to the first threshold. When the matching degree between the adjusted current photographing posture and the target photographing posture is greater than or equal to the first threshold, it is determined that the subject has finished adjusting its posture, and the camera can be controlled to shoot to obtain the photographed image. Confirmation and correction of the photographing posture are thus further achieved through the calculation of the matching degree, which can improve the image shooting effect and the user experience.
Alternatively, a histogram HistA for the current shooting posture and a histogram HistB for the target shooting posture may be calculated first, and then the normalized correlation coefficient between HistA and HistB, the Bhattacharyya distance, or the histogram intersection distance may be calculated, so that the matching degree between the current shooting posture and the target shooting posture may be determined based on the normalized correlation coefficient, the Bhattacharyya distance, or the histogram intersection distance. When the calculated matching degree is smaller than the first threshold, the distinguishing position between the current shooting posture and the target shooting posture may be found through histogram comparison, and the shooting object may accordingly be prompted to perform posture adjustment at the distinguished position, for example, in a voice playing mode and/or in a text display mode.
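As a rough sketch of the histogram comparison described above, the three measures can be computed from two pose histograms with plain NumPy. How the histograms themselves are extracted from the pose images is not specified by the text and is assumed to be done elsewhere:

```python
import numpy as np

def normalized_correlation(hist_a, hist_b):
    """Normalized correlation coefficient between two histograms (1.0 = identical shape)."""
    a = hist_a - hist_a.mean()
    b = hist_b - hist_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def bhattacharyya_distance(hist_a, hist_b):
    """Bhattacharyya distance between two histograms (0.0 = identical distributions)."""
    pa = hist_a / hist_a.sum()
    pb = hist_b / hist_b.sum()
    bc = np.sqrt(pa * pb).sum()              # Bhattacharyya coefficient
    return float(np.sqrt(max(0.0, 1.0 - bc)))

def intersection(hist_a, hist_b):
    """Histogram intersection, normalized to [0, 1] by the first histogram's mass."""
    return float(np.minimum(hist_a, hist_b).sum() / hist_a.sum())
```

OpenCV users would typically reach for `cv2.compareHist` with the corresponding comparison methods instead; the point of the sketch is only that any of these measures yields a scalar matching degree that can be compared against the first threshold.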
It should be understood that the determination of the matching degree between the current photographing posture and the target photographing posture through calculating the normalized correlation coefficient between the histograms and the like is only an example and is not limited, and other methods such as matrix decomposition and the like may also be adopted in the embodiment of the present application to determine the matching degree between the current photographing posture and the target photographing posture.
Optionally, after the matching degree between the current photographing posture and the target photographing posture is obtained through calculation, the matching degree can be displayed in the first display area and/or the second display area in real time, so that the photographic subject and/or the photographer can clearly know the specific fitting condition of the photographing posture and the posture image of the photographic subject.
Scene two
The scene may be a multi-person co-shooting scene, and the photographing method in this scene may be applied to the single-screen mobile phone shown in fig. 2, or to the dual-screen mobile phone and the folding-screen mobile phone shown in fig. 3, where the single-screen mobile phone may also be the single-screen form of a folding-screen mobile phone after it is unfolded.
For example, when the photographing method is applied to a dual-screen mobile phone or a folding-screen mobile phone, both the dual-screen mobile phone and the folding-screen mobile phone in a folded state may include the first display area 20 and the second display area 21, when photographing, the first display area 20 may face a photographer to serve as an operation end of the photographer, and the second display area 21 may face a photographic object to allow the photographic object to view a posture image, so that the photographic object may directly perform posture adjustment according to an image displayed in the second display area 21, and posture adjustment efficiency is improved.
As shown in fig. 2 and 9, when the photographing method is applied to a single-screen mobile phone, the display screen of the single-screen mobile phone may include a first display area 20 and a second display area 21, and the first display area 20 and the second display area 21 either both face the photographer or both face the photographic subject.
It should be understood that the positions and sizes of the first display area 20 and the second display area 21 shown in fig. 2 and 9 are only schematically illustrated, and should not be construed as limiting the embodiment of the present application, in which the first display area 20 and the second display area 21 may be located at any position in the display screen of the single-screen mobile phone, for example, the positions of the first display area 20 and the second display area 21 may also be the positions shown in fig. 10. In addition, during the photographing process, the contents displayed in the first display area 20 and the second display area 21 may be interchanged, for example, the gesture image displayed in the second display area 21 may be switched to be displayed in the first display area 20, and the first image previewed in the first display area 20 may be switched to be displayed in the second display area 21, so that the photographer and/or the subject can clearly view the first image or the gesture image.
For this scene, when taking a picture, the processor of the mobile phone may obtain a first image corresponding to the first display area 20 and may recognize the first image through an AI recognition technology to obtain a recognition result, where the recognition result is used to indicate whether a photographic subject exists in the first image and, when one exists, to indicate the target features of the photographic subject and the scene features of the scene where the photographic subject is located. If a photographic subject exists in the first image, a posture image to be recommended is obtained according to the identified target features and scene features, and the posture image is displayed in a second display area to prompt the photographic subject to perform posture adjustment according to the posture image; after the posture adjustment is prompted, the photographic subject is photographed to obtain a photographed image. The target features of the photographic subject may include facial features and/or body features of the photographic subject, and the scene features may include a scene type, such as a beach scene, a forest scene, a field scene, a grass scene, a desert scene, a mountain scene, a sea scene, a lake/pond scene, a snow scene, a blue sky scene, and the like.
According to the embodiment of the application, the attitude image is obtained by combining the target characteristic and the scene characteristic of the shooting object, the accuracy and the effectiveness of the attitude image obtaining can be improved, and the attitude image is displayed through the second display area, so that the shooting object can be conveniently and directly subjected to attitude adjustment according to the displayed attitude image or a shooting person can conveniently guide the shooting object to be subjected to attitude adjustment according to the displayed attitude image, the image shooting effect is improved, and the user experience is improved.
This scene differs from scene one in that a plurality of photographic subjects may exist in the first image, that is, target features (i.e., facial features and/or body features) corresponding to each of the plurality of photographic subjects may be identified. When the posture image to be recommended is obtained according to the scene features and the target features, a statistical analysis may first be performed on the target features corresponding to the plurality of photographic subjects to obtain group features of the first image, where the group features may include the total number of photographic subjects, proportions of different facial features, proportions of different body features, and/or the like. For example, when the facial features include face shape features, the group features may include the proportion of fat faces and the proportion of thin faces among the plurality of photographic subjects, and may further include the proportion of round faces, the proportion of square faces, the proportion of melon-seed (oval) faces, and the like. For another example, when the facial features include a gender feature, the group features may include the proportion of women and the proportion of men among the plurality of photographic subjects. Also, when the facial features include an age feature, the group features may include the proportion of young people, the proportion of middle-aged people, the proportion of old people, and the like among the plurality of photographic subjects.
Similarly, when the body features include a body shape feature, the group features may include the proportion of different body shapes among the plurality of photographic subjects, and when the body features include a height feature, the group features may further include the proportion of different height ranges among the plurality of photographic subjects, and the like. Subsequently, a posture image to be recommended may be obtained according to the scene features and the group features, where the co-shooting formation of the preset objects in the recommended posture image may include a heart-shaped formation, a V-shaped formation, a simple staggered standing formation, and the like. For example, it may be determined, according to the scene features of the campus scene where the photographic subjects in the first image are located and group features such as the number and ages of the photographic subjects, that the posture image shown in fig. 9 is displayed in the second display area.
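The statistical analysis that turns per-subject target features into group features can be sketched as follows. The dictionary field names ("gender", "age", "face") are illustrative assumptions, since the text does not fix a data format:

```python
from collections import Counter

def group_features(subjects):
    """Aggregate per-subject target features into group-level proportions.

    `subjects` is a list of per-subject feature dicts produced by the
    (assumed) AI recognition step, e.g. {"gender": "female", "age": "young",
    "face": "round"}.
    """
    total = len(subjects)
    group = {"total": total}
    for field in ("gender", "age", "face"):
        counts = Counter(s[field] for s in subjects)
        # proportion of each feature value among all photographic subjects
        group[field] = {value: count / total for value, count in counts.items()}
    return group
```

The resulting proportions (e.g. `group["age"]["young"] == 1.0` for an all-young group) are exactly the kind of group features the text proposes matching against sample images.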
It should be understood that, in the dual-screen mobile phone and the folding-screen mobile phone in the folded state, after the posture image to be recommended is obtained, the posture image may also be displayed simultaneously in the first display area viewable by the photographer and the second display area viewable by the photographic subject. In this way, on the basis that the photographic subject can directly perform posture adjustment according to the posture image, the photographer can also assist in guiding the photographic subject to perform posture adjustment according to the posture image displayed in the first display area, or can know the standing position adjustment effect of the photographic subject in real time, thereby improving posture adjustment efficiency, improving the image shooting effect, and improving user experience.
In a possible implementation manner, after the scene features and the group features corresponding to the first image are obtained, at least one sample image may be obtained according to the scene features and the group features, a preset number of target sample images are selected from the sample images, and the selected target sample images are determined as the posture images to be recommended.
Specifically, the scene features and the group features corresponding to the first image may first be matched with the scene features and the group features corresponding to each sample image in a sample image library to obtain a matching degree between the first image and each sample image, and a plurality of sample images may be obtained from the sample image library according to the matching degrees. Then, N target sample images may be selected from the plurality of sample images, and the selected N target sample images may be determined as the posture images to be recommended. For example, a plurality of sample images with matching degrees greater than a first threshold may be obtained from the sample image library, N target sample images may be selected from the plurality of sample images according to the recommendation value of each sample image, and the selected N target sample images may be determined as the posture images to be recommended, where N is an integer greater than or equal to 1.
For example, first feature vectors corresponding to scene features and population features of a first image may be first constructed, second feature vectors corresponding to scene features and population features of each sample image in a sample image library may be respectively constructed, and then matching degrees between the first feature vectors and the second feature vectors may be respectively calculated, so that a plurality of sample images may be obtained from the sample image library according to the matching degrees, and N target sample images may be selected as pose images to be recommended according to recommendation values corresponding to the sample images.
For example, before the matching degree between the first image and each sample image is calculated, a first weight corresponding to the scene features and a second weight corresponding to the group features may be set in advance according to the importance of the scene features and the group features. When the matching degree is calculated, a first feature vector corresponding to the scene features and the group features of the first image may first be constructed, and second feature vectors corresponding to the scene features and the group features of each sample image may be constructed respectively. Then, the matching degree between the first feature vector and each second feature vector may be calculated according to the first weight corresponding to the scene features and the second weight corresponding to the group features, so that a plurality of sample images may be obtained from the sample image library according to the matching degrees, and N target sample images may be selected as the posture images to be recommended according to the recommendation values corresponding to the sample images.
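A minimal sketch of the weighted matching-degree computation, assuming the feature vectors have already been encoded numerically and using weighted cosine similarity as the matching measure (the text does not name a specific measure):

```python
import math

def weighted_match(vec_a, vec_b, weights):
    """Weighted cosine similarity between two feature vectors.

    `weights` carries the first weight (scene components) and second weight
    (group components) per dimension; all shapes are illustrative assumptions.
    """
    wa = [w * a for w, a in zip(weights, vec_a)]
    wb = [w * b for w, b in zip(weights, vec_b)]
    dot = sum(x * y for x, y in zip(wa, wb))
    norm = math.sqrt(sum(x * x for x in wa)) * math.sqrt(sum(y * y for y in wb))
    return dot / norm if norm else 0.0
```

Identical vectors score 1.0 regardless of the weights, so the first threshold from the text can be applied directly to the returned value.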
It should be understood that, in the embodiment of the present application, each feature included in the group feature may be set to be a uniform second weight, or may be set to be a different second weight according to the importance of each feature.
It should be understood that, in the embodiment of the present application, the target sample image may also be directly selected from the sample image library according to the matching degree, for example, the top M sample images with high matching degree may be selected from the sample image library as the target sample images. Optionally, in this embodiment of the application, the pose image may also be obtained by using a pre-constructed pose recommendation model, for example, the identified scene features and group features may be input into the pose recommendation model, and the pose recommendation model may obtain the recommended pose image according to the scene features and the group features.
Specifically, the pose recommendation model may first determine the shooting category corresponding to the first image according to the scene features and the group features. Then, sample images corresponding to the shooting category may be obtained, a preset number of target sample images may be selected from the sample images and determined as the posture images, and the co-shooting formation corresponding to the preset objects in each target sample image may be output to the photographic subjects to prompt them to adjust their shooting postures according to the formation, where the adjustment of the shooting posture may include adjustment of the standing position and/or adjustment of the shooting action. In this way, an optimal co-shooting formation is comprehensively recommended through the scene features and the group features to guide the photographic subjects in adjusting their shooting actions and/or standing positions, which improves the image shooting effect in co-shooting and improves user experience.
Alternatively, the pose recommendation model may be a convolutional neural network model obtained by collecting a large number of training images for label learning.
Specifically, first training images and second training images are collected, where the first training images and the second training images are both images of co-shooting scenes, and a preset shooting category corresponding to each first training image is labeled. Secondly, the training scene features of the training scene where the preset objects are located in each first training image and the training facial features and/or training body features of each preset object are identified, and statistical analysis is performed on the identified training facial features and/or training body features to obtain the training group features of each first training image. Then, the training scene features and the training group features corresponding to each first training image are respectively input into an initial posture recommendation model, the training shooting category corresponding to each first training image is determined through the initial posture recommendation model, and a training error is calculated according to the training shooting category and the preset shooting category corresponding to each first training image. If the training error does not meet a preset condition, the model parameters of the posture recommendation model are adjusted, the posture recommendation model with the adjusted model parameters is taken as the initial posture recommendation model, and the step of inputting the training scene features and the training group features into the initial posture recommendation model, determining the training shooting categories, and the subsequent steps are executed again. If the training error meets the preset condition, the posture recommendation model is determined to be trained, and each second training image is classified into a corresponding preset shooting category by using the trained posture recommendation model, to serve as a sample image of that preset shooting category. The second training images may include the first training images, and each second training image is an image of a co-shooting scene with a good shooting effect and a good-looking co-shooting formation.
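The training loop described above (predict a category, compute the error, adjust parameters, repeat until the error meets the preset condition) can be sketched abstractly as follows. The `predict`/`update` interface is a hypothetical placeholder standing in for the convolutional model, not an API defined by the text:

```python
def train_pose_model(model, train_set, max_error, max_epochs=100):
    """Iterate until the training error meets the preset condition.

    `train_set` is a list of (features, preset_category) pairs; `model` is
    assumed to expose predict(features) and update(train_set) -- both
    illustrative stand-ins for the real convolutional model's training step.
    """
    for _ in range(max_epochs):
        errors = sum(1 for feats, label in train_set if model.predict(feats) != label)
        error_rate = errors / len(train_set)
        if error_rate <= max_error:      # training error meets the preset condition
            return model
        model.update(train_set)          # adjust model parameters and try again
    return model                         # give up after max_epochs either way
```

In a real implementation the error would be a differentiable loss and `update` a gradient step, but the control flow mirrors the patent's "adjust, re-input, re-check" loop.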
It should be understood that the sample images corresponding to each preset shooting category may include not only the corresponding second training images. In the embodiment of the present application, images may also be crawled from the network in real time, and each crawled image may be detected and analyzed to determine whether it is a co-shooting scene image with a good shooting effect and a good-looking co-shooting formation; if so, the crawled image may be classified into the corresponding preset shooting category. Supplementing the sample images of each preset shooting category through network crawling in this way improves the diversity of posture image recommendation and improves user experience.
Optionally, the pose recommendation model may first construct a first feature vector corresponding to the scene feature and the group feature, and respectively construct a second feature vector corresponding to the scene feature and the group feature in each preset shooting category, then respectively calculate a matching degree between the first feature vector and each second feature vector, and determine the shooting category to which the first image belongs according to the matching degree.
Specifically, the pose recommendation model may first construct a first feature vector corresponding to the scene feature and the group feature, and respectively construct a second feature vector corresponding to the scene feature and the group feature in each preset shooting category, then respectively calculate a matching degree between the first feature vector and each second feature vector according to a first weight corresponding to the scene feature and a second weight corresponding to the group feature, and determine the shooting category to which the first image belongs according to the matching degree.
Optionally, after the shooting category to which the first image belongs is determined, a preset number of target sample images in the shooting category may be acquired, and each target sample image may be determined as a posture image to be recommended and displayed in a second display area of the dual-screen mobile phone or the folding-screen mobile phone, so that the shooting object can view the posture image, and the shooting object can adjust its own shooting action and/or shooting station position and the like directly according to the posture image, so as to improve posture adjustment efficiency.
Optionally, after the shooting category to which the first image belongs is determined, a preset number of target sample images in the shooting category may be acquired, and each target sample image may be determined as a posture image to be recommended and displayed in the second display area of the single-screen mobile phone, so that a photographer guides a shooting object to perform adjustment of a shooting posture according to the posture image.
It should be understood that the preset number and the obtaining manner corresponding to the target sample images in the scene are similar to the preset number and the obtaining manner corresponding to the target sample images in the scene one, and the basic principle is the same, and the description is omitted here.
In a possible implementation manner, after it is determined that each photographic subject has adjusted its shooting posture, a second image currently corresponding to the first display area may first be obtained, and each photographic subject in the second image, together with the height feature, and/or body shape feature, and/or face shape feature corresponding to each photographic subject, may be identified through an AI recognition technology and an image segmentation technology. Then, the height features, and/or body shape features, and/or face shape features of the photographic subjects are compared to obtain corresponding comparison results, and whether the standing positions of the photographic subjects need to be adjusted is determined according to the comparison results. If no standing position adjustment is needed, the photographing operation is performed to obtain a photographed image; if standing position adjustment is needed, the standing position adjustment manner is determined according to the comparison results, and the standing positions of the photographic subjects are adjusted accordingly, so that occlusion in the co-shooting scene is avoided, the image shooting effect is improved, and user experience is improved.
It should be understood that the above-mentioned obtaining of the height feature, and/or body shape feature, and/or face shape feature corresponding to each photographic subject covers obtaining any non-empty combination of the three features, that is, any one of the features alone, any two of them together, or all three of them.
Specifically, the standing position adjusting mode can be determined according to the comparison result between the height characteristics, and the standing position adjusting mode at the moment mainly adjusts the shooting object with higher height to the rear row far away from the camera so as to ensure that the shooting object between the front row and the rear row cannot be shielded, thereby improving the image shooting effect.
For example, if it is determined that the first subject in the first row blocks the second subject in the second row according to the comparison result between the height features, the determined standing position adjustment manner may be to perform standing position adjustment on the first subject and the second subject.
Optionally, a standing position adjustment mode may be determined according to a comparison result between the facial features, where the standing position adjustment mode mainly adjusts the shot object with a larger facial shape to a rear row farther from the camera, so as to improve an image shooting effect and increase shooting satisfaction of the shot object.
For example, if it is determined from the comparison result between the facial shape features that the facial shape of the third photographic subject in the first row is larger than the facial shape of the fourth photographic subject in the second row, the determined standing position adjustment manner may be to perform standing position adjustment on the third photographic subject and the fourth photographic subject.
Optionally, the standing position adjustment mode may be determined according to a comparison result between the body type features, and the standing position adjustment mode at this time is mainly to adjust the shooting object with the larger body type to the rear row farther from the camera or to both sides farther from the shooting center, so as to improve the image shooting effect.
For example, as shown in fig. 11, if it is determined from the comparison result between the body type features that the body type of the photographic subject 1100 in the first row is larger than the body type of the photographic subject 1101 in the first row, the determined standing position adjustment method may be to perform standing position adjustment on the photographic subject 1100 and the photographic subject 1101.
It should be noted that, when determining the standing position adjustment mode according to two or three of the height feature, the body shape feature and the face shape feature, the standing position adjustment mode may be determined according to the priority order of the height feature > the face shape feature > the body shape feature, that is, the standing position adjustment mode may be determined preferentially according to the height feature, when the height features are the same, the standing position adjustment mode may be determined according to the face shape feature, and when the height features and the face shape features are the same, the standing position adjustment mode may be determined according to the body shape features.
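The priority order height > face shape > body shape for deciding a standing position adjustment can be sketched as a simple comparison rule. The numeric fields and the pairwise front-row/back-row comparison are illustrative assumptions about how the comparison results would be represented:

```python
def needs_swap(front, back):
    """Decide whether a front-row and a back-row subject should swap rows,
    using the priority order: height, then face size, then body size.

    Each subject is a dict with numeric "height", "face", "body" entries --
    an assumed encoding of the identified features.
    """
    for key in ("height", "face", "body"):       # priority order from the text
        if front[key] > back[key]:
            return True                          # larger subject in front would occlude the back row
        if front[key] < back[key]:
            return False                         # current arrangement already correct
    return False                                 # all compared features equal: keep positions
```

Only when the higher-priority feature ties does the rule fall through to the next one, matching the "height > face shape > body shape" order stated above.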
Alternatively, as shown in fig. 12, in the dual-screen mobile phone and the folding-screen mobile phone in the folded state, the determined standing position adjustment manner may be displayed in the second display area viewable by the photographic subject through the image indication, so that the photographic subject may directly perform the standing position adjustment according to the displayed image indication, thereby improving the standing position adjustment efficiency.
It should be understood that in the double-sided screen mobile phone and the folding screen mobile phone in folding state, the determined station adjustment mode can also be displayed in the first display area that the photographer can check through image indication, so that the photographer can assist in guiding the shooting object to perform station adjustment according to the image indication displayed in the first display area, or the photographer can know the station adjustment effect of the shooting object, the image shooting effect is improved, and the user experience is improved.
Optionally, in the single-screen mobile phone, the determined standing position adjustment mode may instruct the shooting object to perform the standing position adjustment in a voice playing mode, or may be displayed in the first display area through an image indication, so that the photographer may instruct the shooting object to perform the standing position adjustment according to the displayed image indication.
It should be understood that, in the dual-screen mobile phone and the folding-screen mobile phone in the folded state, the determined standing position adjustment mode may also assist to guide the shooting object to perform the standing position adjustment in a voice playing mode, or may also be displayed in the first display area through an image indication, so that the photographer assists to guide the shooting object to perform the standing position adjustment according to the displayed image indication, which is not limited in the embodiment of the present application.
In a possible implementation manner, after the station position adjustment of the photographic subject is completed, a third image corresponding to the first display area may be further obtained, an overall photographic attitude corresponding to the photographic subject in the third image may be recognized through an AI recognition technology, and then a matching degree between the overall photographic attitude and a target close-up photographic attitude of a preset subject in the selected target attitude image may be calculated; and if the matching degree is smaller than a second threshold value, determining a distinguishing position between the overall photographing posture and the target close-up photographing posture, and prompting the photographing object to adjust the photographing posture according to the distinguishing position until the matching degree between the adjusted overall photographing posture and the target close-up photographing posture is larger than or equal to the second threshold value. When the matching degree between the adjusted overall photographing posture and the target close-up photographing posture is larger than or equal to the second threshold value, the camera can be controlled to photograph, a photographed image is obtained, the image photographing effect is improved, and the user experience is improved.
Optionally, after the matching degree between the overall photographing posture and the target close-up photographing posture is calculated, the matching degree may be displayed in the first display area and/or the second display area in real time, so that the subject and/or the photographer can clearly know the specific fitting condition between the photographing posture and the posture image of the subject, for example, as shown in fig. 9, the calculated matching degree 9.5 may be displayed in the second display area, so that the photographer can clearly know the specific fitting condition between the photographing posture and the posture image of the subject.
It should be noted that the matching degree calculation in this scenario is similar to the matching degree calculation in scenario one, and the basic principle is the same, which is not described herein again.
The scene is different from the first scene in that the scene can be applied to a multi-person close-up scene, and the actions of all shooting objects, the adjustment of the station positions and the like in close-up can be guided according to the attitude images, so that the image shooting effect of close-up is improved, and the user experience is improved.
As shown in fig. 13, an embodiment of the present application provides a photographing method, which is applicable to the mobile terminal, where the photographing method may include:
S1301, acquiring a first image corresponding to the first display area;
S1302, identifying a target feature of a shooting object in the first image and a scene feature of the scene where the shooting object is located;
S1303, obtaining a posture image to be recommended according to the target feature of the shooting object and the scene feature of the scene where the shooting object is located, and displaying the posture image in a second display area to prompt the shooting object to perform posture adjustment according to the posture image displayed in the second display area;
and S1304, photographing the photographed object to obtain a photographed image.
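The four steps above can be sketched as follows. Every collaborator object here (camera, recognizer, recommender, second display) is a hypothetical stand-in, since the embodiment does not prescribe concrete interfaces:

```python
def photographing_pipeline(camera, recognizer, recommender, second_display):
    """Illustrative sketch of steps S1301-S1304; all parameter
    objects are assumed stand-ins for the modules in the text."""
    # S1301: acquire the first image corresponding to the first display area.
    first_image = camera.capture_preview()
    # S1302: identify the subject's target features and the scene features.
    target_features = recognizer.subject_features(first_image)
    scene_features = recognizer.scene_features(first_image)
    # S1303: obtain a posture image to recommend and show it in the
    # second display area so the subject can adjust their pose.
    posture_image = recommender.recommend(target_features, scene_features)
    second_display.show(posture_image)
    # S1304: photograph the subject to obtain the photographed image.
    return camera.shoot()
```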
It should be noted that the first image may be the first preview frame, an Nth preview frame after the first frame, the first frame captured by a camera, or an Nth frame after the first frame, where N is an integer greater than 1; the target features of the photographic subject include, but are not limited to, facial features and body features; and the posture image to be recommended may be obtained locally or from a network.
In a possible implementation manner, the obtaining a posture image to be recommended according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located includes:
acquiring at least one sample image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and selecting a preset number of target sample images from the sample images, and determining the selected target sample images as the posture images to be recommended.
Specifically, the selecting a preset number of target sample images from the sample images includes:
and acquiring a recommendation value of each sample image, and selecting the N sample images with the highest recommendation values as target sample images, wherein N is an integer greater than or equal to 1.
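A minimal sketch of this top-N selection, assuming each sample image carries a numeric recommendation value (the embodiment does not specify where the value comes from, e.g. editorial rating or past user uptake):

```python
def select_target_samples(sample_images, n=1):
    """Pick the N sample images with the highest recommendation values.

    `sample_images` is a list of (image, recommendation_value) pairs;
    the pair representation is an illustrative assumption.
    """
    if n < 1:
        raise ValueError("N must be an integer greater than or equal to 1")
    # Rank by recommendation value, highest first, and keep the top N.
    ranked = sorted(sample_images, key=lambda pair: pair[1], reverse=True)
    return [image for image, _value in ranked[:n]]
```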
Optionally, when a shooting object exists in the first image, the obtaining at least one sample image according to the target feature of the shooting object and the scene feature of the scene where the shooting object is located includes:
determining a shooting category corresponding to the first image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and acquiring at least one sample image corresponding to the shooting category.
In one possible implementation manner, when a plurality of photographic subjects exist in the first image, the identifying the target feature of the photographic subject in the first image includes:
identifying a target feature of each shooting object in the first image;
correspondingly, the obtaining at least one sample image according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located includes:
determining the group characteristics of the plurality of photographic objects in the first image according to the target characteristics of each photographic object;
and acquiring at least one sample image according to the group characteristics and the scene characteristics.
Optionally, the acquiring at least one sample image according to the group feature and the scene feature comprises:
and acquiring at least one sample image with the matching degree of the group characteristics and the scene characteristics larger than a first threshold value from a pre-stored sample image library according to the group characteristics and the scene characteristics.
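As an illustration of filtering a pre-stored sample image library against the first threshold, assuming features are represented as tag sets and the matching degree as their Jaccard overlap (both modelling choices are assumptions, not part of the embodiment):

```python
def match_samples(library, group_features, scene_features, first_threshold=0.7):
    """Return sample images whose matching degree with the group and
    scene features is greater than the first threshold.

    `library` is a list of (image, tag_set) pairs from a pre-stored
    sample image library; the tag-set model is illustrative only.
    """
    query = set(group_features) | set(scene_features)
    matches = []
    for image, tags in library:
        tags = set(tags)
        union = query | tags
        # Jaccard overlap as a stand-in for the matching degree.
        degree = len(query & tags) / len(union) if union else 0.0
        if degree > first_threshold:
            matches.append(image)
    return matches
```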
In a possible implementation manner, the photographing the photographic object to obtain a photographed image includes:
and after determining that the shooting object carries out posture adjustment according to the posture image displayed in the second display area, shooting the shooting object to obtain a shooting image.
Optionally, after determining that the posture of the photographic subject has been adjusted according to the posture image displayed in the second display area, photographing the subject to obtain a photographed image includes:
acquiring a second image corresponding to the first display area;
and when the matching degree between the second image and the posture image displayed in the second display area is greater than a second threshold, determining that the posture adjustment of the photographic subject is complete, and photographing the subject to obtain a photographed image.
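This check can be sketched as a polling loop over preview frames; the loop structure, the frame cap, and the matching function are all assumptions rather than prescribed behavior:

```python
def shoot_when_adjusted(camera, posture_image, matching_fn,
                        second_threshold=0.8, max_frames=100):
    """Poll preview frames (each acting as the 'second image') and
    shoot once the matching degree with the displayed posture image
    exceeds the second threshold. `matching_fn` is any callable that
    returns a matching degree for (frame, posture_image)."""
    for _ in range(max_frames):
        second_image = camera.capture_preview()
        if matching_fn(second_image, posture_image) > second_threshold:
            # Posture adjustment is complete: take the photograph.
            return camera.shoot()
    return None  # the subject never matched closely enough
```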
Optionally, after the displaying of the posture image in the second display area, the method further includes:
and acquiring a voice prompt and/or text prompt corresponding to the posture image, and prompting the photographic subject to adjust its posture through the voice prompt and/or the text prompt.
In a possible implementation manner, the photographing method is applied to a mobile terminal, the mobile terminal includes a first display screen and a second display screen, a display area of the first display screen is a first display area, a display area of the second display screen is a second display area, the first display area faces a photographer and the second display area faces a photographic object during photographing.
Optionally, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, a display area of the display screen is a first display area, the second display area is a partial display area of the first display area, and the first display area and the second display area face a photographer or a photographic object when photographing.
In a possible implementation manner, the photographing method is applied to a mobile terminal, the mobile terminal includes a display screen, the first display area and the second display area are located in different display areas of the display screen, and when photographing, the first display area and the second display area face a photographer or a photographing object.
According to the image shooting method and apparatus of the embodiments of the present application, the posture image is obtained by combining the target feature of the photographic subject with the scene feature, which improves the accuracy and effectiveness of posture image acquisition. Displaying the posture image in the second display area makes it convenient for the subject to adjust its posture directly according to the displayed image, or for the photographer to guide the subject's adjustment, which effectively improves the image shooting effect and the user experience.
Fig. 14 shows a block diagram of a photographing apparatus provided in an embodiment of the present application, which corresponds to the photographing method described in the above embodiments; for convenience of description, only the portions related to this embodiment are shown.
Referring to fig. 14, the photographing apparatus may be applied to the mobile terminal, and the photographing apparatus may include:
an image acquisition module 1401 for acquiring a first image corresponding to the first display region;
a feature recognition module 1402, configured to recognize a target feature of a photographic object in the first image and a scene feature of a scene where the photographic object is located;
a pose image recommendation module 1403, configured to obtain a pose image to be recommended according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located, and display the pose image in a second display area, so as to prompt the photographic object to perform pose adjustment according to the pose image displayed in the second display area;
and a photographing module 1404, configured to photograph the photographic object to obtain a photographed image.
In one possible implementation, the pose image recommendation module 1403 includes:
the sample image acquisition unit is used for acquiring at least one sample image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and a posture image determination unit, configured to select a preset number of target sample images from the sample images and determine the selected target sample images as the posture images to be recommended.
Specifically, the posture image determination unit is configured to obtain a recommendation value of each sample image and select the N sample images with the highest recommendation values as target sample images, where N is an integer greater than or equal to 1.
Alternatively, when one photographic subject exists in the first image, the sample image acquiring unit includes:
the shooting category determining subunit is configured to determine a shooting category corresponding to the first image according to the target feature of the shooting object and the scene feature of the scene where the shooting object is located;
and the sample image acquisition first subunit is used for acquiring at least one sample image corresponding to the shooting category.
In a possible implementation manner, when a plurality of photographic subjects exist in the first image, the feature recognition module 1402 is further configured to recognize a target feature of each photographic subject in the first image;
correspondingly, the sample image acquisition unit comprises:
a group feature determination subunit configured to determine a group feature of the plurality of photographic subjects in the first image according to a target feature of each photographic subject;
and the sample image acquisition second subunit is used for acquiring at least one sample image according to the group characteristics and the scene characteristics.
Specifically, the sample image obtaining second subunit is specifically configured to obtain, according to the group feature and the scene feature, at least one sample image whose matching degree with the group feature and the scene feature is greater than a first threshold from a pre-stored sample image library.
Optionally, the photographing module 1404 includes:
and a posture adjustment determination unit, configured to photograph the photographic subject to obtain a photographed image after determining that the subject has adjusted its posture according to the posture image displayed in the second display area.
Specifically, the posture adjustment determination unit includes:
an image acquisition subunit, configured to acquire a second image corresponding to the first display area;
and a photographing subunit, configured to determine that the subject's posture adjustment is complete when the matching degree between the second image and the posture image displayed in the second display area is greater than a second threshold, and to photograph the subject to obtain a photographed image.
In a possible implementation manner, the photographing apparatus further includes:
and a prompt module, configured to acquire a voice prompt and/or text prompt corresponding to the posture image and to prompt the photographic subject to adjust its posture through the voice prompt and/or the text prompt.
Optionally, the photographing device is applied to a mobile terminal, the mobile terminal includes a first display screen and a second display screen, a display area of the first display screen is a first display area, a display area of the second display screen is a second display area, when photographing, the first display area faces a photographer, and the second display area faces a photographing object.
In a possible implementation manner, the photographing device is applied to a mobile terminal, the mobile terminal includes a display screen, a display area of the display screen is a first display area, the second display area is a partial display area of the first display area, and when photographing, the first display area and the second display area face a photographer or a photographic object.
Optionally, the photographing device is applied to a mobile terminal, the mobile terminal includes a display screen, the first display area and the second display area are located in different display areas of the display screen, and when photographing, the first display area and the second display area face a photographer or a photographing object.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 15 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application. As shown in fig. 15, the mobile terminal 15 of this embodiment includes: at least one processor 1500 (only one shown in fig. 15), a memory 1501, a computer program 1502 stored in the memory 1501 and operable on the at least one processor 1500, and a display 1503, the display 1503 including a first display region and a second display region, the processor 1500, when executing the computer program 1502, causing the mobile terminal to implement any of the various photographing method embodiments described above.
Those skilled in the art will appreciate that fig. 15 is merely an example of the mobile terminal 15 and does not limit it; the mobile terminal may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices and network access devices.
Embodiments of the present application further provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements steps that can implement the above-mentioned method embodiments.
An embodiment of the present application provides a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the photographing method embodiments described above.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (14)

1. A method of taking a picture, comprising:
acquiring a first image corresponding to a first display area;
identifying target characteristics of a shooting object in the first image and scene characteristics of a scene where the shooting object is located;
acquiring a posture image to be recommended according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located, and displaying the posture image in a second display area to prompt the shooting object to perform posture adjustment according to the posture image displayed in the second display area;
and photographing the photographed object to obtain a photographed image.
2. The photographing method according to claim 1, wherein the obtaining of the posture image to be recommended according to the target feature of the photographic object and the scene feature of the scene in which the photographic object is located comprises:
acquiring at least one sample image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and selecting a preset number of target sample images from the sample images, and determining the selected target sample images as the posture images to be recommended.
3. The photographing method of claim 2, wherein the selecting a preset number of target sample images from the sample images comprises:
and acquiring a recommendation value of each sample image, and selecting the N sample images with the highest recommendation values as target sample images, wherein N is an integer greater than or equal to 1.
4. The photographing method of claim 2, wherein when a photographic subject exists in the first image, the obtaining at least one sample image according to the target feature of the photographic subject and the scene feature of the scene in which the photographic subject is located comprises:
determining a shooting category corresponding to the first image according to the target characteristics of the shooting object and the scene characteristics of the scene where the shooting object is located;
and acquiring at least one sample image corresponding to the shooting category.
5. The photographing method according to claim 2, wherein the identifying a target feature of the photographic subject in the first image when a plurality of photographic subjects exist in the first image comprises:
identifying a target feature of each shooting object in the first image;
correspondingly, the obtaining at least one sample image according to the target feature of the photographic object and the scene feature of the scene where the photographic object is located includes:
determining the group characteristics of the plurality of photographic objects in the first image according to the target characteristics of each photographic object;
and acquiring at least one sample image according to the group characteristics and the scene characteristics.
6. The photographing method of claim 5, wherein the obtaining at least one sample image according to the group feature and the scene feature comprises:
and acquiring at least one sample image with the matching degree of the group characteristics and the scene characteristics larger than a first threshold value from a pre-stored sample image library according to the group characteristics and the scene characteristics.
7. The photographing method according to claim 1, wherein the photographing of the subject to obtain the photographed image comprises:
and after determining that the shooting object carries out posture adjustment according to the posture image displayed in the second display area, shooting the shooting object to obtain a shooting image.
8. The photographing method according to claim 7, wherein the photographing the subject to obtain the photographed image after determining that the subject performs the posture adjustment according to the posture image displayed in the second display region comprises:
acquiring a second image corresponding to the first display area;
and when the matching degree between the second image and the posture image displayed in the second display area is greater than a second threshold, determining that the posture adjustment of the photographic subject is complete, and photographing the subject to obtain a photographed image.
9. The photographing method of claim 1, further comprising, after displaying the pose image in the second display area:
and acquiring a voice prompt and/or text prompt corresponding to the posture image, and prompting the photographic subject to adjust its posture through the voice prompt and/or the text prompt.
10. The photographing method according to any one of claims 1 to 9, wherein the photographing method is applied to a mobile terminal, the mobile terminal includes a first display screen and a second display screen, a display area of the first display screen is a first display area, a display area of the second display screen is a second display area, the first display area faces a photographer when photographing, and the second display area faces a subject to be photographed.
11. The photographing method according to any one of claims 1 to 9, wherein the photographing method is applied to a mobile terminal including a display screen, a display area of the display screen is a first display area, the second display area is a part of the first display area, and the first display area and the second display area face a photographer or a subject of photographing at the time of photographing.
12. The photographing method according to any one of claims 1 to 9, wherein the photographing method is applied to a mobile terminal including a display screen, the first display area and the second display area are located in different display areas of the display screen, and the first display area and the second display area face a photographer or a subject to be photographed when photographing.
13. A mobile terminal comprising a display, a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, causes the mobile terminal to implement the photographing method according to any one of claims 1 to 12.
14. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the photographing method according to any one of claims 1 to 12.
CN201910727319.7A 2019-08-07 2019-08-07 Photographing method and mobile terminal Pending CN112351185A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910727319.7A CN112351185A (en) 2019-08-07 2019-08-07 Photographing method and mobile terminal
PCT/CN2020/105144 WO2021023059A1 (en) 2019-08-07 2020-07-28 Photographing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910727319.7A CN112351185A (en) 2019-08-07 2019-08-07 Photographing method and mobile terminal

Publications (1)

Publication Number Publication Date
CN112351185A true CN112351185A (en) 2021-02-09

Family

ID=74367294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910727319.7A Pending CN112351185A (en) 2019-08-07 2019-08-07 Photographing method and mobile terminal

Country Status (2)

Country Link
CN (1) CN112351185A (en)
WO (1) WO2021023059A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194254A (en) * 2021-04-28 2021-07-30 上海商汤智能科技有限公司 Image shooting method and device, electronic equipment and storage medium
CN113596306A (en) * 2021-07-27 2021-11-02 重庆电子工程职业学院 Shooting range prompting device of photographic equipment
CN113705401A (en) * 2021-08-18 2021-11-26 深圳传音控股股份有限公司 Image processing method, terminal device and storage medium
CN114466132A (en) * 2021-06-11 2022-05-10 荣耀终端有限公司 Photographing display method, electronic equipment and storage medium
CN115022543A (en) * 2022-05-31 2022-09-06 Oppo广东移动通信有限公司 Photographing method, photographing device, terminal and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022226809A1 (en) * 2021-04-27 2022-11-03 深圳市大疆创新科技有限公司 Image capture method and apparatus, storage medium, and terminal device
CN113361513A (en) * 2021-06-07 2021-09-07 博奥生物集团有限公司 Mobile terminal tongue picture acquisition method, device and equipment
CN113595746B (en) * 2021-07-16 2023-07-11 广州市瀚云信息技术有限公司 Control method and device for power supply of shielding device
CN113824878A (en) * 2021-08-20 2021-12-21 荣耀终端有限公司 Shooting control method based on foldable screen and electronic equipment
CN113780217A (en) * 2021-09-16 2021-12-10 中国平安人寿保险股份有限公司 Live broadcast auxiliary prompting method and device, computer equipment and storage medium
CN113890994B (en) * 2021-09-30 2022-12-23 荣耀终端有限公司 Image photographing method, system and storage medium
CN114063864A (en) * 2021-11-29 2022-02-18 惠州Tcl移动通信有限公司 Image display method, image display device, electronic equipment and computer readable storage medium
US11871104B2 (en) * 2022-03-29 2024-01-09 Qualcomm Incorporated Recommendations for image capture

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110314049A1 (en) * 2010-06-22 2011-12-22 Xerox Corporation Photography assistant and method for assisting a user in photographing landmarks and scenes
CN107257439A (en) * 2017-07-26 2017-10-17 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107835364A (en) * 2017-10-30 2018-03-23 维沃移动通信有限公司 One kind is taken pictures householder method and mobile terminal
CN108184050A (en) * 2017-12-15 2018-06-19 维沃移动通信有限公司 A kind of photographic method, mobile terminal
CN108347559A (en) * 2018-01-05 2018-07-31 深圳市金立通信设备有限公司 A kind of image pickup method, terminal and computer readable storage medium
CN108924413A (en) * 2018-06-27 2018-11-30 维沃移动通信有限公司 Image pickup method and mobile terminal
CN109547694A (en) * 2018-11-29 2019-03-29 维沃移动通信有限公司 A kind of image display method and terminal device
CN109964478A (en) * 2017-10-14 2019-07-02 华为技术有限公司 A kind of image pickup method and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102891958A (en) * 2011-07-22 2013-01-23 北京华旗随身数码股份有限公司 Digital camera with posture guiding function
CN103220466B (en) * 2013-03-27 2016-08-24 华为终端有限公司 The output intent of picture and device
CN107018333A (en) * 2017-05-27 2017-08-04 北京小米移动软件有限公司 Shoot template and recommend method, device and capture apparatus
KR102438201B1 (en) * 2017-12-01 2022-08-30 삼성전자주식회사 Method and system for providing recommendation information related to photography

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110314049A1 (en) * 2010-06-22 2011-12-22 Xerox Corporation Photography assistant and method for assisting a user in photographing landmarks and scenes
CN107257439A (en) * 2017-07-26 2017-10-17 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109964478A (en) * 2017-10-14 2019-07-02 华为技术有限公司 A kind of image pickup method and electronic device
CN107835364A (en) * 2017-10-30 2018-03-23 维沃移动通信有限公司 One kind is taken pictures householder method and mobile terminal
CN108184050A (en) * 2017-12-15 2018-06-19 维沃移动通信有限公司 A kind of photographic method, mobile terminal
CN108347559A (en) * 2018-01-05 2018-07-31 深圳市金立通信设备有限公司 A kind of image pickup method, terminal and computer readable storage medium
CN108924413A (en) * 2018-06-27 2018-11-30 维沃移动通信有限公司 Image pickup method and mobile terminal
CN109547694A (en) * 2018-11-29 2019-03-29 维沃移动通信有限公司 A kind of image display method and terminal device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113194254A (en) * 2021-04-28 2021-07-30 上海商汤智能科技有限公司 Image shooting method and device, electronic equipment and storage medium
WO2022227393A1 (en) * 2021-04-28 2022-11-03 上海商汤智能科技有限公司 Image photographing method and apparatus, electronic device, and computer readable storage medium
CN114466132A (en) * 2021-06-11 2022-05-10 荣耀终端有限公司 Photographing display method, electronic equipment and storage medium
CN113596306A (en) * 2021-07-27 2021-11-02 重庆电子工程职业学院 Shooting range prompting device of photographic equipment
CN113596306B (en) * 2021-07-27 2023-05-05 重庆电子工程职业学院 Shooting range prompting device of photographic equipment
CN113705401A (en) * 2021-08-18 2021-11-26 深圳传音控股股份有限公司 Image processing method, terminal device and storage medium
CN115022543A (en) * 2022-05-31 2022-09-06 Oppo广东移动通信有限公司 Photographing method, photographing device, terminal and storage medium

Also Published As

Publication number Publication date
WO2021023059A1 (en) 2021-02-11

Similar Documents

Publication Publication Date Title
WO2021023059A1 (en) Photographing method and mobile terminal
WO2021135601A1 (en) Auxiliary photographing method and apparatus, terminal device, and storage medium
CN108629747B (en) Image enhancement method and device, electronic equipment and storage medium
CN108234891B (en) A kind of photographic method and mobile terminal
CN107995429A (en) A kind of image pickup method and mobile terminal
CN109194879A (en) Photographic method, device, storage medium and mobile terminal
WO2019134516A1 (en) Method and device for generating panoramic image, storage medium, and electronic apparatus
CN109547694A (en) A kind of image display method and terminal device
CN109361865A (en) A kind of image pickup method and terminal
CN108712603B (en) Image processing method and mobile terminal
CN108886574B (en) Shooting guide method, equipment and system
CN109348135A (en) Photographic method, device, storage medium and terminal device
CN110865754B (en) Information display method and device and terminal
WO2022227393A1 (en) Image photographing method and apparatus, electronic device, and computer readable storage medium
CN107592459A (en) A kind of photographic method and mobile terminal
CN108600647A (en) Shooting preview method, mobile terminal and storage medium
CN110827195B (en) Virtual article adding method and device, electronic equipment and storage medium
CN109639969A (en) A kind of image processing method, terminal and server
CN108462826A (en) A kind of method and mobile terminal of auxiliary photo-taking
CN109495616A (en) A kind of photographic method and terminal device
CN110086998B (en) Shooting method and terminal
CN107948503A (en) A kind of photographic method, camera arrangement and mobile terminal
CN107995417A (en) A kind of method taken pictures and mobile terminal
CN110365906A (en) Image pickup method and mobile terminal
CN109947243A (en) Based on the capture of intelligent electronic device gesture and identification technology for touching hand detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210209