CN110868538A - Method and electronic equipment for recommending shooting posture - Google Patents


Info

Publication number
CN110868538A
CN110868538A (application CN201911093302.7A)
Authority
CN
China
Prior art keywords
image
recommended
preview image
posture
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911093302.7A
Other languages
Chinese (zh)
Inventor
徐杨
王左龙
吕品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201911093302.7A
Publication of CN110868538A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and an electronic device for recommending a shooting posture. The method comprises the following steps: acquiring a preview image of a pre-shot scene; identifying an object in the preview image and/or a category to which the preview image belongs; and recommending a corresponding shooting posture based on the recognition result. The method and the device can automatically identify objects in the preview image of the pre-shot scene so that the user can avoid a specific object when shooting, and can automatically recommend shooting postures based on the identified category, providing personalized posture recommendation for the user. The method and the device have the advantages of convenient and intuitive operation and diversified postures.

Description

Method and electronic equipment for recommending shooting posture
Technical Field
The present disclosure relates to image capturing technology, and more particularly, to a method and an electronic device for recommending a shooting posture.
Background
With the development of technology and the improvement of living standards, photographing has become widespread, and electronic devices such as mobile phones generally provide a photographing function.
However, an object that is not desired to be photographed, such as a trash can, often appears in a captured picture, and the user then needs to remove the object through image processing.
In addition, the posture of the photographed person, such as a gesture, may not match the scene, the clothing of the photographed person, or the number of people photographed together. This may lead to re-shooting or deletion of the photo, and the photographed person may not be able to recapture the earlier moment, which degrades the photographing experience.
Disclosure of Invention
The present disclosure provides a method and an electronic device for recommending a shooting posture, to facilitate photographing and improve the photographing effect.
According to an exemplary embodiment of the present disclosure, there is provided a method of recommending a photographing gesture, wherein the method includes: acquiring a preview image of a pre-shot scene; identifying an object in the preview image and/or a category to which the preview image belongs; and recommending corresponding shooting postures based on the recognition result.
Optionally, identifying the object in the preview image includes: performing object recognition using an object identification model; and marking, according to the recognition result, an object that is not desired to be photographed in the preview image.
Optionally, identifying the category to which the preview image belongs includes: performing preview image category identification using an image classification model.
Optionally, the object identification model and the image classification model are each obtained by training on labeled training images.
Optionally, the method further includes: training an image classification model for identifying the category to which an image belongs, using training images labeled with scene categories, number-of-people categories, and human-feature categories.
Optionally, recommending the corresponding shooting posture based on the recognition result includes at least one of: displaying, in the preview image, a recommended posture image corresponding to the identified category; displaying a prompt for changing the posture of the shooting device; displaying the similarity between the person's posture and the recommended posture; and displaying, in the preview image, a recommended posture image corresponding to the identified number of people, where the number of people is identified according to the distances between the people in the preview image.
Optionally, displaying the recommended posture image includes at least one of: displaying a thumbnail of the recommended posture image in the preview image; displaying the recommended posture image at a recommended position in the preview image; and switching the displayed recommended posture image in response to a user input, where the recommended posture image includes at least one of: a ghosted recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
According to another exemplary embodiment of the present disclosure, there is provided an electronic device, wherein the electronic device includes: a shooting unit for acquiring a preview image of a pre-shot scene; a display unit configured to display the preview image; and a processor for identifying an object in the preview image and/or a category to which the preview image belongs, and for performing control to recommend a corresponding shooting posture based on the recognition result.
Optionally, the processor performs object recognition using an object identification model, and controls the display unit to mark, according to the recognition result, an object that is not desired to be photographed in the preview image.
Optionally, the processor performs preview image category identification using an image classification model.
Optionally, the object identification model and the image classification model are each obtained by training on labeled training images.
Optionally, the processor is further configured to train an image classification model for identifying the category to which an image belongs, using training images labeled with scene categories, number-of-people categories, and human-feature categories.
Optionally, the processor controls the display unit to recommend the shooting posture by at least one of the following operations: displaying, in the preview image, a recommended posture image corresponding to the identified category; displaying a prompt for changing the posture of the shooting device; displaying the similarity between the person's posture and the recommended posture; and displaying, in the preview image, a recommended posture image corresponding to the identified number of people, where the number of people is identified according to the distances between the people in the preview image.
Optionally, the processor controls the display unit to display the recommended posture image by at least one of: displaying a thumbnail of the recommended posture image in the preview image; displaying the recommended posture image at a recommended position in the preview image; and switching the displayed recommended posture image in response to a user input, where the recommended posture image includes at least one of: a ghosted recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
According to another exemplary embodiment of the present disclosure, a computer-readable storage medium storing instructions is provided, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform the method as described above.
According to the disclosed method and electronic device, an object can be automatically identified so that a specific object, such as a trash can, may be avoided during shooting, for example by moving or changing the shooting direction; and shooting posture recommendation can be carried out automatically according to the category of the preview image, with or without a photographed person, so that the recommended postures, such as gestures and facial expressions, suit the scene, the number of people, or the human features of the photographed person. The present disclosure divides image categories more finely so that a category indicates the scene category, number-of-people category, and human-feature category of an image. The scene categories include: scenic, play, urban, household, cultural, workplace, and party categories; the number-of-people categories include: single-person, small-group, and large-group categories; and the human-feature category is determined based on at least one of: gender, body type, skin tone, facial features, hairstyle, and clothing. In this way, more category combinations can be accommodated, and more posture recommendation schemes can be provided for the user. When recommending, a thumbnail, a real-person image, a contour, or the like, or a prompt regarding similarity, may be provided as an intuitive indication. In addition to posture recommendation for the subject, an instruction may be given to the photographer, for example to change the posture of the shooting device, such as its shooting angle or orientation. The method and the device thus have the advantages of simple and intuitive operation, suitability for the environment, and diversity, and can improve the user experience.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
The above and other objects and features of the exemplary embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings which illustrate exemplary embodiments, wherein:
fig. 1 illustrates a flowchart of a method of recommending a photographing posture according to an exemplary embodiment of the present disclosure;
FIG. 2 shows a cell phone screen according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a gesture recommendation screen in accordance with an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a gesture recommendation screen according to another exemplary embodiment of the present disclosure;
FIG. 5 illustrates a gesture recommendation screen according to another exemplary embodiment of the present disclosure;
FIG. 6 illustrates a gesture recommendation screen according to another exemplary embodiment of the present disclosure;
fig. 7 and 8 respectively show a posture recommendation screen of another exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present disclosure by referring to the figures.
Fig. 1 illustrates a flowchart of a method of recommending a photographing posture according to an exemplary embodiment of the present disclosure, which may include steps 101 to 103, as illustrated in fig. 1. In step 101, a preview image of a pre-shot scene is acquired; in step 102, an object in the preview image and/or a category to which the preview image belongs is identified; in step 103, a corresponding shooting posture is recommended based on the recognition result.
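The three steps above can be sketched as a minimal pipeline. This is a hedged illustration only: `classify` and `pose_library` are hypothetical stand-ins for the trained image classification model and the pose library described later, not part of any real camera API.

```python
# Minimal sketch of steps 101-103. `classify` is any callable mapping an
# image to a category label; `pose_library` maps category labels to poses.
def recommend_pose(preview_image, classify, pose_library):
    category = classify(preview_image)   # step 102: identify the category
    return pose_library.get(category)    # step 103: recommend a matching pose

# Step 101 would supply the preview image from the camera; a placeholder
# object stands in here.
preview = object()
pose = recommend_pose(preview,
                      lambda img: "landscape-single-person",
                      {"landscape-single-person": "V sign"})
```

An unknown category simply yields no recommendation, mirroring the "at least one of" phrasing: recommendation is optional per recognition result.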
The method according to the exemplary embodiment of the present disclosure may be implemented in a mobile phone; of course, this is for illustrative purposes only and is not intended to limit the scope of the present disclosure. The method may also be implemented in other electronic devices having a photographing function, such as a tablet computer, a digital camera, or a portable wearable device. The preview image may be captured by the camera and displayed on the display unit, and may or may not contain a person and a background. The preview image can be processed by the object identification model, and an object prompt is given according to the result: the object identification model may be used to identify objects, and when a specific object is identified, a prompt for that object may be displayed.
As an example, identifying the object in the preview image includes: performing object recognition using the object identification model, and marking, according to the recognition result, an object that is not desired to be photographed in the preview image.
As an example, the object identification model may be trained as follows: training images labeled with object labels are acquired, and the training images are used to train an object identification model for identifying an object, which may be an object not desired to be photographed.
The object that is not desired to be photographed is a specific object and may be set in advance. Such objects may include trash cans, bystanders, and the like: people or things that should not appear in the photograph or video being shot and that may be considered unaesthetic. Take a trash can as an example: if a photo contains a trash can, the user may need to retake the shot, or find the photo with the trash can afterwards and delete it. Object prompting with an object identification model addresses this. The object identification model can be used to determine whether the scene corresponding to the preview image (that is, the scene currently captured by the camera) is suitable for shooting. Specifically, the model identifies whether the preview image contains a specific object; if so, the scene is not suitable for shooting, and otherwise it is. The object identification model may be an artificial intelligence (AI) model obtained by training. The AI model here may include, but is not limited to, a neural network model, and the data for training it may be labeled picture data.
For example, an object identification model may be trained as follows: an image dataset containing a specific object (for example, a trash can) is used as a training set, each image in the training set is labeled, and the training set and the corresponding label set are input into the neural network model to update the network weights; the neural network model with updated weights then serves as the object identification model. The accuracy of the object identification model can also be verified with a validation set. For example, a validation set one-fiftieth the size of the training set is prepared (the size of a set can be expressed as its number of pictures) and labeled in advance; the validation set is then input into the trained object identification model, the labels output by the model are compared with the pre-assigned labels, and the accuracy of the model is judged from the comparison.
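The validation procedure above can be sketched in a few lines. This is a hedged sketch, not the patent's implementation: the "model" is any callable, no particular network architecture is implied, and the one-fiftieth fraction follows the example in the text.

```python
# Hold out roughly one-fiftieth of the labeled data, run the trained model
# on it, and compare predicted labels with the pre-assigned ones.
def split_validation(dataset, fraction=1 / 50):
    """Return (training set, validation set); sizes measured in pictures."""
    n_val = max(1, int(len(dataset) * fraction))
    return dataset[n_val:], dataset[:n_val]

def validation_accuracy(model, images, labels):
    """Fraction of validation images whose predicted label matches."""
    correct = sum(1 for img, lab in zip(images, labels) if model(img) == lab)
    return correct / len(labels)
```

In practice the split would be randomized and stratified by label; the fixed slice here only keeps the sketch short.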
As a specific embodiment, the picture dataset for training the object identification model may include a plurality of pictures, each labeled, for example picture 1 labeled as a trash can and picture 2 labeled as a plastic bag. After such a dataset is input into the neural network model, the model's weights are trained, and the neural network model with trained weights may be used to predict, for a preview image of a pre-shot scene, whether it contains a trash can, a plastic bag, or another object.
As an example, object prompting may be performed based on the preview image and the object identification model. Specifically, the presence of an object not desired to be photographed in the preview image is identified with the object identification model, and the object is marked in the preview image.
Fig. 2 illustrates a mobile phone screen according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the screen may display a preview image captured in real time by the phone's camera. A person and a trash can appear in the preview image. The preview image is input into the object identification model, which can identify the trash can. When the trash can is identified, a prompt can be shown on the screen: for example, the trash can is marked with an "x". When the user sees the mark, the user knows there is an object not desired to be photographed and can change the shooting place or orientation. Of course, the user may also shoot directly without changing place or orientation; the "x" mark is not displayed in the captured photo.
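The marking step in fig. 2 can be sketched as a pure function over detector output. Everything here is a hypothetical illustration: the detector interface, the label names, and the marker representation are assumptions, not the patent's API.

```python
# A (hypothetical) detector returns labeled positions; "x" markers are
# generated only for objects on the not-desired list. The markers live on
# the preview overlay, so, as the text notes, they never appear in the
# saved photograph itself.
def mark_unwanted(detections, unwanted_labels=("trash_can", "plastic_bag")):
    """detections: list of (label, (x, y)) tuples; returns overlay markers."""
    return [("x", pos) for label, pos in detections if label in unwanted_labels]
</antml>```

A renderer would then draw each returned marker on the preview layer, leaving the camera frame untouched.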
As an example, identifying the category to which the preview image belongs includes performing preview image category identification using an image classification model. As an example, the image classification model may be obtained by training on labeled training images. Specifically, training images labeled with a scene category, a number-of-people category, and a human-feature category may be used to train an image classification model for identifying the category to which an image belongs, where that category indicates the image's scene category, number-of-people category, and human-feature category. The scene categories include: scenic, play, urban, household, cultural, workplace, and party categories; the number-of-people categories include: single-person, small-group, and large-group categories; and the human-feature category is determined based on at least one of: gender, body type, skin tone, facial features, hairstyle, and clothing.
For example, an image category may be landscape-single-person-fresh, landscape-single-person-sunny, park-large-group-team-building, and so on; such a category indicates the image's scene category (e.g., landscape), number-of-people category (e.g., single person), and human-feature category (e.g., fresh). The scenery category may be subdivided into beach, mountain-and-water, grassland, lake, and similar subcategories; the play category into casino and similar subcategories; the urban category into street-view, building, and similar subcategories; the workplace category into office, conference, and similar subcategories; and the party category into restaurant, KTV, bar, and similar subcategories. The small-group and large-group categories differ in the number of people: when the number of people to be photographed is less than a specific number, the category is the small-group category, and when it is greater than or equal to that number, it is the large-group category. For example, a small group may correspond to a sibling group, a girlfriend group, a friend group, or a family group; a family group may correspond to a parent-child photo or a family photo, and so on. A large group (e.g., more than 5 people) may correspond to a party group, a conference group, a colleague group, or a team-building group; the party group can correspond to a party photo, the colleague group to a workplace meeting photo, and the team-building group to a team-building photo.
In addition, according to at least one of gender, body type, skin tone, facial features, hairstyle, and clothing, the following human-feature categories may be defined: lovely, queenly, fresh, charming, professional, sunny, boyish, cool, and mature.
The image classification model may be trained using an image dataset as the training set; the dataset may be labeled with labels (categories) in advance and may contain several times (e.g., 300 times) as many images as there are categories. The image classification model may be obtained by training a neural network model: during training, the weights of the neural network are updated using the training set and its labels. The trained image classification model may then be used to predict the label (category) of a specific image, and the predicted category indicates the scene category, number-of-people category, and human-feature category to which the image belongs. In addition, a labeled validation set can be used to verify the accuracy of the image classification model.
A pose library may be established to store poses, which may indicate facial expressions and/or body movements of a person. With the image classification model, the category of the current scene's image can be predicted, and according to the predicted category a specific pose can be selected from the pose library and recommended to the user. The recommended pose therefore suits the current scene, and the user can quickly and conveniently strike a suitable pose.
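A minimal pose library can be sketched as a mapping keyed by compound category, with a fallback to the scene category alone when no exact entry exists. The category names and pose names below are illustrative examples following the naming pattern used in the text, not values from the patent.

```python
# Hypothetical pose library: compound category -> list of recommended poses.
POSE_LIBRARY = {
    "landscape-single-person-fresh": ["V sign", "arms spread"],
    "landscape": ["gaze into the distance"],
}

def lookup_poses(category, library=POSE_LIBRARY):
    """Exact match first; otherwise fall back to the scene category alone."""
    if category in library:
        return library[category]
    scene = category.split("-")[0]
    return library.get(scene, [])
```

The fallback reflects the later remark that the library may give a recommendation for "at least one of" the three category components rather than requiring all three.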
As an example, recommending the corresponding shooting posture based on the recognition result includes at least one of: displaying, in the preview image, a recommended posture image corresponding to the recognized image category; displaying a prompt for changing the posture of the shooting device; displaying the similarity between the person's posture and the recommended posture; and displaying, in the preview image, a recommended posture image corresponding to the recognized number of people, where the number of people is recognized according to the distances between the people in the preview image.
According to an exemplary embodiment of the present disclosure, the specific category of a preview image, for example landscape-single-person-fresh, may be determined using the image classification model. Although this category has a single name, it contains three kinds of category information: a scene category ("landscape"), a number-of-people category ("single person"), and a human-feature category ("fresh"). The pose library may give a recommended pose for at least one of the scene category, the number-of-people category, and the human-feature category.
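Since one compound name bundles three kinds of information, a small parser makes the decomposition explicit. This is an assumption for illustration: a three-token "scene-people-feature" string format inferred from the examples in the text, with single-token components.

```python
# Split a compound category name into its three components. Assumes the
# hypothetical format "scene-people-feature" with single-token components.
def parse_category(name):
    scene, people, feature = name.split("-")
    return {"scene": scene, "people": people, "feature": feature}
```

Real category names with multi-word components (e.g., "single person") would need a different separator or a fixed vocabulary lookup; the sketch keeps the simplest case.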
As an example, in a multi-person scene, the relationship between people is analyzed by determining, according to the distances between them, whether each person belongs to the photographic subject or is a bystander straying into the frame. Specifically, the relative distances between people around the focus point (within a predetermined distance of the focus) may be acquired; then the distance between each remaining person and the people around the focus is determined, and if it exceeds a threshold, that person is judged to be a straying bystander, and otherwise belongs to the current photographic subject. In this way the actual number of people to be photographed can be determined, and a pose corresponding to that number can be selected from the pose library.
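The subject-counting rule above can be sketched directly: people within a predetermined radius of the focus point are subjects, and anyone farther from every such person than a threshold is treated as a straying bystander. Coordinates, the radius, and the threshold are hypothetical pixel values chosen for illustration.

```python
import math

def count_subjects(focus, people, focus_radius=5.0, stray_threshold=5.0):
    """focus: (x, y); people: list of (x, y) positions in the preview image."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # People within the predetermined distance of the focus are subjects.
    core = [p for p in people if dist(p, focus) <= focus_radius]
    # Remaining people count as subjects only if close enough to the core
    # group; otherwise they are bystanders straying into the frame.
    extras = [p for p in people
              if p not in core
              and core
              and min(dist(p, c) for c in core) <= stray_threshold]
    return len(core) + len(extras)
```

The returned count would then index the pose library for a pose matching the actual number of subjects.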
As an example, the recommended pose may be provided according to at least one of the labeled scene category, the specific number of people, and the human-feature category.
As an example, displaying the recommended posture image includes at least one of: displaying a thumbnail of the recommended posture image in the preview image; displaying the recommended posture image at a recommended position in the preview image; and switching the displayed recommended posture image in response to a user input, where the recommended posture image includes at least one of: a ghosted recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
Fig. 3 illustrates a posture recommendation screen according to an exemplary embodiment of the present disclosure, and fig. 4 illustrates a posture recommendation screen according to another exemplary embodiment. As shown in fig. 3, a figure with rabbit ears strikes a pose, and as the recommended pose the figure is displayed as a ghosted image. As shown in fig. 4, the screen shows the subject together with the ghosted rabbit-eared figure, and the subject can be overlaid on it. The similarity between the pose of the rabbit-eared figure and the pose of the subject can be calculated and displayed on the screen, for example as a percentage such as 95%. The similarity may be calculated and displayed once the subject is within the preview image.
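The similarity percentage shown on the screen can be sketched as follows. The patent does not specify a metric; cosine similarity over flattened body key-point coordinates is one simple, common choice and is an assumption here, as is the key-point representation itself.

```python
import math

# Score two poses, each given as a list of (x, y) key points, as a
# percentage: 100.0 for identical direction, 0.0 for orthogonal.
def pose_similarity(keypoints_a, keypoints_b):
    a = [c for pt in keypoints_a for c in pt]   # flatten to one vector
    b = [c for pt in keypoints_b for c in pt]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return round(100 * dot / norm, 1)
```

A production system would first normalize the key points for scale and translation so that the score reflects the pose rather than the person's position in the frame.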
The posture recommendation process according to an exemplary embodiment of the present disclosure may be described with reference to the screens shown in figs. 2 and 4, which may be displayed by a mobile phone with a photographing function. The object identification model can be obtained by pre-training on the phone, for example by inputting labeled training pictures into the phone to train the model. The user may hold the phone and frame a pre-shot scene to obtain a picture, which is displayed on the screen as a preview image. Once the preview image is obtained, the phone can identify objects using the pre-trained object identification model; for example, the trash can shown in fig. 2 may be identified. When a trash can is identified, the phone can modify the preview image to mark it, as in fig. 2, where a trash can marked with an "x" is displayed. The user sees the mark and can then decide whether to shoot in that scene. Suppose the user does not want a trash can in the photo and moves to another scene, which the phone's camera keeps capturing. The phone can also pre-train an image classification model for identifying which of the preset categories an image belongs to. The phone determines the category of the scene's preview image with the image classification model, acquires a recommended posture image corresponding to that category, and then displays it on the screen. As shown in fig. 4, a contour striking a "V"-sign pose can be displayed, and the subject can pose according to what the screen shows.
When the subject strikes a specific pose, the phone's camera can capture it and compare it with the recommended posture image to determine the similarity between the subject's pose and the recommended pose. As shown in fig. 4, the subject, the recommended posture image, and the similarity are displayed on the screen; the user can read this information and act on it, which makes taking the photo more convenient.
Fig. 5 illustrates a posture recommendation screen according to another exemplary embodiment of the present disclosure. As shown in fig. 5, the recommended pose may be displayed as a reduced image (thumbnail), for example at one of the four corners of the screen. When the subject is in the preview image, the similarity between the subject's pose and the recommended pose can be calculated and displayed on the screen.
Fig. 6 illustrates a posture recommendation screen according to another exemplary embodiment of the present disclosure. As shown in fig. 6, a real-person recommended pose, which may be a photograph of a real person, may be displayed on the screen. Poses in exemplary embodiments of the present disclosure may include body postures, facial expressions, and the like.
Figs. 7 and 8 show posture recommendation screens of further exemplary embodiments of the present disclosure. The recommendations in figs. 7 and 8 are directed at the photographer, who can rotate the phone to the right according to the rightward-rotation instruction in fig. 7, or to the left according to the leftward-rotation instruction in fig. 8. In this way a tilted photograph of the person can be taken.
The exemplary embodiments above provide various posture recommendation schemes. In practical applications, as many image categories as possible can be defined, with posture recommendations provided for each, so that the recommended postures are diversified and repeated recommendations are avoided; such diversified recommendations improve the user's personalized experience.
According to an exemplary embodiment of the present disclosure, an electronic device is provided, the electronic device including: a shooting unit configured to acquire a preview image of a scene to be shot; a display unit configured to display the preview image; and a processor configured to identify an object in the preview image and/or a category to which the preview image belongs, and to control recommendation of a corresponding shooting posture based on a result of the identification.
As an example, the processor performs object recognition using an object identification model, and controls the display unit to mark, according to the recognition result, an object in the preview image that is not intended to be shot.
As an example, the processor performs preview image category identification using an image classification model.
As an example, the object identification model and the image classification model are each obtained by training on labeled training images.
As an example, the processor is further configured to train the image classification model for identifying the category to which an image belongs, using training images labeled with a scene category, a number-of-persons category, and a human-feature category.
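The patent leaves the classification model itself unspecified. As a toy stand-in, a nearest-centroid classifier over labeled feature vectors illustrates the train-then-identify flow; the 2-D features and the "beach"/"street" scene labels are made up for the example:

```python
def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs. Returns one mean
    vector per label - a toy stand-in for the unspecified model."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vec))
    return min(centroids, key=dist2)

# Hypothetical 2-D features for two scene categories.
model = train_centroids([([0.1, 0.9], "beach"), ([0.2, 0.8], "beach"),
                         ([0.9, 0.1], "street"), ([0.8, 0.2], "street")])
print(classify(model, [0.15, 0.85]))
```

In practice the model would likely be a neural network over image pixels, but the labeled-training-then-prediction structure is the same.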
As an example, the processor controls the display unit to recommend the shooting posture by at least one of the following: displaying a recommended posture image corresponding to the identified category in the preview image; displaying a prompt for changing the posture of the shooting device; displaying a similarity between the person's posture and the recommended posture; and displaying a recommended posture image corresponding to an identified number of persons in the preview image, where the number of persons is identified according to the distances between persons in the preview image.
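Identifying the number of persons "according to the distance between persons" is not detailed in the patent. One plausible reading is that people standing close together are grouped so that a group posture can be recommended. A minimal sketch, simplified to one dimension and with a made-up distance threshold:

```python
def count_person_groups(centers_x, threshold=0.15):
    """Count clusters of detected persons along the horizontal axis.
    centers_x holds normalized (0..1) x-centers of person detections;
    persons closer than `threshold` merge into one group. Both the 1-D
    simplification and the threshold value are assumptions."""
    if not centers_x:
        return 0
    xs = sorted(centers_x)
    groups = 1
    for prev, cur in zip(xs, xs[1:]):
        if cur - prev > threshold:
            groups += 1
    return groups

# Two people standing together on the left, one person far right.
print(count_person_groups([0.10, 0.18, 0.70]))
```

A recommended posture image could then be chosen per group size rather than per raw head count.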
As an example, the processor controls the display unit to display the recommended posture image by at least one of the following: displaying a thumbnail of the recommended posture image in the preview image; displaying the recommended posture image at a recommended position in the preview image; and switching the displayed recommended posture image in response to a user input, where the recommended posture image includes at least one of: a blurred recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
Specific implementations of the method of recommending a shooting posture in the above exemplary embodiments may be applied to the electronic device of the exemplary embodiments of the present disclosure, and details are not repeated here. While only a few embodiments of the disclosure are listed above, the embodiments may be combined within the spirit of the disclosure, and some technical features of individual or combined embodiments may be omitted. Such modified embodiments also fall within the scope of the present disclosure.
In addition, the electronic device in the exemplary embodiments of the present disclosure may further include a memory for storing various models and a posture library. The stored models may include the object identification model and the image classification model, and the memory may also store data such as training sets and validation sets.
As shooting proceeds, the photographs taken by the user may also be stored, and such photographs may be used to retrain at least one of the object identification model and the image classification model so as to update parameters of those models, such as their weights. Because the data used to update a model consists of photos on the user's own phone, the updated model gradually fits the user's personal preferences, further improving the personalized experience. Training of the object identification model and the image classification model may be performed on the user's phone, or by a server or other device independent of the phone: from the perspective of user privacy, training may be performed on the phone; from the perspective of the phone's power consumption, it may be performed on a separate device such as a server. In addition, a generic model may be downloaded to the user's phone in advance and then retrained on the phone, using the generic model and the user's own photos, to obtain a model that meets the user's personalized needs.
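Updating model parameters with the user's own photos could be as simple as folding each new photo's feature vector into the stored class statistics. The running-mean update below is a minimal sketch of this idea, not the patent's actual retraining procedure:

```python
def update_centroid(centroid, count, new_vec):
    """Fold one new feature vector into a class centroid kept as a
    running mean; `count` is how many samples the centroid currently
    averages. Returns (updated_centroid, new_count). A minimal sketch
    of on-device incremental retraining, assumed for illustration."""
    new_count = count + 1
    updated = [(c * count + v) / new_count for c, v in zip(centroid, new_vec)]
    return updated, new_count

# A class centroid averaging 4 photos absorbs one new user photo.
centroid, n = [0.2, 0.8], 4
centroid, n = update_centroid(centroid, n, [0.4, 0.6])
print([round(v, 2) for v in centroid], n)
```

An incremental update like this is cheap enough to run on the phone itself, which matches the privacy-preserving option described above; full gradient-based retraining would more likely run on a server.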
According to another exemplary embodiment of the present disclosure, a computer-readable storage medium storing instructions is provided, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform the method as described above.
Having described embodiments according to the inventive concept, features of the various embodiments may be combined without departing from the scope of the disclosure, and such combinations are intended to fall within the scope of the disclosure.
The computer readable storage medium is any data storage device that can store data which can be read by a computer system. Examples of computer-readable storage media include: read-only memory, random access memory, read-only optical disks, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the internet via wired or wireless transmission paths).
Further, it should be understood that the respective units of the electronic device according to the exemplary embodiments of the present disclosure may be implemented as hardware components and/or software components. For example, the individual units may be implemented using Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs), depending on the processing each unit performs.
Furthermore, the method according to the exemplary embodiments of the present disclosure may be implemented as computer code in a computer-readable storage medium. The computer code can be implemented by those skilled in the art from the description of the method above. The computer code when executed in a computer implements the above-described methods of the present disclosure.
Although a few exemplary embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the disclosure, the scope of which is defined in the claims and their equivalents.

Claims (15)

1. A method of recommending a shooting gesture, wherein the method comprises:
acquiring a preview image of a pre-shot scene;
identifying an object in the preview image and/or a category to which the preview image belongs;
and recommending a corresponding shooting posture based on a result of the identifying.
2. The method of claim 1, wherein identifying the object in the preview image comprises:
performing object recognition using an object identification model; and
marking, according to a result of the recognition, an object in the preview image that is not intended to be shot.
3. The method of claim 1, wherein identifying the category to which the preview image belongs comprises: performing preview image category identification using an image classification model.
4. The method of claim 2 or 3, wherein
the object identification model and the image classification model are each obtained by training on labeled training images.
5. The method of claim 3, wherein the method further comprises:
training the image classification model for identifying the category to which an image belongs, using training images labeled with a scene category, a number-of-persons category, and a human-feature category.
6. The method of claim 1, wherein recommending the corresponding shooting posture based on the recognition result comprises at least one of:
displaying a recommended posture image corresponding to the identified category in the preview image;
displaying a prompt for changing the posture of the shooting device;
displaying a similarity between a person's posture and the recommended posture; and
displaying a recommended posture image corresponding to an identified number of persons in the preview image, wherein the number of persons is identified according to the distances between persons in the preview image.
7. The method of claim 6, wherein displaying the recommended posture image comprises at least one of:
displaying a thumbnail of a recommended posture image in the preview image;
displaying a recommended posture image at a recommended position in the preview image;
switching the displayed recommended posture image in response to a user input,
wherein the recommended posture image comprises at least one of: a blurred recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
8. An electronic device, wherein the electronic device comprises:
the shooting unit is used for acquiring a preview image of a pre-shot scene;
a display unit configured to display the preview image;
a processor configured to identify an object in the preview image and/or a category to which the preview image belongs, and to control recommendation of a corresponding shooting posture based on a result of the identification.
9. The electronic device of claim 8, wherein the processor performs object recognition using an object identification model, and controls the display unit to mark, according to the recognition result, an object in the preview image that is not intended to be shot.
10. The electronic device of claim 8, wherein the processor performs preview image category identification using an image classification model.
11. The electronic device of claim 9 or 10, wherein the object identification model and the image classification model are each obtained by training on labeled training images.
12. The electronic device of claim 9, wherein the processor is further configured to train the image classification model for identifying the category to which an image belongs, using training images labeled with a scene category, a number-of-persons category, and a human-feature category.
13. The electronic device of claim 8, wherein the processor controls the display unit to recommend the shooting posture by at least one of the following:
displaying a recommended posture image corresponding to the identified category in the preview image;
displaying a prompt for changing the posture of the shooting device;
displaying a similarity between a person's posture and the recommended posture; and
displaying a recommended posture image corresponding to an identified number of persons in the preview image, wherein the number of persons is identified according to the distances between persons in the preview image.
14. The electronic device of claim 13, wherein the processor controls the display unit to display the recommended posture image by at least one of:
displaying a thumbnail of a recommended posture image in the preview image;
displaying a recommended posture image at a recommended position in the preview image;
switching the displayed recommended posture image in response to a user input,
wherein the recommended posture image comprises at least one of: a blurred recommended posture image, a cartoon recommended posture image, a recommended posture contour, and a real-person recommended posture image.
15. A computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform the method of any of claims 1 to 7.
CN201911093302.7A 2019-11-11 2019-11-11 Method and electronic equipment for recommending shooting posture Pending CN110868538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911093302.7A CN110868538A (en) 2019-11-11 2019-11-11 Method and electronic equipment for recommending shooting posture

Publications (1)

Publication Number Publication Date
CN110868538A true CN110868538A (en) 2020-03-06

Family

ID=69653622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911093302.7A Pending CN110868538A (en) 2019-11-11 2019-11-11 Method and electronic equipment for recommending shooting posture

Country Status (1)

Country Link
CN (1) CN110868538A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103220466A (en) * 2013-03-27 2013-07-24 华为终端有限公司 Method and device for outputting pictures
CN105812665A (en) * 2016-03-29 2016-07-27 联想(北京)有限公司 Shooting processing method and device, electronic apparatus
CN107018333A (en) * 2017-05-27 2017-08-04 北京小米移动软件有限公司 Shoot template and recommend method, device and capture apparatus
US20190159966A1 (en) * 2017-11-29 2019-05-30 International Business Machines Corporation Methods and systems for managing photographic capture
CN108848303A (en) * 2018-05-28 2018-11-20 北京小米移动软件有限公司 Shoot reminding method and device
CN110049180A (en) * 2018-11-27 2019-07-23 阿里巴巴集团控股有限公司 Shoot posture method for pushing and device, intelligent terminal

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113364971A (en) * 2020-03-07 2021-09-07 华为技术有限公司 Image processing method and device
WO2021179773A1 (en) * 2020-03-07 2021-09-16 华为技术有限公司 Image processing method and device
CN113364971B (en) * 2020-03-07 2023-04-18 华为技术有限公司 Image processing method and device
CN112069358A (en) * 2020-08-18 2020-12-11 北京达佳互联信息技术有限公司 Information recommendation method and device and electronic equipment
GB2606423A (en) * 2021-01-19 2022-11-09 Adobe Inc Providing contextual augmented reality photo pose guidance
US11509819B2 (en) 2021-01-19 2022-11-22 Adobe Inc. Providing contextual augmented reality photo pose guidance
CN113194254A (en) * 2021-04-28 2021-07-30 上海商汤智能科技有限公司 Image shooting method and device, electronic equipment and storage medium
WO2022227393A1 (en) * 2021-04-28 2022-11-03 上海商汤智能科技有限公司 Image photographing method and apparatus, electronic device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200306