CN109600550B - Shooting prompting method and terminal equipment - Google Patents

Shooting prompting method and terminal equipment

Info

Publication number
CN109600550B
Authority
CN
China
Prior art keywords
model
display
target object
shooting
parameters
Prior art date
Legal status
Active
Application number
CN201811550958.2A
Other languages
Chinese (zh)
Other versions
CN109600550A (en)
Inventor
刘先亮
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811550958.2A priority Critical patent/CN109600550B/en
Publication of CN109600550A publication Critical patent/CN109600550A/en
Application granted granted Critical
Publication of CN109600550B publication Critical patent/CN109600550B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Abstract

The invention provides a shooting prompting method and a terminal device, relating to the field of communications technologies. The method includes: acquiring a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface; identifying scene information of the preview image; determining model display parameters corresponding to the scene information; and displaying the three-dimensional model in the shooting preview interface according to the model display parameters. Because the three-dimensional model carries the three-dimensional detail features of the target object, the terminal device can display it in the shooting preview interface, according to the model display parameters corresponding to the current scene information, as a composition prompt. The user can then see the three-dimensional details of the target object in the current scene directly, so the composition prompt is more intuitive and the user's sense of immersion in the scene while shooting is improved.

Description

Shooting prompting method and terminal equipment
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting prompting method and a terminal device.
Background
Many current photographing applications support auxiliary shooting functions. For example, they can automatically recognize user gestures and take a photo when a specific trigger gesture is detected, or display the optimal standing position for a person in the current scene as a composition prompt.
However, in related photographing assistance technologies, the optimal position of a person in the current scene is usually indicated only by a simple pattern such as a line or a wire frame. As a result, the user lacks a sense of immersion in the scene, and the composition prompt is not intuitive enough.
Disclosure of Invention
The invention provides a shooting prompting method and a terminal device, aiming to solve the problem that indicating the optimal position of a person only with a simple pattern leaves the user without a sense of immersion in the scene and makes the composition prompt insufficiently intuitive.
In order to solve the technical problem, the invention is realized as follows:
In a first aspect, a shooting prompting method is provided, which is applied to a terminal device and includes:
acquiring a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface;
identifying scene information of the preview image;
determining model display parameters corresponding to the scene information;
and displaying the three-dimensional model in the shooting preview interface according to the model display parameters.
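The four claimed steps form a simple pipeline. As a minimal sketch (all names here, such as `shooting_prompt` and `model_store`, are illustrative assumptions, not part of the patent), the flow could look like:

```python
def shooting_prompt(preview_image, model_store, scene_classifier, param_table):
    """Run the four claimed steps on one preview frame (illustrative only)."""
    # Step 1: acquire the 3D model for the target object in the preview.
    model = model_store.lookup(preview_image)
    # Step 2: identify scene information of the preview image.
    scene = scene_classifier(preview_image)
    # Step 3: determine model display parameters for that scene.
    params = param_table.get(scene, param_table["default"])
    # Step 4: hand back what the UI layer would render in the preview.
    return {"model": model, "scene": scene, "display_params": params}
```

In practice each of these stages is elaborated in the detailed description below; the sketch only fixes the order of operations.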
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes:
the acquisition module is used for acquiring a three-dimensional model corresponding to a target object in a preview image displayed on the shooting preview interface;
the identification module is used for identifying scene information of the preview image;
the determining module is used for determining model display parameters corresponding to the scene information;
and the display module is used for displaying the three-dimensional model in the shooting preview interface according to the model display parameters.
In a third aspect, an embodiment of the present invention further provides a terminal device that includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting prompting method according to the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the shooting prompting method according to the present invention.
In the embodiment of the invention, the terminal device may first acquire the three-dimensional model corresponding to the target object in the preview image displayed on the shooting preview interface, then identify the scene information of the preview image, determine the model display parameters corresponding to that scene information, and display the three-dimensional model in the shooting preview interface according to those parameters, thereby providing a composition prompt. Because the three-dimensional model carries the three-dimensional detail features of the target object, the user can intuitively see those details in the current scene, so the composition prompt is more intuitive and the user's sense of immersion in the scene while shooting is improved.
Drawings
Fig. 1 is a flowchart illustrating a shooting prompting method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for prompting shooting in an embodiment of the present invention;
FIG. 3 illustrates an interface diagram for collecting three-dimensional information in an embodiment of the invention;
FIG. 4 is a diagram illustrating an operation of a triggered interesting composition guide box in an embodiment of the present invention;
FIG. 5 is a view showing an interface for displaying an interesting composition guide frame in the embodiment of the present invention;
fig. 6 shows a block diagram of a terminal device in an embodiment of the present invention;
fig. 7 shows a block diagram of another terminal device in the embodiment of the present invention;
fig. 8 shows a schematic hardware structure diagram of a terminal device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of a shooting prompting method in the embodiment of the present invention is shown, which may specifically include the following steps:
step 101, acquiring a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface.
In the embodiment of the present invention, the terminal device is generally installed with photographing software that has a composition prompting function. Before using this function, the user may first enter the three-dimensional model corresponding to the target object into the terminal device, so that the displayed model can serve as a shooting reference for the target object during composition prompting. The target object may be a person, that is, any user; in practical applications the target object may also be an object, such as a sculpture, which is not specifically limited in this embodiment of the present invention.
Fig. 2 shows a flowchart of another shooting prompting method in the embodiment of the present invention, and referring to fig. 2, a specific implementation manner of this step may include:
substep 1011: three-dimensional information of at least one shooting posture of the target object is acquired.
Substep 1012: and establishing a three-dimensional model of the target object according to the three-dimensional information of the at least one shooting posture.
In the embodiment of the invention, for any target object, the terminal device may acquire three-dimensional information of at least one shooting posture of that object and then build a dedicated three-dimensional model from this information. The dedicated model can then be used for composition prompting when the target object is subsequently photographed, which makes the composition prompt for the target object more intuitive; in particular, when the target object is a person, that person's sense of immersion in the scene while being photographed is improved.
The terminal device may receive a three-dimensional modeling instruction and, in response, acquire the three-dimensional information of at least one shooting posture of the target object. Specifically, when the user needs to enter the three-dimensional model corresponding to the target object into the terminal device, the user may tap the photographing software icon, whereupon the terminal device opens the photographing software and enters the shooting preview interface. The user may then switch to the front-camera photographing mode and tap the virtual three-dimensional modeling button; correspondingly, the terminal device receives the three-dimensional modeling instruction.
The terminal device may be configured with a depth camera. After receiving the three-dimensional modeling instruction, the terminal device may respond by starting the depth camera to acquire three-dimensional information of at least one shooting posture of the target object, that is, three-dimensional information of the target object at different angles.
Taking a user as the example target object, the three-dimensional information may include front-face depth information, left-side-face depth information, right-side-face depth information, top-of-head depth information, and chin depth information of the target object. In practical applications, the photographing software may prompt the user, through the interface shown in fig. 3, to shoot from five directions (front face, left side face, right side face, top of head, and chin), so that the depth camera can acquire the depth information in each of those directions. In addition, referring to fig. 3, the terminal device may display an information acquisition progress bar in the three-dimensional information acquisition interface, so that the acquisition progress is visible while the user enters depth information. For example, as shown in fig. 3, the user may enter depth information in the five directions in sequence, following the arrows: the user may first enter front-face depth information, and when it has been entered, the terminal device may show in the three-dimensional information acquisition interface that the current acquisition progress is 25% and display a prompt that front-face entry is complete.
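The per-direction capture progress described above can be tracked with a small helper. This is an illustrative sketch (the patent's interface reports 25% after the first of five directions; the even split below yields 20%, so the exact progress weighting is an assumption):

```python
# Illustrative progress tracking for the five-direction depth capture.
DIRECTIONS = ["front face", "left side face", "right side face",
              "top of head", "chin"]

def capture_progress(completed):
    """Return (percent, prompt) after the listed directions are entered."""
    done = min(len(completed), len(DIRECTIONS))
    percent = round(100 * done / len(DIRECTIONS))  # even per-direction split
    if done == len(DIRECTIONS):
        prompt = "all directions entered"
    elif completed:
        prompt = f"{completed[-1]} entry completed"
    else:
        prompt = "start with front face"
    return percent, prompt
```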
Similarly, when the target object is an object, the user may adjust its placement relative to the terminal device and shoot with the depth camera, so that the terminal device acquires depth information of the object in different shooting postures.
The photographing software may be configured with a three-dimensional modeling algorithm. When the terminal device has acquired the three-dimensional information of at least one shooting posture of the target object, it may feed that information into the algorithm, which builds and outputs the three-dimensional model of the target object. The terminal device may then store the model locally as a file of the photographing software. Note that the three-dimensional modeling algorithm in the embodiment of the present invention may follow related technologies; its details are not described here.
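A hedged sketch of this build-and-store step, with the unspecified modeling algorithm passed in as a placeholder callable and all names hypothetical:

```python
import json
import os

def build_and_store_model(pose_depth_maps, modeler, store_dir):
    """Feed multi-pose depth info to a modeling algorithm and save the result.

    `modeler` stands in for the patent's unspecified 3D modeling algorithm;
    the JSON file format is purely illustrative.
    """
    model = modeler(pose_depth_maps)               # build the 3D model
    path = os.path.join(store_dir, "target_model.json")
    with open(path, "w") as f:                     # persist as a local file
        json.dump(model, f)
    return path
```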
In practical applications, a user may build a dedicated three-dimensional model for a target object through the photographing software, so the terminal device may store the three-dimensional models of at least one target object. Taking a person as an example: when a user wants to take a photo, the user may first enable the composition prompting function of the shooting software. After the terminal device recognizes a face in the shooting preview interface, the user may long-press the face area, which triggers the terminal device to match the user's front face in the preview against the front image corresponding to each three-dimensional model stored in the terminal device. When the front face matches the front image of any stored model, the terminal device determines that the user's three-dimensional model exists. When no match succeeds, the terminal device may pop up a prompt box indicating that no three-dimensional model corresponding to this user is currently stored.
It should be noted that the matching between the user's front face and a front image in the embodiment of the present invention may follow related face matching technologies: if the similarity between the user's front face and the front image of any three-dimensional model is greater than or equal to a preset similarity, the two are determined to match; if the similarity is less than the preset similarity, they are determined not to match. Further details are not repeated here.
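The threshold test described here can be sketched as follows. The embedding-plus-cosine-similarity comparison and all names are assumptions, since the patent only specifies "a preset similarity":

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_front_face(face_vec, stored_models, threshold=0.8):
    """Return the first stored model whose front-image embedding reaches the
    preset similarity threshold, or None if no model matches."""
    for name, front_vec in stored_models.items():
        if cosine_similarity(face_vec, front_vec) >= threshold:
            return name
    return None  # caller pops up the "no stored model" prompt box
```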
Step 102, identifying scene information of the preview image.
In the embodiment of the invention, the terminal device can identify the scene information of the preview image and then determine from it how the three-dimensional model of the target object should be displayed. In one implementation, the scene information may specifically include the background environment of the preview image, that is, the environmental scene in which the target object is photographed: for example, a scene containing plants and flowers, one containing buildings, one containing animals, or one containing non-living human faces such as people in paintings or human sculptures. This makes it convenient to determine later which composition template should be used for the given environment scene type.
In practical applications, the environment scene type may be identified through a scene recognition technology, for example a deep-learning-based or object-recognition-based environment scene recognition algorithm, which is not specifically limited in the embodiment of the present invention. For example, for backgrounds such as a figure oil painting, a figure sketch, or a human sculpture, the terminal device may recognize that the scene information of the current preview image includes a non-living human face.
Of course, since environment scene recognition usually has a certain error rate, in a specific application the user may, when the background environment is misrecognized, select the correct environment scene type in the terminal device according to the actual situation, which is not specifically limited in the embodiment of the present invention.
In another implementation, the scene information may specifically include the first display position and the first display information of the target object, that is, a specific implementation of this step may include: a first display position and first display information of a target object are acquired.
The first display position is the position at which the target object is currently displayed in the preview image, and the first display information comprises the target object's current display parameters in the preview image other than that position, such as its posture orientation and its inclination relative to any central axis. In practical applications, a user may want more interesting photos, for example a photo showing two identical faces placed in mirror-symmetric positions. The terminal device can therefore obtain the first display position and first display information of the target object and then determine a display position that stands in a certain positional relationship to the first display position, together with display information that stands in a certain correspondence to the first display information. In addition, when acquiring the first display position and first display information, the terminal device may perform a shooting operation and output a first image of the target object at the first display position with the first display information, so that the interesting photo can later be produced from this first image.
And 103, determining model display parameters corresponding to the scene information.
In an embodiment of the present invention, the model display parameters may include model basic display parameters and model detail display parameters. The model basic display parameters may include at least one of the following: model size, model display direction, model display angle, and model display position, that is, the parameters necessary for composition prompting. During composition prompting, the model size indicates the optimal size of the three-dimensional model of the target object in the shooting preview interface; the model display direction indicates the optimal orientation of the model, for example front facing left or front facing right; the model display angle indicates the angle between the transverse and longitudinal central axes of the model and those of the shooting preview interface; and the model display position indicates the optimal shooting position of the model, for example the middle of the interface or its lower right. Together these parameters guide the composition of the shot of the target object. In addition, the model detail display parameters may include the display style of at least one part of the model, such as the hair style and facial expression of the three-dimensional model, that is, parameters that allow more fine-grained composition prompting.
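One way to organize these two parameter groups (a hypothetical structure, not defined by the patent) is a pair of dataclasses:

```python
from dataclasses import dataclass, field

@dataclass
class BasicDisplayParams:
    size: float        # optimal model size in the preview interface
    direction: str     # optimal orientation, e.g. "front-left"
    angle: float       # degrees between model axes and preview axes
    position: tuple    # optimal (x, y) shooting position in the preview

@dataclass
class ModelDisplayParams:
    basic: BasicDisplayParams
    # detail styles per model part, e.g. {"hair": ..., "expression": ...}
    details: dict = field(default_factory=dict)
```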
Displaying the three-dimensional model with both the basic and the detail display parameters lets the user visually check the model's basic features and detail features. This makes it easier for the user to pose the target object to imitate the model, which in turn improves the user's sense of immersion in the scene when shooting the target object.
In one implementation of this step, for different environment scene types, the terminal device may determine, through a related composition algorithm, the composition template corresponding to the scene type and then read the model display parameters from that template. The composition template may specify the composition elements for the current background environment, for example the optimal position, size, and angle of the three-dimensional model of the target object, as well as elements such as geometric composition dividing lines, which is not specifically limited in this embodiment of the present invention.
In another implementation of this step, the terminal device may offer the user more interesting composition guidance. Specifically, this step may include: determining a second display position and second display information according to the first display position and the first display information. The second display position is either the mirror position of the first display position or a preset non-mirror position. The mirror position is the position mirror-symmetric to the first display position about the longitudinal central axis of the preview image; a preset non-mirror position may be, for example, a position offset from the first display position by a preset distance in a preset direction, or a position symmetric to the first display position about the center point of the preview image. The second display information comprises the display parameters of the three-dimensional model of the target object other than the second display position.
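The mirror and non-mirror position rules can be expressed as a small function; the coordinate convention and mode names are illustrative assumptions:

```python
def second_position(first, preview_w, preview_h, mode="mirror",
                    offset=(0, 0)):
    """Derive the second display position from the first.

    mode "mirror": reflect about the vertical (longitudinal) central axis.
    mode "center": point-symmetric about the preview's center point.
    mode "offset": shift by a preset (dx, dy) in a preset direction.
    """
    x, y = first
    if mode == "mirror":
        return (preview_w - x, y)
    if mode == "center":
        return (preview_w - x, preview_h - y)
    if mode == "offset":
        return (x + offset[0], y + offset[1])
    raise ValueError(f"unknown mode: {mode}")
```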
By determining a second display position in a mirror (or other preset) relation to the first display position, the terminal device can later display the three-dimensional model of the target object at the second display position. When the target object is then shot according to the model at that position, the terminal device can output an image containing two images of the target object in a mirrored or otherwise related positional arrangement, which increases the fun of shooting and yields more interesting pictures.
In addition, in practical applications, the user may manually choose whether to enable interesting composition guidance. For example, referring to fig. 4, the user 01 and the three-dimensional model 02 of user 01 are displayed in the preview image. The user may slide from top to bottom on the right side of the preview image to trigger the terminal device to display the interesting composition guide frame 03 shown in fig. 5, and may then select one of the interesting composition guide modes in the frame, such as mirror or non-mirror. For example, if the user selects the "mirror" mode, the terminal device will subsequently display the three-dimensional model at a second display position that is mirror-symmetric to the user's first display position in the shooting preview interface; because of the mirror symmetry, the model's orientation is opposite to that of the user's face.
Step 104: displaying the three-dimensional model in the shooting preview interface according to the model display parameters.
In the embodiment of the present invention, after determining the model display parameters corresponding to the scene information, the terminal device may display the three-dimensional model of the target object in the shooting preview interface according to those parameters, thereby prompting the user on composition. For example, the user may adjust their own shooting posture according to their three-dimensional model, or place an object according to its three-dimensional model to adjust its shooting pose. To make it easier for the user to adjust the target object according to the composition prompt, the terminal device may, in practical applications, display the three-dimensional model semi-transparently.
In addition, in a specific application, the terminal device may first display the three-dimensional model according to the model basic display parameters only, and let the user choose whether to apply further detail guidance according to the model detail display parameters.
Furthermore, in the embodiment of the invention, the user may press the interesting composition guide frame and slide upward for more than one second to trigger the terminal device to enter the custom composition function. The user may then pinch with two fingers to resize the three-dimensional model, drag it to any position in the shooting preview interface, and rotate it to adjust its direction and angle, thereby adjusting the model display parameters of the model in the current interface. After confirming the overall effect, the user may long-press the three-dimensional model to trigger the detail customization mode, in which the terminal device records detail parameters of the target object in the shooting preview interface, such as its expression and hair style, and renders those details on the model accordingly. After confirming the detail effect, the user may save the customized composition content as the composition template corresponding to the scene information, so that subsequent photographing can use the model display parameters in this custom template for composition prompting.
Of course, in practical applications, the terminal device may also display the three-dimensional model directly once it is acquired, and then adjust its position, size, and so on according to the model display parameters corresponding to the scene information, which is not specifically limited in the embodiment of the present invention.
Referring to fig. 2, after step 104 is completed, the terminal device may further perform the following steps 105 and 106, which specifically include:
and 105, detecting the attitude parameters of the target object in the shooting preview interface.
After displaying the three-dimensional model of the target object for composition prompting, the terminal device may detect the posture parameters of the target object in the current shooting preview interface. The posture parameters have the same attributes as the model display parameters; that is, they include basic posture parameters and possibly detail posture parameters as well. The basic posture parameters may include the size and position of the target object, and the detail posture parameters may include the display style of at least one part of the target object. In other words, whichever attributes the model display parameters contain, the terminal device detects those same attributes of the target object in the current shooting preview interface to obtain its posture parameters.
Step 106: when the posture parameters match the model display parameters, executing a shooting operation and outputting a target image.
After detecting the current posture parameters of the target object, the terminal device may match them against the model display parameters by similarity. When the similarity between the posture parameters and the model display parameters is below a preset similarity, the terminal device determines that they do not match. It may then wait a preset duration for the target object to adjust its posture before detecting the posture parameters again, or it may prompt in the interface that the target object is not yet aligned with the three-dimensional model and re-detect after the user adjusts the target object's posture, repeating until the posture parameters match the model display parameters.
When the similarity between the attitude parameters and the model display parameters is greater than or equal to the preset similarity, the terminal device determines that the attitude parameters match the model display parameters. The terminal device may then automatically execute the shooting operation through the camera and output the target image, so that composition-guided shooting can be achieved even when it is inconvenient for the user to operate the terminal device.
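The detect, compare, and auto-shoot flow described above can be sketched as follows. The similarity function, the threshold value, and the retry delay are all assumptions; the patent only requires that "matching" means the similarity reaches a preset value:

```python
import time

PRESET_SIMILARITY = 0.8  # assumed value; the patent leaves it open
RETRY_DELAY_S = 0.5      # assumed "preset duration" before re-detection

def similarity(pose, model):
    """Toy similarity: 1 minus the size and position error, clamped to [0, 1]."""
    size_err = abs(pose["size"] - model["size"])
    pos_err = abs(pose["x"] - model["x"]) + abs(pose["y"] - model["y"])
    return max(0.0, 1.0 - size_err - pos_err)

def guide_and_shoot(detect_pose, model, capture, prompt, max_tries=10):
    """Re-detect the attitude until it matches the model display
    parameters, then trigger the shutter automatically."""
    for _ in range(max_tries):
        pose = detect_pose()
        if similarity(pose, model) >= PRESET_SIMILARITY:
            return capture()  # matched: execute the shooting operation
        prompt("target object is not yet aligned with the 3D model")
        time.sleep(RETRY_DELAY_S)
    return None  # gave up: the pose never matched
```

With a pose that already matches, `guide_and_shoot` captures on the first iteration without prompting; otherwise it alternates prompting and re-detecting, mirroring the two fallback behaviors in the text.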
In a specific implementation, when the user wants an amusing photo containing two identical, mirror-symmetric faces, the terminal device may, after the target image is obtained by shooting, transfer the target object, which is located at a first display position in a first image and has first display information, to the corresponding display position in the target image, so that the target image contains two left-right mirror-symmetric images of the target object.
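A minimal sketch of the mirror composition, using a plain list-of-lists "image" and assuming the target object occupies a rectangular bounding box; real code would operate on pixel buffers:

```python
def mirror_compose(image, box):
    """Copy the patch holding the target object (bounding box
    (x0, y0, x1, y1), exclusive on the right/bottom edges) to the
    horizontally mirrored position, so the output contains two
    left-right mirror-symmetric images of the object."""
    x0, y0, x1, y1 = box
    width = len(image[0])
    out = [row[:] for row in image]  # keep the original object in place
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][width - 1 - x] = image[y][x]  # mirrored destination
    return out

# A 2x4 "image" whose object occupies the left 2x2 patch:
result = mirror_compose([[1, 2, 0, 0],
                         [3, 4, 0, 0]], box=(0, 0, 2, 2))
# result is [[1, 2, 2, 1], [3, 4, 4, 3]]: two mirror-symmetric copies
```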
In addition, in practical applications, after the target object has been adjusted to a posture matching the three-dimensional model, the terminal device may further highlight the three-dimensional model to indicate that the posture of the target object is in place; the highlight may disappear after 5 seconds, and the user may then take the shot manually. This is not specifically limited in the embodiment of the present invention.
It should be noted that, because the time interval between shooting and recording the target object into the three-dimensional model may be relatively long, details such as the hairstyle may differ considerably, and the degree to which they can be imitated may be relatively low. Therefore, in practical applications, the preset similarity required for matching the model detail display parameters against the attitude parameters may be set to a smaller value, for example, 40% or 35%. That is, even when the attitude parameters of the target object match the model detail display parameters only to a small degree, the target object may be considered to have imitated details such as the expression and hairstyle to a certain extent. This is not specifically limited in the embodiment of the present invention.
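The relaxed matching for detail parameters can be expressed with per-category thresholds. The basic-parameter threshold below is an assumption; the 40% detail threshold comes from the example in the text:

```python
# Basic parameters (size, position, direction, angle) must match closely;
# detail parameters (hairstyle, expression) tolerate drift, because the
# 3D model may have been recorded long before the shot.
PRESET_SIMILARITY = {"basic": 0.8,   # assumed value
                     "detail": 0.4}  # 40%, as in the example above

def pose_matches(sim_basic, sim_detail):
    """Both similarity scores must reach their own preset threshold."""
    return (sim_basic >= PRESET_SIMILARITY["basic"]
            and sim_detail >= PRESET_SIMILARITY["detail"])
```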
In addition, in practical applications, the terminal device may take the picture through an ordinary camera configured in addition to the depth camera, in which case the cameras need to be switched before shooting; alternatively, the terminal device may take the picture directly through the depth camera that established the three-dimensional model, so that no camera switching is needed before shooting. This is not specifically limited in the embodiment of the present invention.
In the embodiment of the present invention, the terminal device may first acquire the three-dimensional model corresponding to the target object in the preview image displayed on the shooting preview interface, then identify the scene information of the preview image, determine the model display parameters corresponding to the scene information, and display the three-dimensional model in the shooting preview interface according to the model display parameters, thereby providing a composition prompt. Because the three-dimensional model has the three-dimensional detail features of the target object, such as the facial features, expression, and hairstyle of a person, or the sharp corners and gaps of an object, the terminal device can display the model of the target object in the shooting preview interface according to the model display parameters corresponding to the current scene information. The user can thus see the three-dimensional details of the target object in the current scene more intuitively, which makes the composition prompt more intuitive and improves the user's sense of immersion in the scene while shooting.
Referring to fig. 6, a block diagram of a terminal device 600 according to a third embodiment of the present invention is shown, which may specifically include:
the acquisition module 601 is configured to acquire a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface;
an identifying module 602, configured to identify scene information of the preview image;
a determining module 603, configured to determine a model display parameter corresponding to the scene information;
and a display module 604, configured to display the three-dimensional model in the shooting preview interface according to the model display parameter.
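A sketch of how the four modules of Fig. 6 could be wired together; all function signatures and the stub implementations are assumptions, since the patent only names the modules and their responsibilities:

```python
class TerminalDevice:
    """Composes the four modules of Fig. 6 into the prompting pipeline."""

    def __init__(self, acquire, identify, determine, display):
        self.acquire = acquire      # module 601: target object -> 3D model
        self.identify = identify    # module 602: preview image -> scene info
        self.determine = determine  # module 603: scene info -> display params
        self.display = display      # module 604: render model in the preview

    def composition_prompt(self, preview_image, target_object):
        model = self.acquire(target_object)
        scene = self.identify(preview_image)
        params = self.determine(scene)
        return self.display(model, params)

# Stub wiring for illustration only:
device = TerminalDevice(
    acquire=lambda obj: f"3D-model({obj})",
    identify=lambda img: "backlit portrait",
    determine=lambda scene: {"size": 0.4, "position": (0.5, 0.6)},
    display=lambda model, params: (model, params),
)
```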
Optionally, referring to fig. 7, the obtaining module 601 includes:
an acquisition submodule 6011 configured to acquire three-dimensional information of at least one shooting posture of the target object;
the establishing submodule 6012 is configured to establish a three-dimensional model of the target object according to the three-dimensional information of the at least one shooting posture.
Optionally, referring to fig. 7, the terminal device 600 further includes:
a detection module 605, configured to detect a posture parameter of the target object in the shooting preview interface;
and a shooting module 606, configured to execute a shooting operation and output a target image when the pose parameter matches the model display parameter.
Optionally, referring to fig. 7, the identifying module 602 includes:
an obtaining submodule 6021 configured to obtain a first display position and first display information of the target object;
the determining module 603 comprises:
the determining submodule 6031 is configured to determine a second display position and second display information according to the first display position and the first display information.
Optionally, the model display parameters include model basic display parameters and model detail display parameters;
the model basic display parameters include at least one of: model size, model display direction, model display angle and model display position;
the model detail display parameters comprise a display style of at least one part of the model.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 and fig. 2, and is not described herein again to avoid repetition.
In the embodiment of the present invention, the terminal device may first acquire, through the acquisition module, the three-dimensional model corresponding to the target object in the preview image displayed on the shooting preview interface, then identify the scene information of the preview image through the identification module, determine the model display parameters corresponding to the scene information through the determination module, and display the three-dimensional model in the shooting preview interface through the display module according to the model display parameters, thereby providing a composition prompt. Because the three-dimensional model has the three-dimensional detail features of the target object, the terminal device can display the model of the target object in the shooting preview interface according to the model display parameters corresponding to the current scene information, so that the user can see the three-dimensional details of the target object in the current scene more intuitively. The composition prompt is therefore more intuitive, and the user's sense of immersion in the scene while shooting is improved.
Figure 8 is a schematic diagram of a hardware structure of a terminal device implementing various embodiments of the present invention,
the terminal device 800 includes but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 8 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 810 is configured to obtain a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface; identifying scene information of the preview image; determining model display parameters corresponding to the scene information; and displaying the three-dimensional model in the shooting preview interface according to the model display parameters.
In the embodiment of the present invention, the terminal device may first acquire the three-dimensional model corresponding to the target object in the preview image displayed on the shooting preview interface, then identify the scene information of the preview image, determine the model display parameters corresponding to the scene information, and display the three-dimensional model in the shooting preview interface according to the model display parameters, thereby providing a composition prompt. Because the three-dimensional model has the three-dimensional detail features of the target object, the terminal device can display it according to the model display parameters corresponding to the current scene information, so that the user can see the three-dimensional details of the target object in the current scene more intuitively. The composition prompt is therefore more intuitive, and the user's sense of immersion in the scene while shooting is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a process of sending and receiving information or a call, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 810; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the terminal apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042. The graphics processor 8041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or another storage medium) or transmitted via the radio frequency unit 801 or the network module 802. The microphone 8042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 801.
The terminal device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the terminal device 800 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The Display unit 806 may include a Display panel 8061, and the Display panel 8061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 807 is operable to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 810, receives a command from the processor 810, and executes the command. In addition, the touch panel 8071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 8071, the user input unit 807 can include other input devices 8072. In particular, other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation is transmitted to the processor 810 to determine the type of the touch event, and then the processor 810 provides a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the terminal device, and this is not limited herein.
The interface unit 808 is an interface for connecting an external device to the terminal apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 800 or may be used to transmit data between the terminal apparatus 800 and an external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 810 is a control center of the terminal device, connects various parts of the whole terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby performing overall monitoring of the terminal device. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
Terminal device 800 may also include a power supply 811 (such as a battery) for powering the various components, and preferably, power supply 811 may be logically coupled to processor 810 via a power management system to provide management of charging, discharging, and power consumption via the power management system.
Depth camera 812 may collect depth information of a photographed object.
In addition, the terminal device 800 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 810, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810, where the computer program, when executed by the processor 810, implements each process of the above-mentioned shooting prompting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the embodiment of the shooting prompting method, and can achieve the same technical effect, and in order to avoid repetition, the computer program is not described herein again. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A shooting prompting method is applied to terminal equipment and is characterized by comprising the following steps:
acquiring a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface;
identifying scene information of the preview image;
determining model display parameters corresponding to the scene information;
according to the model display parameters, displaying a semitransparent image of the three-dimensional model in the shooting preview interface;
the acquiring of the three-dimensional model corresponding to the target object in the preview image displayed on the shooting preview interface includes:
acquiring three-dimensional information of at least one shooting gesture of a target object;
establishing a three-dimensional model of the target object according to the three-dimensional information of the at least one shooting posture;
the model display parameters comprise model basic display parameters and model detail display parameters;
the model basic display parameters include at least one of: model size, model display direction, model display angle and model display position;
the model detail display parameters comprise a display style of at least one part of the model;
wherein, the display pattern of at least one part of the model comprises an expression pattern or a hair style pattern of the head of the model;
after the three-dimensional model is displayed in the shooting preview interface according to the model display parameters, the method further includes:
detecting attitude parameters of the target object in the shooting preview interface;
under the condition that the attitude parameters are matched with the model display parameters, executing shooting operation and outputting a target image;
the situation that the posture parameter is matched with the model display parameter comprises the following steps: the similarity between the attitude parameters and the model display parameters exceeds a preset similarity.
2. The method of claim 1, wherein the identifying scene information of the preview image comprises:
acquiring a first display position and first display information of the target object;
the determining of the model display parameters corresponding to the scene information includes:
and determining a second display position and second display information according to the first display position and the first display information.
3. A terminal device, characterized in that the terminal device comprises:
the acquisition module is used for acquiring a three-dimensional model corresponding to a target object in a preview image displayed on a shooting preview interface;
the identification module is used for identifying scene information of the preview image;
the determining module is used for determining model display parameters corresponding to the scene information;
the display module is used for displaying the semitransparent image of the three-dimensional model in the shooting preview interface according to the model display parameters;
the acquisition module includes:
the acquisition submodule is used for acquiring three-dimensional information of at least one shooting posture of the target object;
the establishing submodule is used for establishing a three-dimensional model of the target object according to the three-dimensional information of the at least one shooting posture;
the model display parameters comprise model basic display parameters and model detail display parameters;
the model basic display parameters include at least one of: model size, model display direction, model display angle and model display position;
wherein, the display pattern of at least one part of the model comprises an expression pattern or a hair style pattern of the head of the model;
the detection module is used for detecting the attitude parameters of the target object in the shooting preview interface;
the shooting module is used for executing shooting operation and outputting a target image under the condition that the posture parameters are matched with the model display parameters;
the situation that the posture parameter is matched with the model display parameter comprises the following steps: the similarity between the posture parameters and the model display parameters exceeds a preset similarity.
4. The terminal device of claim 3, wherein the identification module comprises:
the acquisition submodule is used for acquiring a first display position and first display information of the target object;
the determining module comprises:
and the determining submodule is used for determining a second display position and second display information according to the first display position and the first display information.
5. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the shooting prompt method according to any one of claims 1 to 2.
6. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the shoot prompt method as claimed in any one of claims 1 to 2.
CN201811550958.2A 2018-12-18 2018-12-18 Shooting prompting method and terminal equipment Active CN109600550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811550958.2A CN109600550B (en) 2018-12-18 2018-12-18 Shooting prompting method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109600550A CN109600550A (en) 2019-04-09
CN109600550B true CN109600550B (en) 2022-05-31

Family

ID=65963904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811550958.2A Active CN109600550B (en) 2018-12-18 2018-12-18 Shooting prompting method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109600550B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784621A (en) * 2019-10-22 2021-05-11 华为技术有限公司 Image display method and apparatus
CN111147744B (en) * 2019-12-30 2022-01-28 维沃移动通信有限公司 Shooting method, data processing device, electronic equipment and storage medium
CN111147745B (en) * 2019-12-30 2021-11-30 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN111935393A (en) * 2020-06-28 2020-11-13 百度在线网络技术(北京)有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN114727002A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Shooting method and device, terminal equipment and storage medium
CN112887603B (en) * 2021-01-26 2023-01-24 维沃移动通信有限公司 Shooting preview method and device and electronic equipment
CN113055593B (en) * 2021-03-11 2022-08-16 百度在线网络技术(北京)有限公司 Image processing method and device
CN115150543B (en) * 2021-03-31 2024-04-16 华为技术有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN113139144A (en) * 2021-05-11 2021-07-20 拉扎斯网络科技(上海)有限公司 Method, device and equipment for acquiring resource picture
CN114928695A (en) * 2022-04-29 2022-08-19 北京淘车科技有限公司 Vehicle image acquisition method and device, medium and terminal
CN117237204A (en) * 2022-06-15 2023-12-15 荣耀终端有限公司 Image processing method, electronic equipment and storage medium
CN115499581B (en) * 2022-08-16 2023-11-21 北京五八信息技术有限公司 Shooting method, shooting device, terminal equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484086A (en) * 2015-09-01 2017-03-08 北京三星通信技术研究有限公司 The method shooting for auxiliary and its capture apparatus
EP3139591A1 (en) * 2015-09-01 2017-03-08 Samsung Electronics Co., Ltd. Apparatus and method for operating a mobile device using motion gestures
CN107566529A (en) * 2017-10-18 2018-01-09 维沃移动通信有限公司 A kind of photographic method, mobile terminal and cloud server
CN107835364A (en) * 2017-10-30 2018-03-23 维沃移动通信有限公司 One kind is taken pictures householder method and mobile terminal
CN108156385A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 Image acquiring method and image acquiring device
CN108921815A (en) * 2018-05-16 2018-11-30 Oppo广东移动通信有限公司 It takes pictures exchange method, device, storage medium and terminal device

Also Published As

Publication number Publication date
CN109600550A (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN109600550B (en) Shooting prompting method and terminal equipment
CN109361865B (en) Shooting method and terminal
CN110740259B (en) Video processing method and electronic equipment
CN111355889B (en) Shooting method, shooting device, electronic equipment and storage medium
CN109381165B (en) Skin detection method and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN109005336B (en) Image shooting method and terminal equipment
CN108924412B (en) Shooting method and terminal equipment
CN110062171B (en) Shooting method and terminal
WO2021197121A1 (en) Image photographing method and electronic device
CN110855893A (en) Video shooting method and electronic equipment
CN108881544B (en) Photographing method and mobile terminal
CN109241832B (en) Face living body detection method and terminal equipment
CN109618218B (en) Video processing method and mobile terminal
CN109544445B (en) Image processing method and device and mobile terminal
CN109448069B (en) Template generation method and mobile terminal
CN108984143B (en) Display control method and terminal equipment
CN111177420A (en) Multimedia file display method, electronic equipment and medium
CN108174110B (en) Photographing method and flexible screen terminal
CN111464746B (en) Photographing method and electronic equipment
CN108132749B (en) Image editing method and mobile terminal
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant