CN104410782A - Terminal - Google Patents
- Publication number: CN104410782A
- Application number: CN201410626051.5A
- Authority: CN (China)
- Prior art keywords: user, terminal, shooting, expression, unit
- Prior art date: 2014-11-07
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
An embodiment of the invention provides a terminal. The terminal comprises an output unit for outputting photographing expression guide information to a user, and a shooting unit for shooting the expression the user makes according to that guide information. The terminal enables the user to obtain a photograph with a natural expression, and thus gives the user a better experience.
Description
Technical Field
The invention relates to the technical field of electronics, in particular to a terminal.
Background
With the continuous development of electronic technology, more and more electronic devices, such as digital cameras, mobile phones, and other terminals, can take photographs, and the systems of these terminals can provide picture-optimization applications that let users modify the people or objects in a picture.
The prior art provides a retouching program that produces exaggerated expressions, such as frightened or angry expressions, by elongating and twisting the face, which adds some fun. This is an improvement for portraits that lack expression, but directly elongating and distorting the face throws the facial proportions out of balance, so the resulting expression looks very unnatural.
Disclosure of Invention
An embodiment of the invention provides a terminal that helps a user obtain a portrait photograph with a natural expression, giving the user a better experience.
Specifically, an embodiment of the present invention further provides a terminal, which may include:
the output unit is used for outputting the photographing expression guide information to the user;
and the shooting unit is used for shooting the expression made by the user according to the shooting expression guide information.
According to the embodiment of the invention, the photographing expression guide information is provided for the user, so that the user can naturally make the expression.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of a method for capturing an image according to the present invention;
fig. 2a is a schematic structural diagram of a terminal according to a first embodiment of the present invention;
fig. 2b is a schematic structural diagram of a terminal according to a second embodiment of the present invention;
fig. 2c is a schematic structural diagram of a terminal according to a third embodiment of the present invention;
fig. 2d is a schematic structural diagram of a terminal according to a fourth embodiment of the present invention;
fig. 2e is a schematic structural diagram of a terminal according to a fifth embodiment of the present invention;
fig. 3 is a schematic structural diagram of another terminal provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present invention.
In the prior art, changing a facial expression through post-shot image editing produces very unnatural pictures and very often distorts the facial proportions. Based on this, an embodiment of the present invention provides a method for capturing an image, which may include: the terminal outputs photographing expression guide information to the user; and the terminal shoots the expression the user makes according to that guide information. The embodiment of the invention enables the user to obtain a portrait photograph with a natural expression and gives the user a better experience.
The technical solutions of the embodiments of the present invention are described in detail below with reference to the accompanying drawings and the detailed description.
As shown in fig. 1, an embodiment of a method for capturing an image according to the present invention may include the following steps:
Step S110: the terminal outputs the photographing expression guide information to the user.
Step S111: the terminal shoots the expression the user makes according to the photographing expression guide information.
The photographing expression guide information may include sound guide information, picture guide information, and the like. The sound guide information may be sound prestored in the system, speech recorded in advance by the user, a sound file acquired by the terminal, and so on. The picture guide information may be a video recorded in advance by the user, a picture taken in advance by the user, a video file acquired by the terminal, a picture file acquired by the terminal, and so on.
A terminal is usually equipped with several cameras. Classified by position relative to the front face of the terminal, these may include a front camera, a rear camera, and a side camera, and some terminals also have a camera that can rotate to any angle. The embodiments of the present invention therefore classify a camera by its position relative to the front face of the terminal at the time it operates: for example, when a freely rotating camera is rotated to the rear of the terminal, it is treated as a rear camera for shooting.
According to this classification, when the terminal detects that the currently working camera is the front camera, the camera faces the same direction as the terminal display, so picture guide information such as a funny video or a startling picture can be output to make the user laugh or look surprised. When the terminal detects that the currently working camera is not the front camera but a rear or side camera, the camera does not face the same direction as the display and the user cannot easily see the screen; in that case only sound guide information is used, and a clip of strange noises, a scream, or the like is played to guide the user's emotion so that a natural, interesting expression can be captured.
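Purely as an illustration of the camera-position check described above (the patent does not prescribe any particular implementation), the selection between picture and sound guidance could look like the following Python sketch; the names CameraPosition, GuideInfo, and select_guidance, and the sample file names, are hypothetical.

```python
# Illustrative sketch only: the enum, dataclass, and function names below are
# hypothetical and are not defined in the patent.
from dataclasses import dataclass
from enum import Enum, auto


class CameraPosition(Enum):
    FRONT = auto()   # faces the same direction as the display
    REAR = auto()
    SIDE = auto()


@dataclass
class GuideInfo:
    kind: str    # "picture" (video/image shown on screen) or "sound"
    source: str  # file path or identifier of the guide material


def select_guidance(active_camera: CameraPosition) -> GuideInfo:
    """Pick picture guidance when the user can see the display, otherwise sound."""
    if active_camera is CameraPosition.FRONT:
        # Front camera: the subject faces the screen, so show a funny video or
        # a startling picture to provoke a laughing or surprised expression.
        return GuideInfo(kind="picture", source="funny_clip.mp4")
    # Rear or side camera (including a rotating camera turned to the rear):
    # the subject cannot see the screen, so play a strange sound or a scream.
    return GuideInfo(kind="sound", source="scream.wav")
```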
In a specific implementation, the terminal can provide an application whose menu is shown on the display interface. Before photographing someone else, the user can enable a special mode (a "trick" mode) through the menu. When the rear camera is used, the expression guide information for the person being photographed can be sound guide information: the terminal system can offer the user at least one piece of sound guide information, and once the user has selected it, the person being photographed is prompted into an exaggerated expression, achieving the playful effect. When the user takes a self-portrait with the front camera, the guide information can be a video, a picture, or a sound; the terminal can pick it at random from the system or download it at random from a service backend, so the user is presented with content never seen before and makes an interesting expression.
Further optionally, after the terminal guides the user into an expression, the shot must also be taken at the right moment. The embodiment of the invention provides several shooting strategies:
Strategy one: the terminal photographs the user at a preset shooting time, measured from the time point at which the photographing expression guide information is output. The preset duration can be derived statistically as the time most users need to react after seeing or hearing the guide information; timing starts when the guide information is output, and the photograph is taken once the preset duration has elapsed.
Strategy two: the terminal continuously photographs the user at a preset shooting interval, measured from the time point at which the photographing expression guide information is output. Many existing terminals support burst shooting, which records the whole course of the user's changing expression. The shooting interval and the number of burst frames can be preset; timing starts when the guide information is output, one frame is taken each time an interval elapses, and shooting stops once the preset number of frames has been captured. Alternatively, the burst count is not set, and the terminal stops the burst after the user performs a "stop shooting" operation on the terminal.
Strategy three: a combination of strategies one and two. Measured from the time point at which the guide information is output, the first photograph is taken when the preset duration elapses, and further photographs are then taken each time the preset interval elapses, measured from the time of the first photograph.
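The three timing strategies above could be sketched as follows. This is only an illustration under the assumption that the camera layer exposes a capture_frame() callback; the delay, interval, and burst-count values are arbitrary placeholders rather than values taken from the patent.

```python
# Illustrative sketch of strategies one to three; all names are hypothetical.
import time
from typing import Callable, List


def strategy_one(capture_frame: Callable[[], bytes], reaction_delay_s: float = 1.5) -> bytes:
    """Shoot once, a preset delay after the guide information is output."""
    time.sleep(reaction_delay_s)          # preset duration: typical user reaction time
    return capture_frame()


def strategy_two(capture_frame: Callable[[], bytes],
                 interval_s: float = 0.5, burst_count: int = 5) -> List[bytes]:
    """Burst-shoot at a preset interval, starting when the guide information is output."""
    frames = []
    for _ in range(burst_count):
        time.sleep(interval_s)            # one frame each time an interval elapses
        frames.append(capture_frame())
    return frames


def strategy_three(capture_frame: Callable[[], bytes],
                   reaction_delay_s: float = 1.5,
                   interval_s: float = 0.5, burst_count: int = 5) -> List[bytes]:
    """Wait for the preset delay, then burst-shoot from the first frame onwards."""
    frames = [strategy_one(capture_frame, reaction_delay_s)]
    frames += strategy_two(capture_frame, interval_s, burst_count - 1)
    return frames
```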
Strategy four: the terminal analyzes how the user's expression changes in response to the photographing expression guide information, and photographs the user's expression when it deviates from a set reference expression by more than a preset degree.
Strategy four requires adding an expression-analysis tool, such as a calculation formula, to the application. Specifically, analyzing how the user's expression changes in response to the guide information may include: acquiring a base expression photograph for comparison, and calculating the amplitude of the user's expression change in real time against that base photograph. The base expression photograph may be a photograph of the user's calm expression taken before the guide information is played; reference points for comparing the degree of expression change are recorded on the photograph, and the degree of change is determined from those points. The base expression information may stay the same or change during shooting; for example, during a burst, the user's expression in the previously captured frame can serve as the base expression. Finally, the user is photographed when the calculated expression change amplitude exceeds a preset range.
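As a minimal sketch of such a calculation formula, assuming some face-detection component elsewhere has already extracted the recorded reference points as (x, y) landmark coordinates (the patent specifies neither the formula nor the threshold), the change amplitude could be computed like this:

```python
# Illustrative sketch only; the threshold value is an arbitrary placeholder.
import math
from typing import List, Tuple

Landmarks = List[Tuple[float, float]]


def change_amplitude(base: Landmarks, current: Landmarks) -> float:
    """Mean displacement of the recorded reference points versus the base (calm) photo."""
    if len(base) != len(current) or not base:
        raise ValueError("landmark sets must be non-empty and of equal length")
    total = sum(math.dist(b, c) for b, c in zip(base, current))
    return total / len(base)


def should_capture(base: Landmarks, current: Landmarks, threshold: float = 12.0) -> bool:
    """Trigger the snapshot when the expression change exceeds the preset range."""
    return change_amplitude(base, current) > threshold
```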
Strategy five: the terminal receives a shooting-trigger operation from the user and photographs the user after receiving it. Here the user acts as the controlling party, so this can be regarded as a manual mode; accordingly, strategies one to four can all be regarded as automatic modes.
Because the photographing expression guide information follows a certain theme, the system or the service backend can supply a matching background along with the guide information for later picture synthesis. After step S111, the terminal may provide background material to the user: the user can choose to composite the captured portrait with the background that matches the guide information, or can select a background in advance, before step S110, and then shoot.
Further optionally, after selecting a background, the user can adjust the size and angle of the background or of the portrait, or the terminal can automatically adjust and composite the portrait and the background into a well-proportioned photograph through scan-based calculation.
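A minimal compositing sketch, assuming the Pillow imaging library and placeholder file names; the automatic scan-based sizing mentioned above is simplified here to a plain proportional resize and a rotation chosen by the caller.

```python
# Illustrative sketch only: composites a (pre-cut-out, alpha-masked) portrait
# onto the background that matches the guide information.
from PIL import Image


def composite(portrait_path: str, background_path: str,
              scale: float = 1.0, angle: float = 0.0,
              position: tuple = (0, 0)) -> Image.Image:
    background = Image.open(background_path).convert("RGBA")
    portrait = Image.open(portrait_path).convert("RGBA")

    # User- or terminal-chosen adjustments to the size and angle of the portrait.
    new_size = (int(portrait.width * scale), int(portrait.height * scale))
    portrait = portrait.resize(new_size).rotate(angle, expand=True)

    # Paste using the portrait's own alpha channel as the mask.
    background.paste(portrait, position, portrait)
    return background


# Example usage with placeholder file names:
# result = composite("portrait.png", "theme_background.png", scale=0.8, angle=5.0, position=(120, 60))
# result.save("composited_photo.png")
```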
Further optionally, the terminal of the embodiment of the present invention may also provide an element-adding service at a later stage, offering elements that suit the background, such as accessories, dialog boxes, and hanging ornaments, for the user to choose from; the user can further modify the picture to make it more vivid and interesting.
Further optionally, after the terminal generates the photograph, it may, in response to a user operation, send the photograph to another application for further processing, for example uploading and sharing, through that application's interface, such as a WeChat application interface or a microblog application interface.
According to the embodiment of the invention, photographing expression guide information is provided to the user, so that the user makes the expression naturally and the picture is then taken.
Correspondingly, an embodiment of the invention also provides a terminal, which includes: an output unit for outputting photographing expression guide information to the user; and a shooting unit for shooting the expression the user makes according to that guide information. The terminal provided by the embodiment of the invention enables the user to obtain a portrait photograph with a natural expression and gives the user a better experience.
The technical solution of the apparatus in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings and the detailed description.
Fig. 2a is a schematic structural diagram of a terminal according to an embodiment of the present invention. The apparatus of this embodiment may be used to implement the method shown in fig. 1. Specifically, the apparatus includes an output unit 21 and a shooting unit 22, and may further include a playing unit 23, a display unit 24, a first detection unit 25, and a second detection unit 26; the shooting unit 22 may further include an analysis unit 221 and a capturing unit 222, and the analysis unit 221 may further include an obtaining unit 2211 and a calculating unit 2222, where:
an output unit 21 for outputting the photographing expression guide information to the user;
and a shooting unit 22 for shooting the expression made by the user according to the shooting expression guide information.
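For orientation only, the unit hierarchy of figs. 2a-2e can be summarized in the following sketch; the classes merely mirror the reference numerals and are not an implementation.

```python
# Overview sketch of the unit hierarchy (reference numerals in comments);
# each field would hold the corresponding functional component.
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class AnalysisUnit:                                   # 221
    obtaining_unit: Any = None                        # 2211: obtains the base expression photo
    calculating_unit: Any = None                      # 2222: computes the expression change amplitude


@dataclass
class ShootingUnit:                                   # 22
    analysis_unit: Optional[AnalysisUnit] = None      # 221
    capturing_unit: Any = None                        # 222: takes the snapshot on prompt


@dataclass
class Terminal:
    output_unit: Any = None                           # 21: outputs the expression guide information
    shooting_unit: ShootingUnit = field(default_factory=ShootingUnit)  # 22
    playing_unit: Any = None                          # 23: plays sound guide information
    display_unit: Any = None                          # 24: displays picture guide information
    first_detection_unit: Any = None                  # 25: detects a rear or side camera
    second_detection_unit: Any = None                 # 26: detects the front camera
```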
The photographing expression guide information may include sound guide information, picture guide information, and the like. The sound guide information may be sound prestored in the system, speech recorded in advance by the user, a sound file acquired by the terminal, and so on. The picture guide information may be a video recorded in advance by the user, a picture taken in advance by the user, a video file acquired by the terminal, a picture file acquired by the terminal, and so on.
A terminal is usually equipped with several cameras. Classified by position relative to the front face of the terminal, these may include a front camera, a rear camera, and a side camera, and some terminals also have a camera that can rotate to any angle. The embodiments of the present invention therefore classify a camera by its position relative to the front face of the terminal at the time it operates: for example, when a freely rotating camera is rotated to the rear of the terminal, it is treated as a rear camera for shooting.
Further optionally, referring to fig. 2b, the terminal according to the embodiment of the present invention may further include:
a first detection unit 25, configured to detect that the currently working camera is a rear camera or a side camera of the terminal.
The output unit 21 is further configured to output the sound guide information to the playing unit 23;
and the playing unit 23 is configured to play the sound guide information to the user, so that the user makes a corresponding expression under its guidance.
When the terminal detects that the currently working camera is not the front camera but a rear or side camera, the camera does not face the same direction as the display and the user cannot easily see the screen; in that case only sound guide information is used, and a clip of strange noises, a scream, or the like is played to guide the user's emotion so that a natural, interesting expression can be captured.
Further optionally, referring to fig. 2c, the terminal according to the embodiment of the present invention may further include:
a second detection unit 26, configured to detect that the currently working camera is the front camera of the terminal.
The output unit 21 is further configured to output the picture guide information to the display unit 24;
and the display unit 24 is configured to display the picture guide information to the user, so that the user makes a corresponding expression under its guidance.
When the terminal detects that the currently working camera is the front camera, the camera faces the same direction as the terminal display, so picture guide information such as a funny video or a startling picture can be output to make the user laugh or look surprised.
In a specific implementation, the terminal can provide an application whose menu is shown on the display interface. Before photographing someone else, the user can enable a special mode (a "trick" mode) through the menu. When the rear camera is used, the expression guide information for the person being photographed can be sound guide information: the terminal system can offer the user at least one piece of sound guide information, and once the user has selected it, the person being photographed is prompted into an exaggerated expression, achieving the playful effect. When the user takes a self-portrait with the front camera, the guide information can be a video, a picture, or a sound; the terminal can pick it at random from the system or download it at random from a service backend, so the user is presented with content never seen before and makes an interesting expression.
Further optionally, after the terminal guides the user into an expression, the shot must also be taken at the right moment. The embodiment of the invention provides several shooting strategies:
Strategy one: the shooting unit 22 photographs the user at a preset shooting time, measured from the time point at which the photographing expression guide information is output. The preset duration can be derived statistically as the time most users need to react after seeing or hearing the guide information; timing starts when the guide information is output, and the photograph is taken once the preset duration has elapsed.
Strategy two: the shooting unit 22 continuously photographs the user at a preset shooting interval, measured from the time point at which the photographing expression guide information is output. Many existing terminals support burst shooting, which records the whole course of the user's changing expression. The shooting interval and the number of burst frames can be preset; timing starts when the guide information is output, one frame is taken each time an interval elapses, and shooting stops once the preset number of frames has been captured. Alternatively, the burst count is not set, and the shooting unit 22 stops the burst after the user performs a "stop shooting" operation on the shooting unit 22.
Strategy three: the shooting unit 22 may combine strategies one and two. Measured from the time point at which the guide information is output, the first photograph is taken when the preset duration elapses, and further photographs are then taken each time the preset interval elapses, measured from the time of the first photograph.
Referring to fig. 2d, the shooting unit 22 may further carry out strategy four through the analysis unit 221 and the capturing unit 222:
the analysis unit 221 analyzes the change process of the expression made by the user according to the photographing expression guide information; the capturing unit 222 is further configured to send a first capturing prompt message to the capturing unit when it is analyzed that the change of the expression of the user relative to the set reference expression exceeds a preset degree;
and the capturing unit 222 is configured to capture the expression of the user after receiving the first shooting prompt message.
Referring to fig. 2e, the analysis unit 221 may further perform the analysis through the obtaining unit 2211 and the calculating unit 2222:
the obtaining unit 2211 is configured to obtain a base expression photograph for comparison; the calculating unit 2222 is configured to calculate the amplitude of the user's expression change in real time against that base photograph, and is further configured to send a second shooting prompt message to the capturing unit 222 when the amplitude exceeds a preset range;
and the capturing unit 222 is further configured to photograph the user when it receives the second shooting prompt message sent by the calculating unit 2222.
The base expression photograph used for comparison may be a photograph of the user's calm expression taken before the guide information is played; reference points for comparing the degree of expression change are recorded on the photograph, and the degree of change is determined from those points. The base expression information may stay the same or change during shooting; for example, during a burst, the user's expression in the previously captured frame can serve as the base expression. Finally, the user is photographed when the calculated expression change amplitude exceeds the preset range.
Strategy five: the shooting unit 22 receives a shooting-trigger operation from the user at the terminal and photographs the user after receiving it. Here the user acts as the controlling party, so this can be regarded as a manual mode; accordingly, strategies one to four can all be regarded as automatic modes.
Because the photographing expression guide information follows a certain theme, the system or the service backend can supply a matching background along with the guide information for later picture synthesis. After step S111, the terminal may provide background material to the user: the user can choose to composite the captured portrait with the background that matches the guide information, or can select a background in advance, before step S110, and then shoot.
Further optionally, after selecting a background, the user can adjust the size and angle of the background or of the portrait, or the terminal can automatically adjust and composite the portrait and the background into a well-proportioned photograph through scan-based calculation.
Further optionally, the terminal provided by the embodiment of the present invention may also provide an element-adding service at a later stage, offering elements that suit the background, such as accessories, dialog boxes, and hanging ornaments, for the user to choose from; the user can further modify the picture to make it more vivid and interesting.
Further optionally, after the terminal generates the photograph, it may, in response to a user operation, send the photograph to another application for further processing, for example uploading and sharing, through that application's interface, such as a WeChat application interface or a microblog application interface.
The terminal provided by the embodiment of the invention enables the user to obtain a portrait photograph with a more natural expression and gives the user a better experience.
Fig. 3 is a schematic structural diagram of another terminal according to an embodiment of the present invention. The terminal described in this embodiment includes: at least one input device 31; at least one output device 32; at least one processor 33, such as a CPU; and a memory 34. The input device 31, the output device 32, the processor 33, and the memory 34 are connected by a bus 35.
The input device 31 may be a touch panel of the terminal, including a touch screen, and is configured to detect operation instructions on the touch panel of the terminal.
The output device 32 may be a display screen of the terminal, and is used for outputting and displaying image data (including the first image data and the second image data).
The memory 34 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 34 is used for storing a set of program codes, and the input device 31, the output device 32 and the processor 33 are used for calling the program codes stored in the memory 34 and executing the following operations:
the output device 32 is configured to output the photographing expression guide information to the user.
The input device 31 is configured to capture an expression made by the user according to the photographing expression guide information.
In some possible embodiments, the photographing expression guide information includes sound guide information,
the output device 32 is further configured to play the sound guide information to the user, so that the user can make a corresponding expression according to its guidance.
In some possible embodiments, the photographing expression guide information includes picture guide information,
the output device 32 is further configured to display the picture guide information to the user, so that the user can make a corresponding expression according to its guidance.
In some possible embodiments, the input device 31 described above is also specifically configured to:
and detecting whether the currently working camera is a front camera of the terminal.
In some possible embodiments, the input device 31 is further configured to:
and taking the time point of outputting the photographing expression guide information as a reference, and photographing the user at a preset photographing time point.
In some possible embodiments, the input device 31 is further configured to:
and taking the time point of outputting the photographing expression guide information as a reference, and continuously photographing the user at a preset photographing time interval.
In some possible embodiments, the input device 31 is further configured to:
receiving a shooting prompt operation of a user at a terminal;
and after receiving the shooting prompt operation, shooting the user.
In some possible embodiments, the processor 33 is further specifically configured to:
analyzing the change process of the expression made by the user according to the photographing expression guide information
The input device 31 is further configured to capture the expression of the user when the processor 33 analyzes that the change of the expression of the user relative to the set reference expression exceeds a preset degree.
In some possible embodiments, the processor 33 is further specifically configured to:
acquiring a basic expression photo for comparison;
calculating the expression change amplitude of the user in real time based on the basic expression photo;
in some possible embodiments, the input device 31 is further configured to:
when the processor 33 calculates that the expression change amplitude of the user exceeds the preset change amplitude range, the user is photographed.
In some possible embodiments, the sound guide information includes one or more of a voice recorded in advance by the user and a sound file acquired by the terminal; the picture guide information includes one or more of a video recorded in advance by the user, a picture taken in advance by the user, a video file acquired by the terminal, and a picture file acquired by the terminal.
In a specific implementation, the input device 31, the output device 32, and the processor 33 described in this embodiment of the present invention may execute the implementations described in the embodiment of the method for capturing an image provided by the embodiments of the present invention, and may also execute the implementations of the terminals described in the first to fifth terminal embodiments of the present invention, which are not repeated here.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The modules or units in the device of the embodiment of the invention can be combined, divided and deleted according to actual needs.
The modules or units in the embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Claims (10)
1. A terminal, comprising:
the output unit is used for outputting the photographing expression guide information to the user;
and the shooting unit is used for shooting the expression made by the user according to the shooting expression guide information.
2. The terminal of claim 1, wherein the photographing expression guide information includes sound guide information,
the output unit is also used for outputting the sound guide information to a playing unit;
the terminal further comprises:
and the playing unit is used for playing the sound guiding information to the user so that the user can make corresponding expressions according to the guidance of the sound guiding information.
3. The terminal of claim 1, wherein the photographing expression guide information includes picture guide information,
the output unit is also used for outputting the picture guide information to the display unit;
the terminal further comprises:
and the display unit is used for displaying the picture guide information to the user so that the user can make corresponding expressions according to the guidance of the picture guide information.
4. The terminal of claim 2, further comprising:
and the first detection unit is used for detecting that the currently working camera is a rear camera or a side camera of the terminal.
5. The terminal of claim 3, further comprising:
and the second detection unit is used for detecting that the currently working camera is the front camera of the terminal.
6. The terminal of claim 1,
the shooting unit is further used for shooting the user at a preset shooting time point by taking the time point of outputting the shooting expression guide information as a reference;
or,
the shooting unit is further used for continuously shooting the user at preset shooting time intervals by taking the time point of outputting the shooting expression guide information as a reference;
or,
the shooting unit is also used for receiving a shooting prompt operation of the user at the terminal, and is also used for shooting the user after receiving the shooting prompt operation.
7. The terminal of claim 1, wherein the photographing unit comprises:
the analysis unit is used for analyzing the change process of the expression made by the user according to the photographing expression guide information, and is also used for sending a first shooting prompt message to the snapshot unit when the change of the expression of the user relative to the set reference expression exceeds a preset degree;
and the snapshot unit is used for shooting the expression of the user after receiving the first shooting prompt message.
8. The terminal of claim 7,
the analysis unit includes:
the acquisition unit is used for acquiring basic expression photos for comparison;
the computing unit is used for computing the expression change amplitude of the user in real time based on the basic expression photo, and is also used for sending a second shooting prompt message to the snapshot unit when the expression change amplitude of the user exceeds a preset change amplitude range;
the capturing unit is further configured to capture the user when receiving the second shooting prompt message sent by the computing unit.
9. The terminal according to claim 2 or 4, wherein the sound guidance information includes one or more of a voice recorded in advance by the user and a sound file acquired by the terminal.
10. The terminal according to claim 3 or 5, wherein the picture guidance information includes one or more of a video recorded in advance by the user, a picture taken in advance by the user, a video file acquired by the terminal, and a picture file acquired by the terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410626051.5A CN104410782A (en) | 2014-11-07 | 2014-11-07 | Terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410626051.5A CN104410782A (en) | 2014-11-07 | 2014-11-07 | Terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104410782A true CN104410782A (en) | 2015-03-11 |
Family
ID=52648373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410626051.5A Pending CN104410782A (en) | 2014-11-07 | 2014-11-07 | Terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104410782A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110139021A (en) * | 2018-02-09 | 2019-08-16 | 北京三星通信技术研究有限公司 | Auxiliary shooting method and terminal device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050237411A1 (en) * | 2004-04-23 | 2005-10-27 | Sayuri Watanabe | Camera and control method for camera |
CN101325658A (en) * | 2007-06-13 | 2008-12-17 | 索尼株式会社 | Imaging device, imaging method and computer program |
CN101325659A (en) * | 2007-06-13 | 2008-12-17 | 索尼株式会社 | Imaging device, imaging method and computer program |
CN103269415A (en) * | 2013-04-16 | 2013-08-28 | 广东欧珀移动通信有限公司 | Automatic photo taking method for face recognition and mobile terminal |
CN103399690A (en) * | 2013-07-31 | 2013-11-20 | 贝壳网际(北京)安全技术有限公司 | Photographing guiding method and device and mobile terminal |
CN103607612A (en) * | 2013-11-13 | 2014-02-26 | 四川长虹电器股份有限公司 | Motion identification-based scene sharing method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110139021A (en) * | 2018-02-09 | 2019-08-16 | 北京三星通信技术研究有限公司 | Auxiliary shooting method and terminal device |
CN110139021B (en) * | 2018-02-09 | 2023-01-13 | 北京三星通信技术研究有限公司 | Auxiliary shooting method and terminal equipment |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20150311 |