CN109660728B - Photographing method and device - Google Patents


Info

Publication number
CN109660728B
CN109660728B (application CN201811643450.7A)
Authority
CN
China
Prior art keywords
emotion
photographing
preview image
theme
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811643450.7A
Other languages
Chinese (zh)
Other versions
CN109660728A (en)
Inventor
卢壮 (Lu Zhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811643450.7A priority Critical patent/CN109660728B/en
Publication of CN109660728A publication Critical patent/CN109660728A/en
Application granted granted Critical
Publication of CN109660728B publication Critical patent/CN109660728B/en

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/60: Control of cameras or camera modules
                        • H04N 23/62: Control of parameters via user interfaces
                    • H04N 23/80: Camera processing pipelines; Components thereof
            • H04M: TELEPHONIC COMMUNICATION
                • H04M 1/00: Substation equipment, e.g. for use by subscribers
                    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
                        • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
                            • H04M 1/72403: ...with means for local support of applications that increase the functionality
                                • H04M 1/7243: ...with interactive means for internal management of messages
                                    • H04M 1/72439: ...for image or video messaging
                            • H04M 1/72448: ...with means for adapting the functionality of the device according to specific conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a photographing method and device. The method comprises the following steps: collecting a preview image; acquiring feature data of preset dimensions corresponding to the preview image; determining emotion information of the user according to the feature data; and, when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image. By adding an emotion tag to the shot picture, the invention makes the picture more meaningful: when the user browses it years later, the user can relive the scene as if present again, which improves the user experience.

Description

Photographing method and device
Technical Field
The present invention relates to the field of mobile communications, and in particular, to a photographing method and apparatus.
Background
With the continuous development of mobile communication technology, the proportion of mobile terminals (such as mobile phones) in the life and work of people is increasing.
The photographing function is one of the important basic functions of a mobile terminal product and is used frequently in daily life. At present, most mobile phones implement only basic photographing, at most adding some photo-editing functions such as filters. Over long-term use, a mobile phone accumulates a large number of photos; after a long time, the occasion and mood in which a photo was originally taken are largely forgotten, so the photo loses much of its meaning and the user experience is reduced.
Disclosure of Invention
The embodiment of the invention provides a photographing method and a photographing apparatus, aiming to solve the prior-art problem that only the photographed picture itself is stored, so that the user can no longer recall the circumstances in which it was taken, which reduces the user experience.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a photographing method, including: collecting a preview image; acquiring feature data of a preset dimension corresponding to the preview image; determining emotion information of the user according to the characteristic data; and when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image.
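The four claimed steps can be sketched as a minimal pipeline. This is an illustrative reconstruction, not the patent's implementation; all function names and the stand-in sensor values are hypothetical.

```python
# Toy sketch of the claimed method: preview -> features -> emotion -> target.
# Every name and value here is an illustrative assumption.

def collect_preview():
    # Stand-in for frames coming from the front or rear camera.
    return {"pixels": "...", "timestamp": "2018-11-28 18:22"}

def extract_features(preview):
    # Feature data of the preset dimensions (time, location, user, environment).
    return {"time": preview["timestamp"], "location": "scenic spot",
            "user": "smiling", "environment": "sunny"}

def infer_emotion(features):
    # Toy rule: a smiling user is tagged "happy".
    return "happy" if features["user"] == "smiling" else "neutral"

def compose_target(preview, emotion):
    # The target image carries the preview content plus an emotion tag.
    return {"image": preview["pixels"], "emotion_tag": emotion}

preview = collect_preview()
target = compose_target(preview, infer_emotion(extract_features(preview)))
```

The sketch only fixes the data flow between the four steps; each stage would be backed by real camera, sensor, and recognition code on a terminal.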
In a second aspect, an embodiment of the present invention provides a photographing apparatus, including: the preview image acquisition module is used for acquiring a preview image; the characteristic data acquisition module is used for acquiring characteristic data of a preset dimension corresponding to the preview image; the emotion information determining module is used for determining emotion information of the user according to the characteristic data; and the target image generation module is used for photographing according to the emotion information and the preview image when a photographing instruction is monitored, and generating a target image.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the photographing method described in any one of the above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the photographing method described in any one of the above.
In the embodiment of the invention, a preview image is collected, feature data of preset dimensions corresponding to the preview image is obtained, emotion information of the user is determined according to the feature data, and then, when a photographing instruction is monitored, photographing is performed according to the emotion information and the preview image to generate a target image. By adding an emotion label to the shot picture, the embodiment makes the picture more meaningful than a mere record of the moment: when browsing the picture years later, the user can relive the scene as if present again, which improves the user experience.
Drawings
Fig. 1 is a flowchart illustrating steps of a photographing method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating steps of a photographing method according to an embodiment of the present invention;
FIG. 2a is a diagram illustrating a preset dimension provided by an embodiment of the present invention;
FIG. 2b is a schematic diagram illustrating a theme of emotion according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram illustrating a photographing apparatus according to an embodiment of the present invention;
fig. 5 shows a block diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a photographing method provided in an embodiment of the present invention is shown, where the photographing method may be applied to a mobile terminal, and specifically includes the following steps:
step 101: and acquiring a preview image.
In the embodiment of the present invention, the mobile terminal may be a mobile electronic device having a camera, such as a mobile phone, a PDA (Personal Digital Assistant), or a tablet computer.
When the preview image is collected, the preview image can be collected by using a front camera of the mobile terminal, namely, the preview image is collected by using a camera positioned on one side of a display screen of the mobile terminal. Of course, a back camera of the mobile terminal may be used to acquire the preview image, that is, the camera located on the opposite side of the display screen of the mobile terminal is used to acquire the preview image, which may be determined according to actual situations, and the embodiment of the present invention is not limited to this.
The embodiment of the invention is suitable for adding the scene of the emotion label of the user to the shot image in the shooting process.
When the user takes a picture with the mobile terminal, a camera of the mobile terminal first collects a preview image, and the preview image is shown on the display interface of the mobile terminal.
After the preview image is acquired, step 102 is performed.
Step 102: and acquiring the characteristic data of the preset dimensionality corresponding to the preview image.
The preset dimension may include one or more of a shooting time dimension, a shooting position dimension, a shooting user dimension, a shooting environment dimension, and the like.
The shooting time dimension refers to the system time of the mobile terminal at the moment of shooting, for example Beijing time 2018.11.28, 18:22; that is, the actual time of shooting.
The shooting position dimension refers to the place where the current shooting occurs, for example xx South Avenue, Haidian District, Beijing; that is, the actual geographical position.
The shooting user dimension refers to facial expressions, clothing, body movements, interactions between people, the photo background, scene content, and the like recognized from the preview image.
The shooting environment dimension refers to the surrounding environment of the current shooting, such as weather, light brightness, surrounding scenes and the like.
Of course, the present invention is not limited to this, and feature data of other dimensions, such as the usage of a mobile phone of a user, may also be combined, and specifically, may be determined according to an actual situation, and the embodiment of the present invention does not limit this.
One or more dimensions can be pre-stored in the mobile terminal system, and then when a user takes a picture, the corresponding characteristic data of the one or more dimensions can be acquired to carry out the subsequent analysis process.
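A minimal sketch of this idea: only the dimensions pre-stored in the terminal are queried when the user takes a picture. The `ENABLED_DIMENSIONS` list, the `SENSORS` stand-ins, and all values are assumptions for illustration.

```python
# Hypothetical pre-stored configuration: which dimensions to collect.
ENABLED_DIMENSIONS = ["time", "location", "environment"]

# Stand-ins for the system clock, positioning, recognition, etc.
SENSORS = {
    "time": lambda: "2018-11-28 18:22",
    "location": lambda: "Haidian, Beijing",
    "user": lambda: {"expression": "smiling"},
    "environment": lambda: {"weather": "clear", "light": "bright"},
}

def gather_feature_data(enabled):
    # Query only the enabled dimensions; others are skipped entirely.
    return {dim: SENSORS[dim]() for dim in enabled if dim in SENSORS}

features = gather_feature_data(ENABLED_DIMENSIONS)
```

With the configuration above, `features` contains only the time, location, and environment entries; the user dimension is not queried.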
After acquiring the feature data of the preset dimension corresponding to the preview image, step 103 is executed.
Step 103: and determining emotion information of the user according to the characteristic data.
The emotional information may include different emotional information such as happiness, anger, sadness, fright, fear, love, and the like.
After the feature data in the preset dimension is obtained, the emotion of the user can be analyzed according to the feature data in the preset dimension, specifically, the following analysis is performed in combination with the feature data of the four dimensions listed in the above step 102:
1. capturing time-dimensional feature data
Time can be analyzed from multiple angles. The date (year, month, day) indicates whether this is a particular day for the user, while the hour, minute, and second suggest what the user is probably doing at the moment. Daily information of the user can be extracted and analyzed to determine special days such as birthdays and wedding anniversaries, and legal festivals and holidays can also be recognized, so that the user's emotion at the moment can be inferred from the time analysis.
2. Feature data of shooting position dimension
The current position of the user can be identified through the positioning function of the mobile terminal, whether the user is in a certain scenic spot or not is analyzed according to the current position of the user, and the distribution of buildings around the position is analyzed, so that the current activity state of the user is deduced, and the possible emotion of the user is further analyzed.
3. Photographing feature data of user dimensions
The voice of the current user can be captured, and the user's mood at the moment determined from information such as volume, tone, and speech content; the preview image can also be recognized, and the user's mood inferred from information such as facial expression, clothing, body movement, interactions between people, the photo background, and scene content.
4. Feature data of shooting environment dimension
Through the acquired preview image, the current surrounding environment, such as weather, light brightness, surrounding scenes, people and other information, is identified, and therefore the possible emotion of the user is deduced.
When other dimension characteristic data are included, detailed analysis can be performed in combination with the other dimension characteristic data to analyze the emotion of the user, and in particular, the detailed analysis can be performed in combination with the actual situation, and the embodiment of the present invention does not limit the specific analysis process.
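The four per-dimension analyses above can be combined into a single emotion estimate, for example by letting each dimension cast a vote and taking the majority. This rule set is a toy sketch under assumed cues (special days, scenic spots, recognized expressions, weather); the patent does not specify the actual inference rules.

```python
from collections import Counter
from datetime import date

# Hypothetical daily-information store: dates that are special for this user.
SPECIAL_DAYS = {(11, 28): "wedding anniversary", (1, 1): "New Year's Day"}

def vote_time(features):
    # Shooting-time dimension: a special day suggests a positive mood.
    d = features.get("date")
    if d is not None and (d.month, d.day) in SPECIAL_DAYS:
        return "happy"
    return None

def vote_location(features):
    # Shooting-position dimension: being at a scenic spot suggests leisure.
    return "happy" if features.get("place") == "scenic spot" else None

def vote_user(features):
    # Shooting-user dimension: mood read from the recognized expression.
    return {"smiling": "happy", "frowning": "sad"}.get(features.get("expression"))

def vote_environment(features):
    # Shooting-environment dimension: clear weather as a weak positive cue.
    return "calm" if features.get("weather") == "clear" else None

def infer_emotion(features):
    votes = Counter()
    for rule in (vote_time, vote_location, vote_user, vote_environment):
        v = rule(features)
        if v is not None:
            votes[v] += 1
    # Majority vote; "neutral" when no dimension yields a cue.
    return votes.most_common(1)[0][0] if votes else "neutral"

emotion = infer_emotion({"date": date(2018, 11, 28), "place": "scenic spot",
                         "expression": "smiling", "weather": "rainy"})
```

Here three of the four dimensions vote "happy" (the rainy weather contributes nothing), so the inferred emotion is "happy". A real system would weight the dimensions rather than count them equally.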
The feature data for the above dimensions are described below in conjunction with the drawings of the specification.
Referring to fig. 2a, a schematic diagram of preset dimensions provided by the embodiment of the present invention is shown. As shown in fig. 2a, when the preset dimension is the location dimension, it may include a scenic spot, a shopping square, home, outdoors, a foreign country, a company, and the like. When the preset dimension is the sound dimension, it may include silence, noise, a person's tone, speech content, speech speed, and the like. When the preset dimension is the time dimension, it may include time data such as a birthday, the Spring Festival, morning, late night, and a wedding anniversary. When the preset dimension is the environment dimension, it may include weather, temperature, season, surrounding people, and the like. When the preset dimension is the photo dimension, it may include expressions, clothing, body movements, interactions between people, and the like.
It is to be understood that the above-mentioned process of analyzing the emotion of the user is only an analysis process listed for better understanding the technical solution of the embodiment of the present invention, and is not to be taken as the only limitation to the embodiment of the present invention.
In a specific implementation, a person skilled in the art may set other analysis processes according to actual situations, and the embodiment of the present invention is not limited thereto.
After determining the user's mood, step 104 is performed.
Step 104: and when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image.
The photographing instruction is triggered by operations including full shutter button pressing, half shutter button pressing, smiling face detection or gesture actions.
After the emotion information of the user is determined, the emotion information of the user and the preview image can be combined, and when a photographing instruction is monitored, photographing is carried out according to the emotion information of the user and the preview image, so that a target image is generated.
Specifically, the content to be photographed may be displayed on the front side of the preview image, and information related to emotion information of the user, such as text information or color information, may be displayed on the back side of the preview image.
According to the embodiment of the invention, the emotion information of the user and the shot image are synthesized, so that the shot picture can record the mood of the user when the user shoots, the user can feel personally on the scene when the user browses the pictures after a plurality of years, and the use experience of the user is improved.
According to the photographing method provided by the embodiment of the invention, a preview image is collected, feature data of preset dimensions corresponding to the preview image is obtained, emotion information of the user is determined according to the feature data, and further, when a photographing instruction is monitored, photographing is performed according to the emotion information and the preview image to generate a target image. By adding an emotion label to the shot picture, the embodiment makes the picture more meaningful than a mere record of the moment: when browsing the picture years later, the user can relive the scene as if present again, which improves the user experience.
Example two
Referring to fig. 2, a flowchart illustrating steps of a photographing method according to an embodiment of the present invention is shown, where the photographing method may be applied to a mobile terminal, and specifically includes the following steps:
step 201: and acquiring a preview image.
In the embodiment of the present invention, the mobile terminal may be a mobile electronic device such as a mobile phone, a PDA (Personal Digital Assistant), or a tablet computer.
When the preview image is collected, the preview image can be collected by using a front camera of the mobile terminal, namely, the preview image is collected by using a camera positioned on one side of a display screen of the mobile terminal. Of course, a back camera of the mobile terminal may be used to acquire the preview image, that is, the camera located on the opposite side of the display screen of the mobile terminal is used to acquire the preview image, which may be determined according to actual situations, and the embodiment of the present invention is not limited to this.
The embodiment of the invention is suitable for adding the scene of the emotion label of the user to the shot image in the shooting process.
When the user takes a picture with the mobile terminal, a camera of the mobile terminal first collects a preview image, and the preview image is shown on the display interface of the mobile terminal.
After the preview image is acquired, step 202 is performed.
Step 202: and acquiring the characteristic data of the preset dimensionality corresponding to the preview image.
The preset dimension may include one or more of a shooting time dimension, a shooting position dimension, a shooting user dimension, a shooting environment dimension, and the like.
The shooting time dimension refers to the system time of the mobile terminal at the moment of shooting, for example Beijing time 2018.11.28, 18:22; that is, the actual time of shooting.
The shooting position dimension refers to the place where the current shooting occurs, for example xx South Avenue, Haidian District, Beijing; that is, the actual geographical position.
The shooting user dimension refers to facial expressions, clothing, body movements, interactions between people, the photo background, scene content, and the like recognized from the preview image.
The shooting environment dimension refers to the surrounding environment of the current shooting, such as weather, light brightness, surrounding scenes and the like.
Of course, the present invention is not limited to this, and other dimensions, such as the usage of the mobile phone of the user, may also be combined, and in particular, may be determined according to the actual situation, and the embodiment of the present invention does not limit this.
One or more dimensions can be pre-stored in the mobile terminal system, and then when a user takes a picture, the corresponding one or more dimensions can be acquired for subsequent analysis.
After acquiring the feature data of the preset dimension corresponding to the preview image, step 203 is executed.
Step 203: and determining emotion information of the user according to the characteristic data.
The emotional information may include different emotional information such as happiness, anger, sadness, fright, fear, love, and the like.
After obtaining the feature data of the preset dimension, the emotion of the user may be analyzed according to the feature data of the preset dimension, specifically, the following analysis is performed in combination with the feature data of the four dimensions listed in step 202:
1. capturing time-dimensional feature data
Time can be analyzed from multiple angles. The date (year, month, day) indicates whether this is a particular day for the user, while the hour, minute, and second suggest what the user is probably doing at the moment. Daily information of the user can be extracted and analyzed to determine special days such as birthdays and wedding anniversaries, and legal festivals and holidays can also be recognized, so that the user's emotion at the moment can be inferred from the time analysis.
2. Feature data of shooting position dimension
The current position of the user can be identified through the positioning function of the mobile terminal, whether the user is in a certain scenic spot or not is analyzed according to the current position of the user, and the distribution of buildings around the position is analyzed, so that the current activity state of the user is deduced, and the possible emotion of the user is further analyzed.
3. Photographing feature data of user dimensions
The voice of the current user can be captured, and the user's mood at the moment determined from information such as volume, tone, and speech content; the preview image can also be recognized, and the user's mood inferred from information such as facial expression, clothing, body movement, interactions between people, the photo background, and scene content.
4. Feature data of shooting environment dimension
Through the acquired preview image, the current surrounding environment, such as weather, light brightness, surrounding scenes, people and other information, is identified, and therefore the possible emotion of the user is deduced.
When other dimension characteristic data are included, detailed analysis can be performed in combination with the other dimension characteristic data to analyze the emotion of the user, and in particular, the detailed analysis can be performed in combination with the actual situation, and the embodiment of the present invention does not limit the specific analysis process.
The feature data for the above dimensions are described below in conjunction with the drawings of the specification.
Referring to fig. 2a, a schematic diagram of preset dimensions provided by the embodiment of the present invention is shown. As shown in fig. 2a, when the preset dimension is the location dimension, locations such as a scenic spot, a shopping square, home, outdoors, a foreign country, and a company may be included. When the preset dimension is the sound dimension, sounds such as silence, noise, a person's tone, speech content, and speech speed may be included. When the preset dimension is the time dimension, times such as a birthday, the Spring Festival, morning, late night, and a wedding anniversary may be included. When the preset dimension is the environment dimension, weather, temperature, season, surrounding people, and the like may be included. When the preset dimension is the photo dimension, characteristics such as expressions, clothing, body movements, and interactions between people may be included.
It is to be understood that the above-mentioned process of analyzing the emotion of the user is only an analysis process listed for better understanding the technical solution of the embodiment of the present invention, and is not to be taken as the only limitation to the embodiment of the present invention.
In a specific implementation, a person skilled in the art may set other analysis processes according to actual situations, and the embodiment of the present invention is not limited thereto.
After determining the user's mood, step 204 is performed.
Step 204: and determining at least one matched emotional theme according to the emotional information.
The emotional theme refers to a theme for expressing the emotion of the user, and the emotional theme may include: one or more of a color theme, an arrangement style theme, an animation theme, and the like. One emotion information may correspond to one emotion theme or correspond to multiple emotion themes, which is not limited in the embodiment of the present invention.
For example, referring to fig. 2b, a schematic diagram illustrating emotion themes according to an embodiment of the present invention is shown. As shown in fig. 2b, the emotion of the user can be expressed by colors, such as warm colors, cool colors, pure colors, or multicolor; by animation, such as a cheerful or a calm and steady animation; by theme text, such as text expressing joy, sadness, encouragement, or anger; or by background music, such as light music, heavy metal, rock, or sentimental music.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
Mapping relations between emotion information and emotion themes are pre-stored in the mobile terminal system. For example, if the emotion information comprises A, B, and C and the emotion themes comprise a, b, and c, the mapping relations may be A-a, B-c, and C-b; that is, the emotion information A maps to the emotion theme a, the emotion information B maps to the emotion theme c, and the emotion information C maps to the emotion theme b.
The mapping may be stored in list form in the mobile terminal system, as shown in table 1 below:
Table 1:
Emotion information    Emotion theme
A                      a
B                      c
C                      b
As shown in table 1, the emotion information A has a mapping relationship with the emotion theme a, the emotion information B with the emotion theme c, and the emotion information C with the emotion theme b.
Of course, the mapping relationship may also be stored in the mobile terminal system in the form of a database, and specifically, the mapping relationship may be determined according to actual situations, which is not limited in this embodiment of the present invention.
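The mapping of the example above (A to a, B to c, C to b) can be held as a simple dictionary. Since the text notes that one piece of emotion information may correspond to several themes, values are kept as lists; the names are placeholders from the example, not real themes.

```python
# Pre-stored emotion-to-theme mapping, following the example: A-a, B-c, C-b.
# Lists allow one emotion to map to multiple themes.
EMOTION_TO_THEMES = {
    "A": ["a"],
    "B": ["c"],
    "C": ["b"],
}

def matching_themes(emotion):
    # Unknown emotions simply match no theme.
    return EMOTION_TO_THEMES.get(emotion, [])

themes = matching_themes("B")
```

A database table with an (emotion, theme) pair per row would serve equally well, as the description notes.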
After obtaining the emotion information of the user, a matching emotional topic may be determined according to the emotion information, and step 205 is performed.
Step 205: and displaying each emotion theme on a display interface of the mobile terminal.
After determining at least one emotional topic that matches the emotional information of the user, the emotional topics may then be presented on a display interface of the mobile terminal, e.g., the emotional topic shown in fig. 2b may be presented on the display interface of the mobile terminal, etc.
After the emotion themes are shown on the display interface of the mobile terminal, step 206 is executed.
Step 206: and determining a target emotion theme according to the selection operation of the user.
The selection operation of the user may be a click operation performed by the user to click at least one emotional theme, such as an operation to click an emotional theme a displayed in the display interface, or may be a speech selection operation input by the user, such as a speech selection operation of "select emotional theme b" and the like input by the user.
In a specific implementation, the user may select a corresponding emotional theme selection operation as needed, which is not limited in the embodiment of the present invention.
After each emotional theme is displayed on the display interface of the mobile terminal, one or more target emotional themes can be determined from a plurality of emotional themes according to the selection operation of the user, for example, the display interface displays four emotional themes, namely an emotional theme 1, an emotional theme 2, an emotional theme 3 and an emotional theme 4, the user can select one emotional theme, for example, select the emotional theme 1, or select a plurality of emotional themes, for example, select the emotional theme 2 and the emotional theme 4, and the like.
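The selection step can be sketched as resolving either a click or a voice command into target themes. Everything here (the shown theme names, the shape of the selection record) is a hypothetical illustration.

```python
# Hypothetical list of themes currently shown on the display interface.
THEMES_SHOWN = ["emotional theme 1", "emotional theme 2",
                "emotional theme 3", "emotional theme 4"]

def resolve_selection(selection, shown=THEMES_SHOWN):
    # A click selection carries the indices of the tapped themes.
    if selection["kind"] == "click":
        return [shown[i] for i in selection["indices"]]
    # A voice selection names themes; keep only names actually on screen.
    if selection["kind"] == "voice":
        return [t for t in shown if t in selection["utterance"]]
    return []

# User taps emotional theme 2 and emotional theme 4.
targets = resolve_selection({"kind": "click", "indices": [1, 3]})
```

Both one and several target themes can come out of this step, matching the text's examples of selecting a single theme or a combination.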
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the target emotional theme is determined according to the selection operation of the user, step 207 is performed.
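Resolving the selection operation in step 206 can be sketched as follows. The event format and the naive voice matching are assumptions for illustration only; the patent only states that a click or a voice command selects one or more themes:

```python
# Illustrative handling of the user's selection operation: a tap selects
# themes by index, a voice command such as "select emotional theme 2"
# selects by name. Event schema is hypothetical.
def determine_target_themes(displayed_themes, selection):
    """Resolve a click or voice selection to one or more target themes."""
    if selection["type"] == "click":
        return [displayed_themes[i] for i in selection["indices"]]
    if selection["type"] == "voice":
        # Naive match: keep themes whose name appears in the utterance.
        return [t for t in displayed_themes if t in selection["utterance"]]
    raise ValueError("unsupported selection operation")
```

With the four themes of the example above, a click on indices 1 and 3 returns emotional themes 2 and 4, matching the multi-selection case in the text.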
Step 207: when a photographing instruction is monitored, photographing according to the target emotion theme and the preview image to generate the target image.
The photographing instruction may be triggered by operations such as fully pressing the shutter button, half-pressing the shutter button, smiling-face detection, or a gesture action.
After the target emotion theme is determined, the target emotion theme and the preview image can be combined, and when a photographing instruction is monitored, photographing is performed according to the combined target emotion theme and the preview image, so that a target image is generated.
The specific process of synthesizing the target image is described in detail in the following preferred embodiment.
In a preferred embodiment of the present invention, the step 207 may include:
Substep S1: displaying a first side of the preview image on the display interface according to a page switching operation performed by the user on the preview image, wherein a second side of the preview image is used for showing the content corresponding to the preview image;
Substep S2: fusing the target emotion theme to the first side to obtain a fused preview image;
Substep S3: when a photographing instruction is monitored, photographing according to the fused preview image to generate the target image.
In this embodiment of the present invention, after the target emotion theme is determined, the user can click the preview image on the display interface of the mobile terminal to perform the page switching operation on the preview image, so that the first side of the preview image is displayed on the display interface of the mobile terminal.
The first side of the preview image is the reverse side of the preview image, while the front side (i.e., the second side) displays the content corresponding to the preview image, that is, the scenery, figures, and the like corresponding to the photographed object. After the first side of the preview image is displayed on the display interface, the target emotion theme can be fused with the first side of the preview image to obtain a fused preview image.
When a photographing instruction is monitored, photographing can be carried out according to the fused preview image, so that a target image can be generated, and information related to the emotion of the user is displayed on the first side of the obtained target image.
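Sub-steps S1–S3 can be sketched with the two-sided preview modeled as a plain dictionary. This is only a structural illustration under that modeling assumption; the patent does not specify how the two sides are represented:

```python
# Two-sided preview: the second side carries the captured scene, the
# first (reverse) side starts out blank. Field names are hypothetical.
def flip_to_first_side(preview):
    """S1: the page-switching operation shows the first side."""
    preview["visible_side"] = "first"
    return preview

def fuse_theme(preview, target_theme):
    """S2: fuse the target emotion theme onto the first side."""
    preview["first_side"] = {"emotion_theme": target_theme}
    return preview

def capture(preview):
    """S3: on a photographing instruction, generate the target image."""
    return {"first_side": preview["first_side"],
            "second_side": preview["second_side"]}

preview = {"second_side": "scene content", "first_side": None,
           "visible_side": "second"}
target = capture(fuse_theme(flip_to_first_side(preview), "rain theme"))
```

The resulting `target` keeps the photographed content on its second side while its first side carries the emotion-related information, matching the description above.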
According to this embodiment of the present invention, the emotion information of the user is synthesized with the captured image, so that the photograph records the user's mood at the moment of shooting; when the user browses the pictures years later, the user can feel as if present at the scene again, which improves the use experience.
After the target image is generated, step 208 is performed.
Step 208: acquiring the emotion type matched with the emotion information.
After the target image is generated, the corresponding emotion type, such as happiness, anger, sadness, fright, terror, or love, can be acquired according to the emotion information of the user.
After obtaining the emotion type matching the emotion information, step 209 is performed.
Step 209: determining a classification result corresponding to the target image according to the emotion type.
After the emotion type corresponding to the emotion information of the user is determined, the target image may be classified according to the emotion type, such as sadness, joy, or cheerfulness.
The main attributes of the target image, namely happiness, anger, sadness, fright, terror, love, and the like, are determined by the result obtained from data analysis, and the user can classify the photographed target images according to these attributes, which facilitates management.
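Steps 208–209 amount to grouping captured images by their matched emotion type. A minimal sketch, assuming each target image is a record carrying its emotion type (the record fields are illustrative):

```python
from collections import defaultdict

# Group target images into albums keyed by emotion type, following the
# example types in the text (happiness, sadness, and so on).
def classify_by_emotion(images):
    """Map each image to a classification bucket by its emotion type."""
    albums = defaultdict(list)
    for image in images:
        albums[image["emotion_type"]].append(image["name"])
    return dict(albums)
```

The gallery application could then present one album per emotion type, which is the management convenience the embodiment describes.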
The photographing method provided by this embodiment of the present invention has the beneficial effects of the first embodiment and, in addition, can classify the photographed target images according to the emotion types corresponding to the user's emotions, thereby facilitating image management.
Example Three
Referring to fig. 3, a schematic structural diagram of a photographing apparatus provided in an embodiment of the present invention is shown, where the photographing apparatus may be applied to a mobile terminal, and specifically may include:
a preview image collecting module 310, configured to collect a preview image; a feature data obtaining module 320, configured to obtain feature data of a preset dimension corresponding to the preview image; the emotion information determining module 330 is configured to determine emotion information of the user according to the feature data; and the target image generation module 340 is configured to photograph according to the emotion information and the preview image when a photographing instruction is monitored, and generate a target image.
Preferably, the preset dimensions include: at least one of a photographing time dimension, a photographing position dimension, a photographing user dimension, and a photographing environment dimension.
The photographing device provided by this embodiment of the present invention collects a preview image, acquires feature data of a preset dimension corresponding to the preview image, determines emotion information of the user according to the feature data, and then, when a photographing instruction is monitored, photographs according to the emotion information and the preview image to generate a target image. By adding an emotion label to the photograph, the photograph records not only the moment of shooting but also the mood in which it was taken; when the pictures are browsed years later, the viewer can feel as if present at the scene again, which improves the use experience.
Example Four
Referring to fig. 4, a schematic structural diagram of a photographing apparatus provided in an embodiment of the present invention is shown, where the photographing apparatus may be applied to a mobile terminal, and specifically may include:
a preview image acquisition module 410, configured to acquire a preview image; a feature data acquisition module 420, configured to acquire feature data of a preset dimension corresponding to the preview image; an emotion information determination module 430, configured to determine emotion information of the user according to the feature data; an emotion theme determination module 440, configured to determine at least one matched emotion theme according to the emotion information, wherein the emotion themes comprise at least one of a color theme, an arrangement style theme, and an animation theme; an emotion theme display module 450, configured to display each emotion theme on a display interface of the mobile terminal; a target emotion theme determination module 460, configured to determine a target emotion theme according to the selection operation of the user; a target image generation module 470, configured to photograph according to the emotion information and the preview image when a photographing instruction is monitored, to generate a target image; an emotion type acquisition module 480, configured to acquire an emotion type matched with the emotion information; and a classification result determination module 490, configured to determine a classification result corresponding to the target image according to the emotion type.
Preferably, the target image generation module 470 includes: a target image generation sub-module 4701, configured to photograph according to the target emotion theme and the preview image when a photographing instruction is monitored, to generate the target image.
Preferably, the target image generation sub-module 4701 includes: a first side display sub-module, configured to display the first side of the preview image on the display interface according to the page switching operation performed by the user on the preview image, wherein the second side of the preview image is used for showing the content corresponding to the preview image; an emotion theme fusion sub-module, configured to fuse the target emotion theme to the first side to obtain a fused preview image; and a target image generation sub-module, configured to photograph according to the fused preview image when a photographing instruction is monitored, to generate the target image.
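The module chain of fig. 4 can be summarized as one function-composition sketch. All function names and the injected callables are illustrative stand-ins for the numbered modules, not the patent's own implementation:

```python
# End-to-end sketch of the module chain: feature extraction, emotion
# determination, theme matching, theme selection, image generation.
def photograph(preview, feature_extractor, emotion_model, theme_map, pick):
    features = feature_extractor(preview)          # feature data module
    emotion = emotion_model(features)              # emotion information module
    themes = theme_map.get(emotion, ["default"])   # emotion theme module
    target_theme = pick(themes)                    # target theme selection
    return {"image": preview, "theme": target_theme,
            "emotion_type": emotion}               # target image + class
```

Each stage is injected as a callable, mirroring the way the embodiment factors the pipeline into separate modules that could be swapped independently.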
The photographing device provided by this embodiment of the present invention has the beneficial effects of the device in the third embodiment and, in addition, can classify the photographed target images according to the emotion types corresponding to the user's emotions, thereby facilitating image management.
Example Five
Referring to fig. 5, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510 for acquiring a preview image; acquiring feature data of a preset dimension corresponding to the preview image; determining emotion information of the user according to the characteristic data; and when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image.
In this embodiment of the present invention, a preview image is collected, feature data of a preset dimension corresponding to the preview image is acquired, emotion information of the user is determined according to the feature data, and then, when a photographing instruction is monitored, photographing is performed according to the emotion information and the preview image to generate a target image. By adding an emotion label to the photograph, the photograph records not only the moment of shooting but also the mood in which it was taken; when the pictures are browsed years later, the viewer can feel as if present at the scene again, which improves the use experience.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending/receiving process or a call: specifically, it receives downlink data from a base station and sends the data to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program that is stored in the memory 509 and can be run on the processor 510, and when the computer program is executed by the processor 510, the processes of the foregoing photographing method embodiment are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the above-mentioned photographing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the descriptions thereof are omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A photographing method is applied to a mobile terminal and is characterized by comprising the following steps:
collecting a preview image;
acquiring feature data of a preset dimension corresponding to the preview image;
determining emotion information of the user according to the characteristic data;
when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image;
wherein, after the step of determining the emotion information of the user according to the feature data, the method further comprises:
determining at least one matched emotional theme according to the emotional information; wherein the emotional themes comprise: at least one of a color theme, an arrangement style theme, and an animation theme;
displaying each emotion theme on a display interface of the mobile terminal;
determining a target emotion theme according to the selection operation of the user;
when a photographing instruction is monitored, photographing according to the emotion information and the preview image to generate a target image, wherein the photographing step comprises the following steps:
when a photographing instruction is monitored, photographing according to the target emotion theme and the preview image to generate the target image;
wherein a first side of the target image shows the target emotional theme and a second side of the target image shows the content of the preview image;
when a photographing instruction is monitored, photographing according to the target emotion theme and the preview image to generate a target image, wherein the step comprises the following steps of:
displaying a first side of the preview image on the display interface according to page switching operation executed by the user on the preview image; the second surface of the preview image is used for showing the content corresponding to the preview image;
fusing the target emotion theme to the first surface to obtain a fused preview image;
and when a photographing instruction is monitored, photographing according to the fused preview image to generate the target image.
2. The method of claim 1, wherein after the step of photographing according to the emotion information and the preview image and generating a target image when the photographing instruction is monitored, the method further comprises:
acquiring emotion types matched with the emotion information;
and determining a classification result corresponding to the target image according to the emotion type.
3. The method of claim 1, wherein the preset dimensions comprise: at least one of a photographing time dimension, a photographing position dimension, a photographing user dimension, and a photographing environment dimension.
4. A photographing apparatus, applied to a mobile terminal, characterized by comprising:
the preview image acquisition module is used for acquiring a preview image;
the characteristic data acquisition module is used for acquiring characteristic data of a preset dimension corresponding to the preview image;
the emotion information determining module is used for determining emotion information of the user according to the characteristic data;
the target image generation module is used for photographing according to the emotion information and the preview image when a photographing instruction is monitored, and generating a target image;
the emotion theme determination module is used for determining at least one matched emotion theme according to the emotion information; wherein the emotional themes comprise: at least one of a color theme, an arrangement style theme, and an animation theme;
the emotion theme display module is used for displaying each emotion theme on a display interface of the mobile terminal;
the target emotion theme determining module is used for determining a target emotion theme according to the selection operation of the user;
the target image generation module includes:
the target image generation sub-module is used for photographing according to the target emotion theme and the preview image when a photographing instruction is monitored, and generating the target image;
wherein a first side of the target image shows the target emotional theme and a second side of the target image shows the content of the preview image;
the target image generation sub-module includes:
the first side display sub-module is used for displaying the first side of the preview image on the display interface according to the page switching operation executed by the user on the preview image; the second surface of the preview image is used for showing the content corresponding to the preview image;
the emotion theme fusion submodule is used for fusing the target emotion theme to the first surface to obtain a fused preview image;
and the target image generation sub-module is used for photographing according to the fused preview image when a photographing instruction is monitored, and generating the target image.
5. The apparatus of claim 4, further comprising:
the emotion type acquisition module is used for acquiring the emotion types matched with the emotion information;
and the classification result determining module is used for determining a classification result corresponding to the target image according to the emotion type.
6. The apparatus of claim 4, wherein the preset dimensions comprise: at least one of a photographing time dimension, a photographing position dimension, a photographing user dimension, and a photographing environment dimension.
7. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the photographing method according to any one of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the photographing method according to any one of claims 1 to 3.
CN201811643450.7A 2018-12-29 2018-12-29 Photographing method and device Active CN109660728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811643450.7A CN109660728B (en) 2018-12-29 2018-12-29 Photographing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811643450.7A CN109660728B (en) 2018-12-29 2018-12-29 Photographing method and device

Publications (2)

Publication Number Publication Date
CN109660728A CN109660728A (en) 2019-04-19
CN109660728B true CN109660728B (en) 2021-01-08

Family

ID=66117014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811643450.7A Active CN109660728B (en) 2018-12-29 2018-12-29 Photographing method and device

Country Status (1)

Country Link
CN (1) CN109660728B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307816A (en) * 2019-07-29 2021-02-02 北京地平线机器人技术研发有限公司 In-vehicle image acquisition method and device, electronic equipment and storage medium
CN110442867A (en) * 2019-07-30 2019-11-12 腾讯科技(深圳)有限公司 Image processing method, device, terminal and computer storage medium
CN111191068A (en) * 2019-12-27 2020-05-22 上海擎感智能科技有限公司 Mood statistical method, system, medium and device based on picture
CN111405180A (en) * 2020-03-18 2020-07-10 惠州Tcl移动通信有限公司 Photographing method, photographing device, storage medium and mobile terminal
CN112637521B (en) * 2020-12-31 2023-05-09 山西鑫博睿科技有限公司 Film making system based on photographed pictures and working method thereof
CN115908597A (en) * 2021-09-30 2023-04-04 北京字跳网络技术有限公司 Image processing method and device
CN115209048A (en) * 2022-05-19 2022-10-18 广东逸动科技有限公司 Image data processing method and device, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103888658A (en) * 2012-12-21 2014-06-25 索尼公司 Information Processing Device And Recording Medium
CN106406787A (en) * 2015-07-30 2017-02-15 木本股份有限公司 Information providing systems, computer programs, and printed matter
CN106446880A (en) * 2015-07-30 2017-02-22 木本股份有限公司 Information providing system and computer program

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR101041366B1 (en) * 2007-11-02 2011-06-14 주식회사 코아로직 Apparatus for digital image stabilizing using object tracking and Method thereof
US9678322B2 (en) * 2013-07-31 2017-06-13 Paul Messier System and method for dating textured gelatin silver paper
CN103533241B (en) * 2013-10-14 2017-05-10 厦门美图网科技有限公司 Photographing method of intelligent filter lens
CN107888823A (en) * 2017-10-30 2018-04-06 维沃移动通信有限公司 One kind shooting processing method, apparatus and system

Also Published As

Publication number Publication date
CN109660728A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN109660728B (en) Photographing method and device
CN109145142B (en) Management method and terminal for shared information of pictures
CN110557565B (en) Video processing method and mobile terminal
CN107846352B (en) Information display method and mobile terminal
CN108628985B (en) Photo album processing method and mobile terminal
CN110989847B (en) Information recommendation method, device, terminal equipment and storage medium
CN109508398B (en) Photo classification method and terminal device
CN108984143B (en) Display control method and terminal equipment
CN109257649B (en) Multimedia file generation method and terminal equipment
CN109213416A (en) Display information processing method and mobile terminal
CN109448069B (en) Template generation method and mobile terminal
CN109167884A (en) Service method and device based on user speech
CN111372029A (en) Video display method and device and electronic equipment
CN109308178A (en) Voice drafting method and terminal device
CN108334196A (en) Document handling method and mobile terminal
CN107943842A (en) Photo tag generation method and mobile terminal
CN108763475B (en) Recording method, recording device and terminal equipment
CN109815462A (en) Document creation method and terminal device
CN111752448A (en) Information display method and device and electronic equipment
CN109669710B (en) Note processing method and terminal
CN109739423B (en) Alarm clock setting method and flexible terminal
CN110750198A (en) Expression sending method and mobile terminal
CN111064888A (en) Prompting method and electronic equipment
CN108710521B (en) Note generation method and terminal equipment
CN109783722A (en) Content output method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant