CN112822389B - Photograph shooting method, photograph shooting device and storage medium - Google Patents
- Publication number
- CN112822389B (application CN201911129149.9A)
- Authority
- CN
- China
- Prior art keywords
- picture
- scene
- photo
- shot
- gallery
- Prior art date: 2019-11-18
- Legal status: Active
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/60—Control of cameras or camera modules
          - H04N23/62—Control of parameters via user interfaces
          - H04N23/63—Control of cameras or camera modules by using electronic viewfinders
            - H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to a photo taking method, a photo taking apparatus, and a storage medium. The method includes: during photo taking, determining, based on the picture elements displayed in the currently taken picture, the photos in the gallery associated with the currently taken picture; and displaying, in the currently taken picture, scene prompt information associated with its picture elements according to those associated photos. In this way, interaction with the user can be driven by the picture currently being taken, and the user's enthusiasm for taking photos is increased.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a photo taking method, a photo taking apparatus, and a storage medium.
Background
With the development of technology, taking photos with a smart terminal has become an everyday activity, and the photos accumulate day by day, so that a large number of photos end up stored on the smart terminal.
At present, a smart terminal can only attach a few marks to a currently taken photo to distinguish people and scenes; the interaction between photos and the user is limited, and user stickiness is lacking.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a picture taking method, a picture taking apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a photograph taking method including: in the process of photo shooting, determining a photo related to the current shot photo in the gallery based on the picture elements displayed in the current shot photo; and displaying scene prompt information associated with the picture element of the currently-taken picture in the currently-taken picture according to the picture associated with the currently-taken picture in the gallery.
In an example, the photo taking method further comprises: adding a scene tag to the currently taken picture based on the picture elements displayed in the currently taken picture, wherein the scene tag identifies a scene obtained by performing scene recognition on the currently taken picture.
In one example, determining the photo in the gallery associated with the currently taken picture includes: retrieving, based on the scene tag, captured photos in the gallery associated with the scene tag; and determining the scene prompt information according to the retrieved captured photos.
In one example, determining the scene prompt information according to the retrieved captured photos comprises: determining scene prompt information characterizing the picture elements of the captured photos according to the picture elements of the retrieved captured photos; and/or determining scene prompt information containing the time difference between the shooting time of the currently taken picture and the shooting time of a captured photo according to the retrieved shooting time of that captured photo; and/or determining, according to the retrieved captured photos, photos with other scene tags associated with the captured photos, and determining scene prompt information characterizing those other scene tags.
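As a non-limiting illustration of how these three optional branches might be combined into one hint-determination step, a minimal Python sketch follows; the Photo record, field names, and prompt wording are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set


@dataclass
class Photo:
    """Hypothetical record for a photo in the gallery (not from the disclosure)."""
    elements: Set[str]        # picture elements recognized in the photo
    scene_tags: Set[str]      # scene tags added when the photo was taken
    taken_at: datetime        # shooting time


def determine_scene_hints(current: Photo, retrieved: List[Photo]) -> List[str]:
    """Combine the three optional branches described above into a list of hint strings."""
    hints: List[str] = []
    if not retrieved:
        return hints

    # Branch 1: hints characterizing picture elements of the retrieved captured photos.
    retrieved_elements = {e for p in retrieved for e in p.elements}
    if "birthday cake" in retrieved_elements:
        hints.append("Wish you a happy birthday!")

    # Branch 2: a hint containing the time difference to the most recent retrieved photo.
    latest = max(retrieved, key=lambda p: p.taken_at)
    days = (current.taken_at - latest.taken_at).days
    hints.append(f"It has been {days} days since a similar photo was taken.")

    # Branch 3: hints characterizing other scene tags associated with the retrieved photos.
    other_tags = {t for p in retrieved for t in p.scene_tags} - current.scene_tags
    for tag in sorted(other_tags):
        hints.append(f"Also related to your '{tag}' photos.")

    return hints
```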
In an example, the scenes include one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a photo taking apparatus including: the determining unit is configured to determine a photo in the gallery associated with the current shot photo based on the picture element displayed in the current shot photo in the photo shooting process; and the prompting unit is configured to display scene prompting information associated with the picture element of the current shot picture in the current shot picture according to the picture associated with the current shot picture in the gallery.
In one example, the photo taking apparatus further comprises: the setting unit is configured to add a scene label to the current shot picture based on the picture elements displayed in the current shot picture, wherein the scene label is used for identifying a scene obtained by scene recognition of the current shot picture.
In one example, the photo taking apparatus further includes: a retrieval unit configured to retrieve, based on the scene tag, a captured photograph associated with the scene tag in the gallery; and determining scene prompt information according to the retrieved shot pictures.
In one example, the retrieval unit determines the scene prompt information according to the retrieved captured photos in the following way: determining scene prompt information characterizing the picture elements of the captured photos according to the picture elements of the retrieved captured photos; and/or determining scene prompt information containing the time difference between the shooting time of the currently taken picture and the shooting time of a captured photo according to the retrieved shooting time of that captured photo; and/or determining, according to the retrieved captured photos, photos with other scene tags associated with the captured photos, and determining scene prompt information characterizing those other scene tags.
In an example, the scenes include one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: during photo taking, photos associated with the currently taken picture are determined in the gallery based on the picture elements displayed in the currently taken picture, and scene prompt information associated with those picture elements is displayed in the currently taken picture according to the associated photos, so that interaction with the user can be driven by the picture currently being taken and the user's enthusiasm for taking photos is increased.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a method of taking a picture in accordance with an exemplary embodiment.
FIG. 2 is a flowchart illustrating a method of taking a picture in accordance with an exemplary embodiment.
FIG. 3 is a flow chart illustrating a method of taking a photograph in accordance with an exemplary embodiment.
FIG. 4 is a diagram illustrating a scene hint information display interface, according to an example embodiment.
FIG. 5 is a diagram illustrating a scene hint information display interface, according to an example embodiment.
FIG. 6 is a diagram illustrating a scene hint information display interface, according to an example embodiment.
FIG. 7 is a block diagram illustrating a photo capture device according to an exemplary embodiment.
FIG. 8 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
The technical solution of the exemplary embodiments of the present disclosure may be applied to scenarios in which a camera on a terminal is used for shooting. In the exemplary embodiments described below, a terminal is sometimes also referred to as an intelligent terminal device. The terminal may be a mobile terminal, also referred to as User Equipment (UE), a Mobile Station (MS), and so on; it is a device that provides voice and/or data connectivity to a user, or a chip disposed in such a device, for example a handheld device or a vehicle-mounted device with a wireless connection function. Examples of terminals include: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Mobile Internet Device (MID), a wearable device, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a wireless terminal in industrial control, in unmanned driving, in remote operation, in a smart grid, in transportation safety, in a smart city, or in a smart home, and the like.
Fig. 1 is a flowchart illustrating a photo taking method according to an exemplary embodiment. As shown in Fig. 1, the method is applied to a terminal and includes the following steps.
In step S11, during photo taking, a photo in the gallery associated with the currently taken picture is determined based on the picture elements displayed in the currently taken picture.
In the present disclosure, the photo in the gallery associated with the currently taken picture may be determined by identifying, during photo taking, the picture elements displayed in the currently taken picture with an image recognition algorithm, retrieving photos in the gallery according to the identified picture elements, and thereby determining the photos in the gallery associated with the currently taken picture.
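A minimal sketch of this association step, under the assumption that the recognized picture elements of each gallery photo are already stored, might look as follows; recognize_elements stands in for whatever image recognition algorithm is used, and all names here are hypothetical.

```python
from typing import Callable, Dict, List, Set


def find_associated_photos(
    current_frame: object,                         # raw image data of the current capture
    gallery: Dict[str, Set[str]],                  # gallery photo id -> its recognized picture elements
    recognize_elements: Callable[[object], Set[str]],
) -> List[str]:
    """Return ids of gallery photos sharing at least one picture element with the current capture.

    recognize_elements stands in for the image recognition algorithm mentioned above;
    its implementation is outside the scope of this sketch.
    """
    current_elements = recognize_elements(current_frame)
    return [
        photo_id
        for photo_id, elements in gallery.items()
        if elements & current_elements             # at least one shared picture element
    ]
```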
For example, if the picture element displayed in the currently taken picture is a group photo, the photos in the gallery associated with the currently taken picture may be photos in the gallery that contain the same faces as the currently taken picture.
In step S12, scene prompt information associated with the picture elements of the currently taken picture is displayed in the currently taken picture according to the photo in the gallery associated with the currently taken picture.
The scene prompt information referred to in the present disclosure may be information that prompts the user about the correlation between the currently taken picture and the photos stored in the gallery, obtained after identifying the picture elements of the currently taken picture and the photos in the gallery associated with it. After the photos in the gallery associated with the currently taken picture are determined, the scene prompt information associated with the picture elements of the currently taken picture may be displayed in the currently taken picture.
For example, during photo taking, the picture elements displayed in the currently taken picture are identified as a group photo; the photos in the gallery are then retrieved according to the identified group photo, and the gallery photos associated with the current group photo are determined. Based on those associated photos, scene prompt information associated with the picture elements of the currently taken picture is displayed in the currently taken picture.
In the exemplary embodiment of the present disclosure, during photo taking, photos associated with the currently taken picture are determined in the gallery based on the picture elements displayed in the currently taken picture, and scene prompt information associated with those picture elements is displayed in the currently taken picture according to the associated photos, so that interaction with the user can be driven by the picture currently being taken and the user's enthusiasm for taking photos is increased.
Fig. 2 is a flowchart illustrating a photo taking method according to an exemplary embodiment. As shown in Fig. 2, the method includes steps S21-S23. Steps S21 and S23 are similar to steps S11 and S12 in Fig. 1, respectively, and are not described here again.
In step S22, a scene tag is added to the currently taken picture based on the picture element displayed in the currently taken picture, and the scene tag is used for identifying a scene obtained by scene recognition of the currently taken picture.
The scenes referred to in the present disclosure may include scenes of shooting subjects, scenes of shooting places, and the like. Scene recognition may be achieved by using an image recognition algorithm to identify the picture elements displayed in the taken picture. In the present disclosure, a scene vocabulary library may be preset; the picture elements displayed in the currently taken picture are identified against this preset scene vocabulary library, and the scene vocabulary matching the picture is obtained. The scene vocabulary library may be stored in any available storage space, such as the local mobile terminal or a cloud server, and the embodiments of the present disclosure are not limited in this respect.
In one embodiment, the scenes may include one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene. Depending on the picture elements displayed in the currently taken picture, one or more scenes may be identified for it.
For example, if the picture element of the currently taken picture is food, the element is identified by an image recognition algorithm and the currently taken picture is recognized as a food scene. For another example, if the currently taken picture contains both a group photo and a dinner party, the picture elements are identified by an image recognition algorithm and the currently taken picture is recognized as both a group photo scene and a food scene.
In the present disclosure, after scene recognition is performed on the currently taken picture, a scene tag may further be added to it. Adding scene tags to taken pictures allows the captured photos to be classified and makes it easier for the user to search for them. Specifically, a correspondence can be established between the currently taken picture and its recognized scene, so that when the user needs to search for a photo it can be found quickly through the scene tag, or, when the user takes a photo, the scene tag of the currently taken picture serves as the material for obtaining the scene prompt information.
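The sketch below illustrates one possible shape of this step, assuming a preset scene vocabulary library that maps picture elements to scene tags and a simple in-memory index that keeps the tag-to-photo correspondence; the vocabulary entries and class names are illustrative assumptions, not part of the disclosure.

```python
from collections import defaultdict
from typing import Dict, List, Set

# Hypothetical preset scene vocabulary library: scene tag -> picture elements that imply it.
SCENE_VOCABULARY: Dict[str, Set[str]] = {
    "birthday": {"birthday cake", "birthday hat", "candles"},
    "group photo": {"multiple faces"},
    "fitness": {"dumbbell", "treadmill", "yoga mat"},
    "food": {"dish", "dessert", "drink"},
}


def recognize_scenes(elements: Set[str]) -> Set[str]:
    """Map the recognized picture elements of a photo to one or more scene tags."""
    return {scene for scene, vocab in SCENE_VOCABULARY.items() if vocab & elements}


class SceneIndex:
    """Keeps the correspondence between scene tags and captured photos for fast retrieval."""

    def __init__(self) -> None:
        self._photos_by_tag: Dict[str, List[str]] = defaultdict(list)

    def add(self, photo_id: str, elements: Set[str]) -> Set[str]:
        """Tag a newly captured photo and record the tag-to-photo correspondence."""
        tags = recognize_scenes(elements)
        for tag in tags:
            self._photos_by_tag[tag].append(photo_id)
        return tags

    def lookup(self, tag: str) -> List[str]:
        """Return the captured photos previously associated with the given scene tag."""
        return list(self._photos_by_tag.get(tag, []))
```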
In the exemplary embodiment of the present disclosure, a scene tag may be added to the currently taken picture during photo taking. With the scene tag of the currently taken picture, the photos associated with it can be found quickly among the captured photos, and the scene prompt information of the currently taken picture can then be determined quickly.
Fig. 3 is a flowchart illustrating a photo taking method according to an exemplary embodiment, the photo taking method including steps S31-S34, as shown in fig. 3. Steps S31 to S32 are similar to the steps S21 and S22 in fig. 2, respectively, and step S34 is similar to the step S23 in fig. 2, and are not described again here.
In step S33, a shot picture associated with the scene tag of the current shot picture is retrieved from the gallery based on the scene tag of the current shot picture, and scene prompt information is determined from the retrieved shot picture.
In the present disclosure, the taken picture associated with the currently taken picture may be a picture in the same scene as the currently taken picture, or may be a taken picture in a scene associated with the scene of the currently taken picture.
In one embodiment, the scene prompt information of the currently taken picture may be determined by retrieving the captured photos in the gallery and obtaining the captured photos associated with the scene tag of the currently taken picture.
In one embodiment, based on the currently taken picture, captured photos associated with it that contain elements such as a birthday cake or a birthday hat are retrieved from the gallery, and the scene prompt information of the currently taken picture is determined accordingly.
In one embodiment, based on the group photo scene tag of the currently taken picture, captured photos associated with it are retrieved from the group photo scene of the gallery, and it is checked whether a group photo in the gallery contains the same face as the currently taken picture. If such a face exists, corresponding scene prompt information is given based on a preset rule. The preset rule may, for example, be based on the capture time of the most recent group photo in the gallery containing the same face as the currently taken picture. For example, if the time elapsed since that most recent group photo is less than a preset time threshold, one scene prompt message is given; if it is greater than the preset time threshold, another scene prompt message is given, such as "It has been (approximate time) since your last group photo".
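A minimal sketch of such a preset rule is given below; the 30-day threshold and the prompt wording are hypothetical placeholders, since the disclosure does not fix concrete values.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical value for the preset time threshold; the disclosure leaves it open.
GROUP_PHOTO_THRESHOLD = timedelta(days=30)


def group_photo_hint(now: datetime, last_group_photo_at: Optional[datetime]) -> Optional[str]:
    """Return a scene prompt based on how long ago the same face last appeared in a group photo."""
    if last_group_photo_at is None:
        return None                                # no matching face found in the gallery
    elapsed = now - last_group_photo_at
    if elapsed < GROUP_PHOTO_THRESHOLD:
        return "Another group photo so soon!"
    # Coarse, approximate wording for longer gaps, as in the example of Fig. 5.
    months = max(1, elapsed.days // 30)
    return f"It has been about {months} month(s) since your last group photo."
```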
In one embodiment, based on the food scene tag of the currently taken picture, the gallery is searched for a fitness scene tag; if a fitness scene tag exists in the gallery, the calories of the food in the currently taken picture are looked up and corresponding scene prompt information is given. In the present disclosure, the calories may be looked up in two ways. On one hand, the food in the currently taken picture is looked up in a preset food calorie bank, and the calorie value, or a sentence related to it, is used as the scene prompt information. On the other hand, to handle foods that the preset calorie bank does not cover, the system may cooperate with Internet search providers and look up the calories of the food in the currently taken picture through an interface opened by the partner; the result is displayed in the currently taken picture as scene prompt information in the form of a brief introduction. If the user wants to learn more, for example about calorie intake, detailed information about the currently photographed food can be viewed by clicking a link.
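The following sketch shows one possible form of this two-stage lookup; the sample calorie values, the 250 kcal cut-off, and the caller-supplied online search function are all assumptions, since the disclosure names no concrete data set or external API.

```python
from typing import Callable, Dict, Optional

# Hypothetical preset food-calorie bank (kcal per typical serving); values are examples only.
FOOD_CALORIE_BANK: Dict[str, int] = {
    "pizza": 285,
    "ice cream": 207,
    "salad": 150,
}


def calorie_hint(
    food_name: str,
    search_calories_online: Optional[Callable[[str], Optional[int]]] = None,
) -> Optional[str]:
    """Look the food up in the local calorie bank first, then fall back to an external interface."""
    calories = FOOD_CALORIE_BANK.get(food_name)
    if calories is None and search_calories_online is not None:
        # Fallback for foods the preset bank does not cover; the partner search
        # interface is supplied by the caller because the disclosure names no concrete API.
        calories = search_calories_online(food_name)
    if calories is None:
        return None
    if calories >= 250:                            # hypothetical cut-off for the "too high" prompt
        return f"{food_name}: about {calories} kcal. Calories are too high!"
    return f"{food_name}: about {calories} kcal."
```

Passing the external search as a function keeps the sketch independent of any particular partner interface.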
The following describes, by way of example and in connection with practical applications, how the scene prompt information is determined based on the retrieved captured photos.
FIG. 4 is a diagram illustrating a display interface for determining scene prompt information characterizing the picture elements of captured photos based on the picture elements of the retrieved captured photos, according to an exemplary embodiment. In FIG. 4, based on the person shown in the currently taken picture, captured photos associated with it that contain elements such as a birthday cake and a birthday hat are retrieved from the gallery, and a scene prompt message such as "Wish you a happy birthday" may be displayed in the currently taken picture.
FIG. 5 is a diagram illustrating a display interface for determining scene prompt information containing the time difference between the shooting time of the currently taken picture and the shooting time of a captured photo, based on the retrieved shooting time of that captured photo, according to an exemplary embodiment. In FIG. 5, based on the group photo scene tag of the currently taken picture, captured photos associated with it are retrieved from the group photo scene of the gallery, and the most recent group photo containing the same face as the currently taken picture is obtained. According to the capture time of that group photo, the scene prompt information is determined to be, for example, "It has been half a year since your last group photo".
FIG. 6 is a diagram illustrating a display interface for determining photos with other scene tags associated with the captured photos based on the retrieved captured photos, and determining scene prompt information characterizing those other scene tags, according to an exemplary embodiment. In FIG. 6, based on the food scene tag of the currently taken picture, the gallery is searched for a fitness scene tag; after it is determined that a fitness scene tag exists in the gallery, the calories of the food in the currently taken picture are looked up and a scene prompt of "Calories are too high" is given.
In the exemplary embodiment of the present disclosure, a scene tag is added to the currently taken picture, and captured photos associated with that scene tag are retrieved from the gallery based on it, so that the photos associated with the currently taken picture can be found accurately and quickly and the scene prompt information of the currently taken picture can be determined quickly, which further improves the interaction with the user and increases the user's enthusiasm for taking photos.
FIG. 7 is a block diagram of a photo capture device, shown in accordance with an exemplary embodiment. Referring to fig. 7, the photo taking apparatus includes a determination unit 701 and a presentation unit 702.
The determining unit 701 is configured to determine, during photo taking, the photo in the gallery associated with the currently taken picture based on the picture elements displayed in the currently taken picture; and the prompting unit 702 is configured to display, in the currently taken picture, scene prompt information associated with the picture elements of the currently taken picture according to the photo in the gallery associated with the currently taken picture.
In one example, the photo taking apparatus further comprises: the setting unit 703 is configured to add a scene tag to the currently captured picture based on the picture element displayed in the currently captured picture, where the scene tag is used to identify a scene obtained by performing scene recognition on the currently captured picture.
In one example, the photo taking apparatus further includes: a retrieval unit 704 configured to retrieve the captured photograph associated with the scene tag in the gallery based on the scene tag; and determining scene prompt information according to the retrieved shot pictures.
In one example, the retrieving unit 704 determines the scene prompt information according to the retrieved captured photos in the following manner: determining scene prompt information characterizing the picture elements of the captured photos according to the picture elements of the retrieved captured photos; and/or determining scene prompt information containing the time difference between the shooting time of the currently taken picture and the shooting time of a captured photo according to the retrieved shooting time of that captured photo; and/or determining, according to the retrieved captured photos, photos with other scene tags associated with the captured photos, and determining scene prompt information characterizing those other scene tags.
In an example, the scene includes one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an apparatus 800 for photo taking in accordance with an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, audio component 810 includes a Microphone (MIC) configured to receive external audio signals when apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the apparatus 800. For example, the sensor assembly 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components such as the display and keypad of the apparatus 800; it may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that the use of "a plurality" in this disclosure means two or more, and other terms are analogous. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another, and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," etc. are used interchangeably throughout. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further appreciated that while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of taking a picture, the method comprising:
in the process of photo shooting, determining a shot photo related to the current shot photo in the gallery based on the picture elements displayed in the current shot photo;
displaying scene prompt information associated with a picture element of the current shot picture in the current shot picture according to the shot picture associated with the current shot picture in the gallery, wherein the scene prompt information is used for prompting the association between the current shot picture and the shot picture, and the scene prompt information comprises a time difference between the shooting time of the current shot picture and the shooting time of the shot picture.
2. A picture taking method as claimed in claim 1, characterized in that the method further comprises:
adding a scene label for the current shot picture based on picture elements displayed in the current shot picture, wherein the scene label is used for identifying a scene obtained by carrying out scene recognition on the current shot picture.
3. The method of claim 2, wherein determining the picture in the gallery that is associated with the currently captured picture comprises:
retrieving, based on the scene tag, a captured photograph in a gallery associated with the scene tag;
and determining the scene prompt information according to the retrieved shot pictures.
4. A method as recited in claim 1 or 3, wherein the scene comprises one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene.
5. A picture taking apparatus, the apparatus comprising:
a determination unit configured to determine, during photo taking, a taken photo in the gallery associated with the currently taken photo based on a picture element displayed in the currently taken photo;
a prompt unit configured to display scene prompt information associated with a picture element of a currently taken photo at the currently taken photo according to the taken photo associated with the currently taken photo in the gallery, wherein the scene prompt information is used for prompting the association between the currently taken photo and the taken photo, and the scene prompt information comprises a time difference between the shooting time of the currently taken photo and the shooting time of the taken photo.
6. A picture-taking device as claimed in claim 5, characterized in that the device further comprises:
the setting unit is configured to add a scene label to the current shot picture based on picture elements displayed in the current shot picture, wherein the scene label is used for identifying a scene obtained by scene recognition of the current shot picture.
7. A picture taking device as claimed in claim 6, characterized in that the device further comprises:
a retrieval unit configured to retrieve a captured photograph associated with the scene tag in a gallery based on the scene tag;
the determination unit is further configured to:
and determining the scene prompt information according to the retrieved shot picture.
8. A picture taking device as claimed in claim 5 or 7, wherein the scenes include one or more of a birthday scene, a group photo scene, a fitness scene, and a food scene.
9. A picture taking apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the picture taking method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the photo taking method of any one of claims 1-4.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911129149.9A | 2019-11-18 | 2019-11-18 | Photograph shooting method, photograph shooting device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201911129149.9A | 2019-11-18 | 2019-11-18 | Photograph shooting method, photograph shooting device and storage medium
Publications (2)

Publication Number | Publication Date
---|---
CN112822389A | 2021-05-18
CN112822389B | 2023-02-24

Family

ID=75852602

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201911129149.9A (granted as CN112822389B, Active) | Photograph shooting method, photograph shooting device and storage medium | 2019-11-18 | 2019-11-18

Country Status (1)

Country | Link
---|---
CN | CN112822389B
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104038693A (en) * | 2014-05-23 | 2014-09-10 | 小米科技有限责任公司 | Method and device for taking photos |
CN105009128A (en) * | 2013-02-28 | 2015-10-28 | 索尼公司 | Information processing device and storage medium |
CN105824859A (en) * | 2015-01-09 | 2016-08-03 | 中兴通讯股份有限公司 | Picture classification method and device as well as intelligent terminal |
CN109286754A (en) * | 2018-09-30 | 2019-01-29 | 联想(北京)有限公司 | Information processing method and electronic equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007104303A (en) * | 2005-10-04 | 2007-04-19 | Konica Minolta Photo Imaging Inc | Program for associating date and time information |
RU2463663C2 (en) * | 2007-05-31 | 2012-10-10 | Панасоник Корпорэйшн | Image capturing apparatus, additional information providing and additional information filtering system |
US10114838B2 (en) * | 2012-04-30 | 2018-10-30 | Dolby Laboratories Licensing Corporation | Reference card for scene referred metadata capture |
WO2014178228A1 (en) * | 2013-04-30 | 2014-11-06 | ソニー株式会社 | Client terminal, display control method, program, and system |
JP6207415B2 (en) * | 2014-01-31 | 2017-10-04 | 株式会社バンダイ | Information providing system and information providing program |
US20160142625A1 (en) * | 2014-11-13 | 2016-05-19 | Lenovo (Singapore) Pte. Ltd. | Method and system for determining image composition attribute adjustments |
CN107705259A (en) * | 2017-09-24 | 2018-02-16 | 合肥麟图信息科技有限公司 | A kind of data enhancement methods and device under mobile terminal preview, screening-mode |
KR20190084567A (en) * | 2018-01-08 | 2019-07-17 | 삼성전자주식회사 | Electronic device and method for processing information associated with food |
Also Published As
Publication number | Publication date |
---|---|
CN112822389A (en) | 2021-05-18 |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant