CN110647635A - Image management method and electronic equipment - Google Patents

Info

Publication number
CN110647635A
CN110647635A (application CN201910937064.7A)
Authority
CN
China
Prior art keywords
target
information
image
voice
voiceprint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910937064.7A
Other languages
Chinese (zh)
Inventor
王俊方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910937064.7A
Publication of CN110647635A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/432 Query formulation
    • G06F16/433 Query formulation using audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/45 Clustering; Classification
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G10L17/22 Interactive procedures; Man-machine interfaces

Abstract

An embodiment of the invention provides an image management method and an electronic device, in the field of communication technology, aiming to solve the problem that existing photo-management methods take a single form and are strongly limited. The image management method comprises the following steps: receiving a voice input directed at a target image; in response to the voice input, adding target voice information corresponding to the voice input to the attribute information of the target image; and managing the target image according to the target voice information. The image management method in the embodiment of the invention is applied to an electronic device.

Description

Image management method and electronic equipment
Technical Field
Embodiments of the invention relate to the field of communication technology, and in particular to an image management method and an electronic device.
Background
Nowadays, people often use the camera function of their mobile terminals to capture the beautiful moments around them, so more and more such moments are preserved as permanent memories in the form of photos.
A problem that follows is that the number of photos stored on a mobile terminal grows explosively. To serve users better, mobile terminals generally manage photos by shooting location and shooting time. For example, photos taken on the same day may be compiled into a video for the user to watch; photos taken in the same city may be gathered into an album group; and so on.
However, such management considers only shooting location and shooting time, so the management of photos takes a single form and is strongly limited.
Disclosure of Invention
Embodiments of the invention provide an image management method and an electronic device, aiming to solve the problem that existing photo-management methods take a single form and are strongly limited.
To solve the above technical problem, the invention is implemented as follows:
An embodiment of the invention provides an image management method comprising the following steps: receiving a voice input directed at a target image; in response to the voice input, adding target voice information corresponding to the voice input to the attribute information of the target image; and managing the target image according to the target voice information.
An embodiment of the present invention further provides an electronic device, including: a receiving module for receiving a voice input directed at a target image; a response module for adding, in response to the voice input, target voice information corresponding to the voice input to the attribute information of the target image; and a management module for managing the target image according to the target voice information.
An embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program implements the steps of the image management method when executed by the processor.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image management method are implemented.
In this way, in the embodiment of the invention, after acquiring a target image the electronic device can receive a voice input directed at the image, record the corresponding target voice information in response, and add that information to the image's attribute information, so that the attribute information now includes a voice-information item. Compared with the time and location information already contained in the attribute information, voice information is more engaging in form and also enriches the attribute information, so the electronic device can manage the target image according to its voice information. The image management method thus becomes more varied in form and more open.
Drawings
FIG. 1 is a first flowchart of an image management method according to an embodiment of the present invention;
FIG. 2 is a first interface schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a second flowchart of an image management method according to an embodiment of the present invention;
FIG. 4 is a second interface schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 5 is a third interface schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 6 is a third flowchart of an image management method according to an embodiment of the present invention;
FIG. 7 is a fourth flowchart of an image management method according to an embodiment of the present invention;
FIG. 8 is a fourth interface schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 9 is a fifth flowchart of an image management method according to an embodiment of the present invention;
FIG. 10 is a fifth interface schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 11 is a first block diagram of an electronic device according to an embodiment of the present invention;
FIG. 12 is a second block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart of an image management method according to an embodiment of the present invention is shown. The method is applied to an electronic device and includes:
step S1: a voice input to a target image is received.
Optionally, this embodiment provides a method for attaching a voice remark to a target image. The target image may be a photo or a picture.
For example, immediately after shooting, the user can select the newly taken photo for a voice remark, so that the electronic device takes the current photo as the target image. In this case, the user is also spared from having to spend time and effort adding the voice remark later.
Illustratively, while browsing the album on the electronic device, the user selects a photo or picture for a voice remark, so that the electronic device takes the selected photo or picture as the target image.
In this embodiment, the voice input is used to record voice information. The voice input is a form of human-computer interaction; it may take a variety of operational forms and may comprise multiple operations performed by the user on the electronic device.
Step S2: in response to the voice input, target voice information corresponding to the voice input is added to the attribute information of the target image.
The attribute information of the target image may further include a shooting location, a shooting time, and the like.
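To make the idea concrete, the following minimal Python sketch models steps S1–S2: the voice remark becomes one more item alongside shooting time and location in the image's attribute information. The attribute store, field names, and file paths here are hypothetical, not the patent's actual implementation; a real device might use EXIF fields or a sidecar file instead.

```python
def add_voice_remark(image_path, voice_path, store):
    """Add target voice information to the target image's attribute info.

    `store` stands in for the device's attribute-information database
    (hypothetical); shooting time/location are the pre-existing items.
    """
    attrs = store.setdefault(image_path, {"shooting_time": None,
                                          "shooting_location": None})
    attrs["voice_info"] = {"audio": voice_path}  # the new voice-information item
    return attrs

# Remark the photo just taken, then the item can be read back or played
store = {}
attrs = add_voice_remark("IMG_0001.jpg", "remark_0001.wav", store)
```

The key design point, as in the patent, is that the voice information is stored with the image's other attributes rather than as a separate file the album cannot associate with the photo.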
For reference, the left diagram of fig. 2 shows a user interface (UI) effect: the user taps the photo just taken by the camera, or enters the full-screen mode of a photo or picture in the album, taps the voice-entry icon 1 at the lower right to enter recording mode, and records the voice information to complete the voice remark.
Further, the right diagram of fig. 2 shows: in the album, in the full-screen mode of a photo or picture, a voice-playing icon 2 is displayed at the upper right corner, and the user taps it to enter playback mode and play the voice information. If the photo has no voice remark, icon 2 is not shown, and a voice remark can be added as described above.
Step S3: and managing the target image according to the target voice information.
Optionally, the target voice information of the target image may be analyzed and classified using speech technologies, including voice matching, speech recognition, speech-to-text, and speech synthesis, so that the target image is managed according to the results of that analysis and classification.
Management of the target image includes, but is not limited to, classification management and multi-function operation management.
For example, target images may be grouped into an album based on the voiceprint of the target voice information; they may be grouped based on the semantics of the target voice information; or the target image may be displayed statically or dynamically according to the target voice information.
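Semantic grouping, the second example above, can be sketched as follows. The speech-to-text step is assumed to have already happened (the engine itself is out of scope), and the keywords and transcripts are invented for illustration.

```python
def group_by_semantics(transcripts, keywords):
    """Group images by keywords found in their transcribed voice remarks.

    `transcripts` maps image_id -> text produced by some speech-to-text
    engine (assumed, not implemented here).
    """
    groups = {k: [] for k in keywords}
    for image_id, text in transcripts.items():
        for k in keywords:
            if k in text:
                groups[k].append(image_id)
    return groups

transcripts = {
    "a.jpg": "birthday party at home",
    "b.jpg": "hiking trip",
    "c.jpg": "birthday cake",
}
groups = group_by_semantics(transcripts, ["birthday", "hiking"])
```

An image whose remark mentions several keywords would land in several groups, which matches how album categories usually overlap.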
In this way, in the embodiment of the present invention, after acquiring the target image, the electronic device may receive a voice input to the target image, so as to respond to the voice input, enter target voice information corresponding to the voice input, add the target voice information to the attribute information of the target image, and further make the attribute information of the target image include the item of the target voice information. Compared with the time and place information contained in the attribute information of the target image, the form of the target voice information is more interesting, and meanwhile, the attribute information of the target image can be enriched by the target voice information, so that the electronic equipment can manage the target image according to the target voice information of the target image, the form of the image management method is diversified, and the openness is stronger.
In addition, when taking or browsing a photo, the user can record the mood and emotion of the moment in real time through the voice remark; later, when flipping through the album, the voice information recorded at the time, together with the photo, helps the user recall the feeling of the moment of shooting. Voice information is therefore closer to the user's subjective intent, so managing images based on it better meets the user's needs and fits the user's ideas.
On the basis of the embodiment shown in fig. 1, fig. 3 shows a flowchart of an image management method according to another embodiment of the present invention, and step S3 includes:
step S31: and identifying target voiceprint information in the target voice information.
In this step, after the voice remark succeeds, the album's background automatically starts the voice retrieval and voice recognition functions to recognize the target voiceprint information in the target voice information.
Step S32: and establishing a target voiceprint information group according to the target voiceprint information.
In this step, different voiceprint information groups can be established for different pieces of voiceprint information. Specifically, the target voiceprint information group is established according to the target voiceprint information in the target voice information.
For reference, the left diagram of fig. 4 shows the UI effect: a "voiceprint" function option is added to the album's main interface. After tapping it, the user enters the right diagram of fig. 4, which displays the different voiceprint information recognized in the background and the voiceprint information groups established from it; for example, "voiceprint 1" corresponds to one such group.
Step S33: and classifying the target image into a target voiceprint information group.
In this step, images are classified into the corresponding voiceprint information groups according to the voiceprint information in their voice information. Specifically, the target image is classified into the target voiceprint information group according to the target voiceprint information in the target voice information.
For reference, in the left diagram of fig. 5, the user taps any voiceprint information group to enter the right diagram of fig. 5, where the interface displays all voice-remarked photos contained in that group.
In this embodiment, the large number of images in the album of the electronic device are classified and managed according to voiceprint information. On one hand this offers a novel way of classifying images; on the other hand, a piece of voiceprint information generally corresponds to a unique person, so the user can search for images by voiceprint group, making the album easier to use and further improving the user experience.
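Steps S31–S33 can be sketched as a simple nearest-group assignment over voiceprint embeddings. This is an illustrative stand-in, not the patent's algorithm: the embedding vectors, the cosine-similarity measure, and the threshold are all assumptions.

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def classify_by_voiceprint(images, threshold=0.8):
    """images: list of (image_id, voiceprint_embedding) pairs.

    Each image joins the first group whose reference voiceprint is
    similar enough; otherwise a new voiceprint group is created.
    """
    groups = []
    for image_id, emb in images:
        for g in groups:
            if cosine(emb, g["ref"]) >= threshold:
                g["members"].append(image_id)
                break
        else:
            groups.append({"ref": emb, "members": [image_id]})  # new group
    return groups

# Two remarks by the same speaker, one by another (toy 2-D embeddings)
photos = [("a.jpg", [1.0, 0.0]), ("b.jpg", [0.99, 0.05]), ("c.jpg", [0.0, 1.0])]
groups = classify_by_voiceprint(photos)
```

A production system would obtain the embeddings from a speaker-recognition model and would likely re-estimate each group's reference as members accumulate.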
On the basis of the embodiment shown in fig. 3, fig. 6 shows a flowchart of an image management method according to another embodiment of the present invention; after step S33, step S3 further includes:
step S34: and acquiring target entry person information corresponding to the target voiceprint information.
Step S35: and adding target entry person information in the attribute information of the target voiceprint information group.
Referring to the right diagram of fig. 4: for any voiceprint information group, the default label is "unknown". The user can tap a group, such as "voiceprint 1"; optionally, a short segment of voice information belonging to that group, extracted in the background, is played. After listening and identifying the specific speaker, the user can fill in the person to whom the voiceprint belongs, such as "friend 1", below the group.
In this way, the electronic device can obtain the entry-person information corresponding to the voiceprint information, i.e., the person to whom the voiceprint belongs, add it to the attribute information of the voiceprint information group, and optionally display it below the group. Specifically, the target entry-person information corresponding to the target voiceprint information is acquired, added to the attribute information of the target voiceprint information group, and optionally displayed below that group.
In this embodiment, the album starts the voice retrieval and voice recognition functions, stores the different voiceprint information present in the current album, groups it, and displays the grouping result on the album interface. The user can manually register simple information for each piece of voiceprint information, e.g., the person it belongs to, so that all images whose voice remarks were recorded by the same person can later be found quickly via the album's voiceprint function.
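Steps S34–S35 then amount to writing the entry person into the group's attribute information once the user has identified the speaker. The data layout and field names below are hypothetical, chosen only to illustrate the flow.

```python
def label_voiceprint_group(groups, group_index, person_name):
    """Steps S34-S35 sketch: record the entry person in the group's
    attribute information. Groups default to 'unknown' until the user
    registers a name after listening to a sample clip.
    """
    group = groups[group_index]
    group.setdefault("attributes", {})["entry_person"] = person_name
    return group

groups = [{"members": ["a.jpg", "b.jpg"],
           "attributes": {"entry_person": "unknown"}}]
label_voiceprint_group(groups, 0, "friend 1")
```

Once labeled, the UI can show the name under the group and support lookups such as "all photos remarked by friend 1".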
On the basis of the embodiment shown in fig. 3, fig. 7 shows a flowchart of an image management method according to another embodiment of the present invention; after step S33, step S3 further includes:
step S36: at least one voiceprint information packet is obtained.
Step S37: an image contained in at least one voiceprint information packet is acquired.
Step S38: images contained in at least one voiceprint information packet are synthesized into a first target video.
In the prior art, as smart electronic devices keep gaining functionality, album applications offer ever richer features and more engaging operations. Besides applying common artificial intelligence (AI) techniques to provide various intelligent recognitions for photos, some albums even produce photo videos automatically: for example, on a holiday they scan the album's photos, automatically generate a photo video, and push it to the user as an exclusive memory of that holiday. The user can save the video directly, or edit it further, e.g., by reselecting pictures, to produce a preferred version.
However, such videos are generated in a very limited way: the album merely identifies shooting times, aggregates all photos taken during a holiday or a certain period, and randomly selects some of them to generate the video in no particular order. Such photo videos are therefore not very popular with users, because they evoke little emotional resonance.
In this embodiment, after images have been voice-remarked, the background retrieves and recognizes all the different voiceprint information, groups it, and returns the result to the front end; each voiceprint information group aggregates all voice-remarked images of that voiceprint's owner. When producing a video, this embodiment therefore extends the function using voiceprint information: instead of only randomly selecting photos from a period of time, the user can directly select one or more voiceprint information groups, and an exclusive video is generated from all photos in those groups, adding interest and functionality.
For reference, the UI effect is shown in fig. 8: in the left diagram, the user long-presses to select one or more voiceprint information groups; tapping the "more" option at the lower right of the middle diagram pops up a small menu containing a "video" option; tapping "video" automatically generates, in the background, a video containing the photos in the selected voiceprint information groups.
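Steps S36–S38 can be sketched as gathering the images of the selected voiceprint groups into an ordered frame list; the actual video encoding is out of scope here, so the final synthesis step is left to a hypothetical encoder, and the group layout mirrors the earlier sketches.

```python
def collect_video_frames(groups, selected_indices):
    """Steps S36-S38 sketch: gather all images from the selected
    voiceprint information groups. A real device would then hand
    `frames` to a video encoder (not modeled here)."""
    frames = []
    for i in selected_indices:
        frames.extend(groups[i]["members"])
    return frames

groups = [
    {"entry_person": "friend 1", "members": ["a.jpg", "b.jpg"]},
    {"entry_person": "friend 2", "members": ["c.jpg"]},
]
frames = collect_video_frames(groups, [0, 1])
```

The point of the extension is the selection criterion: frames come from chosen voiceprint groups rather than from a random sample over a date range.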
On the basis of the embodiment shown in fig. 1, fig. 9 shows a flowchart of an image management method according to another embodiment of the present invention, and step S3 includes:
step S39: at least one image is acquired.
Step S310: and synthesizing the at least one image into a second target video.
Step S311: and respectively acquiring voice information of at least one image.
Step S312: and synthesizing the background music of the second target video according to the voice information of the at least one image.
In the prior art, automatic photo-video production is provided: the album is scanned and a photo video is automatically generated and pushed to the user. The user can save the video directly, or edit it further, e.g., by reselecting pictures or choosing background music, to produce a preferred version.
However, with this generation method the configured background music is not associated with the video's theme or photos, so such photo videos are not very popular with users, because they evoke little emotional resonance.
In this embodiment, when a series of photos is selected to produce a video, the electronic device can use the voice information in the selected photos as the video's background music; that is, a speech-synthesis technique automatically clips the voice remarks kept in each photo and splices them into a background track. Furthermore, an exclusive video can be produced using the different voiceprint information recognized in the background: on the basis of the embodiment shown in fig. 7, the voice information in the selected voiceprint information groups is clipped and merged into a background track. Compared with the prior art, the background music generated in this way is more interesting and functional, evokes more emotional resonance, and is better received by users.
Referring to fig. 10: when producing a video, a "voice" option is added to the background-music function. When the user taps the "voice" option in the right diagram of fig. 10, the album background automatically retrieves all voice remarks of the photos used in the video and automatically clips and merges them into an exclusive background track.
Also in fig. 10: when producing a video, the user may instead tap the "music" option in the right diagram and select a song, a tune, or the like as background music.
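Steps S311–S312, splicing the photos' voice remarks into one background track, can be sketched with Python's standard `wave` module, assuming all clips share one audio format. The clip generator below just fabricates short silent clips for demonstration; real remarks would be recorded audio files.

```python
import io
import wave

def concat_wavs(clips):
    """Splice several voice-remark clips (same format) into one track.

    `clips` are file-like objects containing WAV data; the output is a
    BytesIO holding the merged WAV background track.
    """
    params = None
    frames = b""
    for clip in clips:
        with wave.open(clip, "rb") as w:
            if params is None:
                params = w.getparams()  # channels/width/rate from first clip
            frames += w.readframes(w.getnframes())
    out = io.BytesIO()
    with wave.open(out, "wb") as w:
        w.setparams(params)
        w.writeframes(frames)  # frame count in the header is fixed on close
    out.seek(0)
    return out

def make_clip(n_frames):
    """Fabricate a short silent mono 8 kHz, 16-bit clip for demonstration."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(b"\x00\x00" * n_frames)
    buf.seek(0)
    return buf

track = concat_wavs([make_clip(100), make_clip(50)])
with wave.open(track, "rb") as w:
    total_frames = w.getnframes()  # sum of the two clips' frames
```

A real implementation would need resampling when remarks differ in sample rate or channel count; plain concatenation only works for a uniform format, which is why that assumption is stated up front.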
Thus, embodiments of the invention provide multi-function image operations through speech technology. In particular, when users browse the album at leisure to revisit good photos, besides flipping through them with a finger, the automatic photo-video function can turn them into a vivid memory with motion and sound. Using the previously recorded voice remarks, an exclusive background track is automatically clipped and synthesized, and the combination of motion and voice makes the video more interesting. Meanwhile, registered voiceprint information can be used to select photos belonging to the same voiceprint and make them into a dedicated video, so the added voice provides more interesting functions in many respects.
FIG. 11 shows a block diagram of an electronic device of another embodiment of the invention, comprising:
a receiving module 10, configured to receive a voice input for a target image;
a response module 20, configured to add, in response to the voice input, target voice information corresponding to the voice input to the attribute information of the target image;
and the management module 30 is configured to manage the target image according to the target voice information.
In this way, in the embodiment of the invention, after acquiring a target image the electronic device can receive a voice input directed at the image, record the corresponding target voice information in response, and add that information to the image's attribute information, so that the attribute information now includes a voice-information item. Compared with the time and location information already contained in the attribute information, voice information is more engaging in form and also enriches the attribute information, so the electronic device can manage the target image according to its voice information. The image management method thus becomes more varied in form and more open.
Preferably, the management module 30 comprises:
the recognition unit is used for recognizing target voiceprint information in the target voice information;
the grouping unit is used for establishing a target voiceprint information group according to the target voiceprint information in the target voice information;
and the classification unit is used for classifying the target image into a target voiceprint information group.
Preferably, the management module 30 further comprises:
the system comprises an input person acquisition unit, a storage unit and a processing unit, wherein the input person acquisition unit is used for acquiring target input person information corresponding to target voiceprint information;
and the logger adding unit is used for adding the target logger information in the attribute information of the target voiceprint information group.
Preferably, the management module 30 further comprises:
a packet acquisition unit for acquiring at least one voiceprint information packet;
a first image acquisition unit configured to acquire an image contained in at least one voiceprint information packet;
a first target video synthesizing unit for synthesizing the images contained in the at least one voiceprint information packet into a first target video.
Preferably, the management module 30 comprises:
a second image acquisition unit for acquiring at least one image;
a second target video synthesizing unit for synthesizing the at least one image into a second target video;
the voice information acquisition unit is used for respectively acquiring the voice information of at least one image;
and the background music synthesis unit is used for synthesizing the background music of the second target video according to the voice information of the at least one image.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 10, and details are not described here to avoid repetition.
Fig. 12 is a schematic diagram of the hardware structure of an electronic device 100 for implementing embodiments of the present invention. The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the structure shown in fig. 12 does not limit the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently. In embodiments of the invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Wherein, the processor 110 is used for controlling the input unit 104 to receive voice input of the target image; responding to the voice input, and adding target voice information corresponding to the voice input in the attribute information of the target image; and managing the target image according to the target voice information.
In this way, in the embodiment of the invention, after acquiring a target image the electronic device can receive a voice input directed at the image, record the corresponding target voice information in response, and add that information to the image's attribute information, so that the attribute information now includes a voice-information item. Compared with the time and location information already contained in the attribute information, voice information is more engaging in form and also enriches the attribute information, so the electronic device can manage the target image according to its voice information. The image management method thus becomes more varied in form and more open.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041 and a microphone 1042. The GPU 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
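The microphone path described above — raw sound captured and processed into audio data in a storable or transmittable format — can be sketched with Python's standard `wave` module. The sample rate and the synthetic test tone standing in for microphone input are illustrative choices, not values from the patent:

```python
import io
import math
import struct
import wave

def encode_pcm_to_wav(samples, sample_rate=8000):
    """Package raw 16-bit mono PCM samples into a WAV byte stream,
    the kind of 'processed audio data' a microphone path produces."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
    return buf.getvalue()

# One second of a 440 Hz tone standing in for captured microphone sound.
tone = [int(3000 * math.sin(2 * math.pi * 440 * t / 8000)) for t in range(8000)]
wav_bytes = encode_pcm_to_wav(tone)
```

On an actual device this packaging is done by the audio subsystem, typically with a compressed codec rather than raw PCM; the sketch only shows the raw-samples-to-container step.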
The electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the ambient light level, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and the magnitude and direction of gravity when stationary; it can be used to identify the posture of the electronic device (for example, landscape/portrait switching, related games, or magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and a power switch key), a trackball, a mouse, and a joystick; these are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the event type. Although in fig. 12 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the electronic device, in some embodiments they may be integrated to implement these functions; this is not limited herein.
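The touch path described above (detection device → touch controller → coordinates → processor decides the event type) can be modeled schematically. The thresholds and event names below are invented for illustration and do not reflect any particular touch controller:

```python
def touch_controller(raw_signal):
    """Controller step: convert a raw detection signal into
    touch-point coordinates for the processor."""
    return (raw_signal["x"], raw_signal["y"])

def classify_event(points):
    """Processor-side step: decide the touch event type from the
    sequence of touch-point coordinates (illustrative rules only)."""
    if len(points) == 1:
        return "tap"
    start, end = points[0], points[-1]
    moved = abs(end[0] - start[0]) + abs(end[1] - start[1])
    return "swipe" if moved > 20 else "long_press"

# A short drag reported by the (hypothetical) touch detection device.
raw = [{"x": 10, "y": 10}, {"x": 60, "y": 12}]
points = [touch_controller(s) for s in raw]
event = classify_event(points)  # classified as "swipe"
```

The real division of labor is in hardware and the kernel's input driver; the sketch only mirrors the hand-off the paragraph describes.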
The interface unit 108 is an interface for connecting an external device to the electronic device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data or power) from an external device and transmit it to one or more elements within the electronic device 100, or to transmit data between the electronic device 100 and the external device.
The memory 109 may be used to store software programs as well as various data. It may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the device (such as audio data or a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 109 and by calling data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (such as a battery) for supplying power to the components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that charging, discharging, and power-consumption management functions are implemented through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above embodiment of the image management method and can achieve the same technical effect; to avoid repetition, the details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the embodiment of the image management method and can achieve the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An image management method, comprising:
receiving a voice input to a target image;
responding to the voice input, and adding target voice information corresponding to the voice input in the attribute information of the target image;
and managing the target image according to the target voice information.
2. The method of claim 1, wherein managing the target image according to the target voice information comprises:
identifying target voiceprint information in the target voice information;
establishing a target voiceprint information group according to the target voiceprint information;
and classifying the target image into the target voiceprint information group.
3. The method according to claim 2, wherein, after the classifying of the target image into the target voiceprint information group, the managing of the target image according to the target voice information further comprises:
acquiring target entry-person information corresponding to the target voiceprint information;
and adding the target entry-person information to the attribute information of the target voiceprint information group.
4. The method according to claim 2, wherein, after the classifying of the target image into the target voiceprint information group, the managing of the target image according to the target voice information further comprises:
obtaining at least one voiceprint information packet;
acquiring an image contained in the at least one voiceprint information packet;
and synthesizing the images contained in the at least one voiceprint information packet into a first target video.
5. The method of claim 1, wherein managing the target image according to the target voice information comprises:
acquiring at least one image;
synthesizing the at least one image into a second target video;
respectively acquiring voice information of the at least one image;
and synthesizing the background music of the second target video according to the voice information of the at least one image.
6. An electronic device, comprising:
the receiving module is used for receiving voice input of the target image;
the response module is used for responding to the voice input and adding target voice information corresponding to the voice input in the attribute information of the target image;
and the management module is used for managing the target image according to the target voice information.
7. The electronic device of claim 6, wherein the management module comprises:
the recognition unit is used for recognizing target voiceprint information in the target voice information;
the grouping unit is used for establishing a target voiceprint information group according to the target voiceprint information;
and the classification unit is used for classifying the target image into the target voiceprint information group.
8. The electronic device of claim 7, wherein the management module further comprises:
an entry-person acquisition unit, configured to acquire target entry-person information corresponding to the target voiceprint information;
and an entry-person adding unit, configured to add the target entry-person information to the attribute information of the target voiceprint information group.
9. The electronic device of claim 7, wherein the management module further comprises:
a packet acquisition unit for acquiring at least one voiceprint information packet;
a first image acquisition unit configured to acquire an image contained in the at least one voiceprint information packet;
a first target video synthesizing unit for synthesizing the images contained in the at least one voiceprint information packet into a first target video.
10. The electronic device of claim 6, wherein the management module comprises:
a second image acquisition unit for acquiring at least one image;
a second target video synthesizing unit for synthesizing the at least one image into a second target video;
a voice information obtaining unit, configured to obtain voice information of the at least one image respectively;
and the background music synthesis unit is used for synthesizing the background music of the second target video according to the voice information of the at least one image.
11. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image management method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when executed by a processor, carries out the steps of the image management method according to any one of claims 1 to 5.
CN201910937064.7A 2019-09-29 2019-09-29 Image management method and electronic equipment Pending CN110647635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910937064.7A CN110647635A (en) 2019-09-29 2019-09-29 Image management method and electronic equipment


Publications (1)

Publication Number Publication Date
CN110647635A true CN110647635A (en) 2020-01-03

Family

ID=69012210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910937064.7A Pending CN110647635A (en) 2019-09-29 2019-09-29 Image management method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110647635A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753941A (en) * 2008-12-19 2010-06-23 康佳集团股份有限公司 Method for realizing markup information in imaging device and imaging device
CN104951549A (en) * 2015-06-24 2015-09-30 努比亚技术有限公司 Mobile terminal and photo/video sort management method thereof
CN105138578A (en) * 2015-07-30 2015-12-09 北京奇虎科技有限公司 Sorted storage method for target picture and terminal employing sorted storage method
CN106095764A (en) * 2016-03-31 2016-11-09 乐视控股(北京)有限公司 A kind of dynamic picture processing method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200103