CN113886636A - Image marking method, image marking display method and mobile terminal - Google Patents


Info

Publication number
CN113886636A
Authority
CN
China
Prior art keywords: mark, image, information, voice, displaying
Prior art date
Legal status
Pending
Application number
CN202111134143.8A
Other languages
Chinese (zh)
Inventor
冯丽
马霖霞
闫玲
Current Assignee
Uniontech Software Technology Co Ltd
Original Assignee
Uniontech Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Uniontech Software Technology Co Ltd
Priority claimed from CN202111134143.8A
Publication of CN113886636A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867: Retrieval characterised by using manually generated information, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data
    • G06F 16/73: Querying
    • G06F 16/738: Presentation of query results
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an image marking method, an image mark display method, and a mobile terminal. The image marking method comprises the following steps: after an image is captured, determining whether a mark is to be added to the image; if so, determining a mark type and acquiring mark information for the image according to the mark type; and storing the image in association with the mark information. According to the technical solution of the invention, an image can be marked immediately after shooting, making the marking operation more convenient and efficient.

Description

Image marking method, image marking display method and mobile terminal
Technical Field
The invention relates to the technical field of mobile terminals, in particular to an image marking method, an image marking display method and a mobile terminal.
Background
In the prior art, when stored photos or recorded videos are viewed on a mobile phone or another electronic device, the content of a photo or video may be unclear to the user because it was shot long ago and the number of files is large. In such cases, it is worth considering marking the photo or video.
At present, various products on the market can mark photos and files. For example, Huawei mobile phones provide a function for adding remarks to photos and videos in the album. However, a photo cannot be marked immediately after it is taken: the user must open the album, find the photo to be marked, and only then mark it, which increases the marking workload; moreover, voice marks cannot be added.
The Windows operating system allows marks to be added to a file through its file attributes and displayed in the file list, but it cannot add a mark directly after shooting and has no voice-mark function, which is unfriendly to users who write long annotations, have difficulty typing, or have poor eyesight.
It can be seen that the marking products currently on the market all have shortcomings. An image marking method is therefore needed to solve the above problems.
Disclosure of Invention
Accordingly, the present invention is directed to an image marking method, an image marking display method, and a mobile terminal that solve or at least alleviate the above-mentioned problems.
According to an aspect of the present invention, there is provided an image marking method, comprising the following steps: after an image is captured, determining whether a mark is to be added to the image; if so, determining a mark type and acquiring mark information of the image according to the mark type; and storing the image in association with the mark information.
Optionally, in the image marking method according to the present invention, the step of acquiring mark information of the image according to the mark type comprises: displaying a mark input window corresponding to the mark type; and acquiring the mark information of the image entered in the mark input window.
Optionally, in the image marking method according to the present invention, the step of determining the mark type comprises: presenting a mark type popup window; and determining the mark type selected in the popup window.
Optionally, in the image marking method according to the present invention, the mark types include a text mark and a voice mark, and the mark information includes text mark information and voice mark information.
Optionally, the image marking method according to the present invention further comprises the steps of: when one or more images are displayed, acquiring one or more pieces of mark information associated with at least one image, and displaying the one or more pieces of mark information in a mark display area corresponding to the image.
Optionally, in the image marking method according to the present invention, the step of displaying one or more pieces of mark information in the mark display area corresponding to the image comprises: arranging the one or more pieces of mark information in the order in which they were added to generate a mark information list; and displaying the mark information list in the mark display area corresponding to the image.
Optionally, in the image marking method according to the present invention, the one or more pieces of mark information include voice mark information and/or text mark information, and displaying them in the mark display area corresponding to the image comprises: displaying an audio icon corresponding to the voice mark information in the mark display area, and/or displaying one or more keywords contained in the text mark information in the mark display area.
Optionally, the image marking method according to the present invention further comprises the steps of: in response to a click on the audio icon, displaying a mark viewing mode list; determining whether the mark viewing mode selected in the list is playing the voice or converting the voice into text; if the selected mode is playing the voice, invoking a media player to play the voice mark information corresponding to the audio icon; and if the selected mode is converting the voice into text, converting the voice mark information corresponding to the audio icon into text information for display.
Optionally, the image marking method according to the present invention further comprises the step of: in response to a click on a keyword, displaying the text mark information corresponding to the keyword.
According to an aspect of the present invention, there is provided an image mark display method, comprising the following steps: when one or more images are displayed, acquiring one or more pieces of mark information associated with at least one image; and displaying the one or more pieces of mark information in a mark display area corresponding to the image, wherein the one or more pieces of mark information include voice mark information and/or text mark information.
Optionally, in the image mark display method according to the present invention, displaying one or more pieces of mark information in the mark display area corresponding to the image comprises: displaying an audio icon corresponding to the voice mark information in the mark display area, and/or displaying one or more keywords contained in the text mark information in the mark display area.
Optionally, the image mark display method according to the present invention further comprises: in response to a click on the audio icon, displaying a mark viewing mode list; determining whether the mark viewing mode selected in the list is playing the voice or converting the voice into text; if the selected mode is playing the voice, invoking a media player to play the voice mark information corresponding to the audio icon; and if the selected mode is converting the voice into text, converting the voice mark information corresponding to the audio icon into text information for display.
Optionally, the image mark display method according to the present invention further comprises: in response to a click on a keyword, displaying the text mark information corresponding to the keyword.
Optionally, in the image mark display method according to the present invention, the step of displaying one or more pieces of mark information in the mark display area corresponding to the image comprises: arranging the one or more pieces of mark information in the order in which they were added to generate a mark information list; and displaying the mark information list in the mark display area corresponding to the image.
According to an aspect of the present invention, there is provided a mobile terminal including: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method as described above.
According to an aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a mobile terminal, cause the mobile terminal to perform the method as described above.
According to the technical solution of the present invention, an image marking method and a corresponding image mark display method are provided, in which a mark, either a voice mark or a text mark, can be added directly after an image is shot. In this way, marking immediately after shooting is realized; the marking operation is more convenient, the time otherwise spent opening the album and searching for the image to be marked is saved, the marking workload is reduced, and marking efficiency is improved.
In addition, when one or more images in the album are displayed, the mark information associated with an image can be acquired and displayed, and voice mark information can optionally be converted into corresponding text for display. Thus, the user can either play the voice directly or convert it into text for viewing according to actual needs, which suits a variety of application scenarios.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 illustrates a schematic diagram of a mobile terminal 100 according to one embodiment of the present invention;
FIG. 2 illustrates a flow diagram of an image tagging method 200, according to one embodiment of the present invention; and
fig. 3 is a flow chart of a method 300 for displaying image tags according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic diagram of a mobile terminal 100 according to one embodiment of the invention. The mobile terminal 100 may be a mobile phone, a tablet computer, a notebook computer, a multimedia player, a wearable device, etc. configured with a front camera and a display screen, but is not limited thereto. As shown in FIG. 1, the mobile terminal 100 may include a memory interface 102, a multi-core processor 104, and a peripheral interface 106.
The memory interface 102, the multi-core processor 104, and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to peripheral interface 106 to facilitate a variety of functions.
For example, an acceleration sensor 110, a magnetic field sensor 112, and a gravity sensor 114 may be coupled to the peripheral interface 106. The acceleration sensor 110 may collect acceleration data along the three coordinate axes of the body coordinate system, the magnetic field sensor 112 may collect magnetic field data (magnetic induction intensity) along those axes, and the gravity sensor 114 may collect gravity data along those axes; together, these sensors enable functions such as step counting, orientation, and automatic portrait/landscape switching. Other sensors 116 may also be coupled to the peripheral interface 106, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor, or another sensing device, to facilitate related functions.
The camera subsystem 120 and an optical sensor 122, which may be, for example, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, may be used to facilitate camera functions such as taking photographs and recording video clips. Communication functions may be facilitated by one or more wireless communication subsystems 124, which may include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 may depend on the communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a wireless communication subsystem 124 designed to support LTE, 3G, and GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth networks.
The audio subsystem 126 may be coupled to a speaker 128 and a microphone 130 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. The I/O subsystem 140 may include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and touch screen controller 142 may detect contact and movement or pauses made therewith using any of a variety of touch sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies. One or more other input controllers 144 may be coupled to other input/control devices 148 such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports, and/or pointing devices such as styluses. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 128 and/or microphone 130.
The memory interface 102 may be coupled with a memory 150. The memory 150 may include internal memory such as, but not limited to, static random access memory (SRAM) or non-volatile memory (NVRAM); external memory may be, for example, a hard disk, a removable hard disk, or a USB flash drive, but is not limited thereto. The memory 150 may store program instructions, which may include, for example, an operating system 152 and applications 154. The operating system 152 may be, for example, Android, iOS, or Windows Phone; it includes program instructions for handling basic system services and performing hardware-dependent tasks. The applications 154 may include program instructions for implementing various user-desired functions; they may be provided separately from the operating system or be native to it. In addition, a driver module may be added to the operating system when an application 154 is installed in the mobile terminal 100. While the mobile terminal is running, the operating system 152 is loaded from the memory 150 and executed by the processor 104, and the applications 154 are likewise loaded from the memory 150 and executed by the processor 104 at runtime. The applications 154 run on top of the operating system and use interfaces provided by the operating system and underlying hardware to implement user-desired functions, such as hardware management, instant messaging, and web browsing.
In one embodiment, the above-mentioned memory 150 stores program instructions including program instructions suitable for executing the image tagging method 200 and the image tagging presentation method 300 of the present invention, and these program instructions can be executed by a processor, so that the image tagging method 200 and the image tagging presentation method 300 of the present invention can be executed in the mobile terminal 100.
In one embodiment, the application 154 includes an image tagging apparatus 400, and the image tagging apparatus 400 includes a plurality of program instructions for executing the image tagging method 200 and the image tagging presentation method 300 of the present invention, and the program instructions can be executed by a processor, so that the image tagging method 200 and the image tagging presentation method 300 of the present invention can be executed in the image tagging apparatus 400 of the mobile terminal 100.
FIG. 2 is a flow diagram of an image tagging method 200 according to one embodiment of the invention. The image tagging method 200 may be implemented in an image tagging apparatus 400 of a mobile terminal (e.g., the mobile terminal 100). As shown in fig. 2, the method 200 begins at step S210.
Here, the applications in the mobile terminal 100 further include a camera application adapted to capture images; before step S210 is performed, the user may open the camera application and take a picture with it.
In step S210, after an image is captured, it is determined whether a mark is to be added to the captured image. Here, the image tagging apparatus 400 may detect whether the camera application has captured an image and, upon detecting a capture, determine whether a mark should be added, so that when marking is confirmed the image (e.g., a picture or video) can be marked immediately.
Here, whether or not to add a mark to the captured image may be configured in advance. For example, in one implementation, before performing the method 200, the user may select a "mark after shooting" option in the setup page, so that in performing step S210, it is determined that a mark is to be added to the shot image after the image is shot.
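The advance-configuration step described above can be sketched as a simple settings check. This is an illustrative example, not taken from the patent; the `mark_after_shooting` key name is an assumption standing in for the "mark after shooting" option on the setup page.

```python
def should_prompt_for_mark(settings: dict) -> bool:
    """Return True when the user has enabled the 'mark after shooting' option."""
    return settings.get("mark_after_shooting", False)

# Example: the user selected the option on the setup page beforehand.
settings = {"mark_after_shooting": True}
print(should_prompt_for_mark(settings))  # True
```

When the option is absent from the settings, the check defaults to not prompting, which matches the idea that marking only happens when configured in advance.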
In step S220, if it is determined that a mark is to be added to the captured image, the mark type is determined, and the user's mark information for the image is acquired according to the mark type.
Specifically, if it is determined that a mark is to be added to the captured image, a mark type popup window may be presented on the screen of the mobile terminal, in which one or more mark types are displayed so that the user may select one of them. In this way, the image tagging apparatus 400 can determine the mark type that the user selected in the popup window.
In one embodiment, the image tagging apparatus 400 may display a mark input window corresponding to the determined mark type on the screen of the mobile terminal, so that the user can enter mark information of that type in the window. Here, the mark types include, for example, a text mark and a voice mark, but are not limited thereto. Accordingly, the mark information includes text mark information and voice mark information.
For example, when the mark type is determined to be a text mark, the displayed mark input window is a text editing window in which the user can enter text mark information to mark the image. When the mark type is determined to be a voice mark, the displayed mark input window is a voice adding window in which the user can record speech, or add an existing audio file, as voice mark information with which to mark the image. Subsequently, the image tagging apparatus 400 acquires the mark information (text mark information or voice mark information) that the user entered for the image in the mark input window.
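The mapping from mark type to mark information in step S220 can be sketched as follows. The `MarkInfo` structure and the string constants are illustrative assumptions, not names from the patent; in a real terminal the payload would come from the text editing window or the voice adding window.

```python
from dataclasses import dataclass

@dataclass
class MarkInfo:
    mark_type: str   # "text" or "voice"
    payload: str     # the typed annotation, or a path to an audio file

def collect_mark(mark_type: str, user_input: str) -> MarkInfo:
    """Wrap the input from the mark input window as mark information."""
    if mark_type == "text":
        # Text editing window: the payload is the typed annotation.
        return MarkInfo("text", user_input)
    if mark_type == "voice":
        # Voice adding window: the payload points at a recorded or chosen audio file.
        return MarkInfo("voice", user_input)
    raise ValueError(f"unsupported mark type: {mark_type}")

print(collect_mark("text", "sunset at the beach"))
```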
Finally, in step S230, the image and the mark information are associated and stored.
In one embodiment, the image tagging apparatus 400 is connected to a data storage device; after the user marks the image, the image and the mark information can be stored in the data storage device in association with each other. Here, each image file has a unique image identifier, which may be stored in the data storage device together with the corresponding mark information. In this way, the mark information for an image can be queried and retrieved from the data storage device by its image identifier.
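The associate-and-store step (S230) can be sketched with a mapping keyed by the unique image identifier. The in-memory dict stands in for the data storage device, and the identifiers and mark contents below are made up for illustration.

```python
from collections import defaultdict

# A dict keyed by the image's unique identifier stands in for the data
# storage device described above.
store = defaultdict(list)

def save_mark(image_id: str, mark: dict) -> None:
    """Store a mark in association with the image's unique identifier."""
    store[image_id].append(mark)

def marks_for(image_id: str) -> list:
    """Query marks back by image identifier; empty if none were added."""
    return store[image_id]

# One image may be associated with several pieces of mark information.
save_mark("IMG_0001", {"type": "text", "payload": "birthday party"})
save_mark("IMG_0001", {"type": "voice", "payload": "note.m4a"})
print(len(marks_for("IMG_0001")))  # 2
```

Keying on the identifier rather than the file itself is what lets the display method later fetch marks for whichever images the album shows.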
In addition, the mobile terminal further includes an album (i.e., an image file directory) for storing image files, and one or more images captured by the camera may be stored in the album. When the user opens the album, the mobile terminal 100 may receive a user access request to the album and present one or more images in the album on the screen. Here, a plurality of image files in the album may be displayed in a list form, or one current image file may be displayed separately. It should be noted that the present invention does not limit the display form of the image files in the album.
According to an embodiment of the present invention, when displaying one or more images in the album, the image marking device 400 may obtain one or more marking information associated with at least one image from the data storage device and display the one or more marking information in a marking display area corresponding to the image. Here, the user may add a mark to any one or more images, and, for each image, may add one or more marks, so that one image may be associated with one or more mark information. In addition, the one or more tag information associated with the image may include textual tag information and/or voice tag information.
It can be understood that the album may contain images to which no mark has been added; when such an image is displayed, no associated mark information is obtained, and only the image file itself is shown.
In one implementation, when a plurality of image files in the album are displayed in a list, the mark display area may be located after each image file entry. When a single current image file is displayed on its own, the mark display area may be located in a predetermined area of the current display interface. It should be noted, however, that the present invention does not limit the specific location of the mark display area.
In one implementation, when the image is associated with one or more tag information, the one or more tag information may be arranged in a chronological order of tag addition, a tag information list may be generated based on the one or more tag information arranged in the chronological order, and the tag information list may be displayed in a tag display area corresponding to the image.
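Building the mark information list in add-time order can be sketched as a sort on a timestamp field. The `added_at` key and the sample timestamps are hypothetical; the patent only specifies that marks are arranged in the order they were added.

```python
# Marks deliberately listed newest-first, with made-up add times.
marks = [
    {"payload": "second note", "added_at": 1700000200},
    {"payload": "first note", "added_at": 1700000100},
]

# Arrange by the time each mark was added to build the mark information list.
mark_list = sorted(marks, key=lambda m: m["added_at"])
print([m["payload"] for m in mark_list])  # ['first note', 'second note']
```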
According to an embodiment, displaying one or more mark information associated with an image in a mark display area corresponding to the image may be specifically implemented as:
and when the one or more mark information comprises voice mark information, displaying the audio icon corresponding to the voice mark information in the mark display area corresponding to the image. And when the one or more mark information comprises the character mark information, displaying one or more keywords contained in the character mark information in a mark display area corresponding to the image. That is, the mark information displayed in the mark display area is actually an audio icon corresponding to the voice mark information or one or more keywords in the text mark information. The audio icon represents that the image is associated with voice mark information, and the keyword represents that the image is associated with character mark information.
In one embodiment, after the audio icon corresponding to the voice tag information is displayed in the tag display area corresponding to the image, the user can view the corresponding voice tag information by clicking the audio icon.
Specifically, when the user clicks the audio icon, the image tagging apparatus 400 responds by displaying a mark viewing mode list, i.e., a list of ways to view the voice mark. This list includes at least two viewing modes: playing the voice and converting the voice into text. The user can select a viewing mode from the list according to actual needs.
Subsequently, the image tagging apparatus 400 determines whether the viewing mode the user selected in the list is playing the voice or converting the voice into text. If the user selected playing the voice, the media player is invoked to play the voice mark information corresponding to the audio icon, i.e., the voice mark is played back directly. If the user selected converting the voice into text, the voice mark information corresponding to the audio icon is converted into text information and displayed, so that the user can read the text content of the voice mark.
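The viewing-mode dispatch can be sketched as follows. The mode names and return strings are illustrative; a real terminal would invoke its media player for playback and a speech-recognition service for the conversion, neither of which the patent names.

```python
def view_voice_mark(mode: str, audio_path: str) -> str:
    """Dispatch on the viewing mode chosen from the mark viewing mode list."""
    if mode == "play":
        # Stand-in for invoking the media player on the voice mark.
        return f"playing {audio_path} via media player"
    if mode == "to_text":
        # Stand-in for a speech-recognition call that yields display text.
        return f"[transcript of {audio_path}]"
    raise ValueError(f"unknown viewing mode: {mode}")

print(view_voice_mark("play", "note.m4a"))  # playing note.m4a via media player
```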
In addition, after one or more keywords contained in the text mark information are displayed in the mark display area corresponding to the image, the user can view the corresponding text mark information by clicking a keyword. Specifically, when the user clicks a keyword, the image tagging apparatus 400 responds by displaying the full text mark information corresponding to that keyword.
Fig. 3 is a flow chart of a method 300 for displaying image tags according to an embodiment of the present invention. The image tag presentation method 300 is suitable for being executed in an image tag apparatus 400 of a mobile terminal (e.g., the mobile terminal 100).
It should be noted that the mobile terminal 100 includes an album (i.e., an image file directory) for storing image files, and one or more images captured by the camera may be stored in the album. When the user opens the album, the mobile terminal 100 may receive a user access request to the album and present one or more images in the album on the screen. Here, a plurality of image files in the album may be displayed in a list form, or one current image file may be displayed separately. It should be noted that the present invention does not limit the display form of the image files in the album.
As shown in fig. 3, the method 300 begins at step S310. In step S310, when displaying one or more images in the album, the image marking device 400 may obtain one or more marking information associated with at least one image from the data storage device. Here, the user may add a mark to any one or more images, and, for each image, may add one or more marks, so that one image may be associated with one or more mark information. In addition, the one or more tag information associated with the image may include textual tag information and/or voice tag information.
Subsequently, in step S320, one or more mark information associated with the image is displayed in the mark display area corresponding to the image.
In one implementation, when a plurality of image files in the album are displayed in a list form, the mark display area may be located at the rear side of the image files. When a current one of the image files is separately displayed, the mark display area may be located in a predetermined area on the current display interface. However, it should be noted that the present invention is not limited to a specific location of the indicia display area.
In one implementation, when the image is associated with one or more pieces of mark information, the pieces of mark information may be arranged in chronological order of mark addition, a mark information list may be generated from the chronologically arranged mark information, and the mark information list may be displayed in the mark display area corresponding to the image.
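The chronological arrangement described above amounts to a sort on the time each mark was added. A minimal sketch, assuming each mark is represented as a dict with an added_at timestamp (a representation chosen here for illustration):

```python
def build_mark_list(marks):
    # Arrange the mark information entries in chronological order of addition;
    # the sorted result is the mark information list shown in the display area.
    return sorted(marks, key=lambda m: m["added_at"])

marks = [
    {"kind": "voice", "content": "/audio/m2.amr", "added_at": 20.0},
    {"kind": "text", "content": "birthday party", "added_at": 10.0},
]
ordered = build_mark_list(marks)
print([m["kind"] for m in ordered])  # ['text', 'voice']
```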
According to an embodiment, displaying one or more mark information associated with the image in the mark display area corresponding to the image may be specifically implemented as:
When the one or more pieces of mark information include voice mark information, an audio icon corresponding to the voice mark information is displayed in the mark display area corresponding to the image. When the one or more pieces of mark information include text mark information, one or more keywords contained in the text mark information are displayed in the mark display area corresponding to the image. That is, what is displayed in the mark display area is actually an audio icon corresponding to the voice mark information, or one or more keywords from the text mark information. The audio icon indicates that the image is associated with voice mark information, and a keyword indicates that the image is associated with text mark information.
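The per-mark display rule above (audio icon for voice marks, keywords for text marks) can be sketched as a small mapping. The keyword-extraction rule here (first three words) is an assumption for illustration only, since the patent does not specify how keywords are derived:

```python
def display_item(mark):
    # Voice mark information is represented in the mark display area by an
    # audio icon; text mark information is represented by one or more keywords.
    if mark["kind"] == "voice":
        return {"type": "audio_icon", "ref": mark["content"]}
    # Illustrative keyword rule: take the first three words of the text mark.
    keywords = mark["content"].split()[:3]
    return {"type": "keywords", "words": keywords}

print(display_item({"kind": "voice", "content": "/audio/mark1.amr"})["type"])  # audio_icon
```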
In one embodiment, after the audio icon corresponding to the voice tag information is displayed in the tag display area corresponding to the image, the user can view the corresponding voice tag information by clicking the audio icon.
Specifically, after the user clicks the audio icon, the image marking device 400 displays a mark viewing mode list, i.e., a voice mark viewing mode list, in response to the clicking operation on the audio icon. Here, the mark viewing mode list includes at least two viewing modes: playing the voice, and converting the voice into text. The user can select a viewing mode from the mark viewing mode list according to actual requirements.
Subsequently, the image marking device 400 determines whether the viewing mode selected by the user in the mark viewing mode list is to play the voice or to convert the voice into text. If the selected viewing mode is to play the voice, the media player is called to play the voice mark information corresponding to the audio icon; that is, the voice mark information is played directly through the media player. If the selected viewing mode is to convert the voice into text, the voice mark information corresponding to the audio icon is converted into text information, and the text information is displayed so that the user can view the text content corresponding to the voice mark information.
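The branch between the two viewing modes can be sketched as a simple dispatch. Here play_audio and speech_to_text are hypothetical stand-ins for the platform media player and a speech-recognition service, which the patent does not name:

```python
def view_voice_mark(mode, audio_path,
                    play_audio=lambda p: f"playing {p}",
                    speech_to_text=lambda p: f"<transcript of {p}>"):
    # Dispatch on the viewing mode the user selected from the mark viewing
    # mode list: either play the voice mark directly, or convert it to text.
    if mode == "play":
        return play_audio(audio_path)
    if mode == "to_text":
        return speech_to_text(audio_path)
    raise ValueError(f"unknown viewing mode: {mode}")

print(view_voice_mark("play", "/audio/mark1.amr"))  # playing /audio/mark1.amr
```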
In addition, after one or more keywords contained in the text mark information are displayed in the mark display area corresponding to the image, the user can view the corresponding text mark information by clicking a keyword. Specifically, after the user clicks a keyword, the image marking device 400 displays the specific text mark information corresponding to the keyword in response to the clicking operation.
According to the image marking method and the corresponding image mark display method of the present invention, a mark can be added to an image directly after the image is captured, and either a voice mark or a text mark can be added as desired. In this way, an image can be marked immediately after shooting, which makes the marking operation more convenient, avoids the time spent opening the album and searching for the image to be marked, reduces the marking workload, and improves the marking efficiency for images. In addition, when one or more images in the album are displayed, the mark information associated with the images can be acquired and displayed, and the voice mark information can optionally be converted into corresponding text for display. Therefore, the user can directly play the voice, or convert it into text content for viewing, according to actual requirements, which suits a variety of application scenarios.
A3, the method according to A1 or A2, wherein the step of determining the mark type comprises: presenting a mark type popup; and determining the mark type selected in the mark type popup.
A9, the method according to A7 or A8, further comprising the step of: displaying, in response to a clicking operation on a keyword, the text mark information corresponding to the keyword.
B11, the method according to B10, wherein displaying one or more mark information in the mark display area corresponding to the image comprises: displaying an audio icon corresponding to the voice mark information in the mark display area corresponding to the image, and/or displaying one or more keywords contained in the text mark information in the mark display area corresponding to the image.
B12, the method according to B11, further comprising the steps of: displaying a mark viewing mode list in response to a clicking operation on the audio icon; determining whether the viewing mode selected in the mark viewing mode list is to play the voice or to convert the voice into text; if the selected viewing mode is to play the voice, calling a media player to play the voice mark information corresponding to the audio icon; and if the selected viewing mode is to convert the voice into text, converting the voice mark information corresponding to the audio icon into text information for display.
B13, the method according to B11 or B12, further comprising the step of: displaying, in response to a clicking operation on a keyword, the text mark information corresponding to the keyword.
B14, the method according to any one of B10-B13, wherein the step of displaying one or more mark information in the mark display area corresponding to the image comprises: arranging the one or more mark information in chronological order of addition to generate a mark information list; and displaying the mark information list in the mark display area corresponding to the image.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash drives, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the mobile terminal generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the image labeling method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense with respect to the scope of the invention, as defined in the appended claims.

Claims (10)

1. An image marking method includes the steps:
after the image is shot, judging whether a mark is added to the image or not;
if yes, determining a mark type, and acquiring mark information of the image according to the mark type; and
associating and storing the image and the mark information.
2. The method of claim 1, wherein the step of acquiring the mark information for the image according to the mark type comprises:
displaying a corresponding mark input window according to the mark type;
and acquiring the mark information of the image input in the mark input window.
3. The method of any of claims 1-2, wherein the mark type includes a text mark and a voice mark, and the mark information includes text mark information and voice mark information.
4. A method according to any one of claims 1-3, further comprising the step of:
when one or more images are displayed, one or more mark information associated with at least one image is acquired, and the one or more mark information is displayed in a mark display area corresponding to the image.
5. The method of claim 4, wherein the step of displaying one or more mark information in the mark display area corresponding to the image comprises:
arranging the one or more mark information according to an adding time sequence to generate a mark information list;
and displaying the mark information list in a mark display area corresponding to the image.
6. The method of claim 4 or 5, wherein the one or more tag information comprises voice tag information and/or text tag information; displaying one or more mark information in a mark display area corresponding to the image comprises:
displaying an audio icon corresponding to the voice mark information in the mark display area corresponding to the image, and/or
displaying one or more keywords contained in the text mark information in the mark display area corresponding to the image.
7. The method of claim 6, further comprising the steps of:
responding to the clicking operation of the audio icon, and displaying a mark viewing mode list;
determining whether the mark viewing mode selected in the mark viewing mode list is to play the voice or to convert the voice into text;
if the mark viewing mode is to play the voice, calling a media player to play the voice mark information corresponding to the audio icon;
and if the mark viewing mode is to convert the voice into text, converting the voice mark information corresponding to the audio icon into text information for display.
8. An image mark display method includes the steps:
acquiring one or more mark information associated with at least one image when the one or more images are displayed;
displaying one or more mark information in a mark display area corresponding to the image;
wherein the one or more tag information includes voice tag information and/or text tag information.
9. A mobile terminal, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-8.
10. A readable storage medium storing program instructions that, when read and executed by a mobile terminal, cause the mobile terminal to perform the method of any of claims 1-8.
CN202111134143.8A 2021-09-27 2021-09-27 Image marking method, image marking display method and mobile terminal Pending CN113886636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111134143.8A CN113886636A (en) 2021-09-27 2021-09-27 Image marking method, image marking display method and mobile terminal

Publications (1)

Publication Number Publication Date
CN113886636A true CN113886636A (en) 2022-01-04

Family

ID=79006922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111134143.8A Pending CN113886636A (en) 2021-09-27 2021-09-27 Image marking method, image marking display method and mobile terminal

Country Status (1)

Country Link
CN (1) CN113886636A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination