CN111046199A - Method for adding a voice-over to an image and electronic device - Google Patents

Method for adding a voice-over to an image and electronic device

Info

Publication number
CN111046199A
Authority
CN
China
Prior art keywords
voice-over
data
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911196619.3A
Other languages
Chinese (zh)
Other versions
CN111046199B (en)
Inventor
于浩平
孙惠方
马思伟
王荣刚
李革
高文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory
Priority to CN201911196619.3A
Publication of CN111046199A
Application granted
Publication of CN111046199B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/438 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/41 Indexing; Data structures therefor; Storage structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a method for adding a voice-over to an image, and an electronic device. The method comprises: receiving voice-over data corresponding to an image when the image is operated on; and associating the voice-over data with the image so that the voice-over data is presented when the image is displayed. When the voice-over data is received, it can be packaged and stored together with the image, so that the voice-over data and the image are transmitted or displayed synchronously. A user can therefore retrieve the emotional expression and other information associated with the image at any time, which improves the display effect of the image and brings convenience to the user; at the same time, it keeps the image meaningful and valuable for longer.

Description

Method for adding a voice-over to an image and electronic device
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a method for adding a voice-over to an image, and an electronic device.
Background
At present, people commonly use smartphones to take photos and videos as a way of recording daily life. With a smartphone, people may photograph famous landmarks or capture themselves by taking selfies. Moreover, as social media applications become more popular, people communicate through photos in addition to meeting face-to-face or talking on the phone, for example by taking a "selfie" and sending it to friends to show what they are doing.
However, an image records and captures only visual information and cannot, by itself, tell a complete story. For this reason, when sending an image people often also send a text message to express their emotions or opinions about its subject, but the image and the text message are treated as independent entities. When the image is then stored on an electronic device, only the visual information it carries is saved in the image file, and the text message is discarded. As a result, when people later view the image again, the text associated with it cannot be retrieved, the emotions felt when the image was shot or edited cannot be accurately recalled, the image becomes dull, and its value and meaning are lost within a relatively short time.
Disclosure of Invention
In view of the deficiencies of the prior art, the present invention provides a method for adding a voice-over to an image, and an electronic device.
To solve the above technical problems, the technical solution adopted by the invention is as follows:
A method for adding a voice-over to an image, the method comprising:
when an image is operated on, receiving voice-over data corresponding to the image;
associating the voice-over data with the image such that the voice-over data is displayed when the image is displayed.
The method for adding the voice-over to the image, wherein the image corresponds to a plurality of pieces of voice-over data, and the voice-over data are configured for the image by one user or by a plurality of users.
The method for adding the voice-over to the image, wherein the voice-over data comprises voice-over content, a voice-over creator and a voice-over creation date, and the associating the voice-over data with the image specifically comprises:
registering the voice-over data in a preset data structure according to the voice-over creator of the voice-over data, so as to associate the voice-over data with the image, wherein the preset data structure is stored bound to the image.
The method for adding the voice-over to the image, wherein, when the image is stored in an image group, one image in the group, several images, or every image may have voice-over data, and the voice-over data corresponding to the respective images are the same or different.
The method for adding the voice-over to the image, wherein the data type of the voice-over data is text data or audio data; the method further comprises:
when the voice-over data corresponding to the image is displayed, acquiring a data type corresponding to the voice-over data;
when the voice-over data is text data, synchronously displaying the voice-over data and the image;
and when the voice-over data is audio data, displaying the image and synchronously playing the voice-over data.
The method for adding the voice-over to the image, wherein the data type of the voice-over data is text data or audio data; the method further comprises:
when the voice-over data corresponding to the image is displayed, acquiring a display type corresponding to the voice-over data;
when the display type is text display, if the voice-over data is text data, synchronously displaying the voice-over data and the image; if the voice-over data is audio data, converting the voice-over data into text data, and synchronously displaying the converted data with the image;
and when the display type is audio playing, if the voice-over data is audio data, displaying the image and synchronously playing the voice-over data; if the voice-over data is text data, converting the voice-over data into audio data, displaying the image and synchronously playing the converted data.
The method for adding the voice-over to the image, wherein, when the text data is displayed in synchronization with the image, the text data is displayed independently of the image or superimposed on the image.
The method for adding the voice-over to the image, wherein the voice-over data comprises voice-over content, a voice-over creator and a voice-over creation date, and the displaying the image and synchronously playing the voice-over data specifically comprises:
displaying the image and synchronously displaying or playing the voice-over content in the voice-over data;
and synchronously displaying the voice-over creator and the voice-over creation date in the voice-over data with the image in a text form.
The method for adding the voice-over to the image, wherein the voice-over data comprises voice-over verification data; the method further comprises:
when an operation of editing the voice-over data is received, verifying the operation of editing the voice-over data according to the voice-over verification data;
and when the verification is successful, editing the voice-over data according to the operation of editing the voice-over data.
The method for adding the voice-over to the image, wherein, when an operation of editing the voice-over data is received, verifying the operation according to the voice-over verification data specifically comprises:
when an operation of editing the voice-over data is received, judging whether voice-over verification data corresponding to the voice-over data is empty or not;
when the voice-over verification data is empty, judging that the voice-over data editing operation verification is successful;
when the voice-over verification data are not empty, comparing verification data corresponding to the voice-over data editing operation with the voice-over verification data;
and if the verification data is consistent with the voice-over verification data, judging that the voice-over data editing operation verification is successful.
An electronic device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, performs the steps of any of the above-described methods for adding a voice-over to an image.
Beneficial effects: compared with the prior art, the invention provides a method for adding a voice-over to an image, and an electronic device, wherein the method comprises receiving voice-over data corresponding to an image when the image is operated on, and associating the voice-over data with the image so that the voice-over data is displayed when the image is displayed. When the voice-over data is received, it can be stored together with the image, so that the voice-over data and the image are transmitted or displayed synchronously; a user can thus retrieve the emotional expression and other information associated with the image at any time, which improves the display effect of the image and brings convenience to the user, while also keeping the image meaningful and valuable for longer.
Drawings
FIG. 1 is a flowchart of a method for adding a voice-over to an image according to the present invention.
Fig. 2 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
The present invention provides a method for adding a voice-over to an image, and an electronic device. To make the purpose, technical solution and effects of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention will be further explained by the description of the embodiments with reference to the drawings.
Fig. 1 is a flowchart illustrating the method for adding a voice-over to an image provided in this embodiment. The method may be performed by a playback apparatus, which may be implemented in software and applied to an electronic device such as a smart phone, smart television, tablet computer or personal digital assistant, so that the electronic device can set voice-over data for an image and display the voice-over data synchronously when the image is displayed. Specifically, referring to fig. 1, the method provided in this embodiment comprises:
S10, when the image is operated on, voice-over data corresponding to the image is received.
Specifically, operating on the image may include capturing the image, editing the image, displaying the image, and the like. The operation may be performed on an electronic device capable of capturing, editing or displaying images; for example, the electronic device receives, through an APP or software configured on it, a user operation of adding a voice-over to a digital image. The electronic device may be a smart device capable of running such an APP or software, for example a smart phone, tablet computer, notebook computer or television. It may have only one of the functions of capturing, editing or displaying images, or several of these functions at the same time. In practical applications, a voice-over setting option may be preset in the electronic device or in an APP loaded on it, so that when an image is operated on, the voice-over adding function can be started through this option in order to receive the voice-over data corresponding to the image.
Further, in an implementation of this embodiment, the voice-over data may be a text voice-over or a spoken narration: a text voice-over is stored as text data, and a spoken narration is stored as audio data. The text data may be encoded with any text encoding, for example UTF-8, UTF-16, GB2312-80, GBK, Big5, etc.; the audio data may be encoded with any audio encoder, such as AVS audio, MP3, AAC, WAV, etc.
Further, in an implementation of this embodiment, the voice-over data may include voice-over content, a voice-over creator, and a voice-over creation date. The voice-over content is the data content of the voice-over and may express the creator's opinion of, or feelings about, the image; the voice-over creator is the user name of the user who created the voice-over data; and the voice-over creation date is the date on which the voice-over data was created, for example the date on which it was received. In practical applications the voice-over data may also include other information, such as image ownership, voice-over data ownership, a voice-over verification data identifier, and the like. Image ownership marks the user who owns the image content, e.g., the photographer; voice-over data ownership marks the user who owns the voice-over content, e.g., the voice-over creator; the voice-over verification data is used to verify editing operations performed on the voice-over data, and may, for example, be a password consisting of digits and/or letters; and the voice-over verification data identifier indicates whether the voice-over data carries verification data, for example, an identifier of 0 indicates that no verification data is carried and an identifier of 1 indicates that verification data is carried.
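The fields above can be modeled as a small record. The following is a minimal Python sketch with illustrative field names (they are assumptions for readability, not the patent's normative bitstream syntax, which appears later in Table 1):

```python
# Minimal sketch of a voice-over record; field names are illustrative,
# not the patent's normative bitstream syntax.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class VoiceOver:
    content: Union[str, bytes]         # narration text, or encoded audio payload
    creator: str                       # user name of the voice-over creator
    creation_date: str                 # e.g. "20190921" (YYYYMMDD)
    is_text: bool = True               # True: text data; False: audio data
    owns_visual_content: bool = False  # whether the creator also owns the image content
    verification_data: Optional[str] = None  # password guarding edit operations

    @property
    def has_verification(self) -> bool:
        # plays the role of the voice-over verification data identifier (0/1)
        return self.verification_data is not None
```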
S20, associating the voice-over data with the image, so that the voice-over data is displayed when the image is displayed.
Specifically, associating the voice-over data with the image means binding the voice-over data to the image and packaging them into the same image file or bitstream, so that the image is stored with the voice-over data and transmitted with the voice-over data. Thus, after the voice-over data is associated with the image, the voice-over data is transmitted synchronously whenever the image is transmitted, so that the terminal device receiving the image obtains the voice-over data as well, and the voice-over data can be displayed in synchronization with the image when the image is displayed. In practical applications, the voice-over data may be configured in advance with a display state, for example a hidden state and a shown state: when the voice-over data is in the shown state it is displayed in synchronization with the image, and when it is in the hidden state it is not displayed when the image is displayed. The display state may be configured by the voice-over creator at creation time, or configured when the image is displayed; for example, a voice-over display switch may be preset, such that when the switch is on the voice-over data is displayed in synchronization with the image, and when it is off only the image is displayed.
Further, in an implementation manner of this embodiment, the associating the voice-over data with the image specifically includes:
registering the voice-over data within a preset data structure according to a voice-over creator of the voice-over data to associate the voice-over data with the image, wherein the preset data structure is associated with the image.
Specifically, the preset data structure is a data structure provided in advance for storing the voice-over data; it is associated with the image and stored in the image file or bitstream of the image. When an image file or bitstream contains a plurality of images, only one of the images may have voice-over data, some of the images may have voice-over data, or all of them may have voice-over data. In addition, when voice-over data is received it may serve as the voice-over for all images in the image file or bitstream, in which case it is registered in the preset data structure corresponding to every image in the file or bitstream; alternatively, it may be set for one image or a subset of the images, in which case it only needs to be registered in the preset data structures corresponding to those images. In other words, when a plurality of images is stored in an image file or bitstream (that is, the image is stored in an image group comprising a plurality of images), one image in the group, several images, or every image may carry voice-over data, and the voice-over data corresponding to the respective images may be the same or different.
Further, the preset data structure may contain a plurality of pieces of voice-over data, which may be created by different users (for example, the photographer of the image, a viewer of the image, and so on) and on different dates: some entries may be created when the image is shot, some when it is edited, and some when it is displayed, and entries created during editing or display may come from different editing or display sessions. Each time a piece of voice-over data is created for the image, it must be registered in the preset data structure; it is registered with its voice-over creator as the identifier and stored in the structure at the same time.
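As a rough illustration of such registration, the sketch below reuses the VoiceOver record from the earlier sketch and keys each entry by its creator; the structure and method names are assumptions, not the patent's:

```python
from typing import Dict, List

class NarrationStructure:
    """Sketch of the 'preset data structure' bound to one image:
    voice-over entries are registered under the creator's name."""
    def __init__(self) -> None:
        self._entries: Dict[str, List[VoiceOver]] = {}

    def register(self, vo: VoiceOver) -> None:
        # the voice-over creator serves as the registration identifier;
        # one creator may register several entries on different dates
        self._entries.setdefault(vo.creator, []).append(vo)

    def entries(self) -> List[VoiceOver]:
        return [vo for group in self._entries.values() for vo in group]

# one structure per image in the file or bitstream; an image without
# voice-over data simply has no registered entries
narrations_by_image: Dict[str, NarrationStructure] = {}
```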
Further, in an implementation manner of this embodiment, since the voice-over data may be text data or audio data, when the voice-over data corresponding to the image is displayed, the display manner may be determined according to a data type of the voice-over data. Correspondingly, the data type of the voice-over data is text data or audio data; the method further comprises the following steps:
when the voice-over data corresponding to the image is displayed, acquiring a data type corresponding to the voice-over data;
when the voice-over data is text data, synchronously displaying the voice-over data and the image;
and when the voice-over data is audio data, displaying the image and synchronously playing the voice-over data.
Specifically, the data types include text data and audio data; the voice-over data is either text data or audio data, so after the voice-over data is acquired it can be presented according to its data type: text data is displayed in text form, and audio data is played in audio form. Further, when text data is displayed in synchronization with the image, it may be displayed independently of the image or superimposed on the image. That is, when the voice-over data is shown in text form it may be displayed independently of the image, for example in a floating window that does not intersect the image display area; it may be superimposed on the image, for example displayed as a subtitle below the image; or it may be animated, for example flying into the image as a floating window after the image is displayed and being shown with a preset animation (e.g., flashing). Further, in one implementation of this embodiment, when the image is displayed full-screen, text-form voice-over data is by default superimposed on the image. In addition, in practical applications, when one image corresponds to several pieces of voice-over data, the data type of each piece can be determined separately and each piece displayed accordingly.
It should be noted that, when the voice-over data is displayed in text form, only the voice-over content may be displayed, or the voice-over creator, the voice-over creation date and the voice-over content may be displayed together, for example by prefixing the creator and creation date before the start of the content.
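A sketch of this dispatch logic follows; show_image, draw_text and play_audio are hypothetical stand-ins for whatever rendering callbacks the display device provides:

```python
def present(vo: VoiceOver, show_image, draw_text, play_audio) -> None:
    """Display an image's voice-over according to its data type:
    text is shown with the image, audio is played while it is shown."""
    show_image()
    if vo.is_text:
        # creator and creation date may be prefixed before the content
        draw_text(f"{vo.creator} {vo.creation_date}: {vo.content}")
    else:
        # for audio narration the creator/date metadata stays textual
        draw_text(f"{vo.creator} {vo.creation_date}")
        play_audio(vo.content)
```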
Further, the voice-over data includes voice-over content, a voice-over creator and a voice-over creation date, and displaying the image while synchronously playing the voice-over data specifically comprises: displaying the image and synchronously playing the voice-over content of the voice-over data; and synchronously displaying the voice-over creator and the voice-over creation date with the image in text form. Thus, when the voice-over data is audio data, it can be played as the narration of the image, that is, the audio is played while the image is displayed. The display forms of the voice-over creator and the voice-over creation date may be the same as those used for text-form voice-over data, which may be referred to; they are not repeated here.
Further, in an implementation of this embodiment, a display form may be preset for the voice-over data, for example presetting that the voice-over data is displayed as text, or played as audio. In that case, before the voice-over data is displayed it is necessary to determine whether its data type is consistent with the display type: if so, the voice-over data is displayed or played directly; if not, the voice-over data must be converted, and the converted data is displayed or played. Correspondingly, the method further comprises:
when the voice-over data corresponding to the image is displayed, acquiring a display type corresponding to the voice-over data;
when the display type is text display, if the voice-over data is text data, synchronously displaying the voice-over data and the image; if the voice-over data is audio data, converting the voice-over data into text data, and synchronously displaying the converted data with the image;
and when the display type is audio playing, if the voice-over data is audio data, displaying the image and synchronously playing the voice-over data; if the voice-over data is text data, converting the voice-over data into audio data, displaying the image and synchronously playing the converted data.
Specifically, the display types include text display and audio playing, where text display presents the voice-over data in text form and audio playing presents it in audio form. When the voice-over data is to be displayed, it is first determined whether a display type has been configured in advance; if not, the voice-over data is displayed directly. If a display type has been configured, it is necessary to judge whether the data type of the voice-over data is consistent with the display type and to act according to the comparison. For example, when the display type is text display, text voice-over data is displayed directly, while audio voice-over data is converted into text data and the converted text is displayed; likewise, when the display type is audio playing, text data is converted into audio data and the converted audio is played. Further, when an image corresponds to several pieces of voice-over data, each piece is handled individually, that is, the operation of acquiring the display type is performed for each piece in order to display it.
It should be noted that, when audio data needs to be converted into text data or text data into audio data, it must first be determined whether such conversion is supported. If it is, the conversion is performed; if not, a conversion failure may be prompted and the voice-over data displayed directly, or a conversion failure may be prompted and the user advised to modify the voice-over display type.
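A sketch of this reconciliation between display type and data type, with tts (text-to-speech) and asr (speech-to-text) as hypothetical, optional converters; a missing converter models the "conversion not supported" case above:

```python
from dataclasses import replace

class ConversionUnsupported(Exception):
    """Raised when the required converter is unavailable; the caller may
    prompt the failure and show the data as-is, or suggest changing the
    display type, as described above."""

def normalize(vo: VoiceOver, display_type: str, tts=None, asr=None) -> VoiceOver:
    # display_type is "text" or "audio"
    if (display_type == "text") == vo.is_text:
        return vo  # data type already matches the display type
    if display_type == "text":
        if asr is None:
            raise ConversionUnsupported("audio -> text")
        return replace(vo, content=asr(vo.content), is_text=True)
    if tts is None:
        raise ConversionUnsupported("text -> audio")
    return replace(vo, content=tts(vo.content), is_text=False)
```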
Further, when the image carries the voice-over data, the voice-over data can be edited by a voice-over creator or a user of the image. For example, an electronic device or APP for manipulating the image may be configured with an editing option by which the voice-over data of the image may be edited.
Correspondingly, the method comprises the following steps:
when an operation of editing the voice-over data is received, verifying the operation of editing the voice-over data according to the voice-over verification data;
and when the verification is successful, editing the voice-over data according to the operation of editing the voice-over data.
Specifically, the voice-over data carries voice-over verification data, and the operation of editing the voice-over data is verified against it to determine whether the operation has the right to edit the voice-over data. The voice-over verification data is stored in the voice-over data in advance and is used to verify edits to the voice-over data. The editing operation carries its own verification data, which is compared with the voice-over verification data; when the two are consistent, the editing operation is judged to have passed verification, and the voice-over data can then be edited according to the operation. The operation of editing the voice-over data may include a modification operation, a deletion operation, and the like: a modification operation modifies the voice-over content (for example, adding, changing or deleting content, or changing the format of the voice-over data, such as converting audio into text or text into audio and then editing the converted data accordingly), and a deletion operation deletes the voice-over data. In practical applications, the editing operation may also edit the voice-over verification data itself, and so on.
Further, in an implementation of this embodiment, the voice-over data may or may not carry voice-over verification data, so when an operation of editing the voice-over data is received it can first be determined whether verification data is stored. Correspondingly, when an operation of editing the voice-over data is received, verifying the operation according to the voice-over verification data specifically comprises:
when an operation of editing the voice-over data is received, judging whether voice-over verification data corresponding to the voice-over data is empty or not;
when the voice-over verification data is empty, judging that the voice-over data editing operation verification is successful;
when the voice-over verification data are not empty, comparing verification data corresponding to the voice-over data editing operation with the voice-over verification data;
and if the verification data is consistent with the voice-over verification data, judging that the voice-over data editing operation verification is successful.
Specifically, the voice-over verification data being empty indicates that the voice-over data carries no verification data, and in that case the editing operation is judged to have passed verification; that is, voice-over data for which no verification data is configured can be edited directly. When the voice-over verification data is not empty, the verification data carried by the editing operation must be compared with the voice-over verification data corresponding to the voice-over data to determine whether the editing operation may be executed, which improves the security of the voice-over data.
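The verification rule reduces to a short check; a minimal sketch, with illustrative function names:

```python
from typing import Optional

def verify_edit(vo: VoiceOver, supplied: Optional[str]) -> bool:
    """Empty (absent) verification data always verifies; otherwise the
    value supplied with the edit operation must match the stored one."""
    if vo.verification_data is None:
        return True
    return supplied == vo.verification_data

def edit_content(vo: VoiceOver, new_content: str,
                 supplied: Optional[str] = None) -> None:
    if not verify_edit(vo, supplied):
        raise PermissionError("voice-over edit verification failed")
    vo.content = new_content  # modification; deletion would be analogous
```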
In addition, to further explain the method for adding a voice-over to a digital image provided by this embodiment, a specific implementation is described below. The method is implemented here in the AVS image container standard; the preset data structure may equally be carried in the EXIF of a JPEG image, or implemented with a data structure defined in the MPEG standards (e.g., ISOBMFF (ISO base media file format), HEIF (high efficiency image file format), etc.).
By way of example, Table 1 gives an implementation of the voice-over data structure in this embodiment.
Table 1 Voice-over data implementation example
[Table 1 is reproduced as an image in the original publication and is not included here.]
The narration_data_start_code is a four-byte bit string with a fixed pattern, usually consisting of a start code prefix (a unique sequence of three bytes) followed by one byte identifying the start of the narration data segment in the bitstream. The text_encoding_standard_id indicates the text encoding format used in the narration data segment; the same encoding is also used for the voice-over creator and the voice-over creation date. For example,
[The table of text_encoding_standard_id values is reproduced as an image in the original publication.]
Further, narration_author_name represents the name of the person or organization that created the voice-over data, and narration_author_name_length represents the byte length of narration_author_name. narration_creation_date indicates the creation date of the voice-over data in numeric format: it consists of 8 decimal digits, stored two digits per byte, as a 4-digit year, a 2-digit month and a 2-digit day, for example 20190921 (September 21, 2019) or 20191030 (October 30, 2019).
Further, visual_content_ownership_flag indicates whether the visual content of the image is owned by the voice-over creator: a value of 1 indicates that the creator owns the visual content, and a value of 0 indicates that the creator does not.
Further, private_protection_flag indicates whether the voice-over data is protected narration content, i.e., whether voice-over verification data is configured: a flag of 1 indicates that verification data is configured, and a flag of 0 indicates that it is not. protection_password carries the voice-over verification data, e.g., a password used to protect the voice-over data. narration_data_type indicates the data type of the voice-over data, for example 1 for text data and 0 for audio data. narration_audio_codec_id indicates the audio codec used to encode the audio data, for example,
[The table of narration_audio_codec_id values is reproduced as an image in the original publication.]
Further, the syntax descriptors are: f(n), a bit string of n bits with a fixed pattern, written left to right; b(8), a byte (8 bits) with an arbitrary bit-string pattern; and u(n), an unsigned integer of n bits.
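Since Table 1 itself is only available as an image, the following Python sketch illustrates one plausible serialization of the fields described above; the field order, widths, start-code value and flag layout are assumptions for illustration, not the normative AVS syntax:

```python
import struct

START_CODE_PREFIX = b"\x00\x00\x01"  # the three-byte start code prefix (assumed value)
NARRATION_CODE = 0xB8                # segment code byte (assumed value)

def pack_date(yyyymmdd: str) -> bytes:
    # 8 decimal digits, two digits per byte: "20190921" -> 4 bytes
    assert len(yyyymmdd) == 8 and yyyymmdd.isdigit()
    return bytes(int(yyyymmdd[i:i + 2]) for i in range(0, 8, 2))

def serialize_narration(vo: VoiceOver, text_encoding_id: int = 0) -> bytes:
    out = bytearray()
    out += START_CODE_PREFIX + bytes([NARRATION_CODE])  # narration_data_start_code, f(32)
    out += bytes([text_encoding_id])                    # text_encoding_standard_id, u(8)
    name = vo.creator.encode("utf-8")                   # assumes name fits in one length byte
    out += bytes([len(name)]) + name                    # narration_author_name_length + name
    out += pack_date(vo.creation_date)                  # narration_creation_date
    out += bytes([int(vo.owns_visual_content)])         # visual_content_ownership_flag
    out += bytes([int(vo.has_verification)])            # private_protection_flag
    if vo.has_verification:
        pwd = vo.verification_data.encode("utf-8")
        out += bytes([len(pwd)]) + pwd                  # protection_password (length-prefixed, assumed)
    out += bytes([int(vo.is_text)])                     # narration_data_type
    payload = vo.content.encode("utf-8") if vo.is_text else vo.content
    out += struct.pack(">I", len(payload)) + payload    # narration content (u(32) length assumed)
    return bytes(out)
```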
By way of example, Table 2 presents how the voice-over data is carried in the AVS image container standard.
Table 2 Example of carrying the voice-over data in the AVS image container standard
[Table 2 is reproduced as an image in the original publication and is not included here.]
It is worth noting that, in the AVS image container, a user_data_type field is added on top of the preset data structure; user_data_type describes the purpose of the user data, for example,
user data usage              user_data_type value
Narrative (voice-over data)  1
Reserved                     reserved
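Carried in the AVS container, the narration segment would then simply be tagged as user data; a one-line sketch under the same assumptions as above:

```python
USER_DATA_TYPE_NARRATION = 1  # "Narrative" per the table above

def wrap_as_user_data(narration_segment: bytes) -> bytes:
    # prepend the user_data_type so a parser can recognize the payload as
    # voice-over (narrative) data; the exact framing is an assumption
    return bytes([USER_DATA_TYPE_NARRATION]) + narration_segment
```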
Based on the foregoing method for adding a voice-over to an image, this embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the method for adding a voice-over to an image according to the foregoing embodiment.
Based on the above method for adding a voice-over to an image, the present invention further provides an electronic device which, as shown in fig. 2, comprises at least one processor 20, a display screen 21 and a memory 22, and may further comprise a communication interface 23 and a bus 24. The processor 20, the display screen 21, the memory 22 and the communication interface 23 communicate with one another through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods of the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 22 may include high-speed random access memory and may also include non-volatile memory, for example any of a variety of media that can store program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk; it may also be a transitory storage medium.
In addition, the specific processes by which the storage medium and the instructions in the electronic device are loaded and executed by the processor are described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A method for adding a voice-over to an image, the method comprising:
when an image is operated, receiving the voice-over data corresponding to the image;
associating the voice-over data with the image such that the voice-over data is displayed when the image is displayed.
2. The method for adding a voice-over to an image according to claim 1, wherein the image corresponds to a plurality of pieces of voice-over data, and wherein the voice-over data are configured for the image by one user or by a plurality of users.
3. The method for adding a voice-over to an image according to claim 1, wherein the voice-over data comprises voice-over content, a voice-over creator and a voice-over creation date, and the associating the voice-over data with the image specifically comprises:
registering the voice-over data in a preset data structure according to the voice-over creator of the voice-over data, so as to associate the voice-over data with the image, wherein the preset data structure is stored bound to the image.
4. The method for adding a voice-over to an image according to claim 1, wherein, when the image is stored in an image group, one image in the group, several images, or every image may have voice-over data, and the voice-over data corresponding to the respective images are the same or different.
5. The method for adding a voice-over to an image according to claim 1, wherein the data type of the voice-over data is text data or audio data; the method further comprises:
when the voice-over data corresponding to the image is displayed, acquiring a data type corresponding to the voice-over data;
when the voice-over data is text data, synchronously displaying the voice-over data and the image;
and when the voice-over data is audio data, displaying the image and synchronously playing the voice-over data.
6. The method for adding a voice-over to an image according to claim 1, wherein the data type of the voice-over data is text data or audio data; the method further comprises:
when the voice-over data corresponding to the image is displayed, acquiring a display type corresponding to the voice-over data;
when the display type is text display, if the voice-over data is text data, synchronously displaying the voice-over data and the image; if the voice-over data is audio data, converting the voice-over data into text data, and synchronously displaying the converted data with the image;
and when the display type is audio playing, if the voice-over data is audio data, displaying the image and synchronously playing the voice-over data; if the voice-over data is text data, converting the voice-over data into audio data, displaying the image and synchronously playing the converted data.
7. The method for adding a voice-over to an image according to claim 5 or 6, wherein, when the text data is displayed in synchronization with the image, the text data is displayed independently of the image or superimposed on the image.
8. The method for adding a voice-over to an image according to claim 5 or 6, wherein the voice-over data comprises voice-over content, a voice-over creator and a voice-over creation date, and the displaying the image and synchronously playing the voice-over data specifically comprises:
displaying the image and synchronously playing the voice-over content in the voice-over data;
and synchronously displaying the voice-over creator and the voice-over creation date in the voice-over data with the image in a text form.
9. The method for adding a voice-over to an image according to claim 1, wherein the voice-over data comprises voice-over verification data; the method further comprises:
when an operation of editing the voice-over data is received, verifying the operation of editing the voice-over data according to the voice-over verification data;
and when the verification is successful, editing the voice-over data according to the operation of editing the voice-over data.
10. The method for adding a voice-over to an image according to claim 9, wherein, when an operation of editing the voice-over data is received, verifying the operation according to the voice-over verification data specifically comprises:
when an operation of editing the voice-over data is received, judging whether voice-over verification data corresponding to the voice-over data is empty or not;
when the voice-over verification data is empty, judging that the voice-over data editing operation verification is successful;
when the voice-over verification data are not empty, comparing verification data corresponding to the voice-over data editing operation with the voice-over verification data;
and if the verification data is consistent with the voice-over verification data, judging that the voice-over data editing operation verification is successful.
11. An electronic device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, performs the steps in the method for adding a voice-over to an image as claimed in any one of claims 1-10.
CN201911196619.3A 2019-11-29 2019-11-29 Method for adding a voice-over to an image and electronic device Active CN111046199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911196619.3A CN111046199B (en) 2019-11-29 2019-11-29 Method for adding a voice-over to an image and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911196619.3A CN111046199B (en) 2019-11-29 2019-11-29 Method for adding a voice-over to an image and electronic device

Publications (2)

Publication Number Publication Date
CN111046199A (en) 2020-04-21
CN111046199B CN111046199B (en) 2024-03-19

Family

ID=70234028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911196619.3A Active CN111046199B (en) 2019-11-29 2019-11-29 Method for adding a voice-over to an image and electronic device

Country Status (1)

Country Link
CN (1) CN111046199B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1754160A (en) * 2002-12-30 2006-03-29 小利兰斯坦福大学理事会 Methods and apparatus for interactive point-of-view authoring of digital video content
CN101206640A (en) * 2006-12-22 2008-06-25 深圳市学之友教学仪器有限公司 Method and system for annotations and commentaries of electric data in portable electronic equipment
CN103024602A (en) * 2011-09-23 2013-04-03 华为技术有限公司 Method and device for adding annotations to videos
CN102542043A (en) * 2011-12-27 2012-07-04 方正国际软件有限公司 Image annotation method and device
CN105991696A (en) * 2015-02-06 2016-10-05 北京网梯科技发展有限公司 Communication method and system realizing sharing and interaction for non-touch-reading resources
CN106371702A (en) * 2016-09-07 2017-02-01 北京金山软件有限公司 Image information editing method and device and mobile terminal
CN106484796A (en) * 2016-09-22 2017-03-08 宇龙计算机通信科技(深圳)有限公司 File management method, document management apparatus and mobile terminal
CN106648327A (en) * 2016-12-29 2017-05-10 北京珠穆朗玛移动通信有限公司 Picture processing method and mobile terminal
CN107635153A (en) * 2017-09-11 2018-01-26 北京奇艺世纪科技有限公司 A kind of exchange method and system based on image data

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021227580A1 (en) * 2020-05-15 2021-11-18 Oppo广东移动通信有限公司 Information processing method, encoder, decoder, and storage medium device
WO2022037026A1 (en) * 2020-08-21 2022-02-24 Oppo广东移动通信有限公司 Information processing method, encoder, decoder, storage medium, and device
CN112235517A (en) * 2020-09-29 2021-01-15 北京小米松果电子有限公司 Method and apparatus for adding voice-over, and storage medium
CN112235517B (en) * Method for adding a voice-over, device for adding a voice-over, and storage medium

Also Published As

Publication number Publication date
CN111046199B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN111046199B (en) Method for adding a voice-over to an image and electronic device
US9160719B2 (en) Hiding ciphertext using a linguistics algorithm with dictionaries
CN104680077B (en) Method for encrypting picture, method for viewing picture, system and terminal
JP2007026427A (en) Information management method using managing symbol and information management server
CN102970307B (en) Cipher safety system and password safety method
CN106027608B (en) A kind of picture upload method, client and server
CN101763397B (en) Device and method for processing expanding information in image file
CN103532960B (en) Decrypt device
CN110248116B (en) Picture processing method and device, computer equipment and storage medium
KR100828479B1 (en) Apparatus and method for inserting addition data in image file on electronic device
CN103646048A (en) Method and device for achieving multimedia pictures
US11258922B2 (en) Method of combining image files and other files
CN106209575A (en) Method for sending information, acquisition methods, device and interface system
CN111836054B (en) Video anti-piracy method, electronic device and computer readable storage medium
CN106878145B (en) Display method, display device and display system of user-defined picture
WO2016188079A1 (en) Data storage method for terminal device and terminal device
US10972746B2 (en) Method of combining image files and other files
WO2017201999A1 (en) File encryption method, device, terminal and storage medium
CN112492248B (en) Video verification method and device
CN111966973A (en) Copyright protection method and system based on picture pixel value steganography
JP2006172146A (en) Device with data management function, and data management program
JP4373341B2 (en) How to determine if an encoded file is usable by an application
US10319059B1 (en) GIF file with hidden images and selectable playback that is activated based on a user ID
Risemberg File Structure Analysis of Media Files Sent and Received over Whatsapp
JP2008311806A (en) Content providing system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant