CN111212196A - Information processing method and device, electronic equipment and storage medium

Information processing method and device, electronic equipment and storage medium

Info

Publication number
CN111212196A
CN111212196A (application CN202010033149.5A)
Authority
CN
China
Prior art keywords
image frame
identification information
information
extraction result
hidden
Prior art date
Legal status
Granted
Application number
CN202010033149.5A
Other languages
Chinese (zh)
Other versions
CN111212196B
Inventor
陈英震
张泽
裴欢
方琪
朱斌
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202010033149.5A
Publication of CN111212196A
Application granted
Publication of CN111212196B
Active legal status
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title, embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32347 Reversible embedding, i.e. lossless, invertible, erasable, removable or distortion-free embedding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The present disclosure relates to an information processing method and apparatus, an electronic device, and a storage medium, wherein the method includes: acquiring a first image frame, wherein the first image frame comprises identification information of the first image frame; extracting hidden information carried by the first image frame to obtain an extraction result of the hidden information; and replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced. The embodiment of the disclosure can improve the accuracy and filling efficiency of the identification information of the first image frame.

Description

Information processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an information processing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of electronic technology and information technology, the amount of information has increased explosively. In order to conveniently find the target information in the large amount of information, a corresponding identifier or index is usually set for the information, so that the target information can be quickly searched in the large amount of information through the identifier or index.
In some information processing scenarios, information such as video or image information may reach a user only after being copied many times. Between its generation and its arrival at the user, the information may pass through multiple rounds of transfer, which easily results in an inaccurate identifier or index.
Disclosure of Invention
The present disclosure proposes an information processing technical solution.
According to an aspect of the present disclosure, there is provided an information processing method including: acquiring a first image frame, wherein the first image frame comprises identification information of the first image frame; extracting hidden information carried by the first image frame to obtain an extraction result of the hidden information; and replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced.
In one or more optional embodiments, the extracting hidden information carried in the first image frame to obtain an extraction result of the hidden information includes: determining a target image area of the first image frame, wherein the target image area is used for indicating the position of the hidden information in the first image frame; extracting character features of the target image area to obtain the character features of the target image area; and obtaining an extraction result of the hidden information based on the character features of the target image area.
In one or more optional embodiments, the determining the target image region of the first image frame comprises: and determining a target image area of the first image frame according to a preset standard position.
In one or more optional embodiments, the replacing, based on the extraction result of the hidden information, the identification information of the first image frame to obtain a first image frame after identification information replacement includes: judging whether the confidence of the extraction result is greater than a preset confidence threshold; and under the condition that the confidence of the extraction result is greater than a preset confidence threshold, replacing the identification information of the first image frame according to the extraction result to obtain the first image frame after the identification information is replaced.
In one or more optional embodiments, the method further comprises: and in the case that the confidence of the extraction result is less than or equal to the confidence threshold, discarding the extraction result and acquiring the first image frame again.
In one or more optional embodiments, the acquiring a first image frame comprises: the first image frame is acquired in an offline file.
In one or more optional embodiments, the method further comprises: and correcting the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition time; the modifying the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced includes: determining a play time difference between the first image frame and the second image frame; and correcting the acquisition time of at least one second image frame included in the offline file according to the playing time difference and the acquisition time after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition location; the modifying the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced includes: and determining the acquisition place of the first image frame after replacement as the acquisition place of the second image frame after correction.
In one or more optional embodiments, the hidden information includes at least one of the following information: digital watermarking; a graphic code; steganographic text.
According to an aspect of the present disclosure, there is provided an information processing apparatus including: the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first image frame, and the first image frame comprises identification information of the first image frame; the extraction module is used for extracting the hidden information carried by the first image frame to obtain an extraction result of the hidden information; and the replacing module is used for replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced.
In one or more optional embodiments, the extraction module is specifically configured to determine a target image region of the first image frame, where the target image region is used to indicate a position of the hidden information in the first image frame; extracting character features of the target image area to obtain the character features of the target image area; and obtaining an extraction result of the hidden information based on the character features of the target image area.
In one or more optional embodiments, the extraction module is specifically configured to determine the target image area of the first image frame according to a preset canonical position.
In one or more optional embodiments, the replacement module is specifically configured to determine whether a confidence of the extraction result is greater than a preset confidence threshold; and under the condition that the confidence of the extraction result is greater than a preset confidence threshold, replacing the identification information of the first image frame according to the extraction result to obtain the first image frame after the identification information is replaced.
In one or more optional embodiments, the replacement module is further configured to discard the extraction result and reacquire the first image frame if the confidence of the extraction result is less than or equal to the confidence threshold.
In one or more optional embodiments, the obtaining module is specifically configured to obtain the first image frame in an offline file.
In one or more optional embodiments, the apparatus further comprises: and the correction module is used for correcting the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition time; the correction module is specifically configured to determine a play time difference between the first image frame and the second image frame; and correcting the acquisition time of at least one second image frame included in the offline file according to the playing time difference and the acquisition time after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition location; the correction module is specifically configured to determine the acquisition location where the first image frame is replaced as the acquisition location where the second image frame is corrected.
In one or more optional embodiments, the hidden information includes at least one of the following information: digital watermarking; a graphic code; steganographic text.
According to another aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above-described information processing method is executed.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described information processing method.
In the disclosed embodiment, a first image frame may be acquired, the first image frame including identification information of the first image frame. And then extracting the hidden information carried by the first image frame to obtain an extraction result of the hidden information, and replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced. By the method, the identification information of the first image frame can be automatically replaced, the labor cost for manually filling the identification information is saved, and the accuracy and filling efficiency of the identification information of the first image frame are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of an information processing apparatus according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
According to the information processing scheme provided by the embodiment of the disclosure, the first image frame can be acquired, the hidden information carried by the first image frame is extracted to obtain the extraction result of the hidden information, and then the identification information of the first image frame is replaced based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced. Therefore, the identification information of the first image frame can be automatically replaced through the hidden information carried in the first image frame, and human resources for manually filling the identification information can be saved.
In related schemes, the first image frame may pass through multiple rounds of transfer, so its identification information is often not accurate enough and is usually filled in manually. When there are a large number of first image frames, filling in their identification information consumes considerable human effort. Moreover, manual filling is somewhat subjective and error-prone, so the accuracy of the identification information is difficult to guarantee. The information processing method provided by the embodiments of the present disclosure can replace the identification information of the first image frame automatically, improving both the efficiency and the accuracy of filling in identification information.
The technical solutions provided by the embodiments of the present disclosure can be applied to scenarios such as labeling image or video files, modifying identification information, and replacing file names. For example, in a security scenario, identification information for captured videos or images can be filled in or named automatically, and clue searches can then be performed using that identification information, improving case-handling efficiency. The embodiments of the present disclosure do not limit the specific application scenario.
Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure. The information processing method may be performed by a terminal device, a server, or other types of electronic devices, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the information processing method may be implemented by a processor calling computer readable instructions stored in a memory. The information processing method according to the embodiment of the present disclosure is described below by taking an electronic device as an execution subject.
Step S11, a first image frame is acquired, the first image frame including identification information of the first image frame.
In the disclosed embodiment, the first image frame may be any one of images acquired by the electronic device. The electronic device may perform image acquisition on a current scene, resulting in one or more first image frames. Alternatively, the electronic device may select one or more image frames among a plurality of image frames included in the video file as the first image frame. In some implementations, the electronic device may acquire one or more first image frames from other devices. The first image frame may include identification information of the image frame. The identification information may be used to indicate a certain image frame, which may be looked up in a plurality of image frames by the identification information of the image frame. For example, the identification information may include information such as a number, a name, and the like, and in some implementations, the identification information may further include acquisition information such as an acquisition location, an acquisition time, and the like of the image frame.
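For illustration only, the identification information described above (a number or name, plus optional acquisition information) can be modeled as a simple record. The field names below are hypothetical, chosen to mirror the description rather than any structure defined by the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentificationInfo:
    """Identification information attached to an image frame (hypothetical fields)."""
    number: str                              # number used to look the frame up
    name: str = ""                           # human-readable name
    acquisition_time: Optional[str] = None   # e.g. "2020-01-13 10:00:00"
    acquisition_place: Optional[str] = None  # e.g. "Gate 3, Building A"

# A frame can then be found among many frames through its identification information:
frames = [IdentificationInfo(number=str(i)) for i in range(5)]
match = next(f for f in frames if f.number == "3")
```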
In one possible implementation, one or more first image frames may be acquired from an offline file. The offline file may be an offline video file, an offline image file, or the like, and may include at least one image frame. For example, in a security scene, a video monitoring device may monitor the scene in real time, producing a large number of offline video files that can provide clues to security personnel. One or more first image frames may be acquired from an offline video file that has been copied multiple times according to a preset rule, for example at a preset video frame interval. Alternatively, one or more first image frames may be acquired at random from the offline video file. By acquiring the first image frame from an offline file, the information processing scheme provided by the present disclosure can be applied to offline files, mitigating the severe loss of identification information caused by repeated transfers of the offline file.
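The preset-rule acquisition above can be sketched minimally as sampling at a fixed frame interval; the interval value and the list representation of an offline file are assumptions for illustration:

```python
def sample_first_frames(frames, interval=25):
    """Select candidate first image frames from an offline file at a preset
    frame interval (the default interval is an illustrative assumption)."""
    return [frames[i] for i in range(0, len(frames), interval)]

# An offline video file is represented here as a plain list of frames.
offline_file = list(range(100))
candidates = sample_first_frames(offline_file, interval=30)
```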
Step S12, extracting hidden information carried in the first image frame to obtain an extraction result of the hidden information.
In the disclosed embodiment, the first image frame may carry hidden information therein. The hidden information may be information embedded during or after the first image frame is generated, and the hidden information may play a role in protecting, preventing counterfeiting, tracing to the source, and the like for the first image frame. Here, the hidden information may include information related to the first image frame, for example, the hidden information may include an acquisition time, an acquisition place, and the like of the first image frame. The hidden information carried by the first image frame is extracted, so that an extraction result of the hidden information can be obtained, and the extraction result can be related information of the first image frame included in the hidden information.
In one possible implementation, the hidden information may have various forms of information, which may include at least one of the following: digital watermarking; a graphic code; steganographic text.
In this implementation, the hidden information may be embedded in the first image frame as a digital watermark. A digital watermark does not affect the use of the first image frame and is difficult to modify, so it can convey hidden information or reveal whether the first image frame has been tampered with. The hidden information may also be carried in the first image frame as a graphic code, such as a bar code or a two-dimensional code; graphic codes have strong fault tolerance and error correction capability and high reliability. The hidden information may also be carried in the first image frame as steganographic text: the steganographic text is synthesized into the first image frame together with the image information and can later be separated from it by a specific separation method (for example, format conversion) to recover the hidden information. Steganographic text hides the information in the first image frame without damaging its image quality, so the hidden information is not easily perceived.
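As a minimal sketch of the steganographic idea only (not the disclosure's specific embedding method), hidden bits can be carried in the least significant bit of pixel values, which changes each pixel by at most one intensity level and so leaves the image visually unchanged:

```python
def embed_bits(pixels, bits):
    """Write each hidden bit into the least significant bit of a pixel value."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_bits(pixels, n):
    """Recover the first n hidden bits from the pixel values."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 131, 54, 77, 90, 16]      # toy grayscale pixel values
stego = embed_bits(pixels, [1, 0, 1, 1])  # frame now carries the hidden bits
hidden = extract_bits(stego, 4)           # extraction result
```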
Step S13, replacing the identification information of the first image frame based on the extraction result of the hidden information, to obtain a first image frame after the identification information replacement.
In the embodiment of the present disclosure, the identification information of the first image frame may not be accurate enough, so that the identification information of the first image frame may be replaced by the extraction result of the hidden information, for example, the identification information of the first image frame is replaced by the extraction result of the hidden information, so as to obtain the first image frame after the identification information is replaced. The first image frame after the identification information replacement has more accurate identification information, so that an accurate search result can be obtained through the identification information under the condition of searching the first image frame.
For example, in a security scene, the identification information of a first image frame that has been copied multiple times may not be accurate enough. If the hidden information of the first image frame includes an acquisition time and an acquisition place, the hidden information carried by the first image frame may be extracted to obtain the extracted acquisition time and acquisition place (the extraction result), which may then be used as the identification information of the first image frame, thereby replacing it.
In a possible implementation manner, the extraction result obtained by extracting the hidden information of the first image frame may be ciphertext obtained by encrypting the hidden information. After the extraction result is obtained, it can be decrypted using a preset decryption method to recover the hidden information. For example, while capturing the first image frame, the image acquisition device may encrypt the hidden information with a key negotiated with the electronic device and carry the resulting ciphertext in the first image frame; after obtaining the extraction result, the electronic device decrypts it with the negotiated key to recover the hidden information. Alternatively, the image acquisition device may encrypt the hidden information with a private key negotiated with the electronic device, and the electronic device may decrypt the extraction result with the public key corresponding to that private key. Carrying the hidden information in the first image frame in encrypted form improves its security and reduces the possibility that it is tampered with or damaged.
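A minimal sketch of the symmetric variant: the capture device and the electronic device share a negotiated key, the hidden information is encrypted before being embedded and decrypted after extraction. A simple XOR keystream stands in here for a real cipher (a production system would use something like AES); the key and payload values are illustrative assumptions:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with a repeating key; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"negotiated-key"          # shared between capture device and reader
hidden = b"2020-01-13|Gate 3"    # acquisition time and place
ciphertext = xor_crypt(hidden, key)      # embedded into the first image frame
recovered = xor_crypt(ciphertext, key)   # decrypted after extraction
```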
In the above step S12, the hidden information carried in the first image frame is extracted to obtain the extraction result of the hidden information, and this step is described below through a possible implementation manner.
In one possible implementation, a target image area of the first image frame may be determined, the target image area being used to indicate a location of the hidden information in the first image frame. And then, extracting character features of the target image area to obtain the character features of the target image area, and obtaining an extraction result of the hidden information based on the character features of the target image area.
In this implementation, the target image area indicates the position of the hidden information in the first image frame, so the target image area of the first image frame may be determined first. Character features may then be extracted from the target image area, for example by using an optical character recognition model to perform operations such as brightness detection and character segmentation. The extracted character features are then compared with those in a character library to obtain the extraction result of the hidden information. By extracting character features from the target image area, the extraction result of the hidden information can be obtained quickly.
Here, the target image region of the first image frame may be determined according to image features of the first image frame: for example, features extracted from the first image frame may be compared with preset image features, and the image region whose features match the preset image features may be determined as the target image region based on the comparison result. Alternatively, the target image area may be determined from an image area having a preset boundary characteristic; for example, an image area surrounded by a certain dotted-line boundary may be determined as the target image area.
In one example of this implementation, the target image area of the first image frame may be determined according to a preset canonical location. In this example, the preset canonical location may be negotiated between the image capture device and the electronic device, and the image capture device may add hidden information of the first image frame at the preset canonical location in a case of capturing the first image frame, for example, add information of the capture time, the capture location, and the like of the first image frame at the lower left corner of the first image frame. After acquiring the first image frame, the electronic device may use the canonical position as a target image area, and extract hidden information of the first image frame in the target image area. Therefore, the target image area of the first image frame can be accurately determined according to the preset standard position, and therefore the efficiency and the accuracy of extracting the hidden information are improved.
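The agreed-upon canonical position can be expressed as a fixed crop. The sketch below assumes, per the example above, that the hidden information occupies a rectangle in the lower-left corner of the frame, with the frame stored as a row-major nested list of pixel values:

```python
def crop_target_region(frame, region_h, region_w):
    """Cut out the lower-left target image area at the preset canonical position."""
    h = len(frame)
    return [row[:region_w] for row in frame[h - region_h:]]

# 4x6 toy frame whose pixel value encodes its (row, column) position.
frame = [[r * 10 + c for c in range(6)] for r in range(4)]
region = crop_target_region(frame, region_h=2, region_w=3)
# hidden-information extraction (e.g. OCR) would then run only on `region`
```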
In step S13, the identification information of the first image frame may be replaced based on the extraction result of the hidden information, so that the identification information of the first image frame may be corrected and filled. This step is explained below by one implementation.
In a possible implementation manner, it may be determined whether the confidence of the extraction result is greater than a preset confidence threshold, and the identification information of the first image frame is replaced according to the extraction result under the condition that the confidence of the extraction result is greater than the preset confidence threshold, so as to obtain the first image frame after the identification information is replaced.
In this implementation, the confidence of the extraction result may be used to indicate the accuracy of the extraction result, so that the confidence of the extraction result may be compared with a preset confidence threshold, and in the case that the confidence of the extraction result is greater than the preset confidence threshold, the extraction result may be considered to be authentic, so that the identification information of the first image frame may be replaced with the extraction result. Here, the preset confidence threshold may be set according to an actual application scenario, for example, to 80%, 90%, and the like. The accuracy of the extraction result can be evaluated by judging whether the confidence of the extraction result is greater than a preset confidence threshold, so that the identification information of the first image frame can be replaced under the condition that the accuracy of the extraction result is higher, and the first image frame has accurate identification information.
Here, the confidence of the extraction result may be an output of an optical character recognition model, which may be a deep learning network model. When extracting the hidden information carried in the first image frame, the optical character recognition model may output both the extraction result of the hidden information and the confidence of that extraction result, so that whether the extraction result is accurate can be determined according to the confidence.
In one example of this implementation, in the event that the confidence of the extraction result is less than or equal to the confidence threshold, the extraction result may be discarded and the first image frame re-acquired.
In this example, in the case that the confidence of the extraction result is less than or equal to the preset confidence threshold, the extraction result may be considered unreliable, so that it may be discarded and the first image frame re-acquired, for example, from the offline file; the hidden information of the re-acquired first image frame may then be extracted, so that its identification information can be replaced. By judging whether the confidence of the extraction result is greater than the preset confidence threshold, inaccurate extraction results can be discarded, so that the accuracy of the extraction result is effectively evaluated through the confidence threshold.
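The confidence gate and discard-and-retry behavior described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the OCR call is stubbed out as a callable, the 90% threshold is one of the example values, and re-acquisition is simulated by retrying.

```python
# Hypothetical sketch of the confidence gate. `extract_fn(frame)` stands in
# for the OCR model and returns (extraction_result, confidence).
CONFIDENCE_THRESHOLD = 0.9  # e.g. 90%; chosen per application scenario

def replace_identification(frame, extract_fn, max_retries=3):
    """Replace a frame's identification info only when OCR is confident.

    Low-confidence results are discarded and the frame is re-acquired
    (simulated here by retrying the extraction).
    """
    for _ in range(max_retries):
        result, confidence = extract_fn(frame)
        if confidence > CONFIDENCE_THRESHOLD:
            # Confident result: replace the identification information.
            return dict(frame, id_info=result)
        # Confidence too low: discard the result and re-acquire the frame.
    return frame  # unchanged if no confident extraction was obtained

frame = {"id_info": None, "pixels": "..."}
updated = replace_identification(frame, lambda f: ("2007-05-09 09:00", 0.95))
# updated["id_info"] is now the extraction result.
```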
According to the embodiment of the disclosure, the hidden information carried by the first image frame can be extracted, and the identification information of the first image frame is replaced according to the obtained extraction result, so that the identification information of the first image frame can be automatically filled and corrected. The embodiment of the disclosure further provides a scheme for correcting the identification information of other image frames in the offline file by using the first image frame after the identification information is replaced.
In one possible implementation manner, the identification information of at least one second image frame included in the offline file may be modified according to the identification information after the first image frame is replaced.
In this implementation manner, the first image frame and the second image frame may be from the same offline file, so that the identification information of the second image frame in the offline file may be corrected using the replaced identification information of the first image frame as a reference. For example, the replaced identification information of the first image frame may include the order of the first image frame in the offline file, and the identification information of the second image frame may be modified according to that replaced identification information and the relative order of the first image frame and the second image frame in the offline file. If the order of the first image frame in the offline file is 10 and the relative order of the first image frame and the second image frame is 5, the identification information of the second image frame may be modified to the sum of the two, that is, 15. Through the replaced identification information of the first image frame, the identification information of at least one second image frame included in the offline file can be corrected, so that the second image frame in the offline file has accurate identification information.
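The order correction above is simple arithmetic; as a sketch (the function name is illustrative, not from the disclosure):

```python
# Hypothetical sketch: the second frame's corrected order is the first
# frame's replaced order plus their relative order in the offline file.
def correct_frame_order(first_frame_order, relative_order):
    return first_frame_order + relative_order

# Worked example from the text: order 10, relative order 5 -> corrected 15.
corrected = correct_frame_order(10, 5)
```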
In one example of this implementation, the identification information may include an acquisition time, and in the case of correcting the identification information of at least one second image frame included in the offline file, a play time difference between the first image frame and the second image frame may be determined, and then the acquisition time of the at least one second image frame included in the offline file may be corrected according to the play time difference and the acquisition time after the first image frame is replaced.
In this example, the identification information may include an acquisition time, which may be the time at which the image frame was captured. The first image frame and the second image frame may be from the same offline file. In the case of correcting the identification information of the second image frame, the playing time difference between the first image frame and the second image frame in the offline file may be determined, and the acquisition time of the second image frame may then be corrected according to the determined playing time difference and the acquisition time after the replacement of the first image frame. For example, if the playing time difference between the first image frame and the second image frame in the offline file is 5 minutes and the acquisition time after the replacement of the first image frame is 9:00 on May 9, 2007, the corrected acquisition time of the second image frame may be 8:55 on May 9, 2007. Through the acquisition time after the replacement of the first image frame, the acquisition time of at least one second image frame included in the offline file can be corrected, so that the second image frame in the offline file has an accurate acquisition time.
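The time correction can be sketched with standard date arithmetic. Note one assumption not fixed by the text: the sign of the offset depends on which frame plays first; here the second frame is taken to play 5 minutes before the first, matching the 9:00 → 8:55 example.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: correct a second frame's acquisition time from the
# first frame's replaced acquisition time and the playing time difference.
def correct_acquisition_time(first_time, play_diff_minutes):
    """Assumes the second frame plays `play_diff_minutes` before the first,
    so its acquisition time is earlier by the same amount."""
    return first_time - timedelta(minutes=play_diff_minutes)

first = datetime(2007, 5, 9, 9, 0)           # 9:00 on May 9, 2007
second = correct_acquisition_time(first, 5)  # 8:55 on May 9, 2007
```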
In one example of this implementation, the identification information may include an acquisition location, and in the case of correcting the identification information of at least one second image frame included in the offline file, the acquisition location in the replaced identification information of the first image frame may be determined as the corrected acquisition location of the second image frame.

In this example, the identification information may include an acquisition location, which may be the location where the image acquisition device is situated. The first image frame and the second image frame may be from the same offline file and may therefore be considered to have the same acquisition location, so that, in the case of correcting the acquisition location of the second image frame, the acquisition location in the replaced identification information of the first image frame may be determined as the corrected acquisition location of the second image frame. For example, if the acquisition location after the replacement of the first image frame is the third intersection of a city street, the corrected acquisition location of the second image frame may also be the third intersection of that city street. Through the acquisition location after the replacement of the first image frame, the acquisition location of at least one second image frame included in the offline file can be corrected, so that the second image frame in the offline file has an accurate acquisition location.
In one example, the first image frame may be named according to its replaced identification information. When searching for an image frame using a keyword, the keyword may be compared with the name of the first image frame, and in the case that the name of the first image frame contains the keyword, the first image frame may be returned as a search result, enabling retrieval based on the identification information of the first image frame.
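The naming-and-search example can be sketched as below. The name format (acquisition time joined with acquisition location) and the helper names are assumptions for illustration; the disclosure does not fix a particular naming scheme.

```python
# Hypothetical sketch: name frames from their replaced identification
# information, then retrieve frames whose names contain a keyword.
def name_frame(frame):
    # Assumed format: "<acquisition time>_<acquisition location>"
    return f"{frame['time']}_{frame['location']}"

def search_frames(frames, keyword):
    """Return frames whose generated name contains `keyword`."""
    return [f for f in frames if keyword in name_frame(f)]

frames = [
    {"time": "2007-05-09_0900", "location": "city-street-3rd-intersection"},
    {"time": "2007-05-09_0905", "location": "station-entrance"},
]
hits = search_frames(frames, "intersection")
# Only the first frame's name contains "intersection".
```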
The information processing scheme provided by the embodiment of the disclosure can automatically replace identification information of a large number of image frames, and improves filling efficiency of the identification information of the image frames. In addition, hidden information carried by the image frame can be extracted by using the optical character recognition model, and the accuracy of the identification information of the image frame can be improved.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; for brevity, the details are not repeated in the present disclosure.
In addition, the present disclosure also provides an apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any information processing method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method sections, which are not repeated here.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Fig. 2 shows a block diagram of an information processing apparatus according to an embodiment of the present disclosure, which includes, as shown in fig. 2:
an obtaining module 21, configured to obtain a first image frame, where the first image frame includes identification information of the first image frame;
the extraction module 22 is configured to extract hidden information carried in the first image frame to obtain an extraction result of the hidden information;
and a replacing module 23, configured to replace, based on the extraction result of the hidden information, the identification information of the first image frame to obtain a first image frame after identification information replacement.
In one or more optional embodiments, the extracting module 22 is specifically configured to determine a target image area of the first image frame, where the target image area is used to indicate a position of the hidden information in the first image frame; extracting character features of the target image area to obtain the character features of the target image area; and obtaining an extraction result of the hidden information based on the character features of the target image area.
In one or more optional embodiments, the extraction module 22 is specifically configured to determine the target image area of the first image frame according to a preset standard position.
In one or more optional embodiments, the replacing module 23 is specifically configured to determine whether the confidence of the extraction result is greater than a preset confidence threshold; and under the condition that the confidence of the extraction result is greater than a preset confidence threshold, replacing the identification information of the first image frame according to the extraction result to obtain the first image frame after the identification information is replaced.
In one or more optional embodiments, the replacing module 23 is further configured to discard the extraction result and acquire the first image frame again if the confidence of the extraction result is less than or equal to the confidence threshold.
In one or more optional embodiments, the obtaining module 21 is specifically configured to obtain the first image frame in an offline file.
In one or more optional embodiments, the apparatus further comprises: and the correction module is used for correcting the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition time; the correction module is specifically configured to determine a play time difference between the first image frame and the second image frame; and correcting the acquisition time of at least one second image frame included in the offline file according to the playing time difference and the acquisition time after the first image frame is replaced.
In one or more optional embodiments, the identification information comprises an acquisition location; the correction module is specifically configured to determine the acquisition location where the first image frame is replaced as the acquisition location where the second image frame is corrected.
In one or more optional embodiments, the hidden information includes at least one of the following information: digital watermarking; a graphic code; steganographic text.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured as the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 3, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. An information processing method characterized by comprising:
acquiring a first image frame, wherein the first image frame comprises identification information of the first image frame;
extracting hidden information carried by the first image frame to obtain an extraction result of the hidden information;
and replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced.
2. The method according to claim 1, wherein the extracting hidden information carried in the first image frame to obtain the extraction result of the hidden information comprises:
determining a target image area of the first image frame, wherein the target image area is used for indicating the position of the hidden information in the first image frame;
extracting character features of the target image area to obtain the character features of the target image area;
and obtaining an extraction result of the hidden information based on the character features of the target image area.
3. The method of claim 2, wherein said determining a target image region of said first image frame comprises:
and determining a target image area of the first image frame according to a preset standard position.
4. The method according to any one of claims 1 to 3, wherein the replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain a first image frame after identification information replacement comprises:
judging whether the confidence of the extraction result is greater than a preset confidence threshold;
and under the condition that the confidence of the extraction result is greater than a preset confidence threshold, replacing the identification information of the first image frame according to the extraction result to obtain the first image frame after the identification information is replaced.
5. The method of claim 4, further comprising:
and in the case that the confidence of the extraction result is less than or equal to the confidence threshold, discarding the extraction result and acquiring the first image frame again.
6. The method of any of claims 1 to 5, wherein said acquiring a first image frame comprises:
the first image frame is acquired in an offline file.
7. The method of claim 6, further comprising:
and correcting the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced.
8. The method of claim 7, wherein the identification information includes an acquisition time; the modifying the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced includes:
determining a play time difference between the first image frame and the second image frame;
and correcting the acquisition time of at least one second image frame included in the offline file according to the playing time difference and the acquisition time after the first image frame is replaced.
9. The method of claim 7, wherein the identification information includes a collection location; the modifying the identification information of at least one second image frame included in the offline file according to the identification information after the first image frame is replaced includes:
and determining the acquisition place of the first image frame after replacement as the acquisition place of the second image frame after correction.
10. The method according to any of claims 1 to 9, wherein the hidden information comprises at least one of the following information:
digital watermarking; a graphic code; steganographic text.
11. An information processing apparatus characterized by comprising:
the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first image frame, and the first image frame comprises identification information of the first image frame;
the extraction module is used for extracting the hidden information carried by the first image frame to obtain an extraction result of the hidden information;
and the replacing module is used for replacing the identification information of the first image frame based on the extraction result of the hidden information to obtain the first image frame after the identification information is replaced.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202010033149.5A 2020-01-13 2020-01-13 Information processing method and device, electronic equipment and storage medium Active CN111212196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010033149.5A CN111212196B (en) 2020-01-13 2020-01-13 Information processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111212196A true CN111212196A (en) 2020-05-29
CN111212196B CN111212196B (en) 2022-04-05

Family

ID=70790064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010033149.5A Active CN111212196B (en) 2020-01-13 2020-01-13 Information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111212196B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059420A1 (en) * 2006-08-22 2008-03-06 International Business Machines Corporation System and Method for Providing a Trustworthy Inverted Index to Enable Searching of Records
CN101527830A (en) * 2008-03-07 2009-09-09 华为技术有限公司 Method and device for embedding watermarking information and method and device for authenticating watermarking information
CN102377974A (en) * 2010-08-04 2012-03-14 中国电信股份有限公司 Mobile terminal, receiving end and video acquiring method and system
CN202196416U (en) * 2010-04-06 2012-04-18 李勇 Portable storage device with digital watermark function
US20120140251A1 (en) * 2010-12-01 2012-06-07 Xerox Corporation Method and apparatus for reading and replacing control and/or identification data in a print image to support document tracking, flow control, and security
CN102693524A (en) * 2012-05-15 2012-09-26 广州市中崎商业机器有限公司 Method and system for managing scanned images of securities
US20160364826A1 (en) * 2015-06-10 2016-12-15 Deluxe Media Inc. System and method for digital watermarking
CN106709853A (en) * 2016-11-30 2017-05-24 开易(北京)科技有限公司 Image retrieval method and system
CN107145809A (en) * 2017-04-21 2017-09-08 苏州市公安局 A kind of material evidence uniqueness initial marking method
CN108449552A (en) * 2018-03-07 2018-08-24 北京理工大学 Tag image acquires the method and system at moment
US20190007579A1 (en) * 2017-06-28 2019-01-03 Canon Kabushiki Kaisha Information processing apparatus, control method thereof, and storage medium

Also Published As

Publication number Publication date
CN111212196B (en) 2022-04-05

Similar Documents

Publication Publication Date Title
CN104680077B (en) Method for encrypting picture, method for viewing picture, system and terminal
EP3196733A1 (en) Image processing and access method and apparatus
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
CN104680078B (en) Method for shooting picture, method, system and terminal for viewing picture
CN110677718B (en) Video identification method and device
CN110121105B (en) Clip video generation method and device
US20170054964A1 (en) Method and electronic device for playing subtitles of a 3d video, and storage medium
CN113393471A (en) Image processing method and device
WO2023029389A1 (en) Video fingerprint generation method and apparatus, electronic device, storage medium, computer program, and computer program product
CN109671051B (en) Image quality detection model training method and device, electronic equipment and storage medium
CN108876817B (en) Cross track analysis method and device, electronic equipment and storage medium
CN110582021B (en) Information processing method and device, electronic equipment and storage medium
CN111212196B (en) Information processing method and device, electronic equipment and storage medium
CN109033264B (en) Video analysis method and device, electronic equipment and storage medium
JP2016012767A (en) Image processing system
US10282633B2 (en) Cross-asset media analysis and processing
CN113014609B (en) Multimedia file processing and tracing method, device, equipment and medium
CN114612321A (en) Video processing method, device and equipment
CN114140850A (en) Face recognition method and device and electronic equipment
CN115730104A (en) Live broadcast room processing method, device, equipment and medium
CN110312171B (en) Video clip extraction method and device
CN113378025A (en) Data processing method and device, electronic equipment and storage medium
CN111339964A (en) Image processing method and device, electronic equipment and storage medium
CN103544482A (en) Recognition method and device of feature image
CN115243267B (en) 5G network pseudo base station detection positioning method based on DPI technology and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant