CN111460198B - Picture timestamp auditing method and device

Info

Publication number: CN111460198B
Application number: CN201910048327.9A
Authority: CN (China)
Prior art keywords: text line, pictures, picture, time stamp, acquisition time
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN111460198A
Inventor: 赵锟
Current Assignee: Alibaba Group Holding Ltd
Original Assignee: Alibaba Group Holding Ltd
Application filed by Alibaba Group Holding Ltd
Publication of CN111460198A
Application granted
Publication of CN111460198B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Abstract

The invention discloses a method and a device for auditing a picture timestamp, relating to the technical field of image processing and mainly aiming at improving the efficiency of auditing picture timestamps. The main technical scheme comprises the following steps: detecting all text line areas included in a plurality of continuously acquired pictures; selecting at least one target text line area from all the text line areas; for each of the pictures, performing character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture; and auditing, based on the acquisition time stamp and the system time stamp of each picture, whether the identification of the acquisition time stamp is abnormal.

Description

Picture timestamp auditing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for auditing a picture timestamp.
Background
With the development of image processing technology, more and more business is performed on the basis of pictures. For example, road data acquisition, in which a picture acquisition device acquires road pictures, is a process completed on the basis of pictures. To enable subsequent business processing based on the acquired pictures, when the picture acquisition device acquires a picture it marks the picture with an acquisition time stamp, and at the same time the business data system in which the picture acquisition device is located marks the picture with a system time stamp according to its own clock. Normally, the difference between the acquisition time stamp and the system time stamp of a picture lies within a set time difference. However, as problems such as aging or severe heating of the picture acquisition device occur, the difference between the system time stamp and the acquisition time stamp of a picture may exceed the set time difference. Once it does, the processing result deviates when the business data system performs business processing based on the acquisition time stamp.
At present, in order to discover such deviations between the system time stamp and the acquisition time stamp of a picture in time, the acquisition time stamp in the picture is usually recognized manually, and whether the difference between the acquisition time stamp and the system time stamp is acceptable is then also checked manually. However, differences between individual operators cause recognition and auditing errors; an auditing error requires re-auditing, and re-auditing consumes additional auditing time. Moreover, the manual approach has to check the acquisition time stamp and the system time stamp of every picture one by one, which consumes a great deal of time and labor cost.
Disclosure of Invention
In view of this, the present invention provides a method and a device for auditing a picture timestamp, with the main aim of improving the efficiency of auditing picture timestamps.
In a first aspect, the present invention provides a method for auditing a picture timestamp, the method comprising:
detecting all text line areas included in a plurality of continuously acquired pictures;
selecting at least one target text line area from all the text line areas;
for each of the pictures: performing character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture;
Based on the acquisition time stamp and the system time stamp of each picture, checking whether the identification of the acquisition time stamp is abnormal or not.
In a second aspect, the present invention provides an auditing apparatus for a picture timestamp, the apparatus comprising:
the detection unit is used for detecting all text line areas included in the continuously acquired pictures;
a selecting unit, configured to select at least one target text line area from all the text line areas detected by the detecting unit;
an identification unit configured to perform, for each of the pictures: performing character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture;
and the auditing unit is used for auditing whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
In a third aspect, the present invention provides a storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform a method of auditing a picture timestamp as described in any one of the preceding claims.
In a fourth aspect, the present invention provides an electronic device, including: a storage medium and a processor;
the processor is adapted to implement the instructions;
the storage medium is suitable for storing a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform a method of auditing a picture timestamp as described in any one of the preceding claims.
By means of the above technical scheme, the method and the device for auditing a picture time stamp provided by the invention first select a target text line region from all text line regions included in a plurality of continuously acquired pictures, and then perform character recognition at the position corresponding to the target text line region in each picture to obtain the acquisition time stamp of each picture. Finally, whether the identification of the acquisition time stamp is abnormal is audited based on the acquisition time stamp and the system time stamp of each picture. In this technical scheme, neither recognizing the acquisition time stamp in each picture nor auditing, based on the acquisition time stamp and the system time stamp, whether the identification of the acquisition time stamp is abnormal requires manual participation. This not only avoids the recognition and auditing errors caused by differences between individual operators, but also removes the need to compare the acquisition time stamp and the system time stamp of each picture manually one by one, thereby saving a great deal of auditing time. Therefore, the scheme provided by the invention can improve the efficiency of auditing picture timestamps.
The foregoing description is only an overview of the technical scheme of the present invention; it is provided so that the technical means of the present invention can be more clearly understood and implemented in accordance with its teachings, and so that the above and other objects, features and advantages of the present invention become more readily apparent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for auditing a picture timestamp according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a picture according to one embodiment of the present invention;
FIG. 3 is a schematic diagram of a picture according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a picture according to another embodiment of the present invention;
FIG. 5 is a flowchart of a method for auditing a picture timestamp according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of a picture according to yet another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an auditing apparatus for a picture timestamp according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an auditing apparatus for a picture timestamp according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a method for auditing a picture timestamp, which mainly includes:
101. all text line areas included in the continuously acquired pictures are detected.
Specifically, each picture involved in this step is marked with an acquisition time stamp, which is marked on the picture by the picture acquisition device at the moment the picture is acquired. The acquisition time of the picture can therefore be determined from the acquisition time stamp on the picture, that is, the acquisition time stamp tells at what time the picture was acquired, so that subsequent business operations can be carried out according to it. In addition, the source of the pictures involved in this step can be determined according to business requirements. Optionally, the pictures are road pictures acquired by the picture acquisition device during road data acquisition, and a road track can be drawn based on these road pictures.
Specifically, acquisition factors such as the illumination intensity and the acquisition angle of the picture acquisition device change during picture acquisition. Because of these changes, in individual pictures the color of the area where the acquisition time stamp is located may become the same as the set color of the acquisition time stamp, the area where the acquisition time stamp is located may be overexposed, or at least part of the picture may be highly similar to the area where the acquisition time stamp is located. As a result, the acquisition time stamp of such a picture blends into the picture, or an erroneous text line area is detected, so that the text line areas in these individual pictures cannot be identified accurately. To reduce the influence of such individual pictures on the subsequent auditing of picture time stamps, a plurality of continuously acquired pictures must be used. In addition, since this step obtains all text line areas included in the plurality of pictures, it provides a large selection basis for the subsequent choice of target text line areas, so that the target text line areas selected from all the text line areas cover the acquisition time stamp in each picture with a high probability, which improves the completeness and accuracy of the acquisition time stamps when they are recognized on the basis of the target text line areas.
Specifically, the method for detecting all text line areas included in the plurality of continuously acquired pictures in this step at least includes, but is not limited to: for each picture, detecting the picture with a preset text line region detection algorithm to obtain all text line areas included in the picture. It should be noted that one or more text line areas may be detected in a picture, and each detected text line area belongs to at least one of the following types: first, the text line area includes all characters of the acquisition time stamp of the picture; second, the text line area includes part of the characters of the acquisition time stamp of the picture; third, the text line area does not include any character of the acquisition time stamp of the picture.
For example: fig. 2 shows a picture 21 out of a plurality of continuously acquired pictures. Detecting the picture with the text line region detection method yields all the text line areas 211, 212 and 213 included in the picture. Text line area 211 does not include any character of the acquisition time stamp of the picture; it is an erroneous text line area detected because acquisition factors such as the illumination intensity and the acquisition angle changed while the picture acquisition device was acquiring pictures, making at least part of the picture highly similar to the area marked with the acquisition time stamp. Text line areas 212 and 213 each include part of the characters of the acquisition time stamp of the picture.
For example: fig. 3 shows a picture 22 out of the plurality of continuously acquired pictures. After the picture is detected with the text line region detection method, all the text line areas 221 and 222 included in the picture are obtained. Text line areas 221 and 222 each include part of the characters of the acquisition time stamp of the picture.
For example: fig. 4 shows a picture 23 out of the plurality of continuously acquired pictures. After the picture is detected with the text line region detection method, all the text line areas 231 and 232 included in the picture are obtained. Text line areas 231 and 232 each include part of the characters of the acquisition time stamp of the picture.
As can be seen from fig. 2 to 4, the number of text line areas in each picture that is continuously acquired may be the same or different. The text line regions in the respective pictures may be located in completely overlapping or partially overlapping or non-overlapping positions. The text line areas in the respective pictures may be the same or different in size.
In addition, it should be noted that the specific type of the preset text line area detection method may be determined according to the service requirement. Alternatively, the text line region detection method may include, but is not limited to, a deep neural network text line region detection algorithm, for example, the deep neural network text line region detection algorithm is a text line region detection algorithm based on a convolutional neural network CNN model.
102. And selecting at least one target text line area from all the text line areas.
Specifically, the selected target text line areas cover all characters of the acquisition time stamp of each picture in a set number of pictures; that is, when the acquisition time stamp of each of the plurality of pictures is subsequently recognized based on the target text line areas, the recognized acquisition time stamps have high completeness and accuracy. The set number of pictures is included in the plurality of continuously acquired pictures, and the set number is not less than half of the total number of the plurality of pictures.
Specifically, when one target text line area is selected, that target text line area covers all characters of the acquisition time stamp of each of the plurality of pictures, or covers all characters of the acquisition time stamp of each picture in the set number of pictures.
Specifically, when a plurality of target text line areas are selected (two or more), each target text line area covers part of the characters of the acquisition time stamp of each picture. That is, taken together, all the selected target text line areas cover all characters of the acquisition time stamp of each of the plurality of pictures, or cover all characters of the acquisition time stamp of each picture in the set number of pictures.
103. For each of the pictures: and carrying out character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture.
In this step the process of obtaining the acquisition time stamp is the same for every picture, so one picture is taken as an example below. The target text line area is identified in the picture; the target text line area identified in the picture may cover all or only part of the characters of the acquisition time stamp of the picture. The text line areas originally detected in the picture are ignored, and a preset character recognition method is applied to the characters in the target text line area; the character recognition result is the acquisition time stamp of the picture. Optionally, the character recognition process may be a single-character detection process. When the target text line area in the picture covers all characters of the acquisition time stamp of the picture, the recognized acquisition time stamp has the set format and is the accurate acquisition time stamp of the picture. When the target text line area covers only part of the characters of the acquisition time stamp, the recognized acquisition time stamp does not have the set format and is not the accurate acquisition time stamp of the picture. If the recognized acquisition time stamp is not the accurate acquisition time stamp of the picture, the picture can be removed and excluded from the subsequent time stamp auditing operation, so that it does not introduce errors into the auditing process.
Specifically, the character recognition method can be set according to business requirements. Optionally, the character recognition method may include, but is not limited to, optical character recognition (OCR).
104. Based on the acquisition time stamp and the system time stamp of each picture, checking whether the identification of the acquisition time stamp is abnormal or not.
Specifically, when the picture acquisition device acquires a picture, it marks the picture with an acquisition time stamp, and at the same time the business data system in which the picture acquisition device is located marks the picture with a system time stamp according to its own clock. A certain time difference is allowed between the acquisition time stamp and the system time stamp of a picture, but if the time difference is larger than the allowed time difference, the processing result deviates when the business data system performs business processing based on the acquisition time stamp. For example: when the pictures are road pictures acquired during road data acquisition, the road business data system determines a road track according to the acquisition time stamps of the road pictures. The road track determination process is as follows: the road business data system selects a plurality of consecutive pictures based on its own clock and the system time stamp of each picture, and then determines the road track according to the acquisition time stamps of the selected pictures. Once the deviation between the system time stamps and the acquisition time stamps of these pictures becomes large, the determined road track drifts and is inconsistent with the actual track.
In order to reduce the probability that the business processing performed by the business data system based on the acquisition time stamp deviates, it is necessary to audit, based on the acquisition time stamp and the system time stamp of each picture, whether the identification of the acquisition time stamp is abnormal, so that an abnormality can be eliminated in time when it occurs.
According to the method for auditing a picture timestamp provided by the embodiment of the present invention, a target text line area is first selected from all text line areas included in a plurality of continuously acquired pictures, and character recognition is then performed at the position corresponding to the target text line area in each picture to obtain the acquisition time stamp of each picture. Finally, whether the identification of the acquisition time stamp is abnormal is audited based on the acquisition time stamp and the system time stamp of each picture. In this technical scheme, neither recognizing the acquisition time stamp in each picture nor auditing whether its identification is abnormal requires manual participation. This not only avoids the recognition and auditing errors caused by differences between individual operators, but also removes the need to compare the acquisition time stamp and the system time stamp of each picture manually one by one, thereby saving a great deal of auditing time. Therefore, the scheme provided by the invention can improve the efficiency of auditing picture timestamps.
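Read as a processing pipeline, steps 101 to 104 can be wired together as in the following minimal sketch. It is only an illustration of the data flow, not the claimed implementation; the Picture container and the four injected helpers (detect_regions, select_targets, recognize, audit) are hypothetical names introduced here, and the concrete behaviour of each step is left to the embodiments described below.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence, Tuple

# A text line region expressed as (x, y, width, height) in picture coordinates.
Region = Tuple[int, int, int, int]

@dataclass
class Picture:
    pixels: object                      # image data, e.g. a numpy array
    system_timestamp: float             # stamped by the business data system's own clock
    acquisition_timestamp: Optional[str] = None  # filled in by character recognition

def audit_picture_timestamps(
    pictures: Sequence[Picture],
    detect_regions: Callable[[Picture], List[Region]],
    select_targets: Callable[[List[Region]], List[Region]],
    recognize: Callable[[Picture, List[Region]], Optional[str]],
    audit: Callable[[Sequence[Picture]], bool],
) -> bool:
    """Return True when the identification of the acquisition time stamps is judged abnormal."""
    # Step 101: detect all text line areas over the continuously acquired pictures.
    all_regions: List[Region] = []
    for picture in pictures:
        all_regions.extend(detect_regions(picture))
    # Step 102: select at least one target text line area from all detected areas.
    targets = select_targets(all_regions)
    # Step 103: recognize the characters inside the target areas of every picture.
    for picture in pictures:
        picture.acquisition_timestamp = recognize(picture, targets)
    # Step 104: audit, based on acquisition and system time stamps, whether the
    # identification of the acquisition time stamp is abnormal.
    return audit(pictures)
```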
Further, according to the method shown in fig. 1, another embodiment of the present invention further provides a method for auditing a picture timestamp, as shown in fig. 5, where the method mainly includes:
301. all text line areas included in the continuously acquired pictures are detected.
Specifically, the process of detecting all text line areas included in the plurality of continuously acquired pictures may at least include: determining all text line areas included in each of the plurality of pictures by using a preset deep neural network text line region detection algorithm. The deep neural network text line region detection algorithm can be selected according to business requirements. Optionally, it may include, but is not limited to, a text line region detection algorithm based on a convolutional neural network (CNN) model.
The following takes a text line region detection algorithm based on a convolutional neural network (CNN) model as an example of determining all text line areas included in a picture. The algorithm is provided with a classifier that distinguishes text line regions from non-text line regions. The algorithm traverses the picture with a sliding window, inputs the picture region inside the window into the classifier, and judges through classification whether that picture region is a text line region. If it is a text line region, the text line region is output; if it is not, the traversal of the picture continues, and this process is repeated until the whole picture has been traversed.
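As a rough illustration of the sliding-window traversal just described (not the patented detector itself), the sketch below walks a fixed-size window over a picture and keeps every window position that a supplied classifier accepts. The window size, the stride and the classifier interface are assumptions made for the example; in practice the classifier would be the CNN-based model mentioned in the text.

```python
from typing import Callable, List, Tuple

import numpy as np

Region = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_text_line_regions(
    image: np.ndarray,
    is_text_line: Callable[[np.ndarray], bool],  # e.g. a wrapper around a CNN classifier
    window: Tuple[int, int] = (200, 32),         # illustrative window size (width, height)
    stride: int = 16,                            # illustrative traversal step
) -> List[Region]:
    """Traverse the picture with a sliding window and keep the windows classified as text lines."""
    height, width = image.shape[:2]
    win_w, win_h = window
    regions: List[Region] = []
    for y in range(0, max(1, height - win_h + 1), stride):
        for x in range(0, max(1, width - win_w + 1), stride):
            patch = image[y:y + win_h, x:x + win_w]
            if is_text_line(patch):              # the classifier separates text from non-text
                regions.append((x, y, win_w, win_h))
    return regions
```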
302. Forming at least one region set based on all the text line regions; each region set includes at least two text line regions, the intersection ratio between any two text line regions included in the same region set is larger than a preset first threshold, and the intersection ratio between text line regions of different region sets is smaller than the first threshold.
Specifically, the process of forming at least one region set based on all the text line regions may include: detecting whether a first text line area exists among all the text line areas, a first text line area being one that does not intersect any other text line area; if such an area exists, removing the first text line area and forming the at least one region set based on the text line areas remaining after the removal; if not, forming the at least one region set based on all the text line areas directly. The probability that a first text line area contains any character of the acquisition time stamp is very low, so in order to reduce the selection basis for extracting the target text line areas, first text line areas need to be removed in advance whenever they exist. A first text line area is an erroneous text line area detected because acquisition factors such as the illumination intensity and the acquisition angle changed while the picture acquisition device was acquiring pictures, making part of the picture highly similar to the area where the acquisition time stamp is located.
Illustrating: the plurality of pictures collected in succession are picture 21, picture 22 and picture 23 in fig. 2-4. All text line areas included in the plurality of pictures are 211, 212, 213, 221, 222, 231, 232. 211 is culled for the first text line area. Two sets of regions are formed, one including text line regions 212, 221, 231. The other set of regions includes text line regions 213, 222, 232. As can be seen from fig. 2 to fig. 4, the intersection ratio between any two text line areas included in each area set is greater than a preset first threshold. The intersection ratio between text line regions of the two region sets is less than a first threshold. The intersection ratio in determining any two text line areas is actually determined according to the coordinate range of the text line areas.
Specifically, the text line areas covered by the at least one region set formed in this step may be some or all of the text line areas detected in the plurality of pictures. Text line areas not covered by any region set are of the following kinds: first, first text line areas; second, text line areas whose intersection ratio with every text line area of every region set is smaller than the first threshold.
The intersection ratio of any two text line areas in a region set being larger than the first threshold means that the ratio of the intersection to the union of the two text line areas is larger than the first threshold. The intersection ratio reflects the degree of coincidence of the two text line areas; in the ideal case the intersection ratio of two text line areas is 1, that is, the two text line areas coincide completely. The first threshold is an intersection ratio threshold, and its specific value can be determined according to business requirements, for example 0.5, 0.7, 0.8 or 0.82.
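The intersection ratio and the grouping into region sets can be sketched as follows. The intersection ratio is computed directly from the coordinate ranges, as stated above; the greedy grouping strategy and the 0.7 default threshold are illustrative assumptions, since the text fixes only the threshold semantics and not a particular grouping algorithm.

```python
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height)

def intersection_ratio(a: Region, b: Region) -> float:
    """Intersection over union of two text line areas, from their coordinate ranges."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def form_region_sets(regions: List[Region], first_threshold: float = 0.7) -> List[List[Region]]:
    """Greedily group text line areas whose pairwise intersection ratio exceeds the first threshold."""
    sets: List[List[Region]] = []
    for region in regions:
        for region_set in sets:
            if all(intersection_ratio(region, member) > first_threshold for member in region_set):
                region_set.append(region)
                break
        else:
            sets.append([region])
    # A region set must contain at least two text line areas; singletons (including first
    # text line areas that intersect nothing) are discarded here.
    return [s for s in sets if len(s) >= 2]
```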
When only one region set is formed, there is a high probability that the text line areas in that region set cover all characters of the acquisition time stamp of each of the plurality of pictures, or at least of each picture in the set number of pictures.
When a plurality of region sets are formed (two or more), there is a high probability that the combination of the text line areas in these region sets covers all characters of the acquisition time stamp of each of the plurality of pictures, or at least of each picture in the set number of pictures.
303. And respectively extracting one target text line region from each region set.
Specifically, the process of extracting one target text line region from each region set may at least include: determining the confidence of each text line region within the region set, and extracting the text line region with the highest confidence as the target text line region of the region set.
The confidence of a text line region in a region set is explained as follows: the confidence of a text line region is in fact the probability that the other text line regions in the region set overlap it. A higher confidence indicates a higher probability of overlapping the other text line regions in the region set, and therefore a higher probability of covering all or part of the characters of the acquisition time stamp of each of the plurality of pictures.
Extracting the text line region with the highest confidence as the target text line region of the region set is explained as follows: first, when there is exactly one text line region with the highest confidence in the region set, that text line region is directly extracted as the target text line region of the region set. Second, when several text line regions share the highest confidence, they have the same probability of covering all or part of the characters of the acquisition time stamp of each of the plurality of pictures, and one of them is extracted at random as the target text line region.
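A minimal sketch of this confidence-based extraction is given below. Treating the confidence of a text line region as the fraction of the other regions in its set that overlap it is an assumption made for illustration, since the text defines confidence only as an overlap probability; ties are broken by taking the first candidate here, whereas the text allows a random choice.

```python
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height)

def overlap_area(a: Region, b: Region) -> int:
    """Area of the intersection of two text line areas."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return ix * iy

def confidence(index: int, region_set: List[Region]) -> float:
    """Fraction of the other regions in the set that overlap the region at `index`."""
    region = region_set[index]
    others = [r for i, r in enumerate(region_set) if i != index]
    if not others:
        return 0.0
    return sum(1 for other in others if overlap_area(region, other) > 0) / len(others)

def extract_target_region(region_set: List[Region]) -> Region:
    """Extract the text line region with the highest confidence as the set's target region."""
    best = max(range(len(region_set)), key=lambda i: confidence(i, region_set))
    return region_set[best]
```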
Illustrating: the plurality of pictures are picture 21, picture 22, and picture 23 in fig. 2-4. The extracted target text line areas are 222 and 231.
304. And identifying the at least one target text line area in each picture according to at least one position parameter of the at least one target text line area based on a preset coordinate system.
Specifically, two types of position parameters are involved in this step: first, a coordinate range; second, a coordinate point together with a length value and a width value. With the first type, a coordinate range is located in the picture and the target text line area is identified from it. With the second type, a coordinate point is located first, and the target text line area is then identified from that point using the length value and the width value.
Specifically, all the pictures are located in one coordinate system, so the target text line area is identified in each picture according to a uniform coordinate system. For example, after the target text line areas have been identified in each picture, if the pictures were stacked on top of one another, the target text line areas in the pictures would overlap.
Illustrating: the plurality of pictures are picture 21, picture 22, and picture 23 in fig. 2-4. The extracted target text line areas are 222 and 231. The effect of identifying the target text line areas 222 and 231 in the picture 22 is shown in fig. 6.
305. Recognizing the characters in the at least one target text line area of each picture with a preset character recognition method, to obtain the acquisition time stamp of each picture.
Specifically, the character recognition method can be determined according to business requirements. Optionally, the character recognition method may include, but is not limited to, optical character recognition (OCR).
When recognizing the characters in a picture, the text line areas originally detected in the picture are ignored, and the characters are recognized within the target text line areas identified in the picture. After recognition is completed, the acquisition time stamp of the picture is obtained.
For example: taking fig. 6 as an example, the acquisition time stamp recognized in picture 22 is "2018/09/13 12:29:05".
306. For each of the pictures: detecting whether the acquisition time stamp of the picture accords with a preset time stamp format; if yes, go to 308; otherwise, 307 is performed.
In practical applications, the target text line areas may fail to cover all characters of the acquisition time stamp of a small number of the pictures, so the acquisition time stamps recognized for these pictures may be incomplete. If an acquisition time stamp with incomplete characters were used in the subsequent auditing of picture time stamps, the auditing result could deviate; it is therefore necessary to detect whether the acquisition time stamp of each picture conforms to the preset time stamp format.
Taking one picture as an example: when it is detected that the acquisition time stamp of the picture does not conform to the preset time stamp format, the target text line areas probably did not cover all characters of the acquisition time stamp of the picture, so an acquisition time stamp conforming to the preset time stamp format was not recognized. The picture then needs to be removed so that it does not interfere with the subsequent auditing of picture time stamps.
Taking one picture as an example: when it is detected that the acquisition time stamp of the picture conforms to the preset time stamp format, all characters of the acquisition time stamp of the picture were covered by the target text line areas, so an acquisition time stamp conforming to the preset time stamp format was recognized.
In addition, in practical applications the preset time stamp format can be determined according to business requirements. Optionally, the time stamp format is set according to the format of the acquisition time stamp. For example: if the format of the acquisition time stamp is "2018/09/11 12:29:06", the time stamp format is "XXXX/XX/XX XX:XX:XX" (each X represents one character).
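Step 306 then reduces to a simple format check. The sketch below assumes the preset format is exactly the "XXXX/XX/XX XX:XX:XX" pattern illustrated above:

```python
import re

# "XXXX/XX/XX XX:XX:XX" written as a regular expression; the exact pattern is an assumption
# derived from the example format "2018/09/11 12:29:06" given above.
TIMESTAMP_FORMAT = re.compile(r"\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}")

def matches_timestamp_format(acquisition_timestamp: str) -> bool:
    """True when the recognized acquisition time stamp conforms to the preset time stamp format."""
    return TIMESTAMP_FORMAT.fullmatch(acquisition_timestamp.strip()) is not None
```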
307. The picture is removed, and 310 is performed.
Specifically, the picture may be removed as follows: the removed picture is stored in a set storage location, so that it can still be read from that storage location later if it is needed.
308. Judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than a preset second threshold value or not; if not, execution proceeds to 309.
Taking one picture as an example: when the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than the preset second threshold, abnormal problems such as device aging, severe heating or device failure of the picture acquisition device have probably not occurred and the acquisition time stamp of the picture is normal, so the picture does not need to be marked.
Taking one picture as an example: when the time difference between the acquisition time stamp and the system time stamp of the picture is not smaller than the preset second threshold, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is high and the acquisition time stamp of the picture is abnormal, so the picture needs to be marked, and 309 is performed.
The second threshold involved in this step is a time difference threshold, and its specific size can be determined according to business requirements. Optionally, the second threshold is 2 seconds.
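The comparison in step 308 amounts to a time difference check. In the sketch below the system time stamp is assumed to be available as a datetime value, and the 2-second figure is just the optional example mentioned above:

```python
from datetime import datetime

def within_second_threshold(
    acquisition_timestamp: str,             # e.g. "2018/09/13 12:29:05", already format-checked in step 306
    system_timestamp: datetime,
    second_threshold_seconds: float = 2.0,  # the optional 2-second value from the text
) -> bool:
    """True when the acquisition and system time stamps differ by less than the second threshold."""
    acquired = datetime.strptime(acquisition_timestamp, "%Y/%m/%d %H:%M:%S")
    return abs((acquired - system_timestamp).total_seconds()) < second_threshold_seconds
```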
309. The picture is marked.
Specifically, a preset mark can be used when marking the picture. The specific type of the preset mark can be determined according to business requirements. Optionally, the preset mark may include, but is not limited to, at least one of characters, symbols and numbers. For example: the preset mark is "biaoji=0".
310. Judging whether any removed pictures exist among the plurality of pictures; if so, 311 is performed; otherwise, 314 is performed.
Specifically, this step is performed after each of the plurality of pictures has gone through steps 306 to 309.
Specifically, when at least some of the plurality of pictures have been removed, the removed pictures may affect the accuracy of the subsequent auditing of the picture acquisition time stamps, so it is necessary to determine whether any removed pictures exist among the plurality of pictures.
311. Judging whether the ratio of the number of removed pictures to the total number of the plurality of pictures reaches a preset fifth threshold; if yes, 317 is performed; otherwise, 312 is performed.
Specifically, if a large proportion of the plurality of pictures has been removed, auditing the picture time stamps with the pictures that were not removed would no longer be accurate. To avoid this situation, it is necessary to judge whether the ratio of the number of removed pictures to the total number of the plurality of pictures reaches the preset fifth threshold.
Specifically, when the ratio of the number of removed pictures to the total number of the plurality of pictures does not reach the fifth threshold, auditing the picture time stamps based on the pictures that were not removed still yields a sufficiently accurate result, so 312 is performed.
Specifically, when the ratio of the number of removed pictures to the total number of the plurality of pictures reaches the fifth threshold, too many pictures have been removed, and an audit of the picture time stamps based on the pictures that were not removed would not be reliable, so 317 is performed.
The fifth threshold involved in this step is a ratio threshold, and its specific size can be determined according to business requirements. Optionally, the fifth threshold is any value above 50%.
312. Determining, based on the total number, a first number of pictures among the plurality of pictures that were not removed.
Specifically, subtracting the number of removed pictures from the total number of the plurality of pictures gives the first number, that is, the number of pictures among the plurality of pictures that were not removed.
313. Judging whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is larger than a preset third threshold; if not, 315 is performed; otherwise, 316 is performed.
Specifically, when the number ratio between the number of marked pictures and the first number is larger than the third threshold, many of the plurality of pictures have a correct acquisition time stamp identification, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is low, and the acquisition time stamp identification of the pictures acquired by the picture acquisition device is normal.
Specifically, when the number ratio between the number of marked pictures and the first number is not larger than the third threshold, few of the plurality of pictures have a correct acquisition time stamp identification, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is high, and the acquisition time stamp identification of the pictures acquired by the picture acquisition device is abnormal.
The third threshold involved in this step is a ratio threshold, and its specific size can be determined according to business requirements. Optionally, the third threshold is any value above 50%.
314. Judging whether the number ratio between the number of marked pictures in the plurality of pictures and the total number of the plurality of pictures is larger than a preset fourth threshold; if not, 315 is performed; otherwise, 316 is performed.
Specifically, when the number ratio between the number of marked pictures and the total number of the plurality of pictures is larger than the fourth threshold, many of the plurality of pictures have a correct acquisition time stamp identification, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is low, and the acquisition time stamp identification of the pictures acquired by the picture acquisition device is normal.
Specifically, when the number ratio between the number of marked pictures and the total number of the plurality of pictures is not larger than the fourth threshold, few of the plurality of pictures have a correct acquisition time stamp identification, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is high, and the acquisition time stamp identification of the pictures acquired by the picture acquisition device is abnormal.
The fourth threshold involved in this step is a ratio threshold, and its specific size can be determined according to business requirements. Optionally, the fourth threshold is any value above 50%.
315. The audit result is that the identification of the acquisition time stamp is abnormal, and the current flow ends.
Specifically, when the audit finds that the identification of the acquisition time stamp is abnormal, the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is high. Business personnel can maintain or replace the picture acquisition device according to the audit result, so as to reduce the negative influence of the acquisition time stamp on subsequent business operations.
316. The audit result is that the identification of the acquisition time stamp is normal, and the current flow ends.
317. Raising an alarm.
Specifically, the specific manner of alarming can be determined according to business requirements. Optionally, the alarm is raised by sending alarm information to a specific terminal.
Specifically, business personnel can learn from the alarm in time that the identification of the picture time stamps cannot be audited based on the current pictures, and can promptly supply new pictures so that the auditing process can proceed smoothly.
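Steps 310 to 317 together form the aggregate audit decision. The sketch below is one possible reading of that decision: it assumes the pictures counted against the third and fourth thresholds are those whose acquisition time stamp agrees with the system time stamp within the second threshold, so that a high ratio leads to a normal result as described for steps 313 and 314, and all threshold values are illustrative.

```python
from enum import Enum
from typing import Sequence

class AuditResult(Enum):
    NORMAL = "identification of the acquisition time stamp is normal"
    ABNORMAL = "identification of the acquisition time stamp is abnormal"
    ALARM = "too many pictures were removed; the audit cannot proceed"

def audit_timestamps(
    consistent_flags: Sequence[bool],  # one flag per non-removed picture: True when its time
                                       # difference is within the second threshold (step 308)
    removed_count: int,                # pictures removed because their recognized time stamp
                                       # did not conform to the preset format (steps 306-307)
    third_threshold: float = 0.5,      # illustrative; "any value above 50%" in the text
    fourth_threshold: float = 0.5,
    fifth_threshold: float = 0.5,
) -> AuditResult:
    total = len(consistent_flags) + removed_count
    if total == 0:
        return AuditResult.ALARM
    if removed_count:
        # Step 311: too many removed pictures make the remaining ones an unreliable basis.
        if removed_count / total >= fifth_threshold:
            return AuditResult.ALARM                       # step 317
        # Steps 312-313: compare against the first number (pictures that were not removed).
        first_number = total - removed_count
        ratio = sum(consistent_flags) / first_number
        return AuditResult.NORMAL if ratio > third_threshold else AuditResult.ABNORMAL
    # Step 314: nothing was removed, so compare against the total number of pictures.
    ratio = sum(consistent_flags) / total
    return AuditResult.NORMAL if ratio > fourth_threshold else AuditResult.ABNORMAL
```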
Further, in another embodiment of the present invention, the plurality of continuously acquired pictures are picture 21, picture 22 and picture 23 in figs. 2-4. All the text line areas included in these pictures are 211, 212, 213, 221, 222, 231 and 232. Text line area 211 does not intersect any of the text line areas 212, 213, 221, 222, 231 and 232, which shows that 211 is an erroneous text line area detected because acquisition factors such as the illumination intensity and the acquisition angle of the picture acquisition device changed during picture acquisition; text line area 211 is therefore removed. Two region sets are formed from the remaining text line areas: one includes text line areas 212, 221 and 231, the other includes text line areas 213, 222 and 232. As can be seen from figs. 2 to 4, the intersection ratio between any two text line areas in the same region set is greater than the preset first threshold, while the intersection ratio between text line areas of the two different region sets is smaller than the first threshold; the intersection ratio of any two text line areas is determined from their coordinate ranges. The confidences of text line areas 212, 221, 231, 213, 222 and 232 are then determined, and the text line area with the highest confidence in each region set is extracted as the target text line area of that set; for example, the extracted target text line areas are 222 and 231. Based on a preset coordinate system, the target text line areas 222 and 231 are identified in picture 21, picture 22 and picture 23 according to the coordinate points, length values and width values of the target text line areas; fig. 6 shows the effect of identifying the target text line areas 222 and 231 in picture 22. The characters in target text line areas 222 and 231 of each picture are then recognized with a preset character recognition method to obtain the acquisition time stamp of each picture; taking fig. 6 as an example, the acquisition time stamp recognized in picture 22 is "2018/09/13 12:29:05". The recognized acquisition time stamps of pictures 21, 22 and 23 all conform to the preset time stamp format, so the following step is performed for each picture: judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than a preset threshold. If the time differences of all three pictures 21, 22 and 23 are smaller than the preset threshold, the audit result is that the identification of the acquisition time stamp is normal, which indicates that the probability that the picture acquisition device suffers from abnormal problems such as device aging, severe heating or device failure is low, and that the acquisition time stamps in the pictures acquired by the picture acquisition device are identified normally.
Further, according to the above method embodiment, another embodiment of the present invention further provides an auditing apparatus for a picture timestamp, as shown in fig. 7, where the apparatus includes:
a detecting unit 41, configured to detect all text line areas included in the continuously acquired multiple pictures;
a selecting unit 42 for selecting at least one target text line area from the all text line areas detected by the detecting unit 41;
an identification unit 43 for performing, for each of the pictures: performing character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture;
and an auditing unit 44, configured to audit whether the identification of the acquisition timestamp is abnormal based on the acquisition timestamp and the system timestamp of each picture.
According to the auditing device for a picture time stamp provided by the embodiment of the present invention, a target text line area is first selected from all text line areas included in a plurality of continuously acquired pictures, and character recognition is then performed at the position corresponding to the target text line area in each picture to obtain the acquisition time stamp of each picture. Finally, whether the identification of the acquisition time stamp is abnormal is audited based on the acquisition time stamp and the system time stamp of each picture. In this technical scheme, neither recognizing the acquisition time stamp in each picture nor auditing, based on the acquisition time stamp and the system time stamp, whether the identification of the acquisition time stamp is abnormal requires manual participation. This not only avoids the recognition and auditing errors caused by differences between individual operators, but also removes the need to compare the acquisition time stamp and the system time stamp of each picture manually one by one, thereby saving a great deal of auditing time. Therefore, the scheme provided by the invention can improve the efficiency of auditing picture timestamps.
Optionally, as shown in fig. 8, the selecting unit 42 includes:
a forming module 421, configured to form at least one region set based on the text line regions; each region set comprises at least two text line regions, and the intersection ratio between any two included text line regions is larger than a preset first threshold; the intersection ratio between text line areas of different area sets is smaller than the first threshold value;
an extracting module 422, configured to extract one target text line region from each of the region sets, respectively.
Optionally, as shown in fig. 8, the forming module 421 includes:
a detection sub-module 4211, configured to detect whether a first text line area exists among all the text line areas, a first text line area being one that does not intersect any other text line area; if such an area exists, the detection sub-module triggers the forming sub-module 4212;
the forming sub-module 4212 is configured to, when triggered by the detection sub-module, remove the first text line area and form the at least one region set based on all the text line areas remaining after the first text line area is removed.
Optionally, as shown in fig. 8, the extracting module 422 is configured to perform, for each of the region sets: determining a confidence level of each text line region within the region set; and extracting one text line region with highest confidence as a target text line region of the region set.
Optionally, as shown in fig. 8, the auditing unit 44 includes:
a marking module 441, configured to perform, for each of the pictures: judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than a preset second threshold value or not; if not, marking the picture;
and an auditing module 442, configured to audit whether the identification of the acquisition timestamp is abnormal based on the number of marked pictures in the plurality of pictures and the total number of the plurality of pictures.
Optionally, as shown in fig. 8, the auditing unit 44 further includes:
the detection module 443 is configured to detect whether the acquisition timestamp of the picture conforms to a preset timestamp format; if yes, trigger the marking module 441; otherwise, trigger the culling module 444;
the culling module 444 is configured to remove the picture when triggered by the detection module 443.
Optionally, as shown in fig. 8, the auditing unit 44 further includes:
a judging module 445, configured to judge whether any removed pictures exist among the plurality of pictures; if so, trigger a first audit sub-module 4421 in the auditing module 442; if not, trigger a second audit sub-module 4422 in the auditing module 442;
the first audit sub-module 4421 is configured to, when triggered by the judging module 445, determine a first number of pictures among the plurality of pictures that were not removed based on the total number, and judge whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is greater than a preset third threshold; if not, the audit result is that the identification of the acquisition time stamp is abnormal.
The second audit sub-module 4422 is configured to, when triggered by the judging module 445, judge whether the number ratio between the number of marked pictures in the plurality of pictures and the total number of the plurality of pictures is greater than a preset fourth threshold; if not, the audit result is that the identification of the acquisition time stamp is abnormal.
Optionally, the first audit sub-module 4421 is further configured to, when triggered by the judging module 445, judge whether the ratio of the number of removed pictures to the total number of the plurality of pictures reaches a preset fifth threshold; if not, perform the judging of whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is greater than the preset third threshold.
Optionally, as shown in fig. 8, the identifying unit 43 includes:
an identification module 431, configured to identify the at least one target text line area in the picture according to at least one position parameter of the at least one target text line area based on a preset coordinate system;
the recognition module 432 is configured to recognize a character in the at least one target text line area in the picture by using a preset character recognition method.
Optionally, the detecting unit 41 is configured to determine all text line areas included in each picture by using a preset deep neural network text line area detection algorithm.
For a detailed explanation of the methods used by each functional module of the picture timestamp auditing apparatus provided by the embodiment of the present invention during its operation, reference may be made to the corresponding parts of the method embodiments of figs. 1 and 5, which are not repeated here.
Further, according to the above embodiment, an embodiment of the present invention provides a storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method for auditing a picture timestamp according to any one of the above.
Further, according to the above embodiment, an embodiment of the present invention provides an electronic device, including: a storage medium and a processor;
the processor is adapted to implement the instructions;
the storage medium is suitable for storing a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform a method of auditing a picture timestamp as described in any one of the preceding claims.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the methods and apparatus described above may be referenced to one another. In addition, the "first", "second", and the like in the above embodiments are for distinguishing the embodiments, and do not represent the merits and merits of the embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the best mode of carrying out the invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this manner of disclosure should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of an embodiment may be adaptively changed and disposed in one or more apparatuses different from that embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. All features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any way, except where at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in the method and apparatus according to embodiments of the present invention, including the operational framework of the deep neural network model, may be implemented in practice using a microprocessor or a digital signal processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a part or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.

Claims (20)

1. An auditing method of a picture timestamp, comprising the following steps:
detecting all text line areas included in a plurality of continuously acquired pictures;
forming at least one region set based on all the text line regions, wherein each region set comprises at least two text line regions, the intersection ratio between any two text line regions in a same region set is larger than a preset first threshold, and the intersection ratio between text line regions of different region sets is smaller than the first threshold;
extracting one target text line region from each region set, respectively;
for each of the pictures: performing character recognition on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture;
based on the acquisition time stamp and the system time stamp of each picture, checking whether the identification of the acquisition time stamp is abnormal or not.
2. The method of claim 1, wherein the forming at least one region set based on all the text line regions comprises:
detecting whether a first text line area exists in all the text line areas, the first text line area being a text line area that does not intersect any other text line area;
if so, rejecting the first text line area, and forming the at least one region set based on all the text line areas remaining after the first text line area is rejected.
3. The method of claim 1, wherein the extracting one target text line region from each region set respectively comprises:
for each of the region sets, performing: determining a confidence level of each text line region within the region set; and extracting the text line region with the highest confidence as the target text line region of the region set.
4. A method according to any one of claims 1-3, wherein said checking whether the identification of the acquisition time stamp is abnormal based on the acquisition time stamp and the system time stamp of each of the pictures comprises:
for each of the pictures: judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than a preset second threshold value or not; if not, marking the picture;
based on the number of marked pictures in the plurality of pictures and the total amount of the plurality of pictures, checking whether the identification of the acquisition time stamp is abnormal.
5. The method of claim 4, wherein prior to said determining whether the time difference between the acquisition time stamp and the system time stamp of the picture is less than a preset second threshold, the method further comprises:
detecting whether the acquisition time stamp of the picture conforms to a preset time stamp format; if yes, executing the step of judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than the preset second threshold; otherwise, removing the picture.
6. The method of claim 5, wherein auditing whether the identification of the acquisition timestamp is abnormal based on the number of marked pictures in the plurality of pictures and the total amount of the plurality of pictures comprises:
judging whether any removed picture exists in the plurality of pictures;
if so, determining, based on the total amount, a first number of pictures which are not removed in the plurality of pictures, and judging whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is larger than a preset third threshold; if not, determining that the identification of the acquisition time stamp is abnormal;
if no removed picture exists, judging whether the number ratio between the number of marked pictures in the plurality of pictures and the total amount of the plurality of pictures is larger than a preset fourth threshold; if not, determining that the identification of the acquisition time stamp is abnormal.
7. The method of claim 6, wherein prior to said determining whether a number ratio between a number of marked pictures in the plurality of pictures and the first number is greater than a preset third threshold, the method further comprises:
judging whether the ratio of the number of removed pictures to the total amount of the plurality of pictures reaches a preset fifth threshold; if not, executing the step of judging whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is larger than the preset third threshold; otherwise, raising an alarm.
8. A method according to any one of claims 1-3, wherein said character recognition of said at least one target text line area in said picture comprises:
locating the at least one target text line area in the picture according to at least one position parameter of the at least one target text line area based on a preset coordinate system;
and recognizing the characters in the at least one target text line area in the picture by using a preset character recognition method.
9. A method according to any one of claims 1-3, wherein said detecting all text line areas included in the plurality of pictures comprises:
and determining all text line areas included in each picture by adopting a preset deep neural network text line area detection algorithm.
10. An auditing apparatus for a picture timestamp, comprising:
a detection unit, configured to detect all text line areas included in a plurality of continuously acquired pictures;
a forming module, configured to form at least one region set based on all the text line regions, wherein each region set comprises at least two text line regions, the intersection ratio between any two text line regions in a same region set is larger than a preset first threshold, and the intersection ratio between text line regions of different region sets is smaller than the first threshold;
an extraction module, configured to respectively extract one target text line region from each region set;
an identification unit, configured to perform character recognition, for each of the pictures, on the at least one target text line area in the picture to obtain an acquisition time stamp of the picture;
and an auditing unit, configured to audit whether the identification of the acquisition time stamp is abnormal based on the acquisition time stamp and the system time stamp of each of the pictures.
11. The apparatus of claim 10, wherein the forming module comprises:
a detection sub-module, configured to detect whether a first text line area exists in all the text line areas, the first text line area being a text line area that does not intersect any other text line area; and, if yes, trigger the forming sub-module;
and the forming sub-module is configured to reject the first text line area under the triggering of the detection sub-module, and form the at least one region set based on all the text line areas remaining after the first text line area is rejected.
12. The apparatus of claim 10, wherein the extraction module is configured to perform, for each of the region sets: determining a confidence level of each text line region within the region set; and extracting the text line region with the highest confidence as the target text line region of the region set.
13. The apparatus according to any one of claims 10-12, wherein the auditing unit comprises:
a marking module, configured to perform, for each of the pictures: judging whether the time difference between the acquisition time stamp and the system time stamp of the picture is smaller than a preset second threshold value or not; if not, marking the picture;
and the auditing module is used for auditing whether the identification of the acquisition time stamp is abnormal or not based on the number of marked pictures in the plurality of pictures and the total quantity of the plurality of pictures.
14. The apparatus of claim 13, wherein the auditing unit further comprises:
a detection module, configured to detect whether the acquisition time stamp of the picture conforms to a preset time stamp format; if yes, trigger the marking module; otherwise, trigger a rejecting module;
and the rejecting module is configured to reject the picture under the triggering of the detection module.
15. The apparatus of claim 14, wherein the auditing unit further comprises:
a judging module, configured to judge whether any removed picture exists in the plurality of pictures; if yes, trigger a first audit sub-module in the auditing module; if not, trigger a second audit sub-module in the auditing module;
the first audit sub-module is configured to determine, under the triggering of the judging module, a first number of pictures which are not rejected in the plurality of pictures based on the total amount, and to judge whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is larger than a preset third threshold; if not, determine that the identification of the acquisition time stamp is abnormal;
and the second audit sub-module is configured to judge, under the triggering of the judging module, whether the number ratio between the number of marked pictures in the plurality of pictures and the total amount of the plurality of pictures is larger than a preset fourth threshold; if not, determine that the identification of the acquisition time stamp is abnormal.
16. The apparatus of claim 15, wherein the first audit sub-module is further configured to judge, under the triggering of the judging module, whether the ratio of the number of removed pictures to the total amount of the plurality of pictures reaches a preset fifth threshold; if not, execute the step of judging whether the number ratio between the number of marked pictures in the plurality of pictures and the first number is larger than the preset third threshold.
17. The apparatus according to any one of claims 10-12, wherein the identification unit comprises:
an identification module, configured to locate the at least one target text line area in the picture according to at least one position parameter of the at least one target text line area based on a preset coordinate system;
and a recognition module, configured to recognize the characters in the at least one target text line area in the picture by using a preset character recognition method.
18. The apparatus according to any one of claims 10-12, wherein the detection unit is configured to determine all text line areas included in each of the pictures using a preset deep neural network text line area detection algorithm.
19. A storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method of auditing a picture timestamp according to any one of claims 1 to 9.
20. An electronic device, the electronic device comprising: a storage medium and a processor;
the processor is adapted to implement the instructions;
the storage medium is adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform a method of auditing a picture timestamp according to any one of claims 1 to 9.
CN201910048327.9A 2019-01-18 2019-01-18 Picture timestamp auditing method and device Active CN111460198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910048327.9A CN111460198B (en) 2019-01-18 2019-01-18 Picture timestamp auditing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910048327.9A CN111460198B (en) 2019-01-18 2019-01-18 Picture timestamp auditing method and device

Publications (2)

Publication Number Publication Date
CN111460198A CN111460198A (en) 2020-07-28
CN111460198B true CN111460198B (en) 2023-06-20

Family

ID=71684088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910048327.9A Active CN111460198B (en) 2019-01-18 2019-01-18 Picture timestamp auditing method and device

Country Status (1)

Country Link
CN (1) CN111460198B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685672A (en) * 2020-12-24 2021-04-20 京东数字科技控股股份有限公司 Method and device for backtracking page session behavior track and electronic equipment
CN113891070B (en) * 2021-10-29 2023-12-15 北京环境特性研究所 Method and device for measuring delay time of network camera


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101478359B (en) * 2009-01-16 2013-01-23 华为技术有限公司 Method, apparatus and system for managing IEEE1588 timestamp

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012004631A (en) * 2010-06-14 2012-01-05 Mitsubishi Electric Corp Time stamp correction circuit and coding equipment
CN102968610A (en) * 2011-08-31 2013-03-13 富士通株式会社 Method and device for processing receipt images
US9239747B1 (en) * 2013-09-16 2016-01-19 Google Inc. Image timestamp correction using metadata
CN103905745A (en) * 2014-03-28 2014-07-02 浙江大学 Recognition method for ghosted timestamps in video frame
CN106845323A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of collection method of marking data, device and certificate recognition system
CN106570500A (en) * 2016-11-11 2017-04-19 北京三快在线科技有限公司 Text line recognition method and device and calculation device
CN106682669A (en) * 2016-12-15 2017-05-17 深圳市华尊科技股份有限公司 Image processing method and mobile terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yuqiang Zhang et al. Detecting Falsified Timestamps in Evidence Graph via Attack Graph. 2015 8th International Symposium on Computational Intelligence and Design (ISCID), 2016, full text. *
Yang Guoliang; Wang Zhiyuan; Zhang Yu; Kang Lele; Hu Zhengwei. Natural scene text detection based on a vertical region regression network. Computer Engineering & Science, 2018, (07), full text. *
Pan Shicheng et al. Research on the normalized processing of unstructured machine data. Modern Information Technology, 2018, Vol. 2, (2), full text. *
Bao Fumin; Li Aiguo; Qin Zheng. Timestamp recognition in color photographs. Journal of Fudan University (Natural Science), 2004, (05), full text. *

Also Published As

Publication number Publication date
CN111460198A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US8620094B2 (en) Pattern recognition apparatus, pattern recogntion method, image processing apparatus, and image processing method
CN106778737B (en) A kind of license plate antidote, device and a kind of video acquisition device
CN110276295B (en) Vehicle identification number detection and identification method and device
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN111754456A (en) Two-dimensional PCB appearance defect real-time automatic detection technology based on deep learning
CN111353485B (en) Seal identification method, device, equipment and medium
CN107622489A (en) A kind of distorted image detection method and device
EP2447884B1 (en) Method for detecting and recognising an object in an image, and an apparatus and a computer program therefor
CN111460198B (en) Picture timestamp auditing method and device
US20210303899A1 (en) Systems and methods for automatic recognition of vehicle information
CN108573244B (en) Vehicle detection method, device and system
CN111583180A (en) Image tampering identification method and device, computer equipment and storage medium
CN115810134B (en) Image acquisition quality inspection method, system and device for vehicle insurance anti-fraud
CN112631896A (en) Equipment performance testing method and device, storage medium and electronic equipment
CN103699876A (en) Method and device for identifying vehicle number based on linear array CCD (Charge Coupled Device) images
CN115170501A (en) Defect detection method, system, electronic device and storage medium
Kiew et al. Vehicle route tracking system based on vehicle registration number recognition using template matching algorithm
CN111259887B (en) Intelligent quality inspection method, system and device for dumb resource equipment
Priambada et al. Levensthein distance as a post-process to improve the performance of ocr in written road signs
KR102078822B1 (en) Method for distinguishing ballot-paper
CN116128853A (en) Production line assembly detection method, system, computer and readable storage medium
CN111402185A (en) Image detection method and device
US11580758B2 (en) Method for processing image, electronic device, and storage medium
CN115861161A (en) Machine learning system, learning data collection method, and storage medium
CN111931721A (en) Method and device for detecting color and number of annual inspection label and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant