CN111460198A - Method and device for auditing picture timestamp - Google Patents

Method and device for auditing picture timestamp

Info

Publication number
CN111460198A
CN111460198A
Authority
CN
China
Prior art keywords
text line
pictures
picture
region
timestamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910048327.9A
Other languages
Chinese (zh)
Other versions
CN111460198B (en)
Inventor
赵锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910048327.9A priority Critical patent/CN111460198B/en
Publication of CN111460198A publication Critical patent/CN111460198A/en
Application granted granted Critical
Publication of CN111460198B publication Critical patent/CN111460198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Abstract

The invention discloses a method and a device for auditing a picture timestamp, which relate to the technical field of image processing and mainly aim to improve the efficiency of auditing picture timestamps. The main technical scheme comprises: detecting all text line regions included in a plurality of continuously acquired pictures; selecting at least one target text line region from all the text line regions; for each picture, performing character recognition on the at least one target text line region in the picture to obtain the acquisition timestamp of the picture; and checking, based on the acquisition timestamp and the system timestamp of each picture, whether the recognition of the acquisition timestamp is abnormal.

Description

Method and device for auditing picture timestamp
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for auditing a picture timestamp.
Background
With the development of image processing technology, more and more businesses are conducted based on pictures. For example, road data collection is a process in which a picture acquisition device collects road pictures, and it is completed entirely on the basis of those pictures. To support subsequent service processing, when a picture is acquired the picture acquisition device marks an acquisition timestamp on it, and at the same time the service data system in which the device operates records a system timestamp for the picture according to its own clock. Normally, the difference between the acquisition timestamp and the system timestamp of a picture falls within a set time difference. However, as the picture acquisition device ages or overheats, the difference between the two timestamps may exceed the set time difference. Once that happens, the processing result deviates when the service data system performs business processing based on the acquisition timestamp.
At present, in order to detect such timestamp deviations in time, the acquisition timestamp in a picture is generally identified manually, and whether the difference between the acquisition timestamp and the system timestamp is acceptable is then also checked manually. However, differences between individual reviewers can cause recognition and review errors, and once a review error occurs the review must be performed again, consuming additional review time. Moreover, manual review must examine the acquisition timestamp and system timestamp of every picture one by one, which consumes a great deal of time and labor cost.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for auditing a picture timestamp, and mainly aims to improve the efficiency of auditing the picture timestamp.
In a first aspect, the present invention provides a method for auditing a picture timestamp, where the method includes:
detecting all text line regions included in a plurality of continuously collected pictures;
selecting at least one target text line region from all the text line regions;
for each of the pictures: performing character recognition on the at least one target text line region in the picture to obtain a collection timestamp of the picture;
and checking whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
In a second aspect, the present invention provides an apparatus for auditing a picture timestamp, including:
the detection unit is used for detecting all text line areas included in a plurality of continuously collected pictures; the selecting unit is used for selecting at least one target text line region from all the text line regions detected by the detecting unit;
an identification unit configured to perform, for each of the pictures: performing character recognition on the at least one target text line region in the picture to obtain a collection timestamp of the picture;
and the auditing unit is used for auditing whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
In a third aspect, the present invention provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor and to execute the method for auditing the picture time stamp according to any one of the above.
In a fourth aspect, the present invention provides an electronic device, comprising: a storage medium and a processor;
the processor is suitable for realizing instructions;
the storage medium adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform an auditing method for picture timestamps as described in any of the above.
According to the technical scheme, the method and the device for auditing the picture timestamp provided by the invention first select a target text line region from all text line regions included in a plurality of continuously acquired pictures, and then perform character recognition at the position corresponding to the target text line region in each picture, thereby obtaining the acquisition timestamp of each picture. Finally, whether the recognition of the acquisition timestamp is abnormal is checked based on the acquisition timestamp and the system timestamp of each picture. In this scheme, neither recognizing the acquisition timestamp in a picture nor auditing it against the system timestamp involves manual participation. This avoids recognition and review errors caused by differences between individual reviewers, and comparing the acquisition timestamp and system timestamp of each picture without manual work saves a large amount of review time. Therefore, the scheme provided by the invention can improve the efficiency of auditing picture timestamps.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart illustrating an auditing method for a picture timestamp according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a picture provided by an embodiment of the present invention;
FIG. 3 is a diagram illustrating a picture provided by another embodiment of the present invention;
FIG. 4 is a diagram illustrating a picture provided by another embodiment of the present invention;
FIG. 5 is a flowchart of an auditing method for picture timestamps according to another embodiment of the present invention;
FIG. 6 is a diagram illustrating a picture provided by another embodiment of the present invention;
fig. 7 is a schematic structural diagram illustrating an apparatus for auditing a picture timestamp according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram illustrating an apparatus for auditing a picture timestamp according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a method for auditing a picture timestamp, where the method mainly includes:
101. all text line regions included in a plurality of continuously acquired pictures are detected.
Specifically, each picture involved in this step is marked with an acquisition timestamp, which the picture acquisition device marks on the picture at the moment of capture. The acquisition timestamp indicates when the picture was captured, so subsequent business operations can be carried out according to it. The source of the pictures can be determined according to service requirements. Optionally, each picture is a road picture acquired by the picture acquisition device during road data collection, and a road track may be drawn based on such road pictures.
Specifically, acquisition factors such as illumination intensity and shooting angle may vary while the picture acquisition device is capturing pictures. Such variation can cause, in individual pictures, the region where the acquisition timestamp is located to take on the same color as the timestamp itself, or to be overexposed, so that the timestamp blends into the picture; it can also make parts of a picture closely resemble the region where the timestamp is located, so that text line regions are missed or erroneous text line regions are detected. To reduce the impact of these individual pictures on the subsequent timestamp audit, multiple continuously acquired pictures are used. In addition, this step collects all text line regions included across the multiple pictures, which provides a large selection basis for the subsequent choice of target text line region: a target text line region selected from all the text line regions covers the acquisition timestamp in each picture with higher probability, which improves the completeness and accuracy of the acquisition timestamps recognized from the target text line regions.
Specifically, the method for detecting all text line regions included in the plurality of continuously acquired pictures includes at least, but is not limited to: for each picture, detecting the picture with a preset text line region detection algorithm to obtain all text line regions included in it. Note that the detection may yield one or more text line regions per picture, and each detected text line region falls into at least one of the following categories: first, the region includes all characters of the picture's acquisition timestamp; second, the region includes only some characters of the acquisition timestamp; third, the region includes no character of the acquisition timestamp.
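The patent does not specify an implementation for this step; the following minimal sketch shows the per-picture detection loop, with a hypothetical `detect_text_lines` function standing in for the unspecified text line region detection algorithm, and regions represented as `(x, y, width, height)` tuples.

```python
def detect_all_text_line_regions(pictures, detect_text_lines):
    """Collect every text line region found across a burst of pictures.

    Each picture may yield zero, one, or several regions, and some of
    them may be false positives (category three above).
    """
    all_regions = []
    for picture in pictures:
        regions = detect_text_lines(picture)
        all_regions.extend(regions)
    return all_regions
```

The returned pool of regions is then the selection basis for choosing target text line regions in step 102.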
By way of example: fig. 2 shows a picture 21 from a plurality of continuously acquired pictures. After the picture is detected with a text line region detection method, all of its text line regions 211, 212 and 213 are obtained. Text line region 211 includes no character of the picture's acquisition timestamp; it is an erroneous text line region detected because variation of acquisition factors such as illumination intensity and shooting angle made part of the picture closely resemble the region where the acquisition timestamp is marked. Text line regions 212 and 213 each include some characters of the picture's acquisition timestamp.
By way of example: fig. 3 shows a picture 22 from the plurality of continuously acquired pictures. After the picture is detected with the text line region detection method, all of its text line regions 221 and 222 are obtained; each includes some characters of the picture's acquisition timestamp.
By way of example: fig. 4 shows a picture 23 from the plurality of continuously acquired pictures; detection with the text line region detection method yields all of its text line regions 231 and 232, each of which includes some characters of the picture's acquisition timestamp.
As can be seen from fig. 2 to 4, the number of text line regions in each of the pictures acquired in succession may be the same or different. The text line regions in the respective pictures may be located at completely overlapping or partially overlapping or non-overlapping positions. The text line regions in the respective pictures may be the same size or different sizes.
In addition, the specific type of the preset text line region detection method may be determined according to service requirements. Optionally, it may include, but is not limited to, a deep neural network text line region detection algorithm, for example one based on a convolutional neural network (CNN) model.
102. And selecting at least one target text line region from all the text line regions.
Specifically, the selected target text line region covers all characters of the acquisition timestamp of each picture in a set number of the pictures. That is, when the acquisition timestamp of each picture is subsequently recognized based on the target text line region, the completeness and accuracy of the recognized timestamps are high. The set number of pictures is drawn from the plurality of continuously acquired pictures, and the set number equals the total number of pictures or is not less than half of that total.
Specifically, when the number of the selected target text line regions is one, the target text line region may cover all characters of the acquisition time stamp of each of the plurality of pictures, or the target text line region may cover all characters of the acquisition time stamp of each of the set number of pictures.
Specifically, when the number of the selected target text line regions is multiple (multiple includes two or more), each target text line region covers a part of characters of the capture timestamp of each of the multiple pictures. That is, the selected total target text line region may cover all characters of the capture timestamp of each of the plurality of pictures, or the selected total target text line region may cover all characters of the capture timestamp of each of the set number of pictures.
103. For each of the pictures: and performing character recognition on the at least one target text line region in the picture to obtain a collection timestamp of the picture.
In this step, since the process of obtaining the acquisition timestamp is the same for every picture, one picture is taken as an example. A target text line region is marked in the picture; it may cover all or only some characters of the picture's acquisition timestamp. Ignoring the originally detected text line regions of the picture, a preset character recognition method is applied to the characters in the marked target text line region, and the recognition result is the picture's acquisition timestamp. Optionally, the character recognition process may be a single-character detection process. Note that when the target text line region covers all characters of the acquisition timestamp, the recognized timestamp is in the set format and is the accurate acquisition timestamp of the picture. When it covers only some characters, the recognized timestamp is not in the set format and is not the accurate acquisition timestamp. A picture whose recognized timestamp is not accurate can be eliminated, so that it is not used in the subsequent timestamp audit and does not interfere with the audit.
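The patent only says the recognized text must match "the set format"; the sketch below illustrates the completeness check with an assumed `YYYY-MM-DD HH:MM:SS` layout, which is an illustrative choice, not a format taken from the patent.

```python
import re

# Assumed example format for a complete acquisition timestamp.
TIMESTAMP_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$")

def parse_capture_timestamp(recognized_text):
    """Return the timestamp if it matches the set format, else None.

    A None result means the target region covered only part of the
    timestamp, so the picture is culled from the subsequent audit.
    """
    text = recognized_text.strip()
    if TIMESTAMP_PATTERN.match(text):
        return text
    return None
```

Pictures for which `parse_capture_timestamp` returns `None` are excluded, removing their error interference from the audit in step 104.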
Specifically, the character recognition method may be set according to service requirements. Optionally, it may include, but is not limited to, optical character recognition (OCR).
104. And checking whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
Specifically, when a picture is acquired, the picture acquisition device marks the acquisition timestamp on it, and the service data system in which the device operates also records a system timestamp for the picture according to its own clock. A certain time difference is allowed between the acquisition timestamp and the system timestamp of a picture, but if their difference exceeds the allowed time difference, the processing result may deviate when the service data system performs business processing based on the acquisition timestamp. By way of example: when the pictures are road pictures acquired during road data collection, the road service data system determines a road track from the acquisition timestamps of the road pictures. The track determination process is as follows: the road service data system selects a plurality of continuous pictures based on its own clock and the system timestamps of the pictures, and then determines the road track from the acquisition timestamps of the selected pictures. Once the deviation between the system timestamps and the acquisition timestamps of these pictures is large, the determined road track drifts and no longer matches the actual track.
Therefore, to reduce the probability of deviation when the service data system processes business based on the acquisition timestamp, whether the recognition of the acquisition timestamp is abnormal must be checked based on the acquisition timestamp and the system timestamp of each picture, so that any abnormality can be eliminated in time.
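A minimal sketch of the audit check in step 104. The timestamp format and the allowed difference of a few seconds are illustrative assumptions; the patent specifies neither value.

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp layout

def audit_timestamps(capture_ts, system_ts, allowed_seconds=2.0):
    """Return True when the acquisition/system timestamps agree within
    the allowed difference, False when the recognition is flagged."""
    delta = abs((datetime.strptime(capture_ts, FMT)
                 - datetime.strptime(system_ts, FMT)).total_seconds())
    return delta <= allowed_seconds
```

Pictures for which this returns `False` would be reported as abnormal, for instance due to device aging or overheating as described in the background.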
In the method for auditing a picture timestamp provided by this embodiment, a target text line region is first selected from all text line regions included in a plurality of continuously acquired pictures, character recognition is then performed at the position corresponding to the target text line region in each picture to obtain the acquisition timestamp of each picture, and finally whether the recognition of the acquisition timestamp is abnormal is checked based on the acquisition timestamp and the system timestamp of each picture. Neither recognizing the acquisition timestamp in a picture nor auditing it against the system timestamp involves manual participation, which avoids recognition and review errors caused by differences between individual reviewers, and comparing the two timestamps of each picture without manual work saves a large amount of review time. Therefore, the scheme provided by the invention can improve the efficiency of auditing picture timestamps.
Further, according to the method shown in fig. 1, another embodiment of the present invention further provides an auditing method for a picture timestamp, as shown in fig. 5, where the method mainly includes:
301. all text line regions included in a plurality of continuously acquired pictures are detected.
Specifically, the process of detecting all text line regions included in the plurality of continuously acquired pictures at least includes: determining all text line regions included in each of the plurality of pictures with a preset deep neural network text line region detection algorithm. The algorithm can be selected according to service requirements; optionally, it may include, but is not limited to, a text line region detection algorithm based on a convolutional neural network (CNN) model.
Determining all text line regions included in one picture with a CNN-based text line region detection algorithm is explained as an example. The algorithm carries a classifier that distinguishes text line regions from non-text line regions. The algorithm traverses the picture with a sliding window and inputs the picture region inside the current window into the classifier, which judges whether that region is a text line region. If it is, the region is output as a text line region; if not, traversal of the picture continues, and the process repeats until the whole picture has been traversed.
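The sliding-window traversal just described can be sketched as follows. `classify` is a stand-in for the CNN classifier, and the window size and stride are illustrative values not given in the patent.

```python
def sliding_window_detect(picture_w, picture_h, classify,
                          win_w=100, win_h=30, stride=10):
    """Return every window the classifier labels as a text line region.

    Windows are (x, y, width, height); classify(x, y, w, h) answers
    whether the picture region inside that window is a text line.
    """
    text_regions = []
    for y in range(0, picture_h - win_h + 1, stride):
        for x in range(0, picture_w - win_w + 1, stride):
            if classify(x, y, win_w, win_h):
                text_regions.append((x, y, win_w, win_h))
    return text_regions
```

In practice a CNN detector would also merge overlapping responses; this sketch only shows the traverse-and-classify loop the paragraph describes.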
302. Forming at least one region set based on all the text line regions; each region set comprises at least two text line regions, and the intersection-over-union ratio between any two text line regions within a set is greater than a preset first threshold, while the intersection-over-union ratio between text line regions of different sets is smaller than the first threshold.
Specifically, the process of forming at least one region set based on all the text line regions may include: detecting whether a first text line region, i.e., one that does not intersect any other text line region, exists among all the text line regions; if so, eliminating the first text line region and forming the at least one region set from the remaining text line regions; if not, forming the at least one region set from all the text line regions. The probability that a first text line region contains any character of the acquisition timestamp is low, so to reduce the selection base for extracting target text line regions, any detected first text line region is removed in advance. A first text line region is an erroneous region detected because variation of acquisition factors such as illumination intensity and shooting angle during picture acquisition made part of a picture closely resemble the region where the acquisition timestamp is located.
By way of example: the plurality of continuously acquired pictures are picture 21, picture 22 and picture 23 in fig. 2-4, and all the text line regions included in them are 211, 212, 213, 221, 222, 231 and 232. Region 211 is a first text line region and is culled. Two region sets are formed: one includes text line regions 212, 221 and 231; the other includes text line regions 213, 222 and 232. As can be seen from fig. 2 to 4, the intersection-over-union ratio between any two text line regions within each set is greater than the preset first threshold, while the ratio between text line regions of the two different sets is less than the first threshold. The intersection-over-union ratio of any two text line regions is determined from the coordinate ranges of the regions.
Specifically, the text line regions included in the at least one region set formed in this step may be some or all of the text line regions of the plurality of pictures. Text line regions not covered by any region set are of two kinds: first, first text line regions; second, text line regions whose intersection-over-union ratio with every text line region of every region set is smaller than the first threshold.
The intersection-over-union ratio of any two text line regions in a region set being greater than the first threshold means that the ratio of their intersection to their union exceeds the first threshold. This ratio reflects the degree of coincidence of the two regions; in the ideal case it equals 1, meaning the two regions coincide completely. The first threshold is an intersection-over-union threshold, and its specific value may be determined according to service requirements, for example 0.5, 0.7, 0.8 or 0.82.
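The intersection-over-union computation and the set-forming rule above can be sketched as follows. The greedy grouping strategy is an assumption; the patent only states the IoU conditions a finished set must satisfy, not how the sets are built.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned regions (x, y, w, h):
    overlap area divided by union area, 1.0 for identical regions."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def group_regions(regions, threshold=0.7):
    """Greedily place each region into the first set where its IoU with
    every member exceeds the threshold, opening a new set otherwise."""
    sets = []
    for r in regions:
        for s in sets:
            if all(iou(r, m) > threshold for m in s):
                s.append(r)
                break
        else:
            sets.append([r])
    # Per the text, a region set must contain at least two regions.
    return [s for s in sets if len(s) >= 2]
```

A first text line region (IoU of 0 with everything) ends up alone in its candidate set and is dropped by the final filter, matching the culling described above.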
When one region set is formed, the probability is high that the text line regions in the set cover all characters of the acquisition timestamp of each of the plurality of pictures, or at least of each of the set number of pictures.
When multiple region sets (two or more) are formed, the probability is high that the combination of text line regions across the sets covers all characters of the acquisition timestamp of each of the plurality of pictures, or at least of each of the set number of pictures.
303. And extracting one target text line region from each region set respectively.
Specifically, the process of extracting one target text line region from each region set at least includes: determining the confidence of each text line region in the region set, and extracting the text line region with the highest confidence as the target text line region of the set.
Taking the confidence of one text line region in a region set as an example: the confidence of a text line region is, in effect, the probability that the other text line regions in the set coincide with it. The higher the confidence, the higher the probability that the region coincides with the other regions of the set, and therefore the higher the probability that it covers all or some characters of the acquisition timestamp of each of the plurality of pictures.
Extracting the text line region with the highest confidence as the target text line region of the set works as follows. First, when exactly one text line region in the set has the highest confidence, that region is extracted directly as the target text line region. Second, when several text line regions share the highest confidence, they are equally likely to cover all or some characters of the acquisition timestamp of each picture, so one of them is extracted at random as the target text line region.
By way of example: the plurality of pictures are pictures 21, 22 and 23 in figs. 2-4, and the extracted target text line regions are 222 and 231.
304. Identify the at least one target text line region in each picture, based on a preset coordinate system, according to at least one position parameter of the at least one target text line region.
Specifically, two types of position parameter are involved in this step: the first is a coordinate range; the second is a coordinate point together with a length value and a width value. With the first type, the coordinate range is located directly in the picture to identify the target text line region. With the second type, the coordinate point is located first, and the target text line region is then identified from that point using the length value and the width value.
Specifically, all the pictures are placed in the same coordinate system, so that the target text line region is identified in each picture according to a uniform coordinate system. As a result, once the target text line regions have been identified, if the pictures were stacked, the target text line regions in the pictures would coincide.
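A minimal sketch of normalising the two kinds of position parameter to one box representation; the tuple layouts and the function name are assumptions chosen for illustration, not defined by the patent:

```python
def region_box(position):
    """Normalise either style of position parameter to (x1, y1, x2, y2).

    position is either a coordinate range ((x1, y1), (x2, y2)) or a
    coordinate point plus length and width ((x, y), length, width)."""
    if len(position) == 2:                 # first type: coordinate range
        (x1, y1), (x2, y2) = position
        return (x1, y1, x2, y2)
    (x, y), length, width = position       # second type: point + extents
    return (x, y, x + length, y + width)
```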
By way of example: the plurality of pictures are pictures 21, 22 and 23 in figs. 2-4, and the extracted target text line regions are 222 and 231. The effect of identifying the target text line regions 222 and 231 in picture 22 is shown in fig. 6.
305. Identify the characters in the at least one target text line region of each picture using a preset character recognition method, to obtain the acquisition timestamp of each picture.
Specifically, the character recognition method can be determined according to service requirements. Optionally, the character recognition method may include, but is not limited to, optical character recognition (OCR).
When recognizing characters in a picture, the originally detected text line regions of the picture are ignored, and characters are recognized only within the target text line regions identified in the picture. The acquisition timestamp of the picture is obtained after recognition.
By way of example: taking fig. 6, the acquisition timestamp recognized in picture 22 is "2018/09/13 12:29:05".
306. For each of the pictures: detect whether the acquisition timestamp of the picture conforms to a preset timestamp format; if so, perform 308; otherwise, perform 307.
In practice, the target text line regions may fail to fully cover all the characters of the acquisition timestamps of a small proportion of the pictures, so the acquisition timestamps recognized for those pictures may be incomplete. If such incomplete acquisition timestamps were used in the subsequent audit of the picture timestamps, the audit result could deviate; it is therefore necessary to detect whether the acquisition timestamp of each picture conforms to the preset timestamp format.
Taking one picture as an example: when the acquisition timestamp of the picture is detected not to conform to the preset timestamp format, it indicates that all the characters of the acquisition timestamp may not have been fully covered by the target text line regions, so that no acquisition timestamp conforming to the preset format was recognized. The picture then needs to be culled, to prevent it from interfering with the subsequent audit of the picture timestamps.
Taking one picture as an example: when the acquisition timestamp of the picture is detected to conform to the preset timestamp format, it indicates that all the characters of the acquisition timestamp were fully covered by the target text line regions, so that an acquisition timestamp conforming to the preset format was recognized.
In addition, in practice the preset timestamp format can be determined according to service requirements. Optionally, the timestamp format is set to match the format of the acquisition timestamp. By way of example: if the acquisition timestamp has the format "2018/09/11 12:29:06", the timestamp format is "XXXX/XX/XX XX:XX:XX" (each X representing one character).
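The format check of step 306 can be sketched with a regular expression. The digit-based pattern below is one plausible concrete realisation of the "XXXX/XX/XX XX:XX:XX" example above, not the patent's own definition:

```python
import re

# One possible realisation of the "XXXX/XX/XX XX:XX:XX" format, taking
# each X to be a digit (an assumption; the patent only says "a character").
TIMESTAMP_FORMAT = re.compile(r"\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}")

def matches_timestamp_format(text):
    """True when the whole string conforms to the preset format."""
    return TIMESTAMP_FORMAT.fullmatch(text) is not None
```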
307. Cull the picture, and perform 310.
Specifically, the picture may be culled by storing it in a designated storage location, so that it can be read back from that location if it is needed later.
308. Judge whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than a preset second threshold; if not, perform 309.
Taking one picture as an example: when the time difference between the acquisition timestamp of the picture and the system timestamp is judged to be smaller than the preset second threshold, the picture acquisition device is unlikely to suffer from abnormalities such as device aging, severe overheating or device failure; the device is considered normal, and the picture does not need to be marked.
Taking one picture as an example: when the time difference between the acquisition timestamp of the picture and the system timestamp is judged not to be smaller than the preset second threshold, the probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure is high, that is, the acquisition timestamp of the picture is abnormal; the picture needs to be marked, and 309 is performed.
The second threshold involved in this step is a time-difference threshold, and its specific size can be determined according to service requirements. Optionally, the second threshold is 2 seconds.
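The marking decision of step 308 can be sketched as follows; the function name and the use of `datetime` values are illustrative assumptions, with the optional 2-second threshold as the default:

```python
from datetime import datetime

def needs_marking(acquisition_ts, system_ts, second_threshold=2.0):
    """True when the acquisition timestamp deviates from the system
    timestamp by at least the threshold (seconds), i.e. step 309 applies."""
    delta = abs((system_ts - acquisition_ts).total_seconds())
    return delta >= second_threshold
```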
309. Mark the picture.
Specifically, the picture can be marked with a preset mark. The specific type of the preset mark can be determined according to service requirements. Optionally, the preset mark may include, but is not limited to, at least one of a character, a symbol and a number. By way of example: the preset mark is "biaoji=0".
310. Judge whether any of the plurality of pictures have been culled; if so, perform 311; otherwise, perform 314.
Specifically, this step is performed after each of the plurality of pictures has gone through processes 306 to 309.
Specifically, when at least some of the plurality of pictures have been culled, the culled pictures may affect the accuracy of the subsequent audit of the picture acquisition timestamps, so it is necessary to judge whether any culled pictures exist among the plurality of pictures.
311. Judge whether the ratio of the number of culled pictures to the total number of the plurality of pictures reaches a preset fifth threshold; if so, perform 317; otherwise, perform 312.
Specifically, if a large proportion of the plurality of pictures has been culled, auditing the picture timestamps with only the remaining pictures would yield a result of low accuracy. To avoid this, it is necessary to judge whether the ratio of the number of culled pictures to the total number of the plurality of pictures reaches the preset fifth threshold.
Specifically, when the ratio of the number of culled pictures to the total number of the plurality of pictures is judged not to reach the fifth threshold, auditing the picture timestamps based on the pictures that were not culled still yields a relatively accurate result, so 312 is performed.
Specifically, when the ratio of the number of culled pictures to the total number of the plurality of pictures is judged to reach the fifth threshold, too many pictures have been culled; an audit of the picture timestamps based on the remaining pictures would have low reliability, so 317 is performed.
The fifth threshold involved in this step is a ratio threshold, and its specific size can be determined according to service requirements. Optionally, the fifth threshold is any value above 50%.
312. Determine a first number of pictures among the plurality of pictures that were not culled, based on the total number.
Specifically, the number of culled pictures is subtracted from the total number of the plurality of pictures; the result is the first number of pictures that were not culled.
313. Judge whether the ratio between the number of marked pictures among the plurality of pictures and the first number is greater than a preset third threshold; if not, perform 315; otherwise, perform 316.
Specifically, when the ratio between the number of marked pictures among the plurality of pictures and the first number is judged to be greater than the third threshold, more of the pictures have correct acquisition timestamp identifications, indicating a low probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure; the acquisition timestamp identification of the pictures acquired by the device is then normal.
Specifically, when the ratio between the number of marked pictures among the plurality of pictures and the first number is judged not to be greater than the third threshold, fewer of the pictures have correct acquisition timestamp identifications, indicating a high probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure; the acquisition timestamp identification of the pictures acquired by the device is then abnormal.
The third threshold involved in this step is a ratio threshold, and its specific size can be determined according to service requirements. Optionally, the third threshold is any value above 50%.
314. Judge whether the ratio between the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures is greater than a preset fourth threshold; if not, perform 315; otherwise, perform 316.
Specifically, when the ratio between the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures is judged to be greater than the fourth threshold, more of the pictures have correct acquisition timestamp identifications, indicating a low probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure; the acquisition timestamp identification of the pictures acquired by the device is then normal.
Specifically, when the ratio between the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures is judged not to be greater than the fourth threshold, fewer of the pictures have correct acquisition timestamp identifications, indicating a high probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure; the acquisition timestamp identification of the pictures acquired by the device is then abnormal.
The fourth threshold involved in this step is a ratio threshold, and its specific size can be determined according to service requirements. Optionally, the fourth threshold is any value above 50%.
315. The audit finds the identification of the acquisition timestamp abnormal, and the current process ends.
Specifically, when the audit finds the identification of the acquisition timestamp abnormal, the probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure is high. Service personnel can maintain or replace the picture acquisition device according to the audit result, to reduce the negative impact of abnormal acquisition timestamps on subsequent service operations.
316. The audit finds the identification of the acquisition timestamp normal, and the current process ends.
317. Raise an alarm.
Specifically, the specific type of alarm can be determined according to service requirements. Optionally, the alarm consists of sending alarm information to a designated terminal.
Specifically, the alarm lets service personnel know in time that the identification of the picture timestamps cannot be audited on the basis of the current plurality of pictures, so that a new plurality of pictures can be substituted promptly and the audit process can proceed smoothly.
Further, in another embodiment of the present invention, the plurality of continuously acquired pictures are pictures 21, 22 and 23 in figs. 2-4. All the text line regions included in the plurality of pictures are 211, 212, 213, 221, 222, 231 and 232. The text line region 211 intersects none of the text line regions 212, 213, 221, 222, 231 and 232, indicating that it is an erroneously detected region caused by variations in acquisition factors such as illumination intensity and acquisition angle during picture acquisition, so the text line region 211 is removed. Two region sets are formed from the text line regions remaining after region 211 is removed: one region set comprises text line regions 212, 221 and 231; the other comprises text line regions 213, 222 and 232. As can be seen from figs. 2-4, the intersection-over-union ratio between any two text line regions within each region set is greater than a preset first threshold, while the intersection-over-union ratio between text line regions of the two different region sets is smaller than the first threshold. The intersection-over-union ratio of any two text line regions is in practice determined from the coordinate ranges of the text line regions. The confidences of the text line regions 212, 221, 231, 213, 222 and 232 are then determined, and the text line region with the highest confidence in each region set is extracted as the target text line region of that set; for example, the extracted target text line regions are 222 and 231. Based on a preset coordinate system, the target text line regions 222 and 231 are identified in pictures 21, 22 and 23 according to their coordinate points, length values and width values; fig. 6 shows the effect of identifying the target text line regions 222 and 231 in picture 22. The characters in the target text line regions 222 and 231 of each picture are then recognized with a preset character recognition method to obtain the acquisition timestamp of each picture; taking fig. 6 as an example, the acquisition timestamp recognized in picture 22 is "2018/09/13 12:29:05". The acquisition timestamps recognized for pictures 21, 22 and 23 are all judged to conform to the preset timestamp format, so the following step is performed for each picture: judge whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than a preset threshold. For example, after the time differences between the acquisition timestamps of the three pictures 21, 22 and 23 and the system timestamp are all judged to be smaller than the preset threshold, the audit finds the identification of the acquisition timestamps normal, indicating that the probability that the picture acquisition device suffers from abnormalities such as device aging, severe overheating or device failure is low, and that the acquisition timestamp identification in the pictures acquired by the device is normal.
Further, according to the above method embodiment, another embodiment of the present invention further provides an apparatus for auditing a picture timestamp, as shown in fig. 7, where the apparatus includes:
a detecting unit 41, configured to detect all text line regions included in a plurality of continuously acquired pictures;
a selecting unit 42, configured to select at least one target text line region from all the text line regions detected by the detecting unit 41;
an identifying unit 43, configured to, for each of the pictures, perform character recognition on the at least one target text line region in the picture to obtain the acquisition timestamp of the picture;
and the auditing unit 44 is used for auditing whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
The invention provides an auditing device for picture timestamps. Target text line regions are selected from all the text line regions included in a plurality of continuously acquired pictures, and character recognition is then performed at the positions corresponding to those regions in each picture to obtain the acquisition timestamp of each picture. Finally, whether the identification of the acquisition timestamps is abnormal is audited based on the acquisition timestamp and the system timestamp of each picture. In this scheme, neither the recognition of the acquisition timestamps in the pictures nor the audit based on the acquisition and system timestamps involves manual work. Recognition and audit errors caused by individual human differences are thus avoided, and since no person has to compare the acquisition timestamp and system timestamp of each picture, a great deal of audit time is saved. The scheme provided by the invention therefore improves the efficiency of auditing picture timestamps.
Optionally, as shown in fig. 8, the selecting unit 42 includes:
a forming module 421, configured to form at least one region set based on all the text line regions; each region set comprises at least two text line regions respectively, and the intersection-parallel ratio between any two text line regions is greater than a preset first threshold value; the intersection ratio between text line regions of different region sets is smaller than the first threshold value;
an extracting module 422, configured to extract one target text line region from each region set.
Optionally, as shown in fig. 8, the forming module 421 includes:
a detection submodule 4211, configured to detect whether a first text line region exists among all the text line regions, the first text line region intersecting no other text line region; if so, trigger the forming submodule 4212;
the forming submodule 4212 is configured to, under the trigger of the detection submodule, remove the first text line region and form the at least one region set based on all the text line regions remaining after the first text line region is removed.
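The removal of such non-intersecting "first text line regions" can be sketched as below; the function name and the `overlap` callback (returning an intersection measure for two regions) are illustrative assumptions:

```python
def remove_isolated_regions(regions, overlap):
    """Drop every 'first text line region', i.e. a region that intersects
    no other region, before the region sets are formed."""
    return [r for i, r in enumerate(regions)
            if any(overlap(r, o) > 0
                   for j, o in enumerate(regions) if j != i)]
```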
Optionally, as shown in fig. 8, the extracting module 422 is configured to, for each of the region sets: determining a confidence level of each text line region in the region set; and extracting the text line region with the highest confidence coefficient as a target text line region of the region set.
Optionally, as shown in fig. 8, the auditing unit 44 includes:
a marking module 441 configured to perform, for each of the pictures: judging whether the time difference between the acquisition time stamp of the picture and the system time stamp is smaller than a preset second threshold value or not; if not, marking the picture;
the auditing module 442 is configured to audit whether the identifier of the acquisition timestamp is abnormal based on the number of marked pictures in the plurality of pictures and the total number of the plurality of pictures.
Optionally, as shown in fig. 8, the auditing unit 44 further includes:
a detecting module 443, configured to detect whether the acquisition timestamp of the picture conforms to a preset timestamp format; if so, trigger the marking module 441; otherwise, trigger the culling module 444;
the culling module 444 is configured to cull the picture under the trigger of the detecting module 443.
Optionally, as shown in fig. 8, the auditing unit 44 further includes:
the determining module 445 is configured to determine whether the plurality of pictures have the removed pictures; if yes, triggering a first audit sub-module 4421 in the audit module 442; if not, a second audit sub-module 4422 in the audit module 442 is triggered;
the first auditing sub-module 4421 is configured to determine, under the trigger of the determining module 445, a first number of the plurality of pictures that are not removed based on the total amount, and determine whether a number ratio between the number of the marked pictures in the plurality of pictures and the first number is greater than a preset third threshold; and if not, checking that the identification of the acquisition timestamp is abnormal.
The second audit sub-module 4422 is configured to determine whether a quantity ratio between the number of the marked pictures and the total number of the multiple pictures in the multiple pictures is greater than a preset fourth threshold under the trigger of the determining module 445; and if not, checking that the identification of the acquisition timestamp is abnormal.
Optionally, the first auditing sub-module 4421 is further configured to determine, under the triggering of the determining module 445, whether a ratio of the number of the removed pictures to the total number of the multiple pictures reaches a preset fifth threshold; and if not, executing the judgment to judge whether the number ratio between the number of the marked pictures in the plurality of pictures and the first number is larger than a preset third threshold value.
Optionally, as shown in fig. 8, the identifying unit 43 includes:
an identifying module 431, configured to identify, based on a preset coordinate system, the at least one target text line region in the picture according to at least one position parameter of the at least one target text line region;
an identifying module 432, configured to identify a character in the at least one target text line region in the picture by using a preset character identifying method.
Optionally, the detecting unit 41 is configured to determine all text line regions included in each of the pictures by using a preset deep neural network text line region detection algorithm.
In the auditing device for the picture timestamp provided in the embodiment of the present invention, for a detailed description of the method adopted in the operation process of each functional module, reference may be made to the corresponding method in the method embodiments of fig. 1 and 5, which is not described herein again.
Further, according to the above embodiments, an embodiment of the present invention provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor and execute the method for reviewing a picture timestamp as described in any one of the above.
Further, according to the above embodiments, an embodiment of the present invention provides an electronic device, including: a storage medium and a processor;
the processor is suitable for realizing instructions;
the storage medium adapted to store a plurality of instructions;
the instructions are adapted to be loaded by the processor and to perform an auditing method for picture timestamps as described in any of the above.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the method and apparatus described above may refer to one another. In addition, "first", "second" and the like in the above embodiments serve to distinguish the embodiments and do not represent the merits of the embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the method, apparatus and framework for operation of a deep neural network model in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (22)

1. An auditing method for a picture timestamp is characterized by comprising the following steps:
detecting all text line regions included in a plurality of continuously collected pictures;
selecting at least one target text line region from all the text line regions;
for each of the pictures: performing character recognition on the at least one target text line region in the picture to obtain a collection timestamp of the picture;
and checking whether the identification of the acquisition time stamp is abnormal or not based on the acquisition time stamp and the system time stamp of each picture.
2. The method of claim 1, wherein said selecting at least one target text line region from all the text line regions comprises:
forming at least one region set based on all the text line regions; each region set comprises at least two text line regions respectively, and the intersection-parallel ratio between any two text line regions is greater than a preset first threshold value; the intersection ratio between text line regions of different region sets is smaller than the first threshold value;
and extracting one target text line region from each region set respectively.
3. The method of claim 2, wherein forming at least one region set based on all the text line regions comprises:
detecting whether a first text line region exists among all the text line regions, the first text line region intersecting no other text line region;
and if so, eliminating the first text line region, and forming the at least one region set based on all the text line regions after the first text line region is eliminated.
4. The method of claim 2, wherein said extracting one target text line region from each of the region sets comprises:
for each of the region sets, performing: determining a confidence of each text line region in the region set; and extracting the text line region with the highest confidence as the target text line region of the region set.
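The selection logic of claims 2-4 can be sketched as follows. This is an illustrative, non-authoritative sketch: the function names, the greedy grouping strategy, and the 0.5 IoU threshold are assumptions for demonstration, not values taken from the patent.

```python
def iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2); returns intersection-over-union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def select_target_regions(regions, iou_threshold=0.5):
    """regions: list of (box, confidence) detected across the pictures.
    Groups boxes whose pairwise IoU exceeds the first threshold (claim 2),
    drops boxes that overlap nothing (claim 3), and keeps the
    highest-confidence box of each group (claim 4)."""
    groups = []
    for box, conf in regions:
        for g in groups:
            if all(iou(box, b) > iou_threshold for b, _ in g):
                g.append((box, conf))
                break
        else:
            groups.append([(box, conf)])
    # claim 3: a "first text line region" intersecting no other region is removed
    groups = [g for g in groups if len(g) >= 2]
    return [max(g, key=lambda r: r[1])[0] for g in groups]
```

Because the timestamp is burned into the same spot of every frame, its box recurs across the continuously collected pictures with high mutual IoU, while spurious detections rarely repeat; that is what the grouping exploits.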
5. The method according to any one of claims 1-4, wherein said auditing whether the identification of the acquisition timestamp is abnormal based on the acquisition timestamp and the system timestamp of each of the pictures comprises:
for each of the pictures: judging whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than a preset second threshold; and if not, marking the picture;
and auditing whether the identification of the acquisition timestamp is abnormal based on the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures.
6. The method of claim 5, wherein before judging whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than the preset second threshold, the method further comprises:
detecting whether the acquisition timestamp of the picture conforms to a preset timestamp format; if so, executing the step of judging whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than the preset second threshold; otherwise, removing the picture.
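As a minimal sketch of the format pre-check in claim 6: the claim leaves the "preset timestamp format" to the implementer, so the `YYYY-MM-DD hh:mm:ss` pattern below is an assumption for illustration only.

```python
import re
from datetime import datetime

# Hypothetical preset format; the patent does not fix a concrete pattern.
TIMESTAMP_RE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$")

def parse_capture_timestamp(text):
    """Return a datetime if the OCR output matches the preset format,
    else None (the picture is then removed from the audit)."""
    text = text.strip()
    if not TIMESTAMP_RE.match(text):
        return None
    try:
        return datetime.strptime(text, "%Y-%m-%d %H:%M:%S")
    except ValueError:  # digits match the pattern but the date is invalid, e.g. month 13
        return None
```

Parsing (rather than only pattern-matching) catches OCR errors that produce well-shaped but impossible dates, which would otherwise distort the time-difference comparison of claim 5.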
7. The method of claim 6, wherein auditing whether the identification of the acquisition timestamp is abnormal based on the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures comprises:
judging whether any pictures have been removed from the plurality of pictures;
if so, determining, based on the total number, a first number of pictures that have not been removed from the plurality of pictures, and judging whether the ratio of the number of marked pictures among the plurality of pictures to the first number is greater than a preset third threshold; if not, finding that the identification of the acquisition timestamp is abnormal;
if not, judging whether the ratio of the number of marked pictures among the plurality of pictures to the total number of the plurality of pictures is greater than a preset fourth threshold; and if not, finding that the identification of the acquisition timestamp is abnormal.
8. The method according to claim 7, wherein before judging whether the ratio of the number of marked pictures among the plurality of pictures to the first number is greater than the preset third threshold, the method further comprises:
judging whether the ratio of the number of removed pictures to the total number of the plurality of pictures reaches a preset fifth threshold; if not, executing the step of judging whether the ratio of the number of marked pictures among the plurality of pictures to the first number is greater than the preset third threshold; otherwise, raising an alarm.
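The audit flow of claims 5, 7, and 8 could be combined as below. This is a hypothetical sketch: all threshold values are invented, timestamps are taken as epoch seconds, and the sketch reads the translated "if not ... abnormal" branches in the sense that a high proportion of mismatched stamps indicates abnormal recognition.

```python
def audit_timestamps(pairs, second_threshold=2.0, third_threshold=0.1,
                     fourth_threshold=0.1, fifth_threshold=0.5):
    """pairs: one (capture_ts, system_ts) per picture, in epoch seconds;
    capture_ts is None for pictures removed by the format check.
    Returns 'alarm', 'abnormal', or 'normal'. Thresholds are illustrative."""
    total = len(pairs)
    removed = sum(1 for cap, _ in pairs if cap is None)
    # claim 5: mark a picture whose time difference is NOT below the threshold
    marked = sum(1 for cap, sys_ts in pairs
                 if cap is not None and abs(cap - sys_ts) >= second_threshold)
    if removed:
        # claim 8: too large a share of unreadable stamps raises an alarm
        if removed / total >= fifth_threshold:
            return "alarm"
        # claim 7, first branch: compare against the pictures that survived
        ratio, threshold = marked / (total - removed), third_threshold
    else:
        # claim 7, second branch: compare against the total
        ratio, threshold = marked / total, fourth_threshold
    return "abnormal" if ratio > threshold else "normal"
```

Auditing over a batch of continuously collected pictures, rather than per picture, keeps a single OCR misread from flagging the whole camera clock as wrong.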
9. The method according to any one of claims 1-4, wherein said performing character recognition on the at least one target text line region in the picture comprises:
locating the at least one target text line region in the picture according to at least one position parameter of the at least one target text line region in a preset coordinate system;
and recognizing characters in the at least one target text line region in the picture by using a preset character recognition method.
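Claim 9 first locates the region by its position parameters and then applies a character recognition method that the claim leaves unspecified. The locating step amounts to a coordinate crop, sketched here under the assumption of a row-major pixel grid with the origin at the top left (the OCR engine itself is out of scope):

```python
def crop_region(image, box):
    """image: 2-D list of pixel rows (row-major, origin at top-left);
    box: (x1, y1, x2, y2) position parameters in the preset coordinate
    system. Returns the sub-image handed to the character recognizer."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```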
10. The method according to any one of claims 1-4, wherein said detecting all the text line regions included in the plurality of pictures comprises:
determining all the text line regions included in each of the pictures by using a preset deep-neural-network text line region detection algorithm.
11. An apparatus for auditing a picture timestamp, characterized by comprising:
a detection unit, configured to detect all text line regions included in a plurality of continuously collected pictures;
a selecting unit, configured to select at least one target text line region from all the text line regions detected by the detection unit;
an identification unit, configured to perform, for each of the pictures: character recognition on the at least one target text line region in the picture to obtain an acquisition timestamp of the picture;
and an auditing unit, configured to audit, based on the acquisition timestamp and the system timestamp of each picture, whether the identification of the acquisition timestamp is abnormal.
12. The apparatus of claim 11, wherein the selecting unit comprises:
a forming module, configured to form at least one region set based on all the text line regions, wherein each region set comprises at least two text line regions, the intersection-over-union (IoU) ratio between any two text line regions in the same region set is greater than a preset first threshold, and the IoU ratio between text line regions belonging to different region sets is smaller than the first threshold;
and an extracting module, configured to extract one target text line region from each of the region sets.
13. The apparatus of claim 12, wherein the forming module comprises:
a detection submodule, configured to detect whether a first text line region exists among all the text line regions, the first text line region being a text line region that does not intersect any other text line region, and if so, to trigger a forming submodule;
and the forming submodule, configured to, when triggered by the detection submodule, remove the first text line region and form the at least one region set based on all the text line regions remaining after the first text line region is removed.
14. The apparatus according to claim 12, wherein the extracting module is configured to perform, for each of the region sets: determining a confidence of each text line region in the region set; and extracting the text line region with the highest confidence as the target text line region of the region set.
15. The apparatus according to any one of claims 11-14, wherein the auditing unit comprises:
a marking module, configured to perform, for each of the pictures: judging whether the time difference between the acquisition timestamp of the picture and the system timestamp is smaller than a preset second threshold; and if not, marking the picture;
and an auditing module, configured to audit whether the identification of the acquisition timestamp is abnormal based on the number of marked pictures among the plurality of pictures and the total number of the plurality of pictures.
16. The apparatus of claim 15, wherein the auditing unit further comprises:
a detection module, configured to detect whether the acquisition timestamp of the picture conforms to a preset timestamp format; if so, to trigger the marking module; otherwise, to trigger a removal module;
and the removal module, configured to remove the picture when triggered by the detection module.
17. The apparatus of claim 16, wherein the auditing unit further comprises:
a judging module, configured to judge whether any pictures have been removed from the plurality of pictures; if so, to trigger a first auditing submodule in the auditing module; and if not, to trigger a second auditing submodule in the auditing module;
the first auditing submodule, configured to, when triggered by the judging module, determine, based on the total number, a first number of pictures that have not been removed from the plurality of pictures, and judge whether the ratio of the number of marked pictures among the plurality of pictures to the first number is greater than a preset third threshold; if not, to find that the identification of the acquisition timestamp is abnormal;
and the second auditing submodule, configured to, when triggered by the judging module, judge whether the ratio of the number of marked pictures among the plurality of pictures to the total number of the plurality of pictures is greater than a preset fourth threshold; and if not, to find that the identification of the acquisition timestamp is abnormal.
18. The apparatus according to claim 17, wherein the first auditing submodule is further configured to, when triggered by the judging module, judge whether the ratio of the number of removed pictures to the total number of the plurality of pictures reaches a preset fifth threshold; and if not, to execute the step of judging whether the ratio of the number of marked pictures among the plurality of pictures to the first number is greater than the preset third threshold.
19. The apparatus according to any one of claims 11-14, wherein the identification unit comprises:
a locating module, configured to locate the at least one target text line region in the picture according to at least one position parameter of the at least one target text line region in a preset coordinate system;
and a recognition module, configured to recognize characters in the at least one target text line region in the picture by using a preset character recognition method.
20. The apparatus according to any one of claims 11-14, wherein the detection unit is configured to determine all the text line regions included in each of the pictures by using a preset deep-neural-network text line region detection algorithm.
21. A storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the method for auditing a picture timestamp according to any one of claims 1-10.
22. An electronic device, characterized in that the electronic device comprises: a storage medium and a processor;
wherein the processor is adapted to implement instructions;
the storage medium is adapted to store a plurality of instructions;
and the instructions are adapted to be loaded by the processor to perform the method for auditing a picture timestamp according to any one of claims 1-10.
CN201910048327.9A 2019-01-18 2019-01-18 Picture timestamp auditing method and device Active CN111460198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910048327.9A CN111460198B (en) 2019-01-18 2019-01-18 Picture timestamp auditing method and device

Publications (2)

Publication Number Publication Date
CN111460198A true CN111460198A (en) 2020-07-28
CN111460198B CN111460198B (en) 2023-06-20

Family

ID=71684088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910048327.9A Active CN111460198B (en) 2019-01-18 2019-01-18 Picture timestamp auditing method and device

Country Status (1)

Country Link
CN (1) CN111460198B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100183035A1 (en) * 2009-01-16 2010-07-22 Huawei Technologies Co., Ltd. Method, device and system for managing timestamp
JP2012004631A (en) * 2010-06-14 2012-01-05 Mitsubishi Electric Corp Time stamp correction circuit and coding equipment
CN102968610A (en) * 2011-08-31 2013-03-13 富士通株式会社 Method and device for processing receipt images
US9239747B1 (en) * 2013-09-16 2016-01-19 Google Inc. Image timestamp correction using metadata
CN103905745A (en) * 2014-03-28 2014-07-02 浙江大学 Recognition method for ghosted timestamps in video frame
CN106845323A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 Annotation data collection method and apparatus, and certificate recognition system
CN106570500A (en) * 2016-11-11 2017-04-19 北京三快在线科技有限公司 Text line recognition method and device and calculation device
CN106682669A (en) * 2016-12-15 2017-05-17 深圳市华尊科技股份有限公司 Image processing method and mobile terminal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YUQIANG ZHANG ET AL.: "Detecting Falsified Timestamps in Evidence Graph via Attack Graph" *
杨国亮; 王志元; 张雨; 康乐乐; 胡政伟: "Natural scene text detection based on a vertical region regression network" (in Chinese) *
潘世成 et al.: "Research on normalized processing of unstructured machine data" (in Chinese) *
鲍复民; 李爱国; 覃征: "Timestamp recognition in color photographs" (in Chinese) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112685672A (en) * 2020-12-24 2021-04-20 京东数字科技控股股份有限公司 Method and device for backtracking page session behavior track and electronic equipment
CN113891070A (en) * 2021-10-29 2022-01-04 北京环境特性研究所 Method and device for measuring delay time of network camera
CN113891070B (en) * 2021-10-29 2023-12-15 北京环境特性研究所 Method and device for measuring delay time of network camera


Similar Documents

Publication Publication Date Title
EP3502966A1 (en) Data generation apparatus, data generation method, and data generation program
TWI796681B (en) A method for real-time automatic detection of two-dimensional PCB appearance defects based on deep learning
CN111507147A (en) Intelligent inspection method and device, computer equipment and storage medium
Mahalingam et al. Pcb-metal: A pcb image dataset for advanced computer vision machine learning component analysis
JP2013167596A (en) Defect inspection device, defect inspection method, and program
CN109462999B (en) Visual inspection method based on learning through data balance and visual inspection device using same
CN110533654A Method and device for detecting abnormality of components
CN111242899B (en) Image-based flaw detection method and computer-readable storage medium
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN113222913B (en) Circuit board defect detection positioning method, device and storage medium
TW201930908A (en) Board defect filtering method and device thereof and computer-readabel recording medium
CN111460198B (en) Picture timestamp auditing method and device
CN113111903A (en) Intelligent production line monitoring system and monitoring method
CN113284094A (en) Method, device, storage medium and equipment for acquiring defect information of glass substrate
CN112631896A (en) Equipment performance testing method and device, storage medium and electronic equipment
CN115170501A (en) Defect detection method, system, electronic device and storage medium
CN112270687A (en) Cloth flaw identification model training method and cloth flaw detection method
CN112985515B (en) Method and system for detecting assembly qualification of product fastener and storage medium
KR102078822B1 (en) Method for distinguishing ballot-paper
CN116128853A (en) Production line assembly detection method, system, computer and readable storage medium
CN116071335A (en) Wall surface acceptance method, device, equipment and storage medium
CN114078109A (en) Image processing method, electronic device, and storage medium
CN104517114B Component feature recognition method and system
CN111610205A (en) X-ray image defect detection device for metal parts
CN110826473A (en) Neural network-based automatic insulator image identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant