CN111753568A - Receipt information processing method and device, electronic equipment and storage medium - Google Patents

Receipt information processing method and device, electronic equipment and storage medium

Info

Publication number
CN111753568A
CN111753568A
Authority
CN
China
Prior art keywords
image
information
document
bill
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910702528.6A
Other languages
Chinese (zh)
Other versions
CN111753568B (en)
Inventor
陈楷佳
李刚
李延存
王艺然
董亚魁
牛学真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201910702528.6A priority Critical patent/CN111753568B/en
Priority to JP2021538702A priority patent/JP2022516550A/en
Priority to PCT/CN2020/105819 priority patent/WO2021018241A1/en
Priority to KR1020217020538A priority patent/KR20210098509A/en
Priority to TW109125931A priority patent/TW202107402A/en
Publication of CN111753568A publication Critical patent/CN111753568A/en
Application granted granted Critical
Publication of CN111753568B publication Critical patent/CN111753568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G06K7/10861Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices sensing of data fields affixed to objects or articles, e.g. coded labels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Security & Cryptography (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Toxicology (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Character Input (AREA)

Abstract

The embodiments of the present application disclose a document information processing method and apparatus, an electronic device, and a storage medium. The document information processing method includes: acquiring a first image of a collection object located within a predetermined depth-of-field range; obtaining, from the first image, a target area where a document image is located; detecting the target area to obtain document information contained in the document image; and performing a warehousing or ex-warehouse operation on the collection object based on the document information.

Description

Receipt information processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of information technologies, and in particular, to a method and an apparatus for processing document information, an electronic device, and a storage medium.
Background
Documents are certificates commonly used in daily life. Common documents include, but are not limited to, express waybills, invoices, and manifests.
Documents are generally printed with barcodes; after a barcode is scanned and recognized, information such as the document serial number (document number for short) can be obtained.
In the related art, however, barcode information is usually scanned with a laser camera or a red-green-blue (RGB) camera. The camera must then be aimed at the barcode so that most of its capture area covers the region where the barcode is located; the operation is cumbersome, and the barcode information is acquired slowly.
Disclosure of Invention
In view of the above, embodiments of the present application are intended to provide a document information processing method and apparatus, an electronic device, and a storage medium.
The technical scheme of the application is realized as follows:
According to a first aspect of the present application, a document information processing method is provided, including: acquiring a first image of a collection object located within a predetermined depth-of-field range; obtaining, from the first image, a target area where a document image is located; detecting the target area to obtain document information contained in the document image; and performing a warehousing or ex-warehouse operation on the collection object based on the document information.
Based on the above scheme, before acquiring the first image of the collection object located within the predetermined depth-of-field range, the method further includes: detecting whether a collection object enters the predetermined depth-of-field range; and starting an image acquisition function when a collection object is detected to have entered the predetermined depth-of-field range.
Based on the above scheme, obtaining the target area where the document image is located from the first image includes: determining the document image from the first image as the target area; or determining, from the first image, an imaging area of a barcode in the document as the target area.
Based on the above scheme, obtaining the target area where the document image is located from the first image includes: determining a text area of the document image from the first image, where the text area is an imaging area of characters in the document; and determining a barcode area of the document image from the first image, where the barcode area is an imaging area of a barcode in the document.
Based on the above scheme, detecting the target area to obtain the document information contained in the document image includes: obtaining first identification information by detecting the text area; obtaining second identification information by detecting the barcode area; and when the first identification information and the second identification information satisfy a preset matching condition, obtaining the document information contained in the document image according to the first identification information and/or the second identification information.
Based on the above scheme, the method further includes: outputting a re-collection prompt when the first identification information and the second identification information do not satisfy the preset matching condition.
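The cross-check between the text result and the barcode result described above can be sketched as follows. This is a minimal illustration, not the patented method itself; the function name and the exact matching condition (normalized string equality) are assumptions made for the example.

```python
from typing import Optional


def reconcile(first_id: Optional[str], second_id: Optional[str]) -> Optional[str]:
    """Cross-check OCR-derived text (first identification information)
    against the decoded barcode (second identification information).

    Returns the agreed document number, or None when the preset matching
    condition fails and a re-collection prompt should be output.
    The normalization rule here is an illustrative assumption.
    """
    if not first_id or not second_id:
        return None
    a = first_id.strip().upper()
    b = second_id.strip().upper()
    # Preset matching condition (assumed): normalized strings are identical.
    return a if a == b else None


agreed = reconcile(" sf1234567890 ", "SF1234567890")
```

In a real pipeline the two inputs would come from an OCR model run on the text area and a barcode decoder run on the barcode area, respectively.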
Based on the above scheme, the document information includes first account information; before the warehousing or ex-warehouse operation is performed on the collection object, the method further includes: acquiring a second image, where the second image contains at least a first face image; determining whether a binding relationship has been established between a first biometric feature corresponding to the first face image and the first account information; and when the binding relationship between the first biometric feature and the first account information exists, determining that verification for the warehousing or ex-warehouse operation on the collection object has passed.
Based on the above scheme, the method further includes: outputting a verification prompt for inputting verification information when no binding relationship has been established between the first biometric feature and the first account information; and determining, based on the input verification information, that verification for the warehousing or ex-warehouse operation on the collection object has passed.
Based on the above scheme, the method further includes: acquiring a third image, where the third image contains at least a second face image; obtaining second account information on a second document; associating a second biometric feature corresponding to the second face image with the second account information to obtain an association relationship; and when the number of associations between the second biometric feature and the second account information satisfies a set binding condition, establishing a binding relationship between the second biometric feature and the second account information based on the association relationship.
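The association-then-bind logic above — establish the binding only after the biometric feature and the account information have co-occurred often enough — can be sketched as a simple counter. All names and the threshold value are illustrative assumptions, not part of the claimed implementation.

```python
from collections import defaultdict


class BindingRegistry:
    """Counts associations between a biometric-feature ID and account
    information; once the count reaches the set binding condition
    (here: a threshold, an assumed example value), a binding
    relationship is established.
    """

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts = defaultdict(int)   # (feature_id, account) -> count
        self.bindings = {}               # feature_id -> bound account

    def associate(self, feature_id: str, account: str) -> bool:
        """Record one association; return True if a binding now exists."""
        self.counts[(feature_id, account)] += 1
        if (self.counts[(feature_id, account)] >= self.threshold
                and feature_id not in self.bindings):
            self.bindings[feature_id] = account  # establish binding
        return self.bindings.get(feature_id) == account


registry = BindingRegistry(threshold=2)
registry.associate("face-feature-1", "acct-9")
registry.associate("face-feature-1", "acct-9")
```

With the binding in place, a later face capture can pass verification for check-in or check-out without manually entered credentials, as described in the preceding paragraphs.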
According to a second aspect of the present application, a document information processing apparatus is provided, including: a first acquisition module, configured to acquire a first image of a collection object located within a predetermined depth-of-field range; an obtaining module, configured to obtain, from the first image, a target area where a document image is located; a first detection module, configured to detect the target area to obtain document information contained in the document image, where the document information is used to generate record information containing the document information; and an execution module, configured to perform a warehousing or ex-warehouse operation on the collection object based on the document information.
Based on the above scheme, the apparatus further includes: a second detection module, configured to detect, before the first image of the collection object located within the predetermined depth-of-field range is acquired, whether a collection object enters the predetermined depth-of-field range; and a starting module, configured to start an image acquisition function when a collection object enters the predetermined depth-of-field range; the first acquisition module is configured to acquire the first image of the document within the predetermined depth-of-field range after the image acquisition function is started.
Based on the above scheme, the obtaining module is specifically configured to determine the document image from the first image as the target area; or to determine, from the first image, an imaging area of a barcode in the document as the target area.
Based on the above scheme, the obtaining module is specifically configured to determine a text area of the document image from the first image, where the text area is an imaging area of characters in the document; and to determine a barcode area of the document image from the first image, where the barcode area is an imaging area of a barcode in the document.
Based on the above scheme, the first detection module is specifically configured to obtain first identification information by detecting the text area; obtain second identification information by detecting the barcode area; and, when the first identification information and the second identification information satisfy a preset matching condition, obtain the document information contained in the document image according to the first identification information and/or the second identification information.
Based on the above scheme, the apparatus further includes: an output module, configured to output a re-collection prompt when the first identification information and the second identification information do not satisfy the preset matching condition.
Based on the above scheme, the document information includes first account information; the apparatus further includes: a second acquisition module, configured to acquire a second image before the warehousing or ex-warehouse operation is performed on the collection object, where the second image contains at least a first face image; a determining module, configured to determine whether a binding relationship has been established between a first biometric feature corresponding to the first face image and the first account information; and a first verification module, configured to determine, when the binding relationship between the first biometric feature and the first account information exists, that verification for the warehousing or ex-warehouse operation on the collection object has passed.
Based on the above scheme, the apparatus further includes: a first output module, configured to output a verification prompt for inputting verification information when no binding relationship has been established between the first biometric feature and the first account information; and a second verification module, configured to determine, based on the input verification information, that verification for the warehousing or ex-warehouse operation on the collection object has passed.
Based on the above scheme, the apparatus further includes: a third acquisition module, configured to acquire a third image, where the third image contains at least a second face image; a second account information module, configured to obtain second account information on a second document; an association module, configured to associate a second biometric feature corresponding to the second face image with the second account information to obtain an association relationship; and an establishing module, configured to establish, when the number of associations between the second biometric feature and the second account information satisfies a set binding condition, a binding relationship between the second biometric feature and the second account information based on the association relationship.
A third aspect of the embodiments of the present application provides an electronic device, including a memory and a processor connected to the memory, where the processor is configured to implement the document information processing method provided by any of the foregoing technical solutions by executing computer-executable instructions stored in the memory.
A fourth aspect of the embodiments of the present application provides a computer storage medium storing computer-executable instructions; after the computer-executable instructions are executed by a processor, the document information processing method provided by any of the foregoing technical solutions can be implemented.
According to the technical solutions provided by the embodiments of the present disclosure, a camera with a large depth-of-field range can be used to acquire the first image directly within the predetermined depth-of-field range; the camera's in-focus range covers the distance to the document, so no focusing mode is needed, which avoids the cumbersome user operations and acquisition delay caused by user-driven focusing.
Moreover, before the document information is obtained, the target area where the document image is located is automatically obtained from the first image, and then only the target area is detected to obtain the document information. Compared with processing the whole first image directly, most of the camera's capture area does not need to be aimed at the document, avoiding the delay caused by aiming during image acquisition and speeding up the acquisition of document information. Meanwhile, the target area where the document image is located is obtained by preliminary detection, and only that area is then finely detected; shrinking the image area to be processed reduces the amount of data processing and further improves speed. In addition, during fine detection only the target area containing the document image is detected; that is, the background area (i.e., the noise area) outside the document image is cropped out of the first image, which reduces noise interference and can improve the accuracy of the detected document information.
Drawings
Fig. 1 is a schematic flowchart of a document information processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another document information processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating a further method for processing document information according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a verification process based on a binding relationship according to an embodiment of the present disclosure;
fig. 6 is a schematic flow chart of establishing a binding relationship according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a document information processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solution of the present disclosure is further described in detail below with reference to the drawings and specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides a document information processing method, including:
step S110: acquiring a first image of a collection object located within a predetermined depth-of-field range;
step S120: obtaining, from the first image, a target area where a document image is located;
step S130: detecting the target area to obtain document information contained in the document image;
step S140: performing a warehousing or ex-warehouse operation on the collection object based on the document information.
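The four steps can be sketched end-to-end as a small pipeline. This is a structural illustration only, with stubbed components; all helper names are assumptions, and a real implementation would replace them with a detection model, an OCR/barcode decoder, and a warehouse backend.

```python
def process_document(first_image):
    """End-to-end sketch of steps S110–S140 on a dict-based stand-in
    for the captured first image."""
    target_area = locate_target_area(first_image)       # step S120
    if target_area is None:
        return None                                      # nothing to detect
    info = detect_document_info(target_area)             # step S130
    return execute_storage_operation(info)               # step S140


def locate_target_area(image):
    # Coarse detection: here simply the region tagged "document_region".
    return image.get("document_region")


def detect_document_info(region):
    # Fine detection runs on the cropped region only (less noise, less data).
    return {"document_number": region["barcode"]}


def execute_storage_operation(info):
    # Warehousing (check-in) record containing the document information.
    return {"op": "check_in", **info}


record = process_document({"document_region": {"barcode": "SF123"}})
```

The key property mirrored here is that the fine step never sees the full image, only the target area returned by the coarse step.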
The document information processing method provided by this embodiment can be applied to various terminal devices for document processing, or to an information system formed by such terminal devices and a cloud device.
In this embodiment, the depth-of-field range of the image capture module may be predetermined; if the document is located within this range, the first image is necessarily clear. The upper limit of the predetermined depth-of-field range is greater than a preset value, for example, 50 cm or 60 cm.
For example, the image capture module that captures the first image may include an industrial camera pre-configured with a predetermined depth-of-field range, for example 0–50 cm; if the document is no more than 50 cm from the industrial camera, the captured first image is clear. This eliminates the focusing operation that a camera with a smaller depth-of-field range would need during capture, simplifies user operation, and reduces the recognition delay caused by focusing.
Therefore, no focusing is performed when the first image is captured, which reduces insufficient sharpness of the first image caused by poor focusing; it also avoids the large acquisition delay that focusing introduces.
In the embodiment of the disclosure, the document image is an image of the document.
The documents may be of various types, such as express waybills, invoices, tickets, or shipping slips.
The document carries document information, and the document information includes at least one of the following: the document number of the document, for example, the waybill number of an express waybill, the ticket number of a ticket, or the invoice number of an invoice.
In some embodiments, the document information is not limited to the document number and further includes information about the document holder printed on the document, such as the holder's contact information, the holder's address, the issuing time of the document, and the validity period of the document.
In this embodiment, the document is photographed to obtain the first image.
After the first image is obtained, the target area containing the document information is obtained from it; for example, the imaging area of the document in the first image is determined by rough matching recognition.
Identifying the target area from the first image in step S120 is equivalent to separating the target area containing the document information from the background area. Thus only the target area needs to be detected in step S130: on the one hand, unnecessary detection is reduced and the document information is obtained faster; on the other hand, after the target area containing at least the document image is obtained by preliminary recognition, detecting only that area reduces interference from the background outside it, which avoids the low detection accuracy such interference causes and improves accuracy.
In step S130, the target area is detected to obtain the document information. In some embodiments, the document information is used to generate record information including the document information.
For example, the document is an express waybill, and the record information may include an express check-in record containing the document information and warehousing information; the warehousing information may include the check-in time, check-in location, check-in device, and the like. As another example, the record information may include an express check-out record containing the document information and ex-warehouse information; the ex-warehouse information may include the check-out time, check-out location, check-out device, and the like.
As another example, the document is an invoice, and the record information may include electronic warehousing information of the invoice.
For example, the target area is input to a detection model, and the detection model outputs the document information by means such as feature extraction.
For example, the detection model may be a machine learning model or a deep learning model.
In some embodiments, to improve the detection accuracy of the document information, the method further includes:
performing deformation processing on the target area cropped from the first image, so that the deformed target area meets the image processing requirements of the detection model.
The deformation processing may include at least one of the following:
transforming a target area of a first shape into a target area of a second shape by stretching and/or compression, where the second shape may be a standard shape and/or the first shape a non-standard shape;
performing pixel up-sampling on a target area whose resolution is lower than a preset image resolution, to obtain a target area with at least the preset image resolution;
performing orientation conversion such as rotation and/or mirroring on a target area in a first orientation to obtain a target area in a second orientation, for example, turning a target area with inverted character orientation into one with the correct character orientation;
performing inverse perspective transformation on the target area to obtain a target area in which all sub-areas have a consistent depth of field. For example, when a document is captured with one part closer to the camera than another, the captured parts have different depths of field and a perspective effect appears in the first image. In this embodiment, a target area without this perspective difference is obtained by the inverse of the perspective transformation.
In this way, after the deformed target area is input into the detection model, the detection model can detect the document information quickly, and detection accuracy can be improved.
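The inverse perspective step can be sketched numerically: given the four corners of the distorted document in the image and the four corners of the desired fronto-parallel target, solve for the 3×3 perspective transform. The pure-NumPy solver below is a stand-in for library routines such as OpenCV's `cv2.getPerspectiveTransform` (a real pipeline would then warp the pixels, e.g. with `cv2.warpPerspective`); the example corner coordinates are arbitrary assumed values.

```python
import numpy as np


def homography(src, dst):
    """Solve for the 3x3 perspective transform mapping 4 src points to
    4 dst points via the standard linear system (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)


def apply(H, pt):
    """Map one point through the homography (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)


# Tilted quadrilateral: perspective-distorted document corners (assumed values)
src = [(10, 12), (208, 20), (198, 150), (5, 140)]
dst = [(0, 0), (200, 0), (200, 140), (0, 140)]   # fronto-parallel target
H = homography(src, dst)
```

After warping with `H`, every sub-area of the document sits at the same effective depth, removing the perspective effect described above before the area is fed to the detection model.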
In some embodiments, as shown in fig. 2, before acquiring the first image of the acquisition object located within the predetermined depth of field range, the method further comprises:
step S101: detecting whether a collection object enters the predetermined depth-of-field range;
step S102: starting an image acquisition function when a collection object enters the predetermined depth-of-field range.
In this case, step S110 may include: after the image acquisition function is started, acquiring the first image of the document within the predetermined depth-of-field range.
For example, whether a collection object has entered the predetermined depth-of-field range is detected with a distance sensor. Distance sensors include, but are not limited to, infrared sensors, visible-light sensors, and laser sensors.
Keeping the camera always on consumes considerable energy, while triggering image acquisition manually is feasible but introduces acquisition delay.
In this embodiment, distance measurement is performed with a sensor whose power consumption in the on state is lower than that of the image capture module, and the image capture module is triggered to enter the capture state from the off or sleep state. On the one hand this saves terminal power; on the other hand it reduces the capture of unnecessary images and the power consumed processing them.
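The sensor-driven wake-up described above can be sketched as a small state machine: the low-power distance sensor feeds readings in, and the camera is woken only on entry into the depth-of-field range. The class and method names, and the 50 cm default, are illustrative stand-ins, not a real device API.

```python
class TriggeredCamera:
    """Power-saving trigger: a low-power distance sensor wakes the
    image-capture module only when an object enters the predetermined
    depth-of-field range, and lets it sleep again when the object leaves."""

    def __init__(self, depth_of_field_cm: float = 50.0):
        self.depth_of_field_cm = depth_of_field_cm
        self.capturing = False

    def on_distance_reading(self, distance_cm: float):
        """Handle one sensor reading; return a capture command on entry."""
        in_range = 0 < distance_cm <= self.depth_of_field_cm
        if in_range and not self.capturing:
            self.capturing = True           # wake from off/sleep state
            return "capture_first_image"
        if not in_range:
            self.capturing = False          # back to the low-power state
        return None


cam = TriggeredCamera(depth_of_field_cm=50.0)
```

Note that repeated readings inside the range produce only one capture command, matching the goal of avoiding unnecessary images.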
Fig. 3 shows a terminal device capable of processing document information. The terminal device includes a top camera that can scan a document placed in the placement area to obtain the first image, and a face-recognition camera that can capture a face image for biometric feature extraction. The terminal device may further include a human-computer interaction interface, such as a touch display screen, for receiving user operations and processing document information based on them. The touch display screen can be used for document check-in operations, such as warehousing a parcel or express package with an attached document, and for document check-out operations, such as releasing a parcel or express package bearing a document.
In some embodiments, the apparatus shown in fig. 3 is a network terminal connected to a network; that is, the terminal device has a communication module. After packages bearing documents, such as express parcels, are delivered to the terminal device, the terminal device can automatically scan the document information on the express waybills while the packages are being checked in. Here, the document information may be obtained as shown in fig. 1. After the account information in the document information is obtained, pickup information is sent to the user terminal that has established a binding relationship with that account information. The pickup information prompts the user to pick up the item and includes at least the pickup address. In some embodiments, the pickup information may further include a pickup voucher and the like.
If the user has no time to pick up the goods, the user may ask someone else to do so; in that case the user's face image cannot be used for pickup, so the person picking up on the user's behalf can do so using the pickup voucher.
In this way, after documents such as express waybills are placed in the placement area and scanned, they can be checked in simply and conveniently, without manually editing a message to notify the corresponding customer to pick up the goods.
In some embodiments, in order to capture the first image and/or the second image clearly, a fill light is further arranged at the top of the terminal device; when ambient light is insufficient, a sufficiently bright first image and/or second image can be captured by turning on the fill light or adjusting its brightness.
In some embodiments, the step S110 may include: determining the document image in the first image as the target area; or determining the imaging area of a barcode in the document in the first image as the target area.

In some embodiments, the target area may be the imaging area formed by the entire captured document, i.e., the document image. The first image contains both the document image and a background area outside it; if the whole first image were recognized directly, the background area would interfere with recognition of the document's imaging area, so the target area is extracted first.

In other embodiments, the target area may also be the imaging area of a barcode in the document. In this case, the information on the document may already be recorded in the network device; the current user holding the document only needs to obtain the document identifier corresponding to the barcode, with which the document information of the whole document can be obtained from the network device.
The barcode includes a one-dimensional code and/or a two-dimensional code; the two-dimensional code includes a rectangular two-dimensional code and/or a circular two-dimensional code. In some embodiments, there may be one or more barcodes.
In some embodiments, the method further comprises:
determining a required target area according to the acquisition requirement of the document information;
for example, when the acquisition requirement is to acquire the document number, the required target area is determined to be the imaging area of the barcode; for another example, when the acquisition requirement is to acquire the document number together with information other than the document number, the required target area is determined to be the imaging area of the whole document.

For example, when a document is warehoused, the acquisition requirement may be considered to include document information other than the document number; for another example, when a warehoused document is shipped out, the acquisition requirement may be considered to include only the document number.

Of course, the above is only an example; the target area can be determined specifically according to the acquisition requirement of the document information, so as to meet different requirements in different scenarios.
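As a minimal sketch, the dispatch described above might look like the following (the requirement labels and region names here are illustrative assumptions, not taken from this disclosure):

```python
def select_target_region(requirement: str) -> str:
    """Choose which region of the first image to detect, based on the
    acquisition requirement (hypothetical labels for the two scenarios)."""
    if requirement == "document_number_only":   # e.g. shipping out a warehoused package
        return "barcode_region"
    # warehousing needs fields beyond the document number
    return "full_document_region"
```

Detecting only the barcode region in the shipment scenario avoids running full-document OCR when the document information is already on record.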
In some embodiments, the step S120 may include:
determining a text area of the document image in the first image, wherein the text area is the imaging area of the text on the document;

and determining a barcode area of the document image in the first image, wherein the barcode area is the imaging area of the barcode on the document.

A document carries not only barcodes but also text, including but not limited to numbers, letters, and/or Chinese characters. In short, the imaging area where the text is located is the text area, and the imaging area where the barcode is located is the barcode area.
In some embodiments, as shown in fig. 4, the step S130 may include:
step S131: acquiring first identification information obtained by detecting the character area;
step S132: acquiring second identification information obtained by detecting the bar code area;
step S133: when the first identification information and the second identification information satisfy a preset matching condition, obtaining the document information contained in the document image according to the first identification information and/or the second identification information.
In this embodiment, the text area may be recognized by Optical Character Recognition (OCR) or a similar technique to obtain the first identification information, i.e., the character information recognized from the image.

In step S132, the second identification information is obtained by decoding the barcode extracted from the detected barcode area.

In this embodiment, the first identification information and the second identification information are cross-compared, thereby ensuring the accuracy of the information.

In this embodiment, the user does not need to align the barcode of the document with the collection area during capture, which reduces the time spent on manual or automatic focusing; only a clear first image including the document needs to be captured, thereby increasing the acquisition rate.
In some embodiments, the step S131 may include: and sending the character area to a cloud device, and receiving first identification information obtained by the cloud device identifying the character area.
In other embodiments, the step S132 may include: and sending the bar code area to a cloud device, and receiving second identification information obtained by identifying the bar code area by the cloud device.
Of course, in some embodiments, the terminal device may also identify the text region and the barcode region by itself to obtain the first identification information and the second identification information.
For example, when the matching degree of the first identification information and the second identification information reaches a preset matching degree value, the two are considered to satisfy the preset matching condition. For example, the document number in the first identification information is compared character by character with the document number in the second identification information, the number of matched characters is counted, and the matching degree is determined from the number of matched characters and the total number of characters. When the matching degree reaches, say, 90% or 95% or more, it is determined that the matching degree reaches the preset matching degree value.

When the first identification information and the second identification information match completely, the document information of the document is determined according to either of them.

When the first identification information and the second identification information do not match completely, the document information is determined according to the identification condition information of the two.

For example, the number of characters in a document number is fixed, or the encoding of the document number satisfies a certain rule; if one of the first identification information and the second identification information does not satisfy the format rule of the document number, for example has too few or too many characters, the document information of the document is determined according to the other one.

In still other embodiments, whether the document information is determined from the first identification information or from the second identification information is decided based on status information of the text area and the barcode area. For example, the sharpness of the text area and that of the barcode area are compared, and the document information is finally obtained from the identification information corresponding to the sharper one.
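A minimal sketch of this cross-check, assuming a fixed-length numeric document number (the 12-digit length, the 0.9 threshold, and the helper names are illustrative assumptions, not from this disclosure):

```python
def match_degree(a: str, b: str) -> float:
    """Fraction of positions at which the two recognized numbers agree."""
    if len(a) != len(b) or not a:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def valid_format(number: str, length: int = 12) -> bool:
    """Hypothetical format rule: fixed length, digits only."""
    return len(number) == length and number.isdigit()

def cross_check(ocr_number: str, barcode_number: str, threshold: float = 0.9):
    """Return the document number if the OCR and barcode results
    corroborate each other; fall back to whichever one satisfies the
    format rule; return None to trigger a re-collection prompt."""
    if match_degree(ocr_number, barcode_number) >= threshold:
        return barcode_number          # the two results agree; use either
    if valid_format(barcode_number) and not valid_format(ocr_number):
        return barcode_number          # OCR result violates the format rule
    if valid_format(ocr_number) and not valid_format(barcode_number):
        return ocr_number              # barcode result violates the rule
    return None                        # prompt the user to re-collect
```

Returning `None` corresponds to the re-collection prompt described below: neither result can be trusted, so a second capture is requested.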
In some embodiments, the method further comprises: and under the condition that the first identification information and the second identification information do not meet the preset matching condition, outputting the re-collected prompt information.
If the first identification information and the second identification information do not satisfy the preset matching condition, outputting the prompt information triggers re-collection, so that when a single capture fails to yield accurate document information, a second capture is triggered automatically to obtain it.
As shown in fig. 5, the present embodiment provides an information processing method including:
step S210: acquiring a second image, wherein the second image at least comprises a first face image;
step S220: determining whether a binding relationship is established between the first biological characteristic corresponding to the first face image and the first account information;
step S230: and determining that verification executed by a preset operation of warehousing or ex-warehousing the acquisition object is passed under the condition that the first biological characteristic and the first account information have a binding relationship.
In this embodiment, the second image may be an image acquired during the process of warehousing or ex-warehousing the document.
The second image includes at least a face image, and the first biological characteristic is obtained through face recognition; the first biological characteristic may include a face feature and/or an iris feature, or any other feature capable of uniquely identifying the user.
In some embodiments, after the first biometric characteristic is collected, the first account information included in the document may be obtained through the steps S110 to S130, so that the binding relationship between the first biometric characteristic and the first account information may be determined without any special operation by the user.
If such an established binding relationship is found, the identity of the current user can be considered verified, and the predetermined operation is executed after verification. The predetermined operation may include: shipping the document out of the warehouse.
In some embodiments, the predetermined operation comprises: the delivery operation of the package where the document is located;
the method further comprises: performing the shipment operation of the package in which the document is located when the verification of the predetermined operation passes.
For example, the document is an express waybill, which is usually attached to the article being delivered. If the verification in steps S210 to S230 passes, it indicates that the recipient of the express parcel is currently picking it up, so the shipment operation of the package can be performed.

Performing the shipment operation of the package may include opening the locker in which the package is located.
In some embodiments, the first account information includes at least one of:
a mobile communication identifier;
an instant messaging identity;
an identity document identifier.

The mobile communication identifier includes but is not limited to a mobile phone number.

The instant messaging identifier may include the account number of any instant messaging application, such as a WeChat ID or a Weibo account.

The identity document identifier includes, but is not limited to, the ID number of an identity card and the passport number of a passport.
In some embodiments, the method further comprises: under the condition that the binding relationship between the first biological characteristic and the first account information is not established, outputting a verification prompt for inputting verification information; and determining that the verification of the warehousing or ex-warehousing operation of the acquisition object passes based on the input verification information.
In this embodiment, verification of the predetermined operation is performed based on the collected second image and on the binding relationship established in advance, so that the user does not need to input a document number, a pickup code, or his or her own account information, which simplifies user operation.

If the second image is collected by a camera with a predetermined depth-of-field range, the user only needs to enter that range to trigger collection of the second image, after which the verification can be completed by querying the binding relationship. If the camera has a large predetermined depth-of-field range, the user can complete the verification while approaching the camera; simply walking up to the verification device completes the verification, and the user can directly take the package from the corresponding pickup locker, which again simplifies user operation.
As shown in fig. 6, the present embodiment provides an information processing method, which may include:
step S310: acquiring a third image, wherein the third image at least comprises a second face image;
step S320: acquiring second account information of a second document;
step S330: associating the second biological characteristic corresponding to the second face image with the second account information to obtain an association relationship;
step S340: and under the condition that the association frequency of the second biological characteristics and the second account information meets a set binding condition, establishing a binding relationship between the second biological characteristics and the second account information based on the association relationship.
In this embodiment, the third image may be acquired during the process of warehousing and/or delivering the document.
The second biological characteristic may be a biometric feature extracted from the second face image, for example a face feature and/or an iris feature of the user; in short, any biometric information capable of uniquely identifying the user.

In this embodiment, the camera is started to acquire the third image during the shipment and/or warehousing of the document, or even while the document is being filled in. The second biological characteristic in the acquired third image is associated with the second account information on the document being warehoused or shipped out, establishing an association relationship.

Each time steps S310 to S330 are performed, the association count is incremented by 1. When the association count reaches a preset number, the association count may be considered to satisfy the preset binding condition, and the second biological characteristic may further be considered to be the biometric feature of the legitimate user of the second account; at this point, the association relationship may be formally converted into the binding relationship.
In some embodiments, the step S330 may include:
determining a confidence value of the association relation according to the association times;
and when the confidence value reaches a preset value, converting the association relationship into the binding relationship.
For example, when the association relationship is established for the first time, the confidence value is set to 40%; when the same association relationship is established a second time, the confidence value rises to 60%; a third time, to 70%; and so on. The more times an association relationship is confirmed, the higher its confidence value; when the confidence value reaches the preset value, the association relationship is directly converted into the binding relationship.
There are various ways to convert the association relationship into the binding relationship; two optional ways are:

adding the association relationship to a relationship table containing binding relationships;

or changing the type field of the association relationship to that of a binding relationship.
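A minimal sketch of this count-based promotion, assuming the illustrative confidence schedule above (40% → 60% → 70% → …) and a hypothetical in-memory store (the class and threshold are assumptions for illustration only):

```python
CONFIDENCE_SCHEDULE = [0.40, 0.60, 0.70, 0.80, 0.90]  # illustrative values
BINDING_THRESHOLD = 0.70                               # the "preset value"

class AssociationStore:
    """Tracks (biometric_id, account) association counts and promotes an
    association to a binding once its confidence reaches the threshold."""

    def __init__(self):
        self.counts = {}
        self.bindings = set()

    def associate(self, biometric_id: str, account: str) -> float:
        key = (biometric_id, account)
        self.counts[key] = self.counts.get(key, 0) + 1
        idx = min(self.counts[key], len(CONFIDENCE_SCHEDULE)) - 1
        confidence = CONFIDENCE_SCHEDULE[idx]
        if confidence >= BINDING_THRESHOLD:
            self.bindings.add(key)     # convert association into binding
        return confidence

    def is_bound(self, biometric_id: str, account: str) -> bool:
        return (biometric_id, account) in self.bindings
```

In this sketch the third co-occurrence of the same face and account crosses the threshold and creates the binding, mirroring the "association count satisfies the binding condition" step.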
In some embodiments, the second account information includes at least one of: a mobile communication identifier; an instant messaging identifier; an identity document identifier.
In some embodiments, the method further comprises:
and outputting, according to the face information, multimedia information corresponding to the user portrait associated with that face information.

One or more kinds of information capable of identifying the user can be extracted from the face information.

In this embodiment, a user portrait of the user whose document is currently being processed can be obtained from the face information.
The user portrait may be used to indicate at least one of:
the preferences of the user;
aversion of the user;
consumer demand of the user;
consumption habits of the user;
the payment capabilities of the user;
the aesthetic preferences of the user, etc.
In this embodiment, the multimedia information is output according to the user portrait. During warehousing or shipment of the document, the terminal device may have waiting time, for example while information is being entered; outputting multimedia information matched to the user portrait during this waiting time can improve the user experience.

The multimedia information includes various kinds of promotional information, such as advertisements for goods and/or services.
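A minimal sketch of portrait-matched selection (the portrait fields, tag sets, and catalogue shape are illustrative assumptions, not from this disclosure):

```python
def pick_promotion(portrait: dict, catalogue: list):
    """Return the promotion whose tags best overlap the user's
    preferences, skipping anything the portrait marks as disliked."""
    best, best_score = None, -1
    for promo in catalogue:
        if promo["tags"] & portrait.get("dislikes", set()):
            continue                   # respect the user's stated aversions
        score = len(promo["tags"] & portrait.get("preferences", set()))
        if score > best_score:
            best, best_score = promo, score
    return best
```

During the waiting time, the selected promotion would then be played on the terminal's touch display screen.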
As shown in fig. 7, the present embodiment provides a document information processing apparatus including:
the first acquisition module 110 is used for acquiring a first image of an acquisition object of a document within a preset depth of field range;
an obtaining module 120, configured to obtain, from the first image, a target area where the document image is located;
the first detection module 130 is configured to detect the target area to obtain document information included in the document image;
and the execution module 140 is configured to perform warehousing or ex-warehousing operations on the collected object based on the document information.
In some embodiments, the document information is used to generate record information including the document information.
The document information processing apparatus provided in this embodiment can be applied to a terminal device or a cloud device. When applied to a cloud device, the first acquisition module 110 may be an image acquisition module to which the cloud device is connected through a network.
In some embodiments, the first acquiring module 110, the acquiring module 120, the first detecting module 130, and the executing module 140 may be all program modules, and the program modules, after being executed by a processor, can realize acquisition of the first image, determination of the target area, and obtaining of the document information.
In other embodiments, the first acquisition module 110, the acquisition module 120, the first detection module 130, and the execution module 140 may be combined software-hardware modules; such a module may include various programmable arrays, including but not limited to a complex programmable logic device or a field-programmable gate array.
In still other embodiments, the first acquisition module 110, the acquisition module 120, the first detection module 130, and the execution module 140 may be pure hardware modules, including but not limited to application-specific integrated circuits.
In some embodiments, the apparatus further comprises:
the second detection module is used for detecting whether the acquisition object enters the preset depth of field range or not before acquiring the first image of the acquisition object located in the preset depth of field range;
the starting module is used for starting an image acquisition function when an acquisition object enters the preset depth of field range;
the first collecting module 110 is configured to collect a first image of a document within the predetermined depth of field after the image collecting function is started.
In some embodiments, the obtaining module 120 is specifically configured to determine the document image in the first image as the target area; or determine the imaging area of a barcode in the document in the first image as the target area.

In some embodiments, the obtaining module 120 is specifically configured to determine a text area of the document image in the first image, the text area being the imaging area of the text in the document;

and determine a barcode area of the document image in the first image, the barcode area being the imaging area of the barcode in the document.
In some embodiments, detecting the target area to obtain the document information included in the document image includes:

acquiring first identification information obtained by detecting the text area;

acquiring second identification information obtained by detecting the barcode area;

and, when the first identification information and the second identification information satisfy the preset matching condition, obtaining the document information included in the document image according to the first identification information and/or the second identification information.
In some embodiments, the apparatus is further configured to:

output re-collection prompt information when the first identification information and the second identification information do not satisfy the preset matching condition.
In some embodiments, the document information includes: first account information; the device further comprises:
the second acquisition module is used for acquiring a second image, wherein the second image at least comprises a first face image;
the determining module is used for determining whether a binding relationship is established between the first biological characteristic corresponding to the first face image and the first account information;
and the first verification module is used for determining that verification executed by a preset operation of warehousing or ex-warehousing operation on the acquisition object is passed under the condition that the first biological characteristic and the first account information have a binding relationship.
In some embodiments, the predetermined operation comprises: the delivery operation of the package where the document is located;
the device further comprises:
and the delivery module is used for executing the delivery operation of the package in which the bill is positioned when the verification executed by the preset operation is passed.
In some embodiments, the first account information includes at least one of:
a mobile communication identifier;
an instant messaging identity;
and (5) identification.
In some embodiments, the apparatus further comprises:
the first output module is used for outputting a verification prompt for inputting verification information under the condition that the binding relationship between the first biological characteristic and the first account information is not established;
and the second verification module is used for determining that the verification of the warehousing or ex-warehousing operation of the acquisition object passes based on the input verification information.
In some embodiments, the apparatus further comprises:
the third acquisition module is used for acquiring a third image, wherein the third image at least comprises a second face image;
the second account information module is used for acquiring second account information on a second document;
the association module is used for associating the second biological characteristics and the second account information according to the second biological characteristics corresponding to the second face image and the second account information to obtain an association relation;
and the establishing module is used for establishing the binding relationship between the second biological characteristic and the second account information based on the association relationship under the condition that the association frequency of the second biological characteristic and the second account information meets the set binding condition.
In some embodiments, the apparatus further comprises: and the second output module is used for outputting the multimedia information corresponding to the user image corresponding to the face information according to the face information.
Two specific examples are provided below in connection with any of the embodiments described above:
example 1:
This scheme is mainly based on OCR recognition of express waybills and extracts all text information on the waybill. The express waybill number is recognized by character OCR and barcode recognition simultaneously, with cross-checking between the two, which ensures high accuracy in complex scenarios.

Meanwhile, pickup is verified by combining face recognition with express waybill recognition, ensuring that the parcel is received correctly and efficiently.
Recognizing the express waybill number: in this scenario, recognition of the express waybill number is completed in several stages.

First, the advertising machine end (an Android end) detects the express waybill; when a waybill is detected, it is photographed and the waybill region is cropped out.

The advertising machine end transmits the cropped waybill picture to the cloud, where the cloud service performs detection, locating the barcode area and the waybill number area, and applies barcode recognition and OCR recognition respectively: one piece of identification information is obtained from the barcode, another from OCR, and the waybill number result is obtained by cross-comparing the two.

The cloud cross-compares the two results to obtain a high-confidence result; if the two results are inconsistent, the confidence is low, and detection, photographing, and recognition can be repeated.
Face recognition and verification: when an express parcel is shipped out, the user scans the express waybill to recognize the waybill number and mobile phone number; the all-in-one machine simultaneously recognizes the user's face, looks up the user account (i.e., the mobile phone number) via the binding relationship between the user's face and the mobile phone number, and compares it with the recipient's mobile phone number on the waybill. If they match, the verification passes. This mode can greatly improve shipment efficiency: the traditional mode of manual signing and tearing off the receipt stub takes about 15 to 30 s in total, while this self-service shipment mode takes only 1 to 2 s, greatly improving processing efficiency.
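A minimal sketch of this verification step, assuming the face-to-phone bindings are available as a lookup table (the identifiers and data shapes here are illustrative, not from this disclosure):

```python
def verify_pickup(face_id: str, waybill_recipient_phone: str, bindings: dict) -> bool:
    """Pass verification iff the phone number bound to the recognized
    face matches the recipient phone number on the waybill."""
    bound_phone = bindings.get(face_id)
    return bound_phone is not None and bound_phone == waybill_recipient_phone
```

On success the terminal would open the corresponding locker; on failure it would fall back to the manual verification prompt described earlier.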
Example 2:
This example provides a method for frictionless binding between a mobile phone number and a face, specifically:

Since the product has no association between a user's face and mobile phone number in the initial stage, an association mechanism is needed. The product adopts a frictionless binding mode: because a user's express pickups correlate strongly with the user's identity, the association between the user's face and mobile phone number can be strengthened gradually by tracking the user's pickup behavior. For instance, if user A picks up an express parcel addressed to phone number A, the association may be assumed to be 60%; if user A next picks up another parcel addressed to phone number A, the association may rise to 90%; if it happens a third time, the association may be considered 99%. These parameters are assumed values; in practice they would be predicted from big data. A threshold is then set on the association (corresponding to the preset value), upon reaching which the binding between the user's face and mobile phone number is completed.

After identifying user identity information such as the user's face information, a user portrait can be obtained from the user identity, thereby achieving targeted delivery of recommendation information.
As shown in fig. 8, the present embodiment provides an electronic apparatus including:
a memory;
and the processor is connected with the memory and used for realizing the bill information processing method provided by one or more of the previous embodiments by executing the computer executable instructions on the memory, for example, one or more of the bill information processing methods shown in fig. 1 to 2 and fig. 4 to 6.
The memory can be various types of memories, such as random access memory, read only memory, flash memory, and the like. The memory may be used for information storage, e.g., storing computer-executable instructions, etc. The computer-executable instructions may be various program instructions, such as object program instructions and/or source program instructions, and the like.
The processor may be various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, a digital signal processor, an application specific integrated circuit, or an image processor, among others.
The processor may be connected to the memory via a bus. The bus may be an integrated circuit bus or the like.
In some embodiments, the electronic device may further include: a communication interface, which may include: a network interface, e.g., a local area network interface, a transceiver antenna, etc. The communication interface is also connected with the processor and can be used for information transceiving.
In some embodiments, the electronic device also includes a human interaction interface, which may include various input and output devices, such as a keyboard, a touch screen, and the like, for example.
The present embodiments provide a computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable implementation of a document information processing method provided in one or more of the foregoing embodiments, for example, one or more of the document information processing methods shown in fig. 1 and 2.
The computer storage medium may be any recording medium with a recording function, for example a CD, a floppy disk, a hard disk, a magnetic tape, an optical disc, a USB disk, or a removable hard disk. Optionally, the computer storage medium may be a non-transitory storage medium readable by a processor, so that after the computer-executable instructions stored in the computer storage medium are acquired and executed by the processor, the information processing method provided by any one of the foregoing technical solutions can be implemented, for example the information processing method applied to the terminal device or the information processing method applied to the application server.
The present embodiments also provide a computer program product comprising computer executable instructions; after being executed, the computer-executable instructions can implement the document information processing method provided by one or more of the foregoing embodiments, for example, one or more of the document information processing methods shown in fig. 1 to 2 and fig. 4 to 6.
The computer program product includes a computer program tangibly embodied on a computer storage medium. The computer program includes program code for performing the methods illustrated in the flowcharts, and the program code may include instructions corresponding to the method steps provided by embodiments of the present disclosure.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The technical features disclosed in any of the embodiments of the present disclosure may be combined arbitrarily to form a new method embodiment or device embodiment without conflict.
The method embodiments disclosed in any embodiment of the present disclosure may be combined arbitrarily to form a new method embodiment without conflict.
The device embodiments disclosed in any of the embodiments of the present disclosure may be combined arbitrarily to form a new device embodiment without conflict.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope of the present disclosure shall be covered by the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A document information processing method, characterized by comprising:
acquiring a first image of an acquisition object located within a predetermined depth of field range;
acquiring, from the first image, a target area in which the document image is located;
detecting the target area to obtain document information contained in the document image;
and performing a warehousing or ex-warehousing operation on the acquisition object based on the document information.
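As an illustration only (not part of the claims), the four claimed steps can be sketched in Python. The helper functions and the depth-of-field range below are hypothetical stand-ins; a real system would use a camera driver, an OCR engine, and a barcode decoder in their place:

```python
# Hypothetical sketch of the claimed pipeline; the depth-of-field range,
# helper names, and the dict-based "image" are all illustrative assumptions.

DEPTH_OF_FIELD_MM = (200, 600)  # assumed predetermined depth-of-field range


def object_in_range(distance_mm):
    """Capture is triggered only when the object is within the range."""
    lo, hi = DEPTH_OF_FIELD_MM
    return lo <= distance_mm <= hi


def locate_target_area(first_image):
    """Step 2: find the region of the first image containing the document."""
    # Stub: pretend a detector has already isolated the document region.
    return first_image.get("document_region")


def detect_document_info(target_area):
    """Step 3: decode the text/barcode in the target area into document info."""
    return {"waybill_no": target_area["barcode"], "name": target_area["text"]}


def process(first_image, distance_mm, warehouse_records):
    """Steps 1-4: acquire, locate, detect, then record the warehousing op."""
    if not object_in_range(distance_mm):
        return None  # outside the depth of field: no acquisition
    area = locate_target_area(first_image)
    info = detect_document_info(area)
    warehouse_records.append(info["waybill_no"])  # step 4: warehousing operation
    return info
```

A usage sketch: `process({"document_region": {"barcode": "SF123", "text": "Zhang San"}}, 300, records)` returns the decoded document info and appends the waybill number to `records`, while a distance outside the range returns `None`.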
2. The method of claim 1, wherein before acquiring the first image of the acquisition object located within the predetermined depth of field range, the method further comprises:
detecting whether an acquisition object enters the predetermined depth of field range;
and starting an image acquisition function when an acquisition object is detected to have entered the predetermined depth of field range.
3. The method according to claim 1 or 2, wherein the acquiring, from the first image, of the target area in which the document image is located comprises:
determining the document image from the first image as the target area; or,
determining, from the first image, an imaging area of a barcode in the document as the target area.
4. The method according to any one of claims 1 to 3, wherein the acquiring, from the first image, of the target area in which the document image is located comprises:
determining a text area of the document image from the first image, wherein the text area is an imaging area of characters in the document;
and determining a barcode area of the document image from the first image, wherein the barcode area is an imaging area of a barcode in the document.
5. The method according to claim 4, wherein the detecting of the target area to obtain the document information contained in the document image comprises:
acquiring first identification information obtained by detecting the text area;
acquiring second identification information obtained by detecting the barcode area;
and, when the first identification information and the second identification information satisfy a preset matching condition, obtaining the document information contained in the document image according to the first identification information and/or the second identification information.
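As an illustration only (not part of the claims), claim 5's cross-check of the two recognition channels can be sketched as follows. The matching condition used here, exact equality of the two identifiers, is an assumption; the claim leaves the preset matching condition open:

```python
def cross_check(first_identification, second_identification):
    """Accept a result only when the OCR channel and the barcode channel agree.

    first_identification:  text recognized from the text area (OCR).
    second_identification: payload decoded from the barcode area.
    The equality test is an assumed instance of the "preset matching
    condition"; real systems might compare only the waybill number field.
    """
    first_id = first_identification.strip()
    second_id = second_identification.strip()
    if first_id == second_id:
        return first_id  # document info may be taken from either channel
    return None  # mismatch: reject the frame and re-scan
```

For example, `cross_check("SF123 ", "SF123")` yields `"SF123"`, while disagreeing channels yield `None`.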
6. The method of any one of claims 1 to 4, wherein the document information comprises first account information, and before the warehousing or ex-warehousing operation is performed on the acquisition object, the method further comprises:
acquiring a second image, wherein the second image comprises at least a first face image;
determining whether a binding relationship has been established between a first biometric feature corresponding to the first face image and the first account information;
and determining that verification of the warehousing or ex-warehousing operation on the acquisition object has passed when the binding relationship exists between the first biometric feature and the first account information.
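As an illustration only (not part of the claims), the verification step of claim 6 reduces to a lookup against previously established bindings. Representing biometric features as plain values is an assumption made for the sketch; a real system would compare feature vectors under a similarity threshold:

```python
def verify_operation(bindings, first_biometric, first_account):
    """Claim 6 sketch: the warehousing/ex-warehousing operation passes
    verification only if the biometric feature extracted from the face
    image is bound to the account information on the document.

    bindings: mapping of account information -> bound biometric feature
              (assumed storage layout for this illustration).
    """
    return bindings.get(first_account) == first_biometric
```

With `bindings = {"acct-001": "feat-A"}`, presenting `"feat-A"` for `"acct-001"` passes and any other feature, or an unknown account, fails.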
7. The method according to any one of claims 1 to 6, further comprising:
acquiring a third image, wherein the third image comprises at least a second face image;
acquiring second account information on a second document;
associating a second biometric feature corresponding to the second face image with the second account information to obtain an association relationship;
and establishing a binding relationship between the second biometric feature and the second account information based on the association relationship when the frequency of association between the second biometric feature and the second account information satisfies a set binding condition.
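As an illustration only (not part of the claims), claim 7's frequency-gated binding can be sketched with a counter over (feature, account) pairs. The count threshold is an assumed instance of the "set binding condition":

```python
from collections import Counter


class BindingStore:
    """Claim 7 sketch: count co-occurrences of a biometric feature and
    account information; establish a binding once the association
    frequency reaches a threshold (an assumed binding condition)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.associations = Counter()  # (feature, account) -> count
        self.bindings = {}             # account -> bound feature

    def observe(self, second_biometric, second_account):
        """Record one association; return True once the binding exists."""
        self.associations[(second_biometric, second_account)] += 1
        if self.associations[(second_biometric, second_account)] >= self.threshold:
            self.bindings[second_account] = second_biometric  # binding established
        return second_account in self.bindings
```

With `threshold=2`, the first observation of a pair returns `False` and the second establishes the binding; the resulting `bindings` map is what the verification step of claim 6 would consult.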
8. A document information processing apparatus, comprising:
a first acquisition module, configured to acquire a first image of an acquisition object located within a predetermined depth of field range;
an obtaining module, configured to acquire, from the first image, a target area in which the document image is located;
a first detection module, configured to detect the target area to obtain document information contained in the document image, wherein the document information is used for generating record information containing the document information;
and an execution module, configured to perform a warehousing or ex-warehousing operation on the acquisition object based on the document information.
9. An electronic device, comprising:
a memory for storing computer-executable instructions; and
a processor, coupled to the memory and configured to implement the method provided by any one of claims 1 to 7 by executing the computer-executable instructions stored on the memory.
10. A computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed by a processor, are capable of implementing the method as provided by any one of claims 1 to 7.
CN201910702528.6A 2019-07-31 2019-07-31 Receipt information processing method and device, electronic equipment and storage medium Active CN111753568B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201910702528.6A CN111753568B (en) 2019-07-31 2019-07-31 Receipt information processing method and device, electronic equipment and storage medium
JP2021538702A JP2022516550A (en) 2019-07-31 2020-07-30 Information processing
PCT/CN2020/105819 WO2021018241A1 (en) 2019-07-31 2020-07-30 Information processing
KR1020217020538A KR20210098509A (en) 2019-07-31 2020-07-30 information processing
TW109125931A TW202107402A (en) 2019-07-31 2020-07-31 Method and apparatus for information processing and managing warehouse-out process of express delivery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910702528.6A CN111753568B (en) 2019-07-31 2019-07-31 Receipt information processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111753568A true CN111753568A (en) 2020-10-09
CN111753568B CN111753568B (en) 2022-09-23

Family

ID=72672777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910702528.6A Active CN111753568B (en) 2019-07-31 2019-07-31 Receipt information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111753568B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848136A (en) * 2005-04-13 2006-10-18 摩托罗拉公司 Method and system for decoding bar code image
EP2474948A1 (en) * 2009-09-01 2012-07-11 Zhi Yu Tracing and recalling system for managing commodity circulation based on internet
US8231057B1 (en) * 2010-12-14 2012-07-31 United Services Automobile Association 2D barcode on checks to decrease non-conforming image percentages
CN202976117U (en) * 2012-12-13 2013-06-05 福州二维信息科技有限公司 High-speed laser bar code reader with extended field depth
CN104298953A (en) * 2014-10-27 2015-01-21 苏州睿新捷信息科技有限公司 Method and system for recognizing barcodes in batches
WO2016115902A1 (en) * 2015-01-23 2016-07-28 中兴通讯股份有限公司 Method for controlling intelligent express delivery box, server and system
CN106901585A (en) * 2016-10-07 2017-06-30 赵伟业 A kind of method of object transmitting-receiving
CN107423987A (en) * 2017-09-26 2017-12-01 深圳福鸽科技有限公司 A kind of local type Express Logistics real-name authentication system and authentication method
CN107609813A (en) * 2017-08-31 2018-01-19 中科富创(北京)科技有限公司 A kind of express delivery automatic identification sorting system
WO2018018175A1 (en) * 2016-07-29 2018-02-01 吴茂全 Authentication device and method for article
CN108009907A (en) * 2017-10-19 2018-05-08 远光软件股份有限公司 One kind reimbursement equipment
CN108021834A (en) * 2016-10-31 2018-05-11 Ncr公司 Variable depth of field scanning means and method
CN207503240U (en) * 2017-11-10 2018-06-15 八维通科技有限公司 A kind of bar code code reader of the novel long depth of field
CN108197519A (en) * 2017-12-08 2018-06-22 北京天正聚合科技有限公司 Method and apparatus based on two-dimensional code scanning triggering man face image acquiring
CN108564328A (en) * 2018-05-15 2018-09-21 马鞍山纽泽科技服务有限公司 A kind of logistic storage management terminal
CN109359706A (en) * 2018-09-25 2019-02-19 上海合阔信息技术有限公司 Merchandise news intelligent identifying system and method
CN208672896U (en) * 2018-09-29 2019-03-29 苏州莱能士光电科技股份有限公司 A kind of hyperfocal distance optical system applied to one-dimensional scanning system
CN109637040A (en) * 2018-12-28 2019-04-16 深圳市丰巢科技有限公司 A kind of express delivery cabinet pickup method, apparatus, express delivery cabinet and storage medium
CN109711574A (en) * 2018-12-30 2019-05-03 广东拜登网络技术有限公司 Garbage retrieving system, method, electronic equipment and storage medium based on Internet of Things
CN110009280A (en) * 2019-03-28 2019-07-12 上海中通吉网络技术有限公司 Self-service pickup system


Also Published As

Publication number Publication date
CN111753568B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
WO2021018241A1 (en) Information processing
JP5318122B2 (en) Method and apparatus for reading information contained in bar code
US9298964B2 (en) Imaging terminal, imaging sensor to determine document orientation based on bar code orientation and methods for operating the same
CN102930265B (en) A kind of many I.D.s scan method and device
US10803438B2 (en) Reading apparatus
CN108564087B (en) Risk identification method, device, terminal and storage medium for small advertisements
US8403216B2 (en) Code reading apparatus, sales registration processing apparatus, and code reading method
US20220130161A1 (en) Dynamically optimizing photo capture for multiple subjects
US20130236053A1 (en) Object identification system and method
CN111950673B (en) Commodity anti-counterfeiting verification method, device and equipment based on two-dimensional code and storage medium
JP5240093B2 (en) ID card shooting system, ID card shooting method and program
GB2593246A (en) Improved object of interest selection for neural network systems at point of sale
KR102441562B1 (en) Smart vending machine with AI-based adult authentication function
KR101417903B1 (en) Method and system for recognizing receipt based on mobile camera
CN107577973B (en) image display method, image identification method and equipment
US20210357883A1 (en) Payment method capable of automatically recognizing payment amount
TWI744962B (en) Information processing device, information processing system, information processing method, and program product
CN111753568B (en) Receipt information processing method and device, electronic equipment and storage medium
CN111612656A (en) Ordering method, device, system and storage medium
CN113762429A (en) Self-service pickup method, device, equipment, electronic equipment and storage medium
CN112700557A (en) Self-service ticket taking and returning method
US20210334349A1 (en) Method for the acquisition and subsequent generation of data for a user of a self-service terminal
KR20150059682A (en) System and Method for managing customer based on receipt recognition
CN111899035B (en) High-end wine authentication method, mobile terminal and computer storage medium
US11600152B2 (en) Reading device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant